Breakthrough Does the Impossible

As a forward-looking, pro-technology think tank, Breakthrough is interested in ways that technology can decouple economic goods from environmental impacts. A lot of people (including many of the Breakthrough team) love eating meat, but also recognize that producing meat—especially beef—creates a large environmental footprint in terms of land and water use, greenhouse gas emissions, and local pollution. Can we harness cutting-edge food technologies to create such a close facsimile of a burger that true meat lovers can’t tell the difference?

That’s the mission Impossible Foods set itself with this burger, and we went to see for ourselves.

The Preparation

Cockscomb restaurant in San Francisco is one of two Bay Area restaurants now offering the Impossible Burger. They serve it on their lunch menu topped with Gruyere cheese, pickles, caramelized onions, and Dijon mustard and accompanied by a small green salad. It costs $19.  

The Eaters

Breakthrough staff members range from meat-lovers to vegetarians.

The Verdict

Marian Swain, Senior Analyst at Breakthrough, tries her first bite of the Impossible Burger

There was a range of opinion in our group, but the consensus seemed to be “best veggie burger I’ve ever had, but not the same as a meat burger.”

The outward appearance of the Impossible Burger is remarkably close to a meat burger. It is pink in the middle and browned on the edges. The mouthfeel and texture are quite close to a meat burger, and it is surprisingly greasy and juicy. However, the patty is softer than a meat burger and falls apart pretty easily (it slides out of the sides of the bun as you bite down).

I thought the flavor of the burger itself was definitely blander than meat. With a burger, the meat is the main event, so a lackluster patty is not ideal. I think in a different preparation, like chili or tacos, I wouldn’t have noticed the difference nearly as much, since the meat would be more heavily seasoned and not the main ingredient. I see a lot of potential for using the Impossible Burger meat in those types of preparations.

I felt quite satisfied and full after eating it, and not slightly ill the way I often feel after devouring an entire pub-style burger. I would be happy to eat an Impossible Burger again, although I can’t say I’d always choose it over the real meat version if given the choice. I would love to try tacos or a bolognese sauce made with the Impossible Burger meat—I think I would enjoy those even more and miss real meat even less. One other melioristic option could be a burger that is half beef for flavor, half Impossible Burger meat for footprint.

Innovations like the Impossible Burger could help move fake meat from the lonely end of the freezer aisle into the mainstream. Cost is still a barrier to this transition (a $19 burger is definitely a splurge). But if fake meat or cultured meat can become as tasty and cheap as the real thing, it could have a huge effect on the global environment. Technological substitutes are an important way that humans spare nature, as outlined in the Breakthrough report Nature Unbound.   


Read more reactions to the Impossible Burger from the rest of the Breakthrough team (I asked them to rate it 1-5, where 5 is the best):

Ted Nordhaus, Executive Director: A very good veggie burger. A mediocre hamburger. The meat-like fibers were more like very long cooked shredded beef than hamburger. Had a really good fat/grease feel, like a burger, but the patty fell apart as you ate it and it didn’t have that beef tallow aroma that a good burger has. A promising effort. It still needs work. But already better than the actual burger at the Daniel Patterson/Roy Choi LocoL healthy fast food chain. Rating: 5 for a veggie burger, 2 for a hamburger.

Emma Brush, Staff Writer: The Impossible Burger experience gets a 4 from me. Much anticipation, much digestion, and some decent “meat” in the middle. As with most burgers, the first bite was gloriously greasy and flavorful, and I would now enjoy nothing more than a nap.

Alex Trembath, Communications Director: What surprised me is that the texture and mouthfeel felt closest to an actual beef burger, while the taste and the aroma were more distinct. Cockscomb’s Impossible Burger would benefit from a crisp tomato slice, a sharper cheese, and a nice sauce. But if I encountered this Impossible beef in a bolognese sauce or a taco, I don’t think I would notice it’s not real meat. Rating: 3.5.

Michael Goff, Energy Analyst: The texture was good, taste was OK, aroma was off-putting, and appearance was in the uncanny valley. Rating: 3.

Joanna Calabrese, Office Manager: Prepare to be tricked by the texture into the true belief that this is actually beef. Miraculously juicy, savory, and mildly flavored, the "meat" itself stands alone as a noteworthy feat, but Cockscomb's take on it was less impressive. The full burger package was largely mushy in texture and could use a more creative combination of condiments than the classic melted cheese and grilled onions. Rating: 3.

James McNamara, Conservation Analyst: I was impressed with the rich, savoury flavour. The colour and texture were similar to ground beef, although the burger itself crumbled more easily than a beef burger normally would. My one criticism would actually be that it was almost too greasy, which all things considered is a pretty impressive feat for a plant-based burger! I’d give it a 3.5 overall, but on the scale of a veggie burger, I'd give it 5.

Hafeezah Abdullah, Events and Development Manager: Hands down the best veggie burger I've ever had. Would have been even better if it came with french fries and chipotle mayo. Totally would eat it again. Rating: 3.5 overall, but 5 for a veggie burger.   

Grace Emery, Production Manager: I’d give the burger 2.5, but I think the patty could have potential for a 3 or 3.5 with the right adjustments. It could use some more spices to “beef” up the mild mushroomy flavor. It had the same texture and color as a rare, melt-in-your-mouth beef patty but the taste didn’t match up. Paired with a toasted brioche bun (soft enough not to smash the delicate patty, but toasted so it stands up to the juiciness), some fresh onion or lettuce for a crunch, and an aioli, this burger would have been more convincing and enjoyable to eat. I’m looking forward to the "Impossidog" (impossible hot dog) next.

Whitney Caruso, Director, Third Plateau: While the burger was intriguing at first, talk of the rubbery texture, pungent smell, and grease began to take over my senses. After a few bites of crumbling rubbery grease, I put it down and went for my salad instead. Rating: 1.5.

Mike Berkowitz, Principal, Third Plateau: The Impossible Burger didn't quite do the impossible. The best part about the burger is the way in which it really, truly looks and acts like ground beef. But that's about where the comparison ends. What I most missed in this experience was something that smelled like a hamburger and that had the depth of flavor that a burger does. The Impossible Burger had no discernible smell and its flavor was a bit dulled - though it certainly approximates the flavor of beef better than any veggie burger I've ever had. It was fun to try though, and I can always say that I ate modern technology. Rating: 2.5.


Many thanks to Cockscomb restaurant for hosting our group to try the Impossible Burger. Cockscomb restaurant and Impossible Foods did not contribute financially to this taste test or read this article before it was published.



Senior Fellows

Breakthrough Senior Fellows collaborate with and advise Breakthrough Institute research staff in the areas of energy, conservation, innovation, and other fields essential to advancing the ecomodernist project. Leading thinkers, writers, and scholars in the study of society and the environment, senior fellows serve as indispensable partners and champions of Breakthrough’s work and research.

Senior fellows are invited to join Breakthrough’s network each year. Their contributions take the form of peer-reviewed research, long-form essays, journal articles, and workshops. More broadly, senior fellows expand and diversify Breakthrough’s research agenda, performing the long-term work of building and broadcasting ecomodern ideas in the world.


Applications for the Breakthrough Research Fellowship 2017 are closed; applications for the next cycle will open in November 2017.

Each summer, Breakthrough seeks a small number of outstanding researchers and writers for the Breakthrough Research Fellowship. Fellows will submit a research proposal aligned with Breakthrough's interest areas, and work on their project remotely or at the Breakthrough office for two months. See below for more information on how to apply.

The Research Fellowship runs for 8 weeks between June and August and pays in installments, totaling $5,000 over the course of the Fellowship.

Who can apply?

Applicants interested in the Research Fellowship should possess a Master's degree at minimum and have a well-scoped research proposal. Excellent proposals will define clear goals and methods and align with Breakthrough’s interest areas and our mission statement (see details below). If you think your research can contribute to or deepen an ecomodern understanding of society and the environment, then this is an opportunity for you.

How do I apply?

Apply using our online application form. You will need to submit a CV/resume, a research proposal, and 1-3 writing samples. See details below. All uploaded documents must be in PDF format. Incomplete applications will not be considered.

When is the application deadline?

Applications for the Breakthrough Research Fellowship 2017 are due at 11:59pm PST on Wednesday, February 15, 2017.

When will I hear back?

Applications are reviewed and interviews conducted during the 3-4 weeks following the application deadline; decisions will be announced within that period.

What is the duration and timing of the program?

Research Fellowships are designed to last 2 months and may be completed remotely or at the Breakthrough office. Research Fellows are also invited to attend the Breakthrough Dialogue, which will take place June 21 through June 23, 2017. If you have a scheduling conflict with these dates, please do not be discouraged from applying.

Contact Breakthrough staff with any questions and concerns: fellowships [at] thebreakthrough.org

How are Breakthrough Research Fellows compensated?

The Breakthrough Research Fellowship offers a $5,000 stipend to be delivered in installments upon demonstrated progress.


Requirements for Research Fellowship Application


Resume/CV

Please limit your resume or CV to two pages maximum.

Research Proposal

Research proposals will make up the core of the Research Fellowship application. Research proposals should include:

  • A brief summary and literature review of the topic at hand;

  • The goals of the project (what unanswered questions are you addressing?);

  • Type of analysis and methods;

  • Expected results;

  • Indication of your own experience with the subject matter and/or methods;

  • Intended product (peer-reviewed paper or white paper).

Research proposals can be up to 1500 words.

Writing Samples (1-3)

At least one and no more than three writing samples are required for the application. Excellent writing samples will demonstrate familiarity with the subject matter and methods of the research being proposed.

Letters of Recommendation

At least one letter of recommendation is required, preferably from a supervisor who has worked with you on similar or related research in the past. Recommendations will only be accepted if the author of the letter, not the applicant, sends the letter directly to Breakthrough. Letters should be sent to fellowships [at] thebreakthrough [dot] org.


Breakthrough Institute is an equal opportunity employer.

Research Fellowship

Breakthrough Research Fellowships are awarded to non-resident research collaborators with the Breakthrough Institute’s research program. Launched in 2016, the program offers opportunities to the brightest and most talented thinkers to address unanswered questions related to energy, conservation, agriculture, growth, and innovation, and to change the way society approaches major environmental and development challenges.

Scholars with experience ranging from a Master’s degree to a postdoctoral appointment to a senior professorship may apply. These paid fellowships, completed in collaboration with research staff, advance Breakthrough’s research and reach, fostering partnerships with experts around the world at the cutting-edge of scholarship in these spaces.

For all the progress afforded to humanity by modernization and growth, there remain many unanswered questions. The Breakthrough Institute exists in part to answer these questions, and to change the way society approaches major environmental and development challenges. In service of this mission, we seek to work with the brightest and most talented thinkers on issues related to energy, conservation, agriculture, growth, and innovation. The Research Fellowship allows us to partner with experts all over the world doing cutting-edge scholarship in these spaces.

Research fellows are also invited to join us for our annual Breakthrough Dialogue, an opportunity to interact with the leading thinkers, writers, and scholars in the study of society and environment, as well as attend talks, debates, and working groups within Breakthrough’s different research interests.



Generation Fellowship

Breakthrough Fellows have produced research and reports covered in the New York Times, Washington Post, and Time magazine. Above, Breakthrough Fellows past and present at the 2015 Breakthrough Dialogue.


Program Overview

Breakthrough Generation is an initiative in the Breakthrough Institute’s research program, founded in 2008 to foster the development of a new generation of thinkers and writers capable of finding pragmatic new solutions to today’s greatest challenges in the areas of energy, economy, and environment.

Every summer from June to August, Generation offers a small number of paid, highly competitive, ten-week fellowships to recent college graduates and postgraduates from around the world.

The first two weeks of the summer are dedicated to Breakthrough Bootcamp, an intellectual crash course involving intensive reading, writing, and an expert lecture series designed to provide a grounding in the broad-spectrum thinking that informs Breakthrough's policy agenda. Topics covered include modernization theory, social psychology, aspirational politics and philosophy, economics and innovation policy, and technology policy.

For more advanced and independent research, see Breakthrough's new Research Fellowship.

Breakthrough Fellows were invited to tour the Advanced Light Source (ALS) research facility at Lawrence Berkeley National Laboratory in August 2014.

For the remainder of the fellowship, fellows work in small teams divided between three program areas: Energy, Conservation, and Innovation. Supervised by policy staff, fellows produce policy white papers, reports, and memos. Previous projects (for a full list, click here) have been featured in the New York Times, Newsweek, Time Magazine, the Financial Times, the Wall Street Journal, and the Harvard Law and Policy Journal, among others, as well as in Congressional testimony.

In addition to research and analysis, fellows attend the Breakthrough Dialogue, an opportunity to interact with the leading thinkers, writers, and scholars in the study of society and the environment, as well as attend talks, debates, and working groups within Breakthrough’s different program areas.

2015 Fellows in Yosemite on the Breakthrough Generation camping trip.

Over 75 young experts and analysts have completed Breakthrough Generation since its inception in 2008 and moved on to important positions in government, academia, and the nonprofit sector. With career support from the Breakthrough staff and the wider network of Senior Fellows and associates, Breakthrough Generation opens many avenues to an exciting career. To see what some of the former fellows are doing now, click here.

Generation Bootcamp

The first two weeks of the Breakthrough Generation Fellowship consist of an intensive intellectual crash course called Breakthrough Bootcamp. Over the course of Bootcamp, fellows acquire a strong foundation in the theoretical and philosophical underpinnings of Breakthrough's outlook and work, plus the skills required to successfully undertake an independent research project. Bootcamp not only involves lengthy readings and vigorous discussions, but also guest lectures from Senior Fellows and other experts, field trips to cutting-edge institutions, and ample socializing and networking opportunities with other young public policy professionals.

Readings and Discussions

Every day of Bootcamp, fellows have a set of readings covering themes such as pragmatism, risk and the precautionary principle, modernization and development, energy systems and transitions, conservation in the Anthropocene, natural resources, and futurism. Highlights from the syllabus include excerpts from Why Nations Fail by Daron Acemoglu and James Robinson, Development as Freedom by Amartya Sen, The Entrepreneurial State by Mariana Mazzucato, and Rambunctious Garden by Emma Marris. These readings form the basis for a variety of group discussions, presentations, and workshops designed to facilitate learning and hone participants' skills in analysis, presentation, and debate.

Guest Speakers

Bootcamp features a set of regular presentations and Q&As with people from Breakthrough's network of Senior Fellows and associated experts. Examples from the past include:

Roger Pielke, Jr., Breakthrough Senior Fellow, gave a presentation on the role of science in politics and policy.

Sarah Evanega, PhD, Director of the Cornell Alliance for Science, spoke to the 2016 Generation Fellows on genetically modified organisms and agricultural resiliency in the developing world.

Bill Bonvillian, Director of MIT's Washington Office and Professor at Georgetown University, presented a history of the federal government's involvement in technological innovation.

Joyashree Roy, Professor of Economics at Jadavpur University, Kolkata in India, spoke to 2015 fellows about development and energy in India. 

Rasmus Karlsson, Professor at Hankuk University of Foreign Studies in South Korea, answered questions about his work on futurism, space colonization, and technological solutions to climate change.

Fred Block, Research Professor at UC Davis, presented on the historical role of government investment in energy technology innovation and the outsized influence of the Small Business Innovation Research (SBIR) program.

Field Trips and Socials

Bootcamp offers unique opportunities to visit local Bay Area think tanks, research labs, and companies. Some of the popular sites visited in past summers include:

Lawrence Livermore National Laboratory: Breakthrough’s 2013 Fellows were invited to Lawrence Livermore National Lab in Livermore, CA, to visit the National Ignition Facility, the world’s foremost research facility for nuclear fusion. Fellows met with the lab’s staff to discuss the potential of nuclear fusion energy.

Lawrence Berkeley National Lab: Fellows learned about different research projects developing clean energy technologies and briefed the Lab Director on their summer research projects. We also visited the Advanced Light Source to see how new materials are designed and tested.

Greenstart is a cleantech start-up incubator. Fellows learned about the process of launching a cleantech start-up.

Advanced Energy Economy is a clean energy industry association. Fellows of Breakthrough Generation and AEE briefed each other on their respective projects and shared other experiences from the sector.

Diablo Canyon Nuclear Power Plant: Breakthrough Generation 2014 fellows visited the California nuclear power station.

The Energy Institute at UC Berkeley is a multi-disciplinary research group focusing on energy economics. We met with the director, affiliated professors, and graduate students at EI, shared the projects we were working on, and heard about their current research.

Brightsource: Fellows visited Brightsource's US headquarters to learn about its Ivanpah solar project, the largest concentrated solar power project in the world. We also discussed the company's experience as a recipient of the DOE's loan guarantees.

Young Professionals in Energy: We went to several happy hours and panel discussions with the San Francisco chapter of Young Professionals in Energy. Fellows got to meet their peers working for public utilities, renewable energy developers, and other clean tech entrepreneurs.







Britain’s Civilian Nuclear Program Is Not a Stealth Military Program

While the study offers up self-described circumstantial evidence for links between British civilian and military nuclear suppliers, its main argument is that there can be no other explanation for the United Kingdom’s support for nuclear power.

This seems, frankly, a little thin. Both Kirby’s Op-Ed and the SPRU paper ignore the complex energy and environmental challenges facing the United Kingdom that could warrant a renewed interest in domestic nuclear power: energy security, carbon emissions, reducing electricity and gas imports, domestic industrial jobs.

While many UK nuclear vendors are involved in both military and civilian projects, the government chose a reactor designed by German and French companies rather than investing in developing its own design with domestic suppliers. While the New York Times Op-Ed assumes the military-civilian cover-up is a fact, the actual SPRU report says this in the middle of its 95 pages:

The overall picture is a complete absence of any acknowledgement of formative links between commitments to military nuclear submarine capabilities and attachments to civil nuclear power.

The SPRU working paper claims that the United Kingdom plans "unparalleled" support for nuclear power and remains "internationally distinct" in its nuclear policies, and notes an "unprecedented turnaround" in policy from 2003 to 2006. But there’s more than one good reason why the United Kingdom might have renewed its interest in nuclear power at this time. The United States passed the Energy Policy Act of 2005, which provided significant financial support for new nuclear builds and advanced nuclear R&D. Both France and Finland finalized plans for their own new EPR builds, which began construction in 2005 and 2007. Over this period, global construction starts of nuclear power plants began to grow, with dozens of new builds in China and South Korea. The United Kingdom may simply have been trying to maintain relevance in a fast-paced global nuclear power industry that was leaving it behind.

Not to mention, in 2005 the Kyoto Protocol entered into force and the EU emissions trading scheme began. In 2003, nuclear made up 83% of the United Kingdom’s low-carbon electricity, and the average age of a reactor was 19 years, perhaps raising concern about how the country would meet its carbon emission reductions.

Maybe, just maybe, those developments caused a shift in UK energy policies. It’s at least worth looking into; however, the SPRU paper doesn’t investigate any alternative explanations.

Yet there is a stark dichotomy in how the New York Times Op-Ed was received by various audiences, highlighting whom the report was directed towards. People who work in the civilian nuclear industry laughed, noting that this conspiracy theory runs counter to conventional wisdom: governments often hide funding for civilian programs in military spending, whose budgets are rarely questioned. On the other side, the buzz on Twitter ignored the circumstantial nature of the SPRU working paper (which most commenters likely did not read) and accepted Kirby’s hypothesis as fact: that the United Kingdom used the Hinkley EPR project to hide funding for Trident submarines. Many noted the irony of the United Kingdom accepting investment from Chinese firms to build the French EPR, given that the new submarines will be defending UK sovereignty. All of this stands in stark contrast to the conclusions of the SPRU report, which found a soft connection at best:

Of course, this holds no necessary implications for any definite links (let alone directions) of causality. It is possible, for instance, that the extraordinary expense of both civil and military nuclear capabilities simply makes a reflection of national economic capacities.

The alternative explanation may seem unthinkable to anti-nuclear pundits, but it requires a smaller leap of imagination: that the British government has a genuine concern for reducing carbon emissions, stabilizing electricity prices, and reducing gas imports. More importantly, the United Kingdom may see a benefit in maintaining leadership in civilian nuclear power: there is a global nuclear renaissance underway, and the country doesn’t want to be left behind.

Calestous Juma Receives 2017 Breakthrough Paradigm Award

The Breakthrough Institute has named Calestous Juma the recipient of the 2017 Breakthrough Paradigm Award. Professor Juma will accept the prize on stage at the Breakthrough Dialogue in Sausalito, California, next June.

The Paradigm Award recognizes accomplishment and leadership in the effort to make the future secure, free, prosperous, and fulfilling for all the world’s inhabitants on an ecologically vibrant planet. Past recipients of the award include Mark Lynas, Emma Marris, Jesse Ausubel, Ruth DeFries, and David MacKay.

Calestous Juma is Professor of the Practice of International Development at the Harvard Kennedy School and Director of the Science, Technology, and Globalization Project at the Belfer Center for Science and International Affairs.

Professor Juma was chosen in recognition of his scholarship and thought leadership in biotechnology and innovation. Of all global impacts on the environment, none has a bigger footprint than food and agriculture, and few scholars are better prepared to discuss and advise our agricultural future. With his acclaimed 2011 book, The New Harvest: Agricultural Innovation in Africa, Juma offered an essential and refreshing look at agriculture in emerging economies. Technology, entrepreneurship, and emerging regional markets, he wrote, would combine to create an economic, social, and environmental revolution in sub-Saharan Africa.

This year, Oxford University Press published Professor Juma’s new book, Innovation and Its Enemies: Why People Resist New Technologies, which chronicles 600 years of case studies on emerging technologies and the social resistance they ignite. Those familiar with modern discussions around nuclear power, transgenic crops, vaccines, and other controversial technologies have likely experienced frustration with what can seem at times to be regressive opposition to new technologies. But what is fascinating about Juma’s new book is the respect, curiosity, and skill with which he diagnoses these social tensions. In our bitterly divided debates about new technologies, his emergence as a voice of reason, wisdom, and civility is most welcome. Adam Thierer of George Mason University called Innovation and Its Enemies “the best book on technology policy of the past decade." "It takes one of the leading lights on innovation—Calestous Juma—to truly understand the forces that oppose it,” said the Scripps Research Institute’s Eric Topol.

Professor Juma’s ground-breaking research on science and technology has been recognized by the United Nations Environment Programme and the Royal Academy of Engineering. He has been elected to several scientific academies, including the Royal Society of London, the US National Academy of Sciences, the World Academy of Sciences, the UK Royal Academy of Engineering, and the African Academy of Sciences. He is a former Executive Secretary of the UN Convention on Biological Diversity and the founder of the African Centre for Technology Studies in Nairobi.

So it is only fitting that Professor Juma will join us for next summer’s Breakthrough Dialogue, the theme of which is “Democracy in the Anthropocene.” In this seventh iteration of the Dialogue, we will confront the question of achieving progress and innovation at a time when many voices are questioning both the benefits of new technologies and the efficacy of the institutions that have historically driven human progress. For ecomodernists, the question becomes not only whether we can overcome these democratic hurdles to progress, but what will be necessary for democratic institutions and civil society to embrace the ongoing processes of modernization and technological change that will be necessary to accelerate the transition to an equitable, modern, low-impact future. (You can read more about our vision for next year’s Dialogue here.)


For media inquiries, please contact Alex Trembath, Communications Director at the Breakthrough Institute: alex@thebreakthrough.org.


Can Industrial Food Be Part of the Food Movement?

Lusk is an expert on food and agricultural policy, and his op-ed presents research directly related to the environmental impacts of farming. The food system he describes supplies the vast majority of food, measured in both calories and dollars, to Americans and export markets.

Yet his Times piece reads a bit man-bites-dog. We’re not geared to think of industrial farming in a positive light.

More familiar is Michael Pollan’s latest essay in the New York Times Magazine. Pollan divides America into “Little Food” and “Big Food,” contrasting the “food movement” against “processed,” “packaged,” and “industrial” food. It’s a divide we’ve read about for at least two decades now, since the advent of slow food and the skyrocketing public interest in nutrition, cuisine, and farming.

But thinking about it, it’s unclear why industrial agriculture would be excluded from the food movement. Why couldn’t the majority of the food eaten in America be part of the “food movement”?

After all, questions of resource use and environmental impact don’t fall neatly into Pollan’s binaries. Organic and conventional agriculture both use pesticides, so which ones do we need to be concerned about? Large farms may use more fossil fuels at the aggregate level, but they also produce more food, so what systems are actually the most efficient on a per-unit basis? Scale has become falsely conflated with impacts.

Commercial farmers who use technology and best practices to grow large amounts of food while minimizing environmental impacts deserve as much (or perhaps more) praise as small-scale farmers producing only for local customers. Unfortunately, we’ve come to see “Big Food’s” impacts as categorically worse precisely because its absolute impacts are, by definition, bigger than those of smaller-scale farming.

UC Berkeley’s David Zilberman put it well last week:

There is a place for both industrial and naturalized agricultural systems. The naturalization paradigm is leading to the emergence of higher-end restaurants and fresh food supply linking the farmer to the consumer, each of which have limited reach but are important source of income and innovation in agriculture. At the same time, the majority of people will be dependent on industrialized agriculture. The two can coexist and coevolve.

As long as we have billions of mouths to feed around the world, we’re going to need lots of land and resources to produce their food. Agricultural systems will necessarily always have environmental impacts. If the “food movement” is about making agriculture as safe and environmentally friendly as possible, Big Food should be able to march alongside Little Food.


“We Are All Lukewarmists”

So where does that leave us? With an adjusted stance toward future projections and present decarbonizing technologies—the whole suite of them—for one. Some humility is involved in this; both social and technological barriers stand in the way of a zero-carbon future. But there is also room for optimism, says Nordhaus: “A prosperous and equitable world, a low-carbon future, and a manageable and accountable energy system are all possible.” To get there, we’ll need better technologies and fewer feuds—“less heat and more light in our energy and climate politics.”

Many others, from policy pundits to ag wonks to “climate-conscious conservatives,” are warming to new realities and new technologies. We’ve highlighted some of these thinkers and practitioners below, many of whom come bearing ecomodern-ish news.

The Elephant in the Room

James Pethokoukis takes on the “dystopian picture” painted by the Republican presidential candidate and provides an optimistic, and conservative, counterview … Eric Holthaus features the rise of “eco-conservatives”—groups such as RepublicEn and Citizens for Responsible Energy Solutions—that are attempting to “change the narrative from one of a dire emergency to an opportunity for solving a challenge” … Greg Ip of The Wall Street Journal reviews Washington state’s proposed revenue-neutral carbon tax, which “stands the best chance of appealing to people across the political spectrum” (if only the left would get on board) … William Ruckelshaus and William Reilly, former administrators of the EPA under Republican presidents, defend the Clean Power Plan as an example of “American exceptionalism” …

What’s Nuclear

The Bipartisan Policy Center’s nuclear waste report describes the ongoing federal and regional efforts to resolve “the nuclear waste problem,” concluding that “a new path forward is needed” with consent-based siting … Matt Wald lists developments in Washington and New York, and even the tumultuous presidential campaign, as evidence for nuclear’s bipartisan appeal … According to Michael Scott, the growth of nuclear power in China is well outpacing that of the rest of the world … MZConsulting weighs in on Europe’s future in nuclear, pointing to France, Finland, and the UK as positive examples of new nuclear development …

Fortunately, Unfortunately

Oliver Milman reports for The Guardian on new research indicating that the U.S. will fail to meet its emissions goals with the policies currently in place—which “doesn’t mean we are doomed,” says the study’s lead author Jeffery Greenblatt, but should simply spur further policy action and innovation; New York City, for one, which faces increasing pressure from sea level rise, has laid out its own proposal for mitigation and adaptation … Mike Orcutt highlights the finding that transportation emissions have overtaken those of the electricity sector, largely as a result of the shift in the U.S. from coal to gas … With regard to the transportation sector, fortunately, the Rocky Mountain Institute predicts that electric, shared, and autonomous vehicles are poised to fully disrupt the status quo, writes Chris Mooney; unfortunately, whether or not “peak car” and a cleaner, cheaper future will come to pass depends on a number of factors, including public perception and regulation …

The Machine in the Garden

Jayson Lusk points our attention to some of “the most progressive, technologically savvy growers on the planet”—namely, conventional farmers, whose farms produce the vast majority of food sold in the U.S. on decreasing amounts of land, and whose “technology has helped make them far gentler on the environment than at any time in history” … Andrew Porterfield delves into some of the many questions that currently surround food production, including the notion of “sustainable intensification,” the role of technological innovation and knowledge transfer, and the advantages of conventional farming … Tamar Haspel lays out “eight gloriously wonky ways to improve ag policy” in the wake of complex issues and reductive public perception … Brad Plumer reviews the innovations intended to reduce and capture the methane emissions that stem from meat production, and specifically from enteric fermentation (i.e., cow belches) … Richard Forman and Jianguo Wu provide a potential complement to the land-sparing approach of Breakthrough’s Nature Unbound, advocating for global and regional urban planning to “maximally sustain farmland and nature” …

Rice and CRISPR Treats

Aneela Mirchandani provides a thorough historical overview of “golden rice” in an effort to address specific misconceptions about the genetically modified product … Sharon Begley and David Dittman cover Monsanto’s licensing of CRISPR-Cas9 genome-editing technology, which will not be subject to the regulations transgenics face … Pediatrician Emiliano Tatar emphasizes that “after almost 30 years of widespread use (such as corn and soybeans), GM foods have never, even once, been linked to disease or any harm in humans” …

Positively Trending

Nicholas Kristof steps back from the negative narratives to review the remarkable decline in global poverty, illiteracy, and inequality in recent years—a little-discussed trend, he says, that, once recognized, might be accelerated … Tyler Cowen brings social progress and technological advances to bear on present political turmoil … Cassie Werber and Jason Karaian report on the global decoupling of carbon emissions from economic growth, a development most pronounced in the world’s wealthiest countries and “a hopeful sign for the planet—up to a point” …


Postscript: Prospective Perspectives

Jenny Seifert, self-described “futurist” and science writer for the Water Sustainability and Climate Project at UW-Madison, discusses the importance of long-term thinking, planning, and storytelling when it comes to climate change and water challenges; “to build a good Anthropocene,” she concludes, “we need just our imagination” … Jeffrey Sachs lays out his own proposal for long-term thinking and planning of decarbonized infrastructure, a challenge that “combines the technological complexity of the moon shot and the organizational complexity of building the Interstate Highway System” … “Dream of Mars, by all means,” The Economist admonishes Elon Musk, on his plan to colonize the planet in the face of Earthly apocalypse, “but do so in a spirit of hope for new life, not fear of death.”

Modern Pope

The Pope and Climate Change

Last year, with Mark Lynas and Michael Shellenberger, I criticized Laudato Si for its apparent rejection of modernity, its skepticism toward technology, and its simplistic posture towards markets. "Laudato Si is very relevant to the emerging ecomodernist movement,” we wrote, "because it makes explicit the asceticism, romanticism and reactionary paternalism inherent in many aspects of traditional environmentalist thinking."

Not so fast, says Dr. Sally Vance-Trembath, a theologian at Santa Clara University. In a new essay for the Breakthrough Journal, Vance-Trembath argues that Francis’s climate encyclical “can only be understood in the context of his broader effort to drag the Church, once and for all, out of its feudal traditions, authoritarian hierarchy, and hostility toward the modern world and into dialogue with the broader human community."

An expert in the papacy and the centuries-long evolution of the Church, Vance-Trembath explains the backdrop behind Francis’s long, complex, sometimes contradictory text. She contrasts Francis with his predecessors Benedict XVI and John Paul II, who, she writes, were committed to an old-world regal, feudal, and paternalistic Church. Fortunately, she observes, Francis follows much more closely in the footsteps of John XXIII and Paul VI, who led the Second Vatican Council in the 1960s to make the Church more egalitarian and progressive.

Vance-Trembath places Laudato Si in the context of this evolution, with all its fits and starts. Above all else, she writes, the text is inductive—an explicit gesture towards a flexible framework and open dialogue over how to solve environmental problems in a practical, human world. 

Flexible as it is, though, Laudato Si certainly has its flaws. “Catholic teaching texts are very often internally inconsistent,” writes Vance-Trembath. As a result, she finds her own faults with the encyclical, particularly Francis’s tendency to unhelpfully diagnose the societal problem of climate change as a personal “moral choice.”

Vance-Trembath is right to celebrate Francis’s rejection of the reactionary papacies of John Paul and Benedict. If the Catholic Church is going to continue as a major social force, it must reconcile itself with a modernizing world, a process that Francis has renewed. But if the Church is going to constructively contribute to solving environmental problems, Francis will also have to reconcile the Church’s evolving views about the environment with the scale and complexity of modern social, economic, and technological arrangements. A modernizing church will need to embrace ecological modernization if it is to have much useful to say about our common home.

Democracy in the Anthropocene

Much in recent news corresponds with the questions the Dialogue seeks to confront. What happens, Breakthrough asks, when modernization results in inequity? Or when urbanization, which generally enables socioeconomic advancement, fails to provide opportunity? On the other hand, when the tools of ecological modernization are exactly what we need, both for the environment and for human development—nuclear plants that supply clean, abundant power, for instance, and agricultural advancement that provides crucial increases in yields—how can we engage with various stakeholders in a civic-minded way?

Finally, as these technologies come to the fore of policy discussion, can we ensure that they are applied in a just and democratic manner? How should we assess and implement those advanced technologies that will disrupt our modes of living and understanding?

We look forward to grappling with these questions next June, with our interlocutors, fellow pragmatists, and futurists in attendance. In the meantime, here’s what we’ve been reading to prepare for the conversations to come:

Climate Policy ...

Nate Johnson and Heather Smith discuss the likelihood of California meeting the goals of its “triple-dog-dare legislation” recently passed into law, which will require the state to reduce its emissions to sub-1990 levels by 2030; despite “California’s tradition of feeling smug about how green it is compared to other states,” they write, many, many changes will need to come to pass in order for the state to break from its current trajectory—one that has been derailed, notably, by its rebuff of nuclear … Varun Sivaram of the Council on Foreign Relations cites the failures of Germany’s Energiewende, California’s cap-and-trade system, and global lock-in of clean-energy technologies in outlining the three pitfalls that hamstring “well-intentioned climate policy” … Chris Mooney reviews a report released by the International Energy Agency, which highlights the need for greater investment in nuclear and CCS technologies … David Roberts interviews Energy Secretary Ernest Moniz, who highlights Mission Innovation and the Breakthrough Energy Coalition on the topic of “cleantech 2.0” …

… and the Political Climate

Amanda Hoover reports on a new poll conducted by the University of Chicago’s Center for Public Affairs Research, which reveals surprising consensus among Americans that the country should play a leadership role on climate change, courting “progress even if other countries do not,” according to center director Trevor Tompson … Carl Cannon discusses the politicization of conservation and energy, directing blame toward not only Republicans and Democrats but also the Sierra Club, all of which seem intent, he says, on coloring the discussion of energy and the environment as a zero-sum game …

Around the World in Eighty Seconds

Fracking, says Bjørn Lomborg in The Telegraph, holds the potential for key emissions reductions in Britain, a conclusion drawn from the U.S.’s coal-to-gas transition … Mayumi Negishi relates the woes of Japan’s renewables sector and quotes Nobuo Tanaka on the importance of revamping the country’s nuclear industry … unfortunately, as Nancy Slater-Thompson reports, nuclear restoration faces arduous regulatory and political obstacles in the wake of Fukushima … Carbon Brief’s new interactive map and timeline place Germany a long way from its emissions reduction goals, due to the nation’s persistent reliance on coal … Tom Morton, meanwhile, writes on “Germany’s dirty little coal secret” … China’s recent release of a plan to double its current nuclear fleet brings to the fore the relative slog of the regulatory process in the U.S., according to Andrew Follett and ClearPath’s Jay Faison … and Stuart Smyth tracks the environmental costs of Australia’s ban on GM canola …

Much Ado About Genetic Engineering

Speaking of, Tyler Cowen asks everyone to simmer down over the proposed Bayer-Monsanto merger, which “is a classic example of how vociferous public debate can disguise or even reverse the true issues at stake”—namely, antitrust law on the one hand and anti-GMO sentiment on the other … less politely, Kavin Senapathy excoriates Vandana Shiva and other anti-GMO parties for their resistance to the efforts of #Nobels4GMOs led by Sir Richard Roberts … Kevin Folta faults the New York State PTA for its misinformed take on genetically engineered food products … on a happier note, Peter Singer’s most recent book contains an essay entitled “A Clear Case for Golden Rice”—a reversal of his former stance on GM crops, according to The Economist’s review … Sarah Zhang takes up the thread on CRISPR food, “an entirely new category of GMOs” without, perhaps, the stigma … Elizabeth Pennisi also features cutting-edge, gene-cutting technologies in a piece in Science on plant engineering pioneer Dan Voytas …

Finally, Nuclear!

“Finally—finally!—a leading Democrat has acknowledged that we need nuclear energy,” celebrates Robert Bryce in the National Review, on Hillary Clinton’s recent endorsement of nuclear; our own Jessica Lovering attributes this development to the changing narrative on nuclear since COP21 … National Geographic spotlights Leslie Dewan of Transatomic Power (a company developing an advanced molten salt reactor) in a feature of individuals who “possess the courage and conviction to take on major challenges to improve lives” … James Taylor of the Spark of Freedom Foundation contributes to Forbes on the bipartisan potential for nuclear power (although the singer-songwriter has yet to comment on such “common-ground climate policy”) …

Postscript: Storing nuclear waste, posthaste

This week’s second-to-last word goes to Ernest Moniz, who would like to see the private sector take on nuclear waste storage in order to expedite the interim process prior to geologic disposal, as well as to advance new nuclear projects … Indeed, according to the University of Tennessee’s Stephen Skutnik, the “future of nuclear energy depends on if it’s viewed as trash or a treasure.”

Breakthrough Dialogue 2016

Breakthrough Dialogue 2016: Great Transformations took place on June 22 – 24, 2016

Inspired by the profound challenges and opportunities afforded by modernization, the theme of Breakthrough Dialogue 2016 is “Great Transformations.” Over the course of the dialogue, we will consider the complex processes of urbanization, agricultural modernization, and industrialization and ask tough questions: Are cities really green?  Can industrial agriculture save nature?  Can countries modernize without manufacturing?  Can we end poverty and unleash more abundant nature in this century?

2016 marks the sixth year in which the Dialogue has offered scholars, journalists, philanthropists, policymakers, and friends the opportunity to come together to engage with the Breakthrough Institute in conversation about the world’s most wicked problems.

The Dialogue is presented in service of Breakthrough’s mission to transition to a future where all the world’s inhabitants can lead prosperous lives on an ecologically vibrant planet. It has become a hub for the  burgeoning ecomodernist movement, offering a positive vision that includes thriving cities, more space for wild nature, and a future in which everyone in what is now the developing world can choose to live a modern life.

The Dialogue program will consist of four panels on the main stage and three sets of concurrent sessions that allow for more focused discussions.  There will be  opportunities to interact in all the sessions as well as time for individual and small group conversations as we share meals and free time in the beautiful setting at Cavallo Point in Sausalito, California.

Breakthrough Paradigm Award 2015: David MacKay

Economist editor Oliver Morton honors David MacKay.

Plenary Sessions

Progress Problems

Humans have made extraordinary progress over the last several centuries. Increasing numbers of people, in both absolute and percentage terms, live longer and have reliable access to basic needs like food, primary education, and health care. In wealthy countries, people are able to focus on higher-level needs, like what to do, who to be, and who to love. Simultaneously, there are billions of people in poorer countries still striving to live the sorts of modern lives we enjoy. So why are so many of the richest and most privileged people on earth, despite reaping such extraordinary benefits,  convinced that progress is a mirage and modernity must inevitably end badly? What does modernization actually mean today for poor families as they leave subsistence agrarian livelihoods and move to cities? And what are the consequences for them of declining faith in progress in the rich world?

  • Max Roser, data-visualization historian and economist
  • Lydia Powell, head, Centre for Resources Management, Observer Research Foundation
  • Dan Kahan, professor of law and professor of psychology, Yale University
  • Moderator: Ted Nordhaus, director of research and cofounder, Breakthrough Institute


Is Industrialization Still Possible in the 21st Century?

We know that traditional modernization “worked” for the rich countries of the world: incomes are much higher, infrastructure is much more reliable, and social indicators like public health, education, and well-being show the undeniable benefits of modernity. But are the processes that drove modernization in the past (urbanization, agricultural intensification, and industrialization) available to developing countries today? Prominent scholars have argued that the blue-collar manufacturing jobs that powered 19th and 20th century industrialization will not materialize for the world’s poor today, thanks to globalization and automation. There is also disagreement over the role of governance and industrial policy. Can emerging economies follow the rich world’s path, and if not, is there some other road to modernity?

  • Michael Lind, cofounder and fellow at the New America Foundation
  • Samir Saran, vice president of the Observer Research Foundation
  • Vijaya Ramachandran, senior fellow at the Center for Global Development
  • Moderator: Eduardo Porter, journalist, the New York Times


Is Peak Farmland in Sight?

Improved yields in agriculture have spared tens of millions of hectares of natural habitat from conversion to farmland over the last 50 years, and the expansion of farmland has slowed down in the last two decades. Do we have the technological capacity to reach peak farmland in the next few decades? Even if we do, will the peaking of global farmland be accompanied by a shift of agricultural production to the tropics, with potentially devastating consequences for tropical forests, which harbor a huge proportion of global biodiversity? What will it take to intensify agricultural production globally while ensuring that we protect critical tropical habitat? 

  • Kenneth Cassman, professor of agronomy and horticulture, University of Nebraska
  • Nathalie Walker, senior manager of the tropical forest & agriculture project, National Wildlife Federation
  • David Douglas, partner at Applied Invention
  • Moderator: Linus Blomqvist, director of conservation, Breakthrough Institute


Ecomodernism in Action

In some ways, just a year removed from the release of ‘An Ecomodernist Manifesto,’ the work of ecomodernism has only begun. But efforts to uplift humanity and use technology to spare nature have been around for generations. For the final plenary session of this Dialogue, Breakthrough cofounder and president of Environmental Progress Michael Shellenberger will interview several inspiring experts and activists working to improve their communities through ecomodernist approaches and technologies. Panelists include a professor who has studied how liquefied petroleum gas (LPG) helped save the forests in Indonesia, an expert in Indian development and modernization, a physicist building a bridge between nuclear innovation in the United States and China, and two “mothers for nuclear power” marching to save Diablo Canyon and other threatened nuclear plants in the United States.

  • Heather Matteson, procedure writer, Diablo Canyon Nuclear Plant, Mothers for Nuclear
  • Kristin Zaitz, senior consulting engineer, Diablo Canyon Nuclear Plant, Mothers for Nuclear
  • Sunil Nautiyal, professor, Institute for Social and Economic Change
  • Ning Li, dean and professor, Xiamen University
  • Sunjoy Joshi, director, Observer Research Foundation
  • Moderator: Michael Shellenberger, senior fellow, Breakthrough Institute


Concurrent Session Topics:

Thursday Morning

Ecomodernism and the Left

Much of the substance of ecomodernism rejects sacrosanct convictions held by the modern Left: ecomodernism’s preference for more energy, not less; an appreciation of progress, accomplished not by corrupt crony capitalists but through public-private partnerships; and an enthusiasm for symbolic liberal bugaboos like nuclear power and GMOs. These distinctions being the case, what does ecomodernism share with the Left? Is ecomodernism something entirely new — an “up-winger” movement as opposed to a left-winger one, for instance — or do the political and intellectual traditions of the left still matter for ecomodernism? 

  • Rasmus Karlsson, associate professor, Political Science Department, Umeå University
  • Leigh Phillips, science writer, Pacific Institute for Climate Solutions, University of Victoria
  • Amy Levy, activist, An Ecomodernist Mom
  • Moderator: Alex Trembath, communications director, Breakthrough Institute

The Future of Meat

Livestock are responsible for 14 percent of global greenhouse gas emissions and take up a quarter of the world’s land area. Meanwhile, meat consumption is on the rise in developing countries as people grow wealthier. What agricultural practices and new technologies can help reduce the environmental impacts of meat production? Are cultured meats or genetically-engineered animals part of the solution?

How to Make Nuclear Innovative

While the first GenIII+ nuclear reactors in the world come online this year in China, most energy experts agree that innovation isn’t happening fast enough in nuclear power. The old state-led, top-down nuclear innovation model may still be relevant in some places, such as China and Russia, but is unlikely to represent a plausible route forward in the United States given the present political, economic, and institutional climate. The alternative is to move toward a more distributed, networked, and bottom-up innovation model in the nuclear sector. This panel will discuss what’s required to get us there: a range of new policies and institutional changes beyond the initial, and very important, measures we are currently advocating.

  • Todd Allen, senior visiting fellow, Third Way
  • Caroline Cochran, cofounder and COO, Oklo, Inc.
  • Joe Lassiter, senior fellow, Harvard Business School
  • Sam Brinton, senior policy analyst, Bipartisan Policy Center
  • Jessica Lovering, energy director, Breakthrough Institute

Geoengineering the Planet

Geoengineering sits uncomfortably within ecomodernism. On the one hand, it offers an at least partial technological solution to an environmental problem. On the other hand, it inserts humanity deeply and perhaps irrevocably into natural systems. How should ecomodernism consider geoengineering? Most concretely, is geoengineering best thought of as an emergency “last-ditch” response to global climate change, or as a more regular and less severe tool to be deployed alongside mitigation and adaptation?

  • Oliver Morton, author, The Planet Remade and special briefings editor, The Economist
  • Jane Long, senior fellow, Breakthrough Institute
  • Moderator: Brad Plumer, senior editor, Vox

Thursday Afternoon

Energy Sprawl: Can We Have Low-Footprint Decarbonization?

Renewable energy sources like wind and solar are low-carbon, but they require more land area than fossil fuels. Scaling these sources up to the level needed for serious decarbonization could have major land-use consequences. What trade-offs are we willing to accept between conservation and clean energy? How can we encourage policy-makers to prioritize low-footprint technologies?

  • Rebecca Hernandez, assistant professor of land, air and water resources, University of California, Davis
  • Robert Bryce, senior fellow, Manhattan Institute
  • Janine Blaeloch, founder and director, Western Lands Project
  • Steve Brick, senior fellow, Clean Air Task Force
  • Moderator: Marian Swain, conservation analyst, Breakthrough Institute

Can Conservatives Get Back in the Environmental Game?

Conservatives and liberals compete over the best education agenda — why don’t they compete over the best environmental one? As the movement of reform conservatives builds up steam, some of the country’s leading conservative thinkers will discuss what conservatives should be for — not just against — when it comes to protecting the environment and saving nature.

  • Steve Hayward, professor of public policy, Pepperdine University
  • Reihan Salam, executive editor, National Review
  • Julie Kelly, food policy writer
  • Jeremy Carl, research fellow, Hoover Institution, Stanford University

Ecomodern Education

What does it mean to say that ecomodernist thought should be a part of modern American environmental education? This session will serve as a shared space to brainstorm a new open-minded pedagogical framework. How is ecomodernism situated amongst other contemporary approaches in environmental thought, and how may its points of convergence and divergence be most fruitfully explored in educational settings, given that these contemporary approaches have not yet gained a solid foothold in U.S. environmental education?  What products and processes of engagement will be most effective at leveraging ecomodernism into an open-minded and productive environmental education framework, and what tools and materials do academics need the most?

  • Jenn Bernstein, PhD candidate at Hawaii Pacific University, lecturer at UC Santa Barbara
  • Jim Proctor, professor of environmental studies, Lewis and Clark College
  • Eric Kennedy, PhD candidate at CSPO, Arizona State University

Wilderness in the Anthropocene

Wilderness has long been a cornerstone of American conservation ethics. But in recent years, this ideal has been put into question by the concept of the Anthropocene — a world where no ecosystem escapes human influence. Can “wild” be divorced from “pristine,” and thus still be relevant at a time when baselines are elusive and many ecosystems are novel? Should conservationists always intervene to save endangered populations and species, or should some ecosystems be left entirely to their own devices? If interventions are made, can these places still be considered wild? And does a focus on remote, wild places detract from nature closer to the cities where most people live?

Friday Morning

The New Countryside: Evolving Livelihoods and Landscapes in Latin America

We all know that cities are dynamic and fast growing. But rural areas and livelihoods, particularly in developing countries, are rapidly evolving too – with big and often unexpected consequences for both human development and conservation. This “agrarian transition,” whereby livelihoods are becoming de-linked from farming and the land, is far from a linear process. Migration between rural areas and cities goes both ways; people who remain in the countryside often rely largely on non-farm employment; and remittances from abroad create new livelihood opportunities. In the process, forests are regrowing on abandoned farmland in many regions, while intensive agriculture to feed growing cities is expanding elsewhere. What do these new dynamics mean for conservation and for poverty alleviation? Hear leading scholars discuss how the agrarian transition is playing out in Latin America, and how societies can deal with the new perils and opportunities it presents.

  • Ricardo Grau, professor, Universidad Nacional de Tucumán
  • David Lansing, associate professor of geography, University of Maryland, Baltimore County
  • Susanna Hecht, geographer and professor of urban planning at University of California, Los Angeles
  • Moderator: Linus Blomqvist, director of conservation, Breakthrough Institute

Normalizing Nuclear Risk

Accidents are inevitable — and perhaps so are nuclear meltdowns. Yet the worst effects of Chernobyl and Fukushima were largely psychological. Instead of sheltering in place, panic over radiation in Japan led to an unnecessarily large evacuation that itself caused injury and death. People around Chernobyl suffered from depression and related problems, including alcoholism. What can be done so that people have a more accurate view of radiation and meltdown risk? What preparations must be made today for future accidents, including those in other nations?

  • Dan Kahan, professor of law and professor of psychology, Yale University
  • Dr. Gerry Thomas, professor of molecular pathology, Imperial College London
  • Woody Epstein, research associate, Garrick Institute for the Risk Sciences, UCLA Engineering
  • Moderator: Jessica Lovering, director of energy, Breakthrough Institute

Amazing Grace: Ecomodernism and Religion   

A number of prominent thinkers have observed that environmentalism is the religion of western secular elites; in the striking imagery of French philosopher Pascal Bruckner, environmentalism places the Earth on the cross, dying for humanity’s sins. Pope Francis’ 2015 encyclical, Laudato Si’, absorbs and reflects some of this thinking, focusing not just on modernity’s threat to the planet but also on its threat to the global poor. In this session, scholars from different faith traditions will be in dialogue about ecomodernism’s radically divergent proposition: that modernity and technological innovation can actually be good for people and nature. What does religion have to say about ecomodernism, and how might ecomodernism inform faith?

  • Sally Vance-Trembath, professor of theology, Santa Clara University
  • Sam Brinton, senior policy analyst, Bipartisan Policy Center
  • Iddo Wernick, research associate, Program for the Human Environment

Breakthrough Dialogue 2017 Announced: Democracy in the Anthropocene

Breakthrough Institute is excited to announce that the 2017 Breakthrough Dialogue will take place Wednesday, June 21, through Friday, June 23, at Cavallo Point in Sausalito, California. Breakthrough Dialogue is the research organization’s signature annual event, where its international network of Senior Fellows, Generation Fellows, scholars, policy makers, and allies gather to build an optimistic and pragmatic vision of the future. The theme of this year’s event is “Democracy in the Anthropocene.”

Democracy in the Anthropocene

In a world in which humans have become the dominant ecological force on the planet, good outcomes for people and the environment increasingly depend upon the decisions we collectively make. How we grow food, produce energy, utilize natural resources, and organize human settlements and economic enterprises will largely determine what kind of planet we leave to future generations. Depending upon those many decisions, the future earth could be hotter or cooler; host more or less biodiversity; be more or less urbanized, connected, and cosmopolitan; and be characterized by vast tracts of wild lands where human influences are limited, or by virtually none at all.

If the promise of the Anthropocene is, to paraphrase Stewart Brand’s famous coinage, that “we are as gods,” and might get good at it, the risk is that we are not very good at it and might be getting worse. A “Good Anthropocene” will require foresight, planning, and well-managed institutions. But what happens when the planners and institutions lose their social license? When utopian civil society ideals conflict with practical measures needed to assure better outcomes for people and the environment? When the large-scale and long-term social and economic transformations associated with ecological modernization fail to accommodate the losers in those processes in a just and equitable manner?

If the enormous global ecological challenges that human societies face today profoundly challenge small-is-beautiful, soft energy, and romantic agrarian environmentalism, the checkered history of top-down technocratic modernization challenges its ecomodern alternative. It is easy enough to advocate that everybody live in cities, much harder to achieve that transition in fair and non-coercive fashion. Nuclear energy has mostly been successfully deployed by state fiat. It is less clear that it can succeed in a world that has increasingly liberalized economically and decentralized politically. Global conservation efforts have become expert at mapping biodiversity hotspots but still struggle to reconcile global conservation objectives with local priorities, diverse stakeholders, and development imperatives in poor economies. Rich-world prejudices about food and agricultural systems, meanwhile, frequently undermine agricultural modernization in the poor world.

Where contemporary environmentalism was born of civil society reaction to the unintended consequences of industrialization and modernity, the great environmental accomplishments of modernity—the Green Revolution, the development and deployment of a global nuclear energy fleet, the rewilding and reforestation of vast areas thanks to energy transitions, and rising agricultural productivity—proceeded either out of view or over the objections of civil society environmental discourse. Today, the Green Revolution, nuclear energy, and the transition from biomass to fossil energy are broadly viewed as ecological disasters in many quarters, despite their not insignificant environmental benefits.

This year at the Breakthrough Dialogue, we tackle those questions head-on. Attitudes towards urbanization, nuclear energy, GMOs, and agricultural modernization are beginning to shift, as the magnitude of change needed to reconcile ecological concerns with global development imperatives has begun to come fully into view. Can a Good Anthropocene be achieved in bottom-up, decentralized fashion? Can there be a robust and vocal civil society constituency for ecomodernization? What should we do when not everyone wants to be modern, and what is to be done when political identities and ideological commitments trump facts on the ground? If it turns out, in short, that we’re not very good at being gods, is it possible to get better at it? 

Science and Politics

Ecomodernism is, of course, built on a similar premise — that a wise embrace of technology is essential for social and environmental progress.

With this orientation in mind, here’s what we’ve been reading these past few weeks:

Hello, Anthropocene

David Biello, the science curator for TED, explores the implications of the Anthropocene, noting that its christening reminds us that we “can choose to do better” … Alexa Erickson and Shreya Dasgupta highlight the recent finding, by a research group led by the Wildlife Conservation Society, that environmental impact has decoupled from economic growth … Johan Norberg reminds us of the progress the world continues to undergo — that “as we become richer, we have become cleaner and greener” … Brad Plumer interviews John Fleck on his new book Water Is for Fighting Over, which emphasizes adaptation in the face of scarcity, and optimism over fearmongering …

Wilderness and Wildlife

Lorraine Boissoneault, writing for The Atlantic, discusses new research on the surprising level of species diversity found at high elevations, and what such a finding might mean for the scientific “puzzle of biodiversity” and related conservation efforts … Chelsea Harvey of The Washington Post, Christina Beck of The Christian Science Monitor, and Brad Plumer of Vox each discuss a new study indicating that 10 percent of global wilderness has been lost since the early 1990s; Plumer, for one, points to substitution and land-sparing as detailed in Breakthrough’s “Nature Unbound” for potential solutions … John Vidal underscores the failures of protected areas, which are “not working for people or for wildlife,” displacing indigenous groups and missing conservation targets …


Food and Farming

Cornell professors David Just and Harry Kaiser outline the environmental and societal benefits offered by GMOs, and warn that opposition to GM foods will hinder essential agricultural research and development … Miriam Horn draws from her recent book Rancher, Farmer, Fisherman: Conservation Heroes of the American Heartland to illuminate the many benefits of industrial farming and precision agriculture, pointing out that “high-yield farms, like cities, concentrate” human impact … Pallava Bagla reports for Science on the slow advance of India’s first potential transgenic food crop, GM mustard, which has passed the environment ministry’s initial safety review … Julian Adams, professor of biology at the University of Michigan, tells CNBC that such GM crops will serve to combat both hunger and global warming … Jon Cohen describes a potential first: a meal featuring a CRISPR-modified plant … Will Chu, of Food Navigator, covers the research behind the greater salt tolerance, and thus higher yields, of genetically engineered barley …

Nuclear Discussions

Writing for The Guardian, Debbie Carlson relates the conversations surrounding nuclear’s zero-emission capacity and New York’s new subsidy program, while Fiona Harvey quotes economist Jeffrey Sachs on the need for nuclear … Lauri Virkkunen speaks to the successes of the “Nordic way with nuclear,” including those of reliable operation and waste management, and concludes that “pragmatism is the key” … Stephen Tindale and Suzanna Hinson respond to a recent study correlating pro-nuclear tendencies with emissions reduction failures in the EU; the study’s authors, they find, gloss over essential differences among countries and conflate the promotion of renewables with emissions reductions … Stephen Castle reports for the New York Times on Britain’s Hinkley Point nuclear power plant, which has received governmental go-ahead … Nuclear Energy Institute releases a piece centered on the benefits of nuclear, and particularly advanced nuclear … Reuters reports on a recent nuclear development deal between South Korea and Kenya, which hopes to ramp up to 4,000 megawatts of nuclear power by 2033 … Arthur Motta of Penn State argues for a U.S. energy policy that would credit nuclear plants for the clean, reliable, and stable power they provide …

Postscript: Post-Truth Politics?

The Economist poses the problem of “post-truth politics” and gestures toward the democratic institutions designed to protect against it … James Fallows reviews Mark Thompson’s recent book on rhetoric and politics, an adequate contribution, he finds, to “the continuing debates about ‘bias’ and ‘objectivity,’ the separation of the public into distinct fact universes,” and “the imperiled concept of ‘truth’” … The New Yorker’s Jill Lepore outlines the historical decline of political debate … A statement signed by 177 European civil society organizations and led by the WWF articulates the need for “genuine, democratic and inclusive dialogue on the future of Europe” … Graham Allison and Niall Ferguson of the Harvard Kennedy School propose that the next U.S. presidential administration employ a council of historians, uniquely equipped to handle questions such as “perhaps the biggest one of all: Is the U.S. in decline?” … and Roger Pielke, Jr. counters, observing the need for integrated political decision-making but discounting the efficacy of “philosopher kings and queens.”

A Desert Stand-Off

When it comes to energy and the environment, there is no free lunch. All energy technologies have environmental impacts. Having an honest conversation about the trade-offs associated with California’s renewable energy commitments and its nuclear energy moratorium will be necessary if we hope to meet the state’s climate commitments while minimizing associated impacts on the natural environment.

This week, Secretary of the Interior Sally Jewell announced the final approval of the Desert Renewable Energy Conservation Plan (DRECP). Suffice it to say, the Secretary’s announcement will not assuage the tension between two environmental interests that have found themselves increasingly at odds in the Golden State: clean energy development and land conservation.

The DRECP represents a grudging compromise between the two camps. Nearly 11 million acres of public land are covered by the plan, about 400,000 acres of which are set aside for potential renewable energy development.

California prides itself on being a leader in tackling climate change, with a target of 50% renewable energy by 2030, up from 30% today. But the question of where those renewable energy projects will go has brought developers into conflict with conservationists. Renewable energy development in the Mojave has already disturbed important habitat for endangered species like the desert tortoise, and conservationists object to the idea of industrializing natural landscapes with infrastructure projects.

Neither conservationists nor energy developers seem totally content with the finalized plan, but the wind and solar industries are expressing more indignation. They condemned the plan in a press release on Wednesday, arguing that much of the land is not suitable for energy development and that the limitations could “hamstring” the state’s ability to meet its climate goals. A statement from Nancy Rader, executive director of the California Wind Energy Association, summarized the problem succinctly: “No one is saying that utility-scale renewable energy should go everywhere, but done responsibly and with safeguards, it does have to go somewhere if we are to meet state, national, and global carbon-reduction goals.”

As Ted and I argued in the Chronicle last year, overhauling an electricity system is not a small project, and even with widespread public support for renewable energy, it is clear that new land use demands are creating uncomfortable trade-offs. Land-neutral options like rooftop solar offer a way around this conflict, as does siting projects on previously developed and degraded land. But there are technical and economic limits to both of these, which is why industry associations are pushing for utility-scale projects in high-resource areas in the desert. We also pointed out that California is only making its job harder by shutting down its last nuclear power plant, Diablo Canyon, which provides 22% of the state’s zero-carbon electricity.

Two of Breakthrough’s core goals are sparing more land for nature and decarbonizing the energy sector. Although the DRECP is an attempt at compromise, it also reveals how sharp the trade-off between conservation and decarbonization can be. After all, California has already run up against conservation obstacles to its renewable energy goals with only a handful of large solar plants. Contrast that with one proposed plan to build thousands of solar farms in California’s deserts, which would require more than double the amount of land set aside in the DRECP.
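
As a quick check on those acreages, here is a back-of-envelope sketch in Python, using only the numbers quoted above:

    # Share of DRECP land open to renewable development,
    # from the acreages quoted above.
    total_acres = 11_000_000        # public land covered by the DRECP
    development_acres = 400_000     # set aside for renewable energy
    print(f"Share open to development: {development_acres / total_acres:.1%}")  # ~3.6%

    # The build-out scenario mentioned above would need more than
    # double the DRECP set-aside.
    print(f"Implied need: more than {2 * development_acres:,} acres")  # >800,000 acres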

Our ongoing research is focused on how to reconcile decarbonization and land use; stay tuned for a forthcoming paper on this subject.

Twenty-First Century Nuclear Innovation

Until very recently, there wasn’t agreement on the end goal for tackling climate change. Different camps have used different yardsticks for measuring: renewable growth, emission caps, temperature limits, you name it … But these frameworks haven’t been robust enough to bring everyone together and move a solution forward.

And just within the past six months, after decades of negotiation and deliberation, we know what the framework must be. It must be deep decarbonization. It’s technology agnostic. And the timing is urgent.

As a venture capitalist I’m always looking for challenges that are ripe for technology-driven disruption. Climate change—as much as it’s a scientific and political challenge—is also a technology and business challenge. It is a particular business challenge because most people think it will cost us money. In fact, decarbonizing does not cost more money, if you look at the problem correctly.

You all are going to play a big role in solving climate change—and, in fact, in reversing it in due course. This situation presents an enormous opportunity for all of us to work together and deploy real solutions to save the world.

Roadmap of Today’s Talk

I am going to talk about three things today:

(1) Climate change and the challenge posed by the imminent retirement of the existing U.S. nuclear fleet. There is an urgent need for clean energy solutions.  

(2) I’m going to talk about my lessons as a venture capitalist. How applying Silicon Valley methods to nuclear innovation is a new way forward. NASA faced a similar inflection point when the government turned to the private sector for smart, fast, cost-effective innovation. And it succeeded. A unique private/public partnership was formed and the private space industry was born.

(3) I’m going to talk about the innovation that is taking place right now. The need for leadership in supporting the new way forward. For nuclear to play a role in this climate crisis, as a country we must do all we can to support the budding new nuclear industry.  

Climate Change

It’s clear that we need MAJOR advances in every form of clean energy technology RIGHT NOW, to meet our climate commitments here in the U.S. and globally. Deep decarbonization is the watchword. It requires an “all of the above” mobilization.

Time is now our enemy in addressing climate change. Most of the experts and their models suggest that unless we truly make progress in the next 10 to 15 years, we will be unable to keep our internationally agreed goal of limiting warming to two degrees Celsius.

And in light of climate change concerns, you all know that we’re at risk of losing nearly half of our nuclear fleet in the next 20 years.  

Just recently we lost Fort Calhoun and now Diablo Canyon, making a total of 8 plants in recent times—approximately 10 gigawatts of carbon-free electricity leaving the U.S. grid.

Losing half the fleet would be a HUGE blow to U.S. carbon-free electricity production—we’re talking upwards of 30 percent of the total going offline!
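
For scale, here is a rough back-of-envelope sketch in Python converting those capacities into annual generation. It is my illustration, not a figure from the talk; the 90% capacity factor, the ~100 GW fleet size, and the ~1,300 TWh/year total for U.S. carbon-free generation are assumptions:

    # Back-of-envelope conversion of the retirement figures above.
    # Assumptions (not from the talk): 90% capacity factor, ~100 GW
    # total fleet, ~1,300 TWh/yr of U.S. carbon-free generation.
    hours_per_year = 8760
    capacity_factor = 0.90

    lost_twh = 10 * capacity_factor * hours_per_year / 1000        # 10 GW retiring
    print(f"Recent retirements: ~{lost_twh:.0f} TWh/yr")           # ~79 TWh/yr

    half_fleet_twh = 50 * capacity_factor * hours_per_year / 1000  # half of ~100 GW
    share = half_fleet_twh / 1300
    print(f"Half-fleet loss: ~{share:.0%} of carbon-free output")  # ~30%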

Not losing more nuclear plants is essential to avoid backsliding on our climate commitments right out of the gate.

Therefore, we must be pushing for new, advanced reactors with competitive economics, so we and the world have options ready to go to replace these plants when the current fleet does eventually retire, and to fill global market needs for electricity with U.S. technologies. This is a perfect economic opportunity for the United States, with a huge humanitarian impact of bringing 2 billion people out of energy poverty, and stopping climate change in its tracks.

This cannot be done in a vacuum. No one company, one country, one industry can pull this off. It will take a partnership of the federal government and the private sector to do this. We need new creative strategies for how federal programs interact and support the budding private nuclear sector in meeting climate goals.

I believe this can be achieved with nuclear innovation and venture capital.

But before I explain how we can do this, let me provide some background on my experience as a venture capitalist and how I apply a lifetime of venture wisdom to new nuclear innovation.  

I am a venture capitalist, before that a start-up entrepreneur, and before that a nuclear engineer.  I have invested in over 70 startup companies over the last 28 years, all at their earliest stages, often just an idea.

I know it’s rather bold, but I claim that the three greatest American inventions in the latter half of the 20th century are:

The startup ecosystem and company.

Professional venture capital.

And nuclear power. Well, I suppose the transistor is probably the greatest invention, but it doesn’t work without electricity!

Sometimes I pinch myself when I see this pattern, but I believe that all three of these enormous inventions have come together and are ready to attack climate change.

Why do I say this?

I have witnessed and assisted ambitious, dedicated entrepreneurs making things happen on time scales that are short, with dollars that are few—and yet with an enormous impact that touches each of our lives. This is very exciting. These entrepreneurs were backed by venture capital. Venture capitalists are people who believe in change and are willing to risk all their money to enable great things to come to market.

I have personally participated in and observed the creation of whole industries, and the complete transformation of others, by venture-capital-backed companies.

As to climate change, nuclear power is simply the fastest path to large-scale, zero-carbon production of power for everyone. I say this not to exclude solar, wind, and other zero-carbon sources, but in terms of scale and impact—nuclear can’t be beat.

I cannot think of a more powerful solution to stop climate change in its tracks than these three capabilities combining forces.

Want proof? Results do matter, so let me give you a few stats.

Venture Capital Results

If you were to sum up all the revenue from all the venture-backed companies of the last 60 years, you’d find that their combined output represents 20 percent of U.S. GDP, or $3.4 trillion.

You know these companies. They are Uber, Hewlett Packard, Apple, Intel, Cisco, Starbucks, Federal Express, Amazon, Netflix, Google, Facebook, Genentech, Amgen, Genetics Institute, and many, many, many more. The whole biotech industry was created by venture-capital-backed entrepreneurs.

Further, these same companies account for 11 percent of all the non-government jobs in the United States. 11 percent is about 12 million jobs.

While these statistics are incredibly impressive, the one that really stops people is this: all of these results happened with an investment of just 0.5 percent of the private capital available in the United States—ONE HALF OF ONE PERCENT. That’s a sliver of the capital in our economy.
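
Those two figures also imply sensible totals. A quick consistency check in Python (the implied totals are derived from the quoted figures, not taken from any outside source):

    # Consistency check on the venture capital statistics above.
    implied_gdp = 3.4 / 0.20    # $3.4T of revenue at 20% of GDP
    print(f"Implied U.S. GDP: ~${implied_gdp:.0f} trillion")            # ~$17 trillion

    implied_jobs = 12 / 0.11    # 12M jobs at 11% of non-government jobs
    print(f"Implied non-government jobs: ~{implied_jobs:.0f} million")  # ~109 million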

These results are a true testament to the commitment of the innovator and the power of innovation in the American economy. It was true sixty years ago, and it’s true today.

The Silicon Valley Innovation Model

The Silicon Valley was born just after World War II. The idea of combining innovation with funding to develop new capabilities such as microwaves, semiconductors, and ultimately microcomputers was well on its way in the 1950s in Northern California, and in Boston, too. In late 1947, Bardeen, Brattain, and Shockley invented the transistor at Bell Labs. In 1956, Shockley moved back to Palo Alto to take care of his ailing mother and opened Shockley Labs in Mountain View. Soon thereafter, the semiconductor industry was born, with Fairchild in 1957 and Intel in 1968. This put the silicon in the Santa Clara Valley. In the 1970s came personal computing, with Apple in 1977 and many, many others. The term “Silicon Valley” itself was popularized by the journalist Don Hoefler in 1971. The rest, as they say, is history.

Over the decades the Silicon Valley has perfected the innovation cycle of entrepreneurs and venture capital. It’s a complex system of people, innovation, education, inspiration, and forgiveness. However, today, THE VALLEY runs on two simple yet very powerful principles.

These are:

(1) Many shots on goal. And …

(2) Fast, fast, fast.

Let me explain.

“Many shots on goal” is a term borrowed from hockey’s Wayne Gretzky. The thought is: if you don’t shoot, you don’t score.

Therefore, the more you shoot, the more you increase your chances of scoring.

Makes perfect sense. Right?

In the Valley, every idea, good or bad, is copied 20 times over, sometimes within a year or so.

Twenty copies of the same idea is a lot of shots on goal and a lot of competition emerges.

This is actually a very healthy thing. It allows THE MARKET to see choices and make decisions about what matters.

Also, with 20 companies working on the same problem at the same time, you get a lot of competitive juices flowing, lots of unanticipated sharing, and lots of innovation. Facebook wasn’t the first social media company. Intel wasn’t the first semiconductor company. And Apple wasn’t the first personal computer company. They just happened to build it best on the shoulders of others, and the market liked it.

The second principle, moving fast, cannot be overstated.

In a market where ideas are copied quickly, the best competitive advantage is speed. Invention and patents are important in rich technical markets, but they are not everything.

And being first means you have to learn fast and innovate even more quickly.

Since everyone knows this in Silicon Valley, it is completely normal to be asked to work long hours, work smartly, and to borrow ideas from each other, thereby reducing the number of mistakes.

It’s about solving very hard problems as fast as possible, with as few resources as necessary and as little time as practical.

This is today’s Silicon Valley: honed over decades, hailed as a great economic engine, a place where new things happen every day. But not in every industry.

With my venture experience in hand, and the rise of climate change as a global issue, I and others decided to begin a campaign to change how we do nuclear in the United States. Our beginning was making the documentary Pandora’s Promise in 2012. A movie—who’s seen it? Thank you. About 2 million people have seen it so far.

Then what? It became very clear to us all that the broader nuclear energy community—led by industry and supported by the federal government—needs to change. It needs to do what the Silicon Valley has done so well.

First, it needed to start taking many shots on goal.

For 38 years since Three Mile Island, not many new reactors have been designed or even tested. The reactors that were built were very much standard and incremental to the existing fleet. And as such, they became more and more expensive over time, not cheaper. In the world of nuclear power, over decades the innovation cycle morphed to one of government top-down decision-making and no real risk-taking. Top-down decision-making is not an innovative system. And today’s reactor plants are just too expensive. Too expensive.

But folks, the landscape is shifting and this shift is being led by entrepreneurs—entrepreneurs who are passionate about solving the climate-change problem with nuclear energy. Tapping this passion is our country’s opportunity.

People say change is hard. In my world change is opportunity. And right now it’s staring new nuclear right in the face. But we need public support that allows innovators to capitalize on that opportunity. Currently, I don’t believe we have that support needed for complete success, but it is improving. More can and must be done. Oddly, and fortunately, we have been here before in other industries.

Other federal agencies and programs have evolved to capitalize on opportunities presented by changing conditions—and they’ve done this by enabling private sector innovators to blossom.

The NASA Example

NASA faced a similar situation in the early 2000s.

The space shuttles that moved people and cargo to the International Space Station were set to retire in 2011.

Not to mention—the shuttle program was more expensive than NASA thought, the shuttles were flying less than planned, and there were two BIG accidents that killed their crews—Challenger and Columbia. Sound familiar?

So they were in a similar position on a lot of levels. Actually, given the stellar safety record of the U.S. nuclear industry, NASA’s position was a lot more difficult.

Then, in 2004, the White House announced a major paradigm shift in NASA’s mission.

President Bush basically said to NASA and to industry, we need you to support the development of a vibrant commercial space flight sector—and we’ll work with you to build it from the ground up.

To stimulate early interest, NASA funded many projects at relatively low amounts of capital. This sent a very clear message to the private sector about where NASA thought to invest its time in innovation—and, frankly, about what was possible. As a result, a fledgling private space industry was born. NASA was suddenly enabling many shots on goal.

As the companies demonstrated success—defined by clear technology readiness levels—and earned additional funding, the private capital market began to sort out the winners by providing more private capital to its chosen ones.

The government’s role shifted from one of top-down technology design and selection, to that of a customer. NASA began letting contracts to these innovative companies, thereby also participating in market selection of the best solutions. NASA was moving decisively … and FAST.

In parallel to this staged funding, NASA laid the groundwork for a modernized regulatory system—including a certification program, engineering standards, testing, and analyses for these companies’ products.

By enabling the private sector, NASA leveraged its limited resources but considerable experience, revitalized its programs and got the spaceships it needed—and in doing so it created a thriving world-class private space industry.  

Ten years down the road, NASA has succeeded. America has succeeded. Again.

So, now it is possible to design, build, and launch a reusable spacecraft in four years—and to deliver cargo to the ISS, all from the ground up, in seven. While most people know Boeing and SpaceX, few know that today at the Mojave Air and Space Port in southern California, there are over 25 companies, backed by over $3 billion in private capital, working on space flight, including space tourism.

New Nuclear Day Is Here

I’d argue the new nuclear industry is further along than NASA was when it began its transition. But we have to push hard and work fast.

In 2014 our little group started talking to the White House, the DOE, senators, congressmen, and the NRC. Any good argument needs evidence. So we set out to discover who was doing nuclear. I was personally invested in two, and knew of a few more. But just how many were there? We discovered that in North America, there are over 50 organizations working on new nuclear—this data was amazingly compelling. And folks, these companies are already backed by nearly $1.6 billion in private capital!

The DOE was so skeptical of this claim that in early 2015 they asked me to assemble some of the funders, many of them billionaires, to come to Washington to talk about it. After I stopped laughing, I suggested a conference call. And in March 2015, we had six billionaires and venture capitalists talking to the head of DOE-NE about why they are doing what they are doing. That call changed everything.

Disruption and Innovation: Recommendations

So HOW do we achieve the same type of paradigm shift NASA’s been able to make? I think we need four things to happen. First, leadership from the very top. Second, private-public partnerships with the incredible DOE national labs. Third, a modernized and more flexible regulatory regime. And fourth, a staged funding mechanism at the federal level that can help drive these technologies forward.


Leadership from the Top

It will take change at the top levels of the administration—and it’s got to be embraced all the way down to each of the facilities and every person working on nuclear energy on a daily basis.

And this starts by making nuclear energy a national priority to meet our climate and energy needs—and explicitly setting the goal for deployment and commercialization of advanced reactors.

The beginning of the new NASA program was marked with a speech by then president George W. Bush, where he addressed the nation on the importance of space research.

When the president stands up and says, look, we’re doing this—that puts force and urgency on the agenda within the agencies and with the public.

We asked the White House to make a speech … but no speech resulted. We did, however, get unequivocal support at COP 21, with a strong U.S. presence and leadership.

That said, the most recent DOE draft vision and strategy report for nuclear energy is actually a fantastic document—it is the best one I’ve read in a long time.

It sets a goal of licensing two non-light-water reactors by 2030.

This is the right idea, but let me be clear: It must be and it can be done even faster if we are to have any chance against climate change—and a clear White House endorsement sure would help light that fire.

The math suggests that we’re going to need to be already deploying a range of clean technologies including new nuclear technologies starting in 2030 if we want to stay on target to meet our climate goals by 2050.

That means our first deployments have to come earlier—we’re talking 2025, fully commercialized and ready to go. Those are bold goals indeed, and they need to be stated plainly.

China is planning to deploy advanced reactors in 2018, so clearly this is less a technology barrier than a decision to do it.

The NASA experience shows THAT THIS IS POSSIBLE—but getting there will require a new level of leadership and commitment from the next administration.

Private-Public Partnerships with the National Labs

We need to start structuring our federal R&D programs so that they are goal-oriented—aimed at commercializing technologies. For the labs, this means opening their doors, figuring out what the private innovators need, and helping them get it.

The DOE has the best labs in the world. Historically and largely today, the labs set their research agendas and get their funding. If that aligns with industry needs, great. If not, oh well. Don’t get me wrong—research is essential. But it needs more focus.

What I’m advocating for is the opposite—a bottom-up approach where labs listen to industry first, and then the two groups coordinate and plan how lab resources can most effectively meet the needs of innovating industry. An aligned partnership driven by entrepreneurs. I’m not suggesting we stop our science research, but surely opening the labs and their vast talent to private sector engagement will change the game for the United States.

Regular assessment of private sector needs should take place to identify common areas where a solution will benefit multiple companies or applications, and to be fair, this is already starting to happen—the GAIN folks are doing workshops that are very much in this vein. I applaud GAIN. Thank you, Mr. Secretary.

So we’re on the right track—but really need the DOE to adopt this as an operating principle, not just a one-off. I’m very hopeful.

Modernizing the Regulatory Framework

Additionally, and as you are all too well aware, nuclear developers need a well-defined, affordable, and predictable licensing process. By law, the NRC is the agency to provide this.

The NRC is often hailed as the gold standard for regulation of nuclear power in the world. Producing an incredible safety record is definitely worth bragging about. But remember, gold is heavy, expensive, and very difficult to move. We need to rethink how we do this and move the gold with leverage while still keeping the GOLD.

There are some pretty logical ways to make NRC more efficient and responsive to innovation—WITHOUT compromising safety.

The Clean Air Task Force and Nuclear Innovation Alliance have made a comprehensive set of recommendations to develop a staged licensing process and presented these to the NRC and key members of Congress who have oversight. The game is afoot. Now we need funding.

The goal of the new framework is to provide developers with clear, early feedback on a predictable schedule—something that will make it easier to attract and maintain private investment throughout a company’s life.

By the way, the United States has done this before. Both the FDA and the FAA have regulatory processes that work very well in protecting the public and ensuring safety while providing the enormous benefits of pharmaceuticals and flight, respectively. All this with predictable, understandable, milestone-based regulatory processes that investors and entrepreneurs understand.

The NRC must respond to this challenge. Chairman Burns is responding, but Congress is the ultimate leverage with the NRC—Congress controls spending.

We have seen positive indications with bipartisan bills supporting modernization passed out of Congressional committees, and we anticipate these will be passed into law early in the next Congress, or even perhaps this fall, according to my latest sources. Your congressman Bill Flores supports these bills.  

The next Congress and administration need to take up these recommendations on January 21 to ensure that we continue to make progress.

Again, however, the question remains—can the NRC and Congress move fast enough and make sufficient change to make a difference in time?

That in part will be up to all of us. It is our government, after all.

Secure Funding and a Staged Advancement and Financing Structure

DOE must have a well-defined process for advancing technologies through the innovation process. The steps and rules need to be clear as day for all involved. This, again, can be based on the successful NASA model.

Start by supporting all of the advanced nuclear companies, and continue to support those who reach key milestones until you have products on the market.

Under GAIN, the DOE just awarded $82 million to a wide variety of startup nuclear companies to assist them in their work.

New leadership at DOE is leaning forward and making this happen—people like Secretary Moniz and John Kotek at DOE-NE and Mark Peters at INL, to name only a few, are really helping to move this forward. Again, thank you, Mr. Secretary, for pushing this agenda and putting our money to work.

New Ways Forward—Underway!

The future of the human race is at stake here. America is again poised to lead.

We need to move rapidly to deploy zero CO2 emission sources for electricity in order to avoid the worst impacts of climate change. And the energy poverty of 2 billion people is simply unacceptable.

Big, audacious technological innovation can happen in a short time. It’s not easy but it can happen.

America has proven an ability to do amazing things when she has to. We have to.

Three years ago, when a group of us producers of Pandora’s Promise met at my home in Portola Valley to discuss what’s possible, no one thought we’d be able to change our government like we have.

In our quest, when we discovered these 50 ambitious nuclear startups backed by private capital in our country alone, we knew that there was a way forward. The entrepreneurs were ahead of all of us—they always are. Our job was to loop in the U.S. government with all its might and capability.

Since that time, we’ve moved from new nuclear being an unheard-of industry of one-off companies working in isolation, to the DOE rolling out a vision for this burgeoning sector and the NRC soliciting new licensing ideas. Further, Congress—both Democrats and Republicans—got it, and got it fast, passing legislation. This is a non-partisan issue, folks.

The promise of economically viable, new nuclear technologies is breathing new life into the nuclear sector as a whole, and has helped draw attention to the vital importance of nuclear (existing and new) in the fight against climate change.

Who could have imagined all this change just a few years ago? Believe me, not my wife, and not most of my friends!

This is an idea whose time is now. New nuclear is creating momentum and opportunity for everyone. It’s up to all of us to keep this momentum moving forward—to make sure the next administration and Congress understand the urgency and promise of nuclear innovation.

So, in closing I believe … no, I know, that with American entrepreneurship, venture capital, a new DOE partnership with the private sector, and a modern regulatory regime, we can make this happen. We have to make this happen. It’s our only planet. And it’s your planet.

We have a special opportunity to develop, scale, and deploy new nuclear technology needed to decarbonize our electricity generation and fight climate change. This opportunity has a shelf life. Time is running out.

There is a lot everyone in this room can do to advocate for more effective policies, to communicate the importance of nuclear innovation as a climate solution, and to represent this industry proudly.  

My advice for you here at Texas A&M: Finish your degree. Get your graduate degree. Use the power of social networking to connect with your friends and colleagues of similar goals and ambitions. There is a fantastic future ahead for all of you.

For more information, Third Way—a think tank I advise and have supported in this effort—has set up a webpage providing resources on how everyone can get more involved. But there are many resources out there.

Or you can email me or call me. I’m happy to provide you an assignment.

Good luck. And gig ’em, Aggies.

Between a Rock and a Hard Place

The requirements are relatively straightforward: the Department of Energy (DOE) needs to transfer existing spent fuel from commercial power plant sites into interim storage facilities as soon as possible, ultimately moving spent fuel to a permanent geologic repository (or repositories) or reprocessing sites. Most policy makers, local communities, and the nuclear industry would agree. Even DOE agrees. That’s why they’ve already started moving forward with the recommendations of the President’s Blue Ribbon Commission. In 2015, DOE launched a consent-based siting initiative to start the process for selecting a future site or sites for waste storage. Throughout 2016 they held public meetings around the country.

This recommendation, in other words, is nothing new, and much of the rest of the report remains ambiguous. The title of the report tells us we’ll be paying more for greater risk, a claim repeated throughout, but the baseline or comparative scenario isn’t specified. Will communities be paying more than they are now, more than they should be, or more than they would be if DOE took control of the waste? It’s not clear, and none of these scenarios are quantified or compared.

The same goes for risk. How does risk increase as plants close? Most of these plants are already storing spent fuel in dry casks on-site, and will continue to do so while they decommission, a process that takes decades. Is this riskier than transporting spent fuel around the country? Rather than evaluate the relative risks of the alternatives, the report simply stokes fears that these communities will find themselves at greater risk in the future. While local communities and utilities should be aware of the impacts of long-term on-site dry spent fuel storage, other impacts of premature plant closures are an order of magnitude greater and more immediate: the loss of reliable, clean, and affordable power, of tax revenue (often a large proportion of city budgets), and of hundreds to thousands of jobs.

Ultimately, managing spent fuel is a long-term problem, and ensuring that the process is democratic and the solutions equitable is going to take even longer. As the report states, “U.S. and international experts agree that permanent geologic (i.e. underground) disposal of highly radioactive spent nuclear fuel is the safest and most secure way to manage this waste,” but the process preceding disposal—and for siting and building a geologic repository—remains up for debate. DOE, for instance, conducted a study from 2011 to 2014 that evaluated over 4,000 different fuel cycles across a range of risk and benefit metrics. In the end, their top three choices all relied on continuous fuel recycling with fast reactors, something very different from what we’re doing today.

Yet another question: How should local governments and utilities use the information provided by the report to improve their decision-making, when ultimately the decision is by nature a federal concern? Utilities could lobby the DOE to move faster on a temporary solution for interim waste storage, but DOE is already considering this option, and is simply hindered by the slow-moving democratic site-selection process. Unless utilities move to set up a private waste storage facility (which would also require NRC licensing), it will be difficult to expedite such a complex process that involves a hodge-podge of municipal, state, and federal policies and dozens of different utilities, as well as a muddle of historical precedent.

Finally, the report mentions several times that fuel reprocessing was banned in 1977, but not that the ban was lifted in 1981. Reprocessing could serve as a short-term solution for managing spent fuel, as the first step would involve moving fuel to a central facility or regional facilities. In the longer term, there has been a resurgence of interest in advanced reactor designs that can burn waste as fuel, produce radically less waste, and reduce the lifetime of remaining wastes. We summarize the potential of these reactors in our report, “How to Make Nuclear Cheap,” and there’s been progress in Congress on this front as well.

Overall, the report would have benefited from a more thorough assessment of these and other proposed solutions to the multifaceted problem of nuclear waste management.

A Climate Movement at War

The invocation of war—in situations other than where people in uniforms are firing guns at each other—is the last political stop before despair. In declaring war on crime (Hoover 1930s), cancer and drugs (Nixon 1970s), and terror (Bush 2001), politicians have long demonstrated their frustration in the face of intractable problems that seem to defy all efforts to resolve them. So it was only a matter of time before someone declared war on climate change. “World War III is well and truly underway. And we are losing,” Bill McKibben wrote this month in an article for The New Republic titled “A World at War.”

“This is no metaphor,” he insisted, “carbon and methane are seizing physical territory, sowing havoc and panic, racking up casualties, and even destabilizing governments.” He goes on to attribute the Syrian civil war and the rise of Boko Haram to “record-setting droughts,” themselves symptoms of climate change. “It’s a world war aimed at us all,” he argues.  The “powerful and inexorable enemy” in this war? Nothing less than “the laws of physics”.

The idea of the climate being at war with humanity is, in fact, just a metaphor. But is it a useful one? Does it move climate action any closer to the goal of limiting warming to 1.5-2°C or help us better cope with climate impacts?

Sadly, it does not.

Managing to be incoherent, contradictory, and counterproductive all at once, McKibben claims that the climate itself has launched an undeclared war against us, just as Japan did at Pearl Harbour. The “laws of physics” are the enemy! But soon enough, he is inveighing not only against this identified enemy, but also fifth columnists, collaborators, and profiteers. It seems that these include not only the standard cast of climate villains, such as Exxon, but also anyone, conservative or progressive, who advocates any approach to dealing with climate change that deviates from his particular all-out-war response.

The laws of physics, no longer the enemy, now justify his preferred political response to climate change as the only acceptable, indeed the only available, route to salvation. And here is the rub. Wartime is a state of emergency in which democratic and other civil rights, such as protection from government takings of assets, can be suspended.

This is, of course, exactly what McKibben’s war metaphor is designed to justify. And it should set off alarm bells for anyone committed to democratic governance, as states of emergency are not merely discovered, but are declared.  Whenever anyone uses nature or science to justify such a declaration, we are wise to cast a sceptical eye.

McKibben is quick to demonize fossil fuel companies. But he has little to say about the 1.5 billion people who currently have no access to reliable energy, nor about land-use or agriculture or the many other pressures associated with humanity’s complex intersecting, often conflicting, concerns at the root of climate change. These are what make it such an intractable “wicked problem.”

A true war footing would also presumably entail sacrifices that went beyond sticking it to fossil fuel companies and creating millions of new jobs manufacturing solar panels. But one would be hard pressed to find mention of any sacrifice that McKibben’s core audiences might find unpleasant or distasteful.

It is true, of course, that war can bring unprecedented unity of purpose to otherwise divided nation states. But McKibben’s statist programme—calling for direct government militarization of the economy—couldn’t be better calibrated to stoke exactly the fear that has made climate change such a divisive issue.

In describing the program, McKibben approvingly quotes historian Mark Wilson on the US government’s control of manufacturing for the Second World War effort: “The feds acted aggressively—they would cancel contracts as war needs changed, tossing factories full of people abruptly out of work. If firms refused to take direction, FDR ordered many of them seized.” This is precisely the conservative’s worst nightmare. In the politically polarized context of the United States, any attempt to push through such a program would only harden resistance from the right.

McKibben’s call for climate solidarity also apparently ends at the water’s edge, morphing curiously into a call for trade war with other potential producers of renewable technologies. A World War Two-style mobilization of American manufacturing might, McKibben argues, position the United States as “the world’s dominant power in clean energy, just as our mobilization in World War II ensured our economic might for two generations. If we don’t get there first, others will” (emphasis added).

But if there is a real danger that others will develop “cheap foreign-made panels” that would threaten US market dominance, then one has to ask what sense can be made of McKibben’s starting premise that the threat from climate is so great as to require America to go onto a wartime footing to produce them in the first place? Why not just import them from China?

And while it is true, as McKibben observes, that after Pearl Harbor, Americans were “willing to do hard things: pay more in taxes, buy billions upon billions in war bonds, endure the shortages and disruptions that came when the country’s entire economy converted to wartime production”, the militarization of the US economy during WWII essentially lasted for only four or five years. It was understood to be a temporary situation to deal with a tangible and immediate threat to the country.

McKibben doesn’t tell us how long he thinks the United States should remain on such a war footing to combat climate change, but to make a global impact it would have to be a matter of decades rather than years. Such a permanent state of emergency sounds more redolent of North Korea than of the USA. It seems both implausible and undesirable that Americans would submit to such an extended “regime of exception,” one that would almost certainly erode their democratic values and institutions.

And while I entirely agree with McKibben that, “Even if every nation in the world complies with the Paris Agreement, the world will heat up by as much as 3.5 degrees Celsius by 2100—not the 1.5 to 2 degrees promised in the pact’s preamble,” he then proceeds to advocate the kind of old-style top-down approach that has demonstrably paralyzed international efforts to address climate change for decades.

I too share McKibben’s frustration with the lack of imagination and resolve that national governments have shown in developing climate-positive policies. But he ignores the fact that while the achievements of national governments have mostly been more rhetorical than real, progress is being made around the world at the level of city and provincial administrations and community action. What was most important about the Paris Agreement was not another round of temperature targets that nobody knows how to meet, but rather that it opened up international policymaking to a much wider range of policy actors at different levels (a so-called polycentric turn).

McKibben’s call to arms takes no account of the contributions and lessons of such initiatives and instead remains firmly entrenched in the head-on approach to climate change that assumes national governments can effectively implement internationally agreed emission reductions through domestic regulation. This is essentially an outdated, end-of-the-pipe pollution control approach to climate change, substituting virtual national “pipes” for real ones, and it has positioned climate change as a discourse of constraint rather than a discourse of opportunity. But make no mistake: the complex climate discourse won’t be un-muddled by dressing it up in confused metaphors about the Second World War.

None of this is likely news to McKibben. He is widely read on all matters climate related and it is hard to read the various off-handed caveats and qualifications woven into the text and not think that McKibben is well aware of all of these critiques. But having set about the business of raising an army, such objections are no doubt easy enough to dismiss as mere trifles.

That won’t change the fact that climate change is a “wicked problem,” one that we can cope with more or less well, but cannot definitively solve, much less wage war upon. While America’s metaphorical wars of recent decades have been used to great political advantage by those who have deployed them, they haven’t made much of a dent in crime, drug use, or terrorism. In this, McKibben’s agenda, based on the flawed metaphor of war, is ultimately antithetical to America’s traditions. These are the traditions, principally pragmatism and pluralism, that have always defined the country in its finest moments and that will be necessary if we are to make real progress toward addressing climate change.

Do High Agricultural Yields Spare Land for Conservation?

The appeal of the paper, by Andrew Kniss and colleagues, was best summarized in a tweet by Tamar Haspel.

Similar to other studies, Kniss and his colleagues find that organic yields are on average about 20% lower than conventional yields in the United States. But the authors also strongly argue that the sustainability of agriculture shouldn’t be judged on yields alone.

As they write, raising yields might not actually reduce the total area of cropland, and thus might not spare forests and other natural habitats from being converted to farmland. What’s more, they argue, lower yields might be a price worth paying for the other environmental benefits that organic farming brings.

I’ll address each of these in turn.

According to Kniss and others,

“...yield gains have not been clearly linked with increased land set aside for conservation at the global or regional scale, thus the yield/conservation tradeoff is likely a false dichotomy not representative of the socioecological complexity of agricultural systems, with management decisions tied to markets and policy.”

At the regional and national scale, the evidence that higher yields equal less farmland – often referred to as “land sparing” – is indeed a mixed bag. In some countries, yields have increased over time while per-capita cropland has gone down; in other countries, it has gone up. This finding led Robert Ewers and others to conclude that “land-sparing is a weak process that occurs under a limited set of circumstances.”

This should be interpreted with some caution, since the study in question didn’t compare the outcomes with a counterfactual–in other words, it didn’t ask what would have happened in the absence of yield improvements. Perhaps other factors stimulating cropland expansion outweighed the effect of the higher yields.

Still, the results are not surprising. When farmers in a region become more efficient and their yields go up, they can often sell their goods at a lower price. This increases demand for their crops, giving the farmers an opportunity to increase their incomes by taking new land into production. In the end, a country might see farmland grow rather than decline as average yields go up. The only way to counteract this effect would be to halt farmland expansion by other means, such as protected areas.

On a global level, however, the dynamics are different. While global food demand increases when yields go up and prices come down–as was the case for most of the second half of the 20th century–the rise in demand is often not large enough to offset the higher yields. For instance, if yields go up by 10%, food demand might only increase by 5%. As a result, cropland area will be lower than it would have been without the yield improvement.
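
A quick sketch makes the arithmetic concrete. The 10% yield gain and 5% demand response are the illustrative figures from the paragraph above; everything else here is an assumption for the sake of the example.

```python
# Illustrative sketch of global land sparing under a demand rebound.
# The 10%/5% figures come from the example above; the index is arbitrary.

yield_gain = 0.10      # yields rise by 10%
demand_rebound = 0.05  # cheaper food lifts demand, but only by 5%

baseline_cropland = 1.0  # index cropland to 1.0 before the yield gain
new_cropland = baseline_cropland * (1 + demand_rebound) / (1 + yield_gain)

print(f"Cropland relative to baseline: {new_cropland:.3f}")  # ~0.955
# Demand rises more slowly than yields, so cropland ends up about 4.5%
# smaller than it would otherwise have been: land spared, not expanded.
```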

In this case, any rise in global average yields is very likely to be land sparing, even if we account for economic rebound effects. And indeed, Derek Byerlee and his colleagues observed as much in their 2014 paper: “At a global level, technology-driven intensification is strongly land saving although deforestation in specific regions is likely to continue to occur.” Another paper by Thomas Hertel and others concluded that the Green Revolution reduced cropland expansion by half compared to a counterfactual scenario, sparing an area larger than Western Europe. 

So it appears that, at least at the global level, the yield/conservation trade-off is not a false dichotomy. How then might we answer the question posed by Kniss and his colleagues: how much yield is enough? Might organic yields, though lower, still be high enough to feed the world and spare some land for nature?

One way to look at it is to ask what it would take to keep food prices and cropland area from rising. Roughly speaking, to do so, yields would have to rise in tandem with food demand. This presents a massive challenge over the next few decades. Crop demand is forecast to grow by somewhere between 40% and 70% to 2050. Only if yields keep up–something that requires yields to grow at least as fast in the next several decades as they did during the Green Revolution–do we have a chance at avoiding further net deforestation from cropland expansion. In this regard, a yield loss of 20% or more in organic systems, if widely implemented, is the last thing we need. It would come at a high cost, from the perspective of conservation as well as food security.
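
To see what a 20% yield gap means in land terms, run the same arithmetic in reverse, holding output fixed. This is a deliberate simplification that ignores price and demand feedbacks:

```python
# What a ~20% organic yield gap implies for land at constant output.
# A deliberate simplification: no price or demand feedbacks.

conventional_yield = 1.0
organic_yield = 0.8 * conventional_yield  # ~20% lower, per Kniss et al.

relative_land = conventional_yield / organic_yield
print(f"Land required for the same harvest: {relative_land:.2f}x")  # 1.25x
# Roughly 25% more cropland for the same output -- before accounting for
# the 40-70% growth in crop demand expected by 2050.
```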

Even so, as Kniss and colleagues argue, yields are not the only factor to consider. To complete the picture, we need to also look at pollution from nutrients and pesticides, water consumption, and other environmental and socioeconomic impacts. If organic farming is better on all these other accounts, perhaps it’s worth the sacrifice in terms of yields. 

To assess these trade-offs, we need to study the environmental impacts per unit of output, such as per ton of crops. This is the fairest way to assess farming practices, since it tells us what the costs and benefits are at any given level of food demand. On these terms, studies have found that nitrogen pollution is the same, if not higher, in organic systems compared to conventional. Pest control in organic farming is low-impact when it is preventative–such as through crop rotations–but often worse than conventional once a pest outbreak actually takes place, since organic farmers are highly restricted in their choice of pesticides. Any reduction in greenhouse gas emissions from organic farming operations might be counteracted by the carbon losses from expanding farmland into natural habitats.

But ultimately, neither system is a clear winner, because each combines a set of good and less good practices. Organic farming is made less sustainable by its insistence on forgoing synthetic fertilizers and certain low-impact synthetic pesticides. Conventional farming could lower its impacts by, among other things, adopting more proactive pest control measures like crop rotations. In other words, the trade-offs between yields and other environmental impacts are not a given. We can get the benefit of crop rotations without abstaining from yield-enhancing synthetic fertilizers and pesticides.

In the end, as Kniss and others conclude, the most environmentally and socially beneficial farming of the future–in terms of yields, pollution, and many other factors–will mix practices from both. And therein lies my biggest reservation about organic farming: its current restrictions–as defined by USDA certification–foreclose some of the ways by which farming could evolve to not only raise yields but also achieve other environmental and social goals.

Taking Modernization Seriously


But perhaps the greatest challenge to the ecomodern project isn’t one that its critics seem eager to talk about. There can be no ecomodern future if billions of people remain trapped in deep agrarian poverty. If there is one fundamental characteristic of modern societies, it is that most people have left the farm for the city, abandoning lives of hard agricultural labor for off-farm employment that offers better livelihoods.

About half the world population has made that transition. But half still haven’t, at least fully, and about 2 billion people have barely begun. And therein lies the rub. An ecomodern future—in which all of humanity lives prosperous, fulfilling lives while the human footprint shrinks and nature thrives—is only possible if there is economic opportunity for everyone in cities. And increasingly, a number of important scholars have raised questions about whether the virtuous cycle of urbanization, industrialization, and agricultural modernization will be available to late-developing nations to the same degree to which it has been available in the past.

Manufacturing has traditionally been the means through which poorly educated, low-skilled workers found off-farm employment and higher wages in the city. But rising labor productivity and increasing automation means there are fewer jobs in manufacturing than there used to be. Meanwhile, the globalization of supply chains means that import substitution, the way that industrializing economies have historically leveraged their domestic markets to assure that they capture a share of global manufacturing output, is much more difficult. Almost nowhere is a car, a computer, or any complex consumer product manufactured entirely within one nation’s borders any longer.

“Deindustrialization,” Harvard economist Dani Rodrik observed in an important 2015 paper, “removes the main channel through which rapid growth has taken place in the past.”

Nowhere do the barriers to entry for new manufacturing economies pose greater risks than in sub-Saharan Africa. Manufacturing output, as a share of the total economy, has fallen over the last 25 years across most of sub-Saharan Africa, The Economist recently noted. Without manufacturing or some other pathway to higher productivity economies, there can be no demographic or agricultural transition nor sufficient societal wealth to allow for modern living standards or social mobility.

But all is not lost. Despite the headwinds that Rodrik and others observe, African manufacturing more than doubled over the last decade, growing at an annualized rate of 3.5%. The longer term decline in African manufacturing is attributable to a range of factors beyond the macroeconomic changes that Rodrik rightly worries about, including the end of the Cold War and with it robust development support from the US and USSR for African client states, the decades of civil strife that followed, and the AIDS epidemic.

The danger in the kind of declinist interpretation that some (not Rodrik) have drawn from the ways in which manufacturing and the global economy have changed is that it is easy enough to conclude that there is nothing to be done about it. And that is simply not the case. In a new Breakthrough Journal article, “Taking Modernization Seriously,” New America Foundation co-founder Mike Lind argues that there is still much that developing nations can do to catch up. The key to modernization, Lind argues, remains the critical role of developmental states in building infrastructure and identifying key economic sectors to promote.

The developmental tradition, Lind writes, is Hamiltonian–heavy on machinery and manufacturing, import controls, infrastructure, and public financing of investment and innovation. This “American System,” Lind recounts, was picked up by Friedrich List in Germany and many other European countries to power their own rapid growth. Much farther down the road, of course, China and other “Asian tigers” would follow suit.

What Lind calls the “American System” has fallen out of vogue, but its promise is not lost. “The United States,” writes Lind, “chose to adopt and preach free trade only after its own manufacturing industries no longer needed protection from foreign competition.” Globalization is here to stay and that’s fine, he writes. But countries still need export-oriented growth strategies and developmental states to finance and regulate that trade and to distribute the gains equitably.

Critics of neoliberalism and globalization too often imagine that more equitable and ecologically responsible arrangements might skip the development path that virtually every nation in the world has followed to prosperity. But Lind reminds us that while there is an alternative to the so-called “Washington Consensus” there is no alternative to economic development and modernization. There can be no demographic transition if most people remain in subsistence agrarian economies. And without a demographic transition, a planet of 9 or 11 or even 14 billion people scraping a bare existence from the land is a planet in which virtually every plot of arable land will be converted to low-intensity farming. That is a future that can be neither just and equitable nor ecologically vibrant.

Responses welcome.


What’s the Best Historical Comparison to Climate Change?

I am not the first person to think this.

Climate change has been compared to efforts to address acid rain, the smoking epidemic, marriage equality, drunk driving, HIV/AIDS...the list goes on. It has been, in particularly heated moments, described as the next World War.

Some of these analogs feel more appropriate than others. Environmental policies like the Montreal Protocol at least make more intuitive sense than, say, the fight for all Americans to marry whomever they choose (though the Montreal Protocol has its own idiosyncrasies as well).

But in the seemingly endless search for historical comparisons, one omission stands out.

If responding to climate change is (mostly) about reducing anthropogenic greenhouse emissions to zero as soon as possible, then it’s (mostly) just about energy transitions: from high-carbon to zero-carbon. Energy transitions have happened before. One might think about studying them, not drunk driving or war, to better understand the climate challenge.

I wrote about this earlier this year in a review of Nicholas Stern’s recent book, and I tweeted as much earlier this week.

The smart responses I got all basically boiled down to “this time is different.” Sure, we’ve moved from one fuel or technology to another in the past, I was told, but not at the scale or speed necessary to confront global climate change.

Paul Sabin, whose book I love, made essentially the same point.

The idea here is that climate change is such a massive problem that we need to accomplish something that’s never been done before. Essentially, history does not provide any good lessons for approaching decarbonization.

The implication also seems to be that we need to move to lower-quality energy sources--less “dominant” fuels, in Sabin’s terminology. Climate scientist Michael Tobis made a similar point.

So climate change is different because the fuel transition needs to move towards relatively less dominant and/or more costly fuels.

I have a couple thoughts about this.

The first is to generally agree. Energy transitions have tended towards higher-quality and cheaper fuels in the past: wood-to-coal, whales-to-kerosene, horses-to-cars, coal-to-gas, etc. In contrast, moving from fossil fuels to zero-carbon fuels like solar, wind, nuclear, and hydrogen is harder because fossil fuels are amazing and solar, wind, nuclear, and hydrogen all have their issues (intermittency, capital costs, storability, miscellaneous infrastructural obstacles, etc.).

But my second thought is that, even if climate change makes this energy transition different or more difficult...it’s still an energy transition. We should learn as much as we can from past energy transitions and fill in the rest.

I think the tendency to dismiss energy transitions as instructive for dealing with climate change stems from one big, inescapable lesson of pretty much all of them: they’re slow. And to deal with climate, we need to move fast.

Sabin, Tobis, and others with whom I’ve had this discussion seem to think that overcoming the slow nature of energy transitions will require major policy action. I agree -- I happen to work for a policy think tank that focuses on environmental problems like climate change. But it’s not as if previous energy transitions occurred in a policy vacuum. The ongoing US coal-to-gas transition is the result of three decades of federal investment in fracking technologies (and is helped along by environmental regulations that penalize coal pollution). France’s nuclear grid was built by a deliberate national decision to move off of oil for generating electricity, and that energy transition remains one of the fastest forms of decarbonization ever.

So, after all that, I arrive back where I began. Energy transitions offer the best historical lessons for the energy transition we need to address climate change. The past can offer guidance on how to “aim” energy innovation to make solar, wind, nuclear, and other technologies closer to “dominant” over fossil fuels. The past can offer templates for policies that have delivered the most rapid and/or complete decarbonization.

And even so, the past might not tell us all we need to know about the future. But that’s not a reason not to study historical energy transitions. It’s a reason to learn as much as we can from them, instead of assuming there's no comparison to be found, or worse, grasping at straws for even weaker comparisons.

Sustainability or Bust?

The calendar date marking our overdrawn ecological credit—as Time, Popular Science, and the Christian Science Monitor have all reported—is upon us. Actually, it passed: according to the Global Footprint Network’s “ecological accounting tool,” we officially used up the resources that can be regenerated in a year on August 8.
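
For the curious, the arithmetic behind such a date is easy to reconstruct. A minimal sketch, assuming the Global Footprint Network simply prorates annual biocapacity evenly across the calendar year:

```python
from datetime import date

# Reconstructing the ratio implied by an August 8 overshoot date, assuming
# GFN prorates annual biocapacity evenly across the (leap) year.

overshoot = date(2016, 8, 8)
day_of_year = overshoot.timetuple().tm_yday  # day 221 of 366

implied_ratio = 366 / day_of_year
print(f"Implied footprint-to-biocapacity ratio: {implied_ratio:.2f}")  # ~1.66
# An August 8 overshoot day implies humanity consuming roughly 1.66 years'
# worth of annual biocapacity each year -- the figure GFN popularizes as
# "1.6 Earths."
```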

The very notion of “ecological overshoot,” however, remains flawed, both as a metric and as a concept. Our own Linus Blomqvist wrote on this subject last year, summarizing his own analysis of the ecological footprint. But with the turning of our ecological clock, we’ll briefly return to the subject here, with an eye towards ecomodernism and the much-debated meaning of the word “sustainable.”

The language Global Footprint Network uses to describe its work provides some insight. “The Ecological Footprint,” according to its website, “tells us how close we are to the goal of sustainable living. Footprint accounts work like bank statements, documenting whether we are living within our ecological budget or consuming nature’s resources faster than the planet can renew them.” The fact that this analogy resonates with misconceptions about national (economic) debt points to the basic, and misleading, assumption that underlies this framework: that balance—the hallmark of “sustainability”—is the gold standard when it comes to the natural world.

We might note first that contemporary ecology has largely discredited the “balance of nature” framework, and has replaced it with a dynamic model of constant change. But even beyond this scientific disconnect, sustainability remains a troubled construct, as Jeremy Butman argues in a recent New York Times op-ed. As Butman puts it, the sustainability movement has been defined by a misplaced faith in nature’s Edenic perfection, an understanding that derives from Romantics like Wordsworth and transcendentalists like Thoreau. Prior to this philosophical turn (i.e., from Aristotle through to Hobbes), nature was viewed as cruel and ever-changing; since then, it has come to embody a once-changeless harmony that is rapidly eroding at the hands of human influence.

Most damaging for our purposes is the strange nostalgia that accompanies such a stance—a mourning for a lost wholeness that never really existed. Rather than trap ourselves in such a state of melancholia, Butman urges, we need instead to “face the future” and “say ‘yes’ to the anthropocene.” In order to do so, we’ll need to change the language we use to define our relations with the nonhuman world. What Butman is proposing, in other words, is a paradigm shift—from a backward-facing sustainability, rooted in “preserving” and “sustaining,” to a forward-facing “adaptability.”

Ecomodernism, too, proceeds from such a posture of pragmatic affirmation, saying “yes” to the possibility of a “good, or even great, Anthropocene.” Where Butman and ecomodernists might differ, of course, is in the application of this model.

Butman draws from Bruno Latour in his understanding that humans and nature are so deeply embroiled that we can neither “let nature be” nor control it, and should therefore embrace such an enmeshed relationship. Ecomodernists, on the other hand, emphasize that an increasing dependence on nature will mostly prove destructive, and as such, endorse intensification and substitution as an antidote to agricultural expansion and biodiversity loss. The disagreement here recalls a rebuke of the framework that Mr. Latour himself gave at our Breakthrough Dialogue last year, which surfaced some tension even among self-described ecomodernists over whether, how, and to what extent humanity and nature can really “decouple.”

Nevertheless, Butman, ecomodernists, and surely many others can agree on the fundamental premises that make sustainability so problematic. First is the assumption of a prelapsarian natural world. In fact, Butman says, humans have been altering their climes for millennia—and the planet has been in turmoil all on its own for far longer. Second is the notion that sustainability will serve to return us to nature. Instead, we should note, sustainability is fundamentally a human project; in Butman’s words, “we preserve the resources needed for human consumption, whether that means energy consumption or aesthetic consumption.” And finally, sustainability presupposes an attitude resigned to the status quo—of a world with limits that must be adhered to. Such a failure to think creatively and constructively may not only fail to curtail environmental destruction, but it may also prevent us from reaching our full potential when it comes to conservation, human development, and adaptation to climate change.

Certainly, more will have to change about our policies and strategies than simply the words we use to describe them. But language plays a role in these processes as well, and in the shape of public perception. As the confusion about GMOs shows, labels matter. And sustainability, in the way it’s been brandished for decades, is one that environmentalists could do without.

Ecomodern Dispatches

As a result of the “zero emissions credits” that operators of nuclear plants will now receive under New York’s Clean Energy Standard, Exelon has agreed to purchase the struggling FitzPatrick plant, which was previously scheduled to close in 2017. Robert Walton of Utility Dive and Samantha House of Syracuse also report on the details and emission reduction benefits of the deal. Brad Jones, CEO of NYISO, echoed this optimism in a recent interview with Utility Dive: “Retaining the nuclear fleet is important not only to achieving the Clean Energy Standard, but also to maintaining fuel diversity.”


A recent comment by conservation scientists in Nature issues a reminder that the primary drivers of biodiversity loss remain overexploitation and agriculture. According to their analysis, 62 percent of a sample set of species from the IUCN’s Red List are threatened by agricultural expansion. Land-sparing, unfortunately, is not discussed as a strategy for conservation here...


New research from Yale's Dan Kahan shows that curiosity about science reduces polarization over climate change. The question now becomes “whether there is an effective way to prime people to be more science-curious -- which could then also have political ramifications.”

Radiation and Reason

Dr. Wade Allison taught and studied at the University of Oxford for over 40 years, where he is now an Emeritus Professor of Physics. His two books, Radiation and Reason and Nuclear is for Life, provide great introductions and references for those looking for a deeper understanding of how radiation affects the environment and public health.

We sat down to talk with him about the risks from overreacting to radiation fears, and how he views the state of the science and communication after the Fukushima accident.

What drove your interest in nuclear issues?

My career actually spans the whole of the nuclear period. When I was 13 on holiday in Europe with my parents in 1954, my sister and I got sick so we were driven to Geneva in case we needed a doctor. In the city there was a big exhibition, "Atoms for Peace 1954," and that was the occasion of CERN being founded. As a 13-year-old boy I was inspired and impressed by the reactor technology and what the possibilities were. So I went back to school and concentrated on science. I spent most of my career in particle physics and now I’ve come all the way back to the subject of that exhibition in 1954, the peaceful potential of nuclear power.

What brought you back to the potential of peaceful nuclear power you first saw as a boy?

Nearly 20 years ago I started giving a new final-year student course in Oxford on applications of nuclear physics. I quickly got into medical physics and the course was rather popular with the students. It took on more and more of a medical emphasis, including radiotherapy. That was when I saw the extraordinary difference between the high--but life-saving--radiation doses used routinely in medicine and the doses occasionally encountered in the environment--many thousands of times lower than medical doses. And I published a textbook for the student course that included a study of this comparison. The whole basis of current radiation safety regulation is crazy. We've got it all wrong. It was a big step for me to attempt an explanation of this for the unscientific public. Following a few popular lectures on the subject I started writing Radiation and Reason. Later, that got translated into Chinese and Japanese. After travelling to Fukushima, I began the second book [Nuclear Is For Life], which has a much broader scope than the first. It basically talks about how nuclear is positive: it is part of our natural environment, and our cultural attitude is almost 180 degrees wrong. Nuclear radiation is almost completely harmless. Nowadays, I lecture around the world trying to get through to people, especially young people. They should put the legacy of the Cold War behind them. They need to look forward! But an important incident that I didn’t know about when I wrote the first book was the accident at Goiânia in Brazil in 1987.

What happened at Goiânia, Brazil?

The radioactive contaminant of most concern at Fukushima was Cesium-137. This is produced in large quantities in any fission reactor, but it's also used prominently in radiation therapy. At Goiânia there was a radiation clinic that had been abandoned. People broke into the facility and took the metal therapy unit, with the Cesium inside it, for scrap. This source of Cesium was 20 million million becquerels (Bq), which is orders and orders of magnitude higher than any contamination at Fukushima (100 becquerels per litre is the legal limit for drinking water). The source gave off a bluish glow, and the local people were fascinated by it and played with it. By the time people started to get ill, 249 people had been seriously exposed and about 70 were contaminated internally. The IAEA got involved and measured the contamination, which was between 1,000 and 10,000 times larger than the largest dose recorded for any member of the public at Fukushima; truly colossal doses, relatively speaking. Four people died within the first few weeks, just as 28 firefighters at Chernobyl did. Others had serious skin burns, but none of them died. In fact, in the following 25 years there were no further deaths that could be attributed to radiation--in particular, no attributable case of cancer.

I managed to contact the doctor in Brazil who was responsible for reporting on these patients to the IAEA in Vienna, and he confirmed that in 25 years nobody got cancer from radiation. One woman was already pregnant when internally contaminated, and another got pregnant five years later. Both children were measurably radioactive and yet are reported to be doing well. In the same way, the wildlife at Chernobyl is thriving because the humans are gone and the radiation isn’t really harming it. In fact, by driving the humans out, the radiation at Chernobyl seems to have done the animals a favor.

You’ve written two books, Radiation and Reason and Nuclear is for Life, which deal with how society understands and interacts with radiation. What inspired you to step outside of the purely technical?

Because that’s where the problem is. The technical questions and the evidence that answers them are relatively straightforward, but people's readiness to face them is not. I’m lecturing around the world trying to get through to people, especially young people. They should put the legacy of the Cold War that they have inherited from earlier generations behind them. They need to look forward.

In Radiation and Reason, you say that when it comes to nuclear power, leaders “cannot instruct the court of public opinion.” Why?

Leaders who stray too far from public opinion are soon out of power, in a democracy at least. At the moment the public still tends to see anyone who knows about nuclear technology as linked to the nuclear industry, and so doesn't trust them. The people who really should know and be trusted by the public are the medics and academics (though most don’t). I’m retired and I have six grandchildren. When I went to Japan the local residents that I spoke to asked, “Who are you? How’d you get here? Did TEPCO or the government send you?” And I said, “No, I’m here because I have six grandchildren and I want them to live in a nuclear-powered world.” Then they started listening. Primarily the message needs to come from schools. But medical professionals should take a larger role. Anyone who undergoes radiotherapy gets a very large dose of radiation every day. People trust their doctors, and these patients go home after successful treatment. Unfortunately, the use of radiation in medicine has been adversely affected by the connection with Fukushima, so that some refuse the treatment they need. The positive message should be carried in the other direction: radiation in medicine is usually life-saving, and the radiation doses at Fukushima were so low as to be totally harmless.

Is there a role for industry in this effort?

Well, they’ve got to be very careful, as the public doesn’t trust them. I don’t know whether it is true everywhere in the US, but until recently all the nuclear power stations here in the UK were closed to the public, and they were closing their visitor centers too. “Open your visitor centers and let people in! Bring school children in!” I said. And they are doing that now. We can’t leave all matters with the nuclear power interests -- they make a lot of their money from decommissioning the very power stations that the environment needs kept working. In their defense, few in the nuclear power industry understand what I am saying, and they don’t really like me. But we do need them back on track and building again.

Could you give a brief explanation of the Linear No-Threshold (LNT) model and its history?

In the initial fractions of a second after radiation hits living tissue, it easily smashes a few molecules and atoms, because it is very powerful. As I describe in both of my books, the LNT theory is based on the assumption that there is no corrective mechanism for this phenomenon. The model assumes that life does nothing about the smashed molecules, which is only true, of course, for dead organisms. Living organisms maintain themselves. In fact, for millions of years, in order to exist, life has lived with radiation. It has learned to either mend those molecules or identify and replace them, and all this happens automatically at the molecular level. It is a biological reaction that happens in all life. Now all of that is ignored in the LNT model. It assumes that there is no correction, and therefore that the amount of damage is directly proportional to the amount of radiation received. It supposes the effect of exposure is cumulative. That leads to the safety concept known as ALARA, under which exposure to radiation should always be As Low As Reasonably Achievable, i.e. completely avoided whenever possible. But in fact our bodies mend the damage as they go along. Radiotherapy wouldn’t work if it were otherwise. So a much more sensible idea is AHARS, or As High As Relatively Safe: in other words, how high a dose can you receive without any harm?
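
The contrast between the two assumptions is easy to make concrete. A toy sketch, with arbitrary coefficients rather than real risk data:

```python
# Toy comparison of the two dose-response assumptions described above.
# The slope and threshold values are arbitrary illustrations, not risk data.

def lnt_risk(dose_msv, slope=1e-4):
    """Linear No-Threshold: every increment of dose adds risk, with no
    allowance for biological repair; effects are treated as cumulative."""
    return slope * dose_msv

def threshold_risk(dose_msv, threshold_msv=100.0, slope=1e-4):
    """Threshold (AHARS-style) view: repair absorbs damage below the
    threshold; only the excess contributes to risk."""
    return slope * max(0.0, dose_msv - threshold_msv)

for dose in (1, 10, 100, 1000):
    print(dose, lnt_risk(dose), threshold_risk(dose))
# Under LNT, even a tiny dose spread over a large population implies some
# "expected" harm; under a threshold model, the same dose implies none.
```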

What is the evidence that LNT is not the right model? Can you give a concrete example?

Radiation cures for cancer wouldn’t work if there were no correction mechanisms. Almost everyone knows someone who has had radiation therapy and knows that it does work. This does not come from the depths of a secret government lab; it's something that the majority of people have some familiarity with. The effect of continuous exposure to radiation below a certain threshold throughout life has been tested with dogs. Below a certain threshold there is no lasting effect, and that threshold is quite high. (Dogs are chosen because they live much longer than mice, a good fraction of a human lifetime.)

What limits of exposure would you then recommend?

It depends on whether it's a single dose or spread out over a period of time. If it's spread out, it's much better and does even less permanent harm. But even if it's a single dose, you can actually safely be exposed to a level one thousand times higher than the current regulations allow. Looking at the data, an exposure of 1,000 millisieverts spread over a year never causes any harm. A separate question is: what is the largest total dose, spread over a lifetime, that does no harm? The evidence says that it is 5,000 millisieverts, but possibly higher. The current ALARA safety level is 1 millisievert per year for the public. In fact, the regulation in 1934 was 740 millisieverts per year. The policy of the nuclear industry is counterproductive. They say, “Don’t worry, radiation isn’t going to hurt you,” and then they spend billions of dollars on needless radiation safety. A predictable popular response is: “If it’s so safe, what are you doing all of this for? Are you increasing my utility costs without reason?”
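
Setting the annual figures cited in this interview side by side makes the gulf plain. These are Allison's claims, not regulatory guidance:

```python
# Annual dose figures cited in this interview (Allison's claims, not
# regulatory guidance), compared against the current public limit.

limits_msv_per_year = {
    "Current ALARA public limit": 1,
    "1934 regulation": 740,
    "Level Allison calls harmless": 1000,
}

baseline = limits_msv_per_year["Current ALARA public limit"]
for name, msv in limits_msv_per_year.items():
    print(f"{name}: {msv} mSv/yr ({msv // baseline}x the current limit)")
```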

The International Commission on Radiation Protection (ICRP) has walked back some of their stances on radiation exposure in recent years. Yet many of their recommendations still adhere to a linear model of risk per level of exposure. Why is that?

They have reduced their concern about genetic risks but maintained their caution about cancer expressed in terms of LNT. You ask why? There is a large worldwide workforce predicated on ALARA and LNT. Change is very hard for them although it would be simple in principle to have a tolerance at 740 millisieverts per year. The current ALARA recommendation is lower than the average natural background radiation, which is about 3 millisieverts per year.

Where does this fear come from? Why don’t other pollutants, such as coal, cause the same type of concern?

Coal wasn’t introduced to the world through a bomb like Hiroshima and Nagasaki. It’s not good for public opinion to learn about something for the first time with a big scare like that. But in fact, the actual radiation from those bombs killed rather few people relative to those that died from the physical explosive blast and fire (about 1,000 out of 100,000).

Does the public have particular trouble grasping nonlinear phenomena in general? If so, why?

I think few people, even in the nuclear field, understand nonlinearity. The language is not helpful. It’s better to talk about the fact that the body naturally repairs radiation damage, just as the sun doesn’t burn you unless you’re out in it for too long. Even though skin cancer from solar radiation is a much higher risk than nuclear radiation, people still go on summer vacation, don’t they? That’s because people have been brought up to think about solar radiation sensibly. They don't talk about non-linearity, but they do talk of “not too much on the first day” and “getting used to it”: that is non-linear behaviour expressed in everyday terms. Using mathematically macho terms like linear does not help the public and is unnecessary.

What would a regulatory policy look like that attempts to incorporate how people perceive radiation risks?

You have to take people into your confidence. For example, in Japan the pictures showing workers in white protective suits with Geiger counters are not the right way to communicate with people. They should’ve gone in there with open-necked shirts and shaken people firmly by the hand, explaining why everything’s going to be OK. But the trouble was they did not believe that, neither the public nor the officials. If they had studied the evidence, they would have treated the public with more confidence and been believed. But digging up children’s playgrounds in protective gear isn’t only pointless, it's harmful. It scares people. The deaths caused by the unnecessary evacuation are estimated at 1,500, but that is probably an underestimate. The social effects of scaring people about radiation are fatal. Abortions in Greece after Chernobyl increased by 2,000, a figure many times higher than the actual radiation deaths from Chernobyl, which was many hundreds of miles away.

Any change in regulation is going to have to come from a change in the basic common understanding of radiation. Which in turn is going to have to start in schools and with education generally. If one can talk to kids in schools and teach them a little more about the detail of radiation and change their basic understanding, if those kids then go home and talk to their parents, those parents are more likely to listen to them than they are to a politician on the TV. We’ve got to educate people from the bottom up. Many people from my generation will continue to have a phobia of radiation. Of course it’s worst in Germany and Japan, less so in the UK and US, because they have the stronger memories from the Cold War and WWII.

There are times when behavior compounds radiological risks, as when smoking cigarettes increases the harm from radon, or when poor nutrition in the Soviet Union increased the chance of thyroid cancer. How can regulators consider these effects when setting dose limits?

Radon doesn’t cause cancer. But there is an industry out there that exploits that belief. If you actually look at the data, there is no statistical link. For instance, in Cornwall they looked at the occurrence of lung cancer from residential exposure, where the individual radon dose was estimated for several thousand people, but no link was established.

When we make freeways safe we don’t then tell people to sit in the middle of the highway during rush hour. Education teaches people how to handle risk in relation to highways, approaching it carefully and without fear. The same should be true for radiation. It is indeed possible if you look at radiotherapy. There have been a few nasty accidents with radiotherapy (as with highways). So people have to be educated to know what to be careful about. In Japan people are taught about earthquakes and tsunamis in school. As a result, 40 minutes after the earthquake, when the tsunami came, they knew what to do, and most people had already moved to higher ground as they had been taught. But five years later people are still panicking about the radiation that hasn’t killed anyone. It’s not a matter of a regulation that says you’re protected because there is no risk at all, but a matter of education: saying you shouldn’t have more than a threshold amount of radiation, and being aware of the reason.

What are the larger risks we face when we set radiation limits too low?

In the case of Fukushima and Chernobyl, the larger risks were moving people out of their houses and resettling them when they didn’t have to. You make them refugees. The old people die when you move them. There is social and family trauma when people are told that they have been irradiated, if they don’t understand. The danger of climate change is increased when fossil fuels are used instead of nuclear. There is no reason to make nuclear safer than it was at Fukushima. Three Mile Island and Chernobyl were due to human error, and safety needed to be improved, but Fukushima wasn’t due to any human error and didn’t cause any serious loss of life or adverse effect on the environment. So what can they really improve on (other than moving the location of the generators in that single instance)? The only reason that nuclear is treated differently is because it’s more powerful, and seems harder to understand, which is irrelevant. The human body has learned to live with radiation, much better than it has learned to live with fire or biological waste – both are dangerous and cause fatalities every day. Yet look at how we treat nuclear plants compared to natural gas plants. No one has died from nuclear waste.

You wrote a new epilogue to your book after Fukushima. What did the accident show?

Three things happened at Fukushima. First, there was a natural disaster that caused a very high loss of life (over 20,000 people). The second was that three reactors were destroyed, the radiation from which affected no one's health at all. The third thing that happened was a panic, which was not a physical phenomenon but a socio-psychological one. That third event spread around the world and has people behaving irrationally. Immediately after the event nobody knew how much radiation had been leaked, so people should have been evacuated, as they were. But two weeks later, the worst was over, and some preliminary measurements were available that showed that everybody could have returned home safely. At that time I wrote a piece for the BBC which spelled out what happened, and we haven’t learned much since that changes the assessment of any risk to health. Indeed, if people were taught about radiation and there were another Fukushima event tomorrow, it ought not to be seen as an event of global significance.

No Room for Conservatives in Climate Politics?

Hayes here echoes important wisdom on conservatives and climate that has emerged over the past decade. As experts like Dan Kahan, Matt Nisbet, Dan Sarewitz, and others have explained, people view climate change (like everything else) through political and social lenses. A generation ago, experts introduced climate change to the general public as a problem to be solved by global treaties, massive government regulation, and -- among some popular writers -- an end to capitalism and economic growth.

So it’s hardly surprising that conservatives reacted harshly to strong calls for climate policy.

What's done is done. My present concern is not why conservatives initially spurned climate change as a policy imperative. My question is whether liberals even want conservatives in the climate debate at all.

Exhibit A: Ben Adler’s post at Grist this week about Jay Faison, founder and CEO of the ClearPath Foundation.

ClearPath’s mission is to “accelerate conservative clean energy solutions.” The organization joins other right-leaning climate outfits, like R Street and Bob Inglis’ RepublicEn. ClearPath stands out, perhaps, by being more technology-focused and taking a more skeptical stance towards policies like a carbon tax than other conservative climate hawks.

Adler’s problem is that the “conservative clean energy solutions” are not the solutions he likes. Specifically, he writes, “Faison doesn’t understand what clean energy is,” taking Faison to task for supporting fracking and questioning the scalability of wind and solar.

This post is not meant to adjudicate the right and wrong between the right and the left on climate. I would just observe that short-circuiting the left-vs.-right climate debate before it even begins might prove counterproductive.

I don't mean to pick on Adler, Grist, ClearPath, or anyone. The problem transcends any one blog post or dispute. Liberals have spent well over a decade stoking anger at “climate deniers” -- itself not exactly a recruiting tactic for lukewarm climate-skeptical conservatives. Indeed, the definition of the term “climate denial” has expanded beyond mere questioning of climate science to, for instance, not opposing a particular oil pipeline and embracing particular zero-carbon energy technologies.

So in some very real ways, it’s become impossible to support certain climate policies without being labeled a climate denier by influential environmental leaders. If even those conservatives that proudly state their concern for climate change face accusations of dishonesty, delusion, and denial, why would we ever expect anything resembling a productive national climate debate?


Ecomodern Dispatches

By Emma Brush

GMO Go: A very cool interactive presentation of the practice of genetic engineering as it plays out “in the real world” -- in Bangladesh, where genetically modified eggplants have recently been introduced. The story features interviews with farmers who have benefited from the technology, which reduces the need for pesticides and protects against the shoot borer worm. Most importantly, these new crops will improve the livelihoods of the farmers who depend on them.


Chris Mooney and Brady Dennis at WaPo on the future of hydro, which should involve investment and innovation, coupled with the consideration of adverse environmental impacts. Additionally, the DOE has just released a study entitled “Hydropower Vision,” which outlines the innovation and financing that could lead to an increase in hydropower generation and storage capacity from 101 gigawatts to 150 gigawatts by 2050. Julian Spector at Greentech Media reviews the technology development and upgrades that will be necessary for such a transformation, as well as the need for energy market reform and streamlining of the licensing process (as will also be necessary for a nuclear renaissance...).


Diane Cardwell at NYT on the difficult economics of solar, which is forcing many utilities to reformulate their rate designs.


It looks like New York might actually pass a clean energy standard that saves the state's threatened nuclear power plants.


Bill Gates, in his book review of Robert Gordon’s The Rise and Fall of American Growth, applauds Gordon’s portrayal of unprecedented growth from 1890 to 1970 but takes issue with his failure to perceive the continued growth that will be fueled by the digital revolution. Gates’s penultimate sentence says it all: “When it comes to choosing a side in the debate between optimism and pessimism, my money is on the incredible forces of technological progress at work every day.”


This editorial points to improvements in the beef production industry such as efficiency in irrigation and grain production -- advances “thanks in part to agricultural and natural resources research being done by institutions such as the University of Nebraska-Lincoln” -- and urges thoughtful debate and progress, especially in light of increasing global demands for red meat.


On ecosystem-based engineering for climate-change adaptation and risk reduction.

Capitalism and the Planet

So-called “bright greens” have challenged this dichotomy, suggesting that with the right technologies and social arrangements, the future might be different than the past. But the basic idea, that economic growth and rising living standards have been made possible by ever greater exploitation of the environment and natural resources, underlies virtually all contemporary environmental debates.

And while it is true that rising population and consumption over the last three centuries have resulted in higher environmental impacts in the aggregate, it is also true that resource use in relation to economic output has been falling. Farmers produce more food on less land, and with less water, fertilizer, and pesticides. Manufacturers produce more widgets with less energy and material inputs. Indeed, virtually every sector of the global economy has reliably and dramatically improved the efficiency with which it uses materials and energy over the last century.

These trends have received little consideration in debates about the environment and the economy. But they hold considerable promise for both, because it turns out that the mechanism that accounts for falling material demands upon the environment per unit of economic output is the same mechanism that most economists agree accounts for long-term economic growth: total factor productivity.

Economists once thought that economic growth was largely accounted for by growth in available land, materials, and labor, and this intuition largely underlay the notion that economic growth must, inevitably, come at the expense of the environment. But in modern economies, those factors account for little of the long-term growth that has been observed. The rest is accounted for by productivity improvements: the efficiency with which inputs–land, labor, materials, and energy–are used in production processes.

This fact undermines claims made about the environment and economic growth. Modern societies have become wealthy by becoming less dependent upon nature and the material bounty it provides, not more. So an end to global economic growth is not a necessary precondition for sustainability. What determines human impacts upon the environment are the resource demands of economic production and the pollutants associated with the transformation of those resources into consumer goods. The size and value of the human economy is not intrinsically associated with either, as long term improvements in resource productivity demonstrate.
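
A stylized sketch makes the logic explicit; the growth and intensity rates below are assumptions chosen purely for illustration:

```python
# Stylized decoupling arithmetic: absolute resource use = output x
# resource intensity. The rates below are illustrative assumptions.

gdp_growth = 0.02         # output grows 2% per year
intensity_decline = 0.03  # resource use per unit of output falls 3% per year

resource_use = 1.0  # index absolute resource use to 1.0 today
for _ in range(30):
    resource_use *= (1 + gdp_growth) * (1 - intensity_decline)

print(f"Resource use after 30 years: {resource_use:.2f}")  # ~0.73
# When productivity gains outpace growth, absolute demands on nature can
# fall even as the economy expands -- the crux of the argument above.
```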

Of course, material consumption and environmental impacts are still rising, as population continues to grow and as billions of people make the leap from deep agrarian poverty to modern social and economic arrangements. But both population growth rates and economic growth rates are slowing in many parts of the world, as fertility rates crash after the demographic transition and economic growth rates reliably slow after the early stages of industrialization and urbanization.

In a new Breakthrough Journal article, "Does Capitalism Require Endless Economic Growth?" economist Harry Saunders pulls together these threads to demonstrate that long-standing claims that capitalist societies will inevitably exhaust the natural capital upon which they depend are not supported by either standard economic theory or observed trends. As societies become wealthier, people work less and have more time for leisure. Material consumption saturates. Even in junk food-addled America, there is only so much food most of us can eat in a day. Outside of "Lifestyles of the Rich and Famous," most of us only have so many cars and homes we wish to maintain.

Saunders goes further, arguing not only that material consumption is already beginning to saturate in wealthy economies but that capitalist economies do not require growth at all. Once a certain level of material prosperity is achieved, capitalist economies, all else equal, will evolve to a steady state -- at least that is what neo-classical theory suggests.

But from the perspective of environmental protection, all that matters is that global calls upon natural resources and pollution must peak and then decline. With stabilizing population, slowing overall economic growth, and continuing improvement in resource productivity, there is every reason to think that such a future might be possible.

Of course, capitalism does not actually work the way that neoclassical economists draw it up in their theoretical models. In the real world, all capitalist economies are, to varying degrees, mixed economies. Modern market economies are creations of modern states. Most of the technologies that have driven rising total factor productivity -- modern agricultural techniques and inputs, energy efficient turbines and machinery, microchips and satellites -- have been jointly developed by public and private actors. And the trends that Saunders observes are robust across economies as heterogeneous as Japan, the United States, and Scandinavia, suggesting that the particular mode of mixed capitalist economy is less important than having functioning, innovative markets and public institutions in concert. 

Ultimately, human prosperity and ecological sustainability depend on the choices we make today and the way that market economies, public institutions, social values, and consumer preferences continue to coevolve. It is possible that capitalist economies will fall apart either as shifting social values and political fault lines undermine the institutions upon which capitalism depends or because ecological pressures, most plausibly those associated with climate change, overwhelm the ability of human societies to adapt to them. But what Saunders demonstrates is that neither scenario is an inherent outcome of capitalist economic arrangements, as critics of capitalism from Marx onwards have in one form or another suggested.

Food and Fuel for Thought

So let's check back in on food and fuel.

How will we increase agricultural yields to feed a world population of 9 billion, without also increasing farmland area? How will we meet the energy needs of developing countries, while also achieving global decarbonization?

These are clearly complex problems, and they will require solutions, technological and political, that add to the complexity. Certainly part of the way there, especially in a pluralistic democracy, will be through reasoned public debate. But when debates are rehearsed simply for debate’s sake, and when dogma, rather than reason, drives the conversation, it’s necessary to re-examine the terms of our conflicts and the assumptions upon which they rest -- such as the support of so many environmentalists for renewables over a zero-carbon energy source like nuclear, or the deep-seated resistance to biotechnology and machines when it comes to food production.

Questions of energy use and agricultural practice float quickly to the top of the environmental debate -- and also to some of the key concerns of modern life. As we weigh the evidence for and against the technologies at our disposal and under development we should also keep in mind the future we’re shooting for, and the individual biases and public myths that might stand in the way. For example:

Nuclear’s struggles continue. This week both Eduardo Porter and Will Boisvert highlighted a BNEF report’s finding that 55 percent of nuclear power plants in the United States are in the red and thus may face premature shutdown. If these closures come to pass, Porter stresses, the clean power they produce will likely be replaced with natural gas, which would lead to an increase in emissions -- about 200 million tons of carbon dioxide per year.
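The arithmetic behind that figure is worth sanity-checking. Here is a minimal sketch, assuming roughly 805 TWh of annual U.S. nuclear generation and a combined-cycle gas emissions rate of about 0.45 tCO2/MWh -- both illustrative assumptions of ours, not figures from the BNEF report:

```python
# Hedged sanity check of the ~200 Mt CO2/yr figure cited above.
# The fleet output and gas emissions rate are illustrative assumptions.
us_nuclear_twh = 805      # approximate annual US nuclear generation
at_risk_share = 0.55      # BNEF: 55% of plants operating in the red
gas_tco2_per_mwh = 0.45   # typical combined-cycle gas plant

at_risk_mwh = us_nuclear_twh * 1e6 * at_risk_share
added_co2_mt = at_risk_mwh * gas_tco2_per_mwh / 1e6
print(f"Replacing at-risk nuclear with gas: ~{added_co2_mt:.0f} Mt CO2/yr")  # ~199
```

Under those assumptions the number lands almost exactly on the 200-million-ton figure Porter cites, which suggests the report assumes essentially all of the lost generation is backfilled by gas.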

The nuclear vs. renewables thing remains a difficult conversation. Does talking about the limits of wind and solar turn off environmentalists and renewables enthusiasts? Or is it essential, given the popular and economic challenges facing nuclear power? How important are tone and phrasing when writing about nuclear and renewables? One of us (Alex) wrote a bit about this last year, but we’re still not sure the best way to go about it. Fortunately as we muddle through, bright observers like Boisvert, Porter, Jesse Jenkins, Alex Gilbert, and Gavin Bade give us plenty to chew on.

Of course, even nuclear may not be so hotly contested as the issue of innovation in biotech and agriculture. “With the possible exceptions of drones and robo-farm workers,” as Lauren Hepler of GreenBiz puts it, “GMOs are perhaps the quickest way to bring a conversation about the future of food to a grinding halt.” As we confront growing concerns regarding food security amidst population growth and climate change, Hepler notes, the debate around the genetic engineering of crops only intensifies.

And yet, genetic manipulation is also only one tool in the “smart farming” toolkit. With airborne imaging instruments, farm-management software, and robotic aids assisting with measurement, analysis, and even physical labor, precision agriculture has the potential to lower costs, increase yields, and reduce environmental degradation. The resistance to such technologies, especially when considering the impact they might have on the developing world, is, to use The Economist’s phrasing in its most recent Technology Quarterly edition, “unconscionable.”

So how do we de-escalate the conversation? One option would be to shift the narrative. According to Hepler, Julie Borlaug of Texas A&M has asked us to “please never use the term ‘genetic engineering’ again.” Another approach would be to focus on changes in the technologies themselves. For The Economist, the future of agriculture lies with advanced techniques such as genome editing and genomic selection. Both mimic the natural process of mutation (the former alters DNA at the level of the single genetic “letter,” while the latter facilitates cross-breeding through the identification of genetic markers) and should thus offer more palatable options to a public so resistant to transgenics.

As these sources indicate, widespread opposition to nuclear energy and genetic modification alike needs to be rethought. Recalibrating our collective biases and problematic preferences (including the preference for renewables over nuclear) would allow these powerful technologies to deliver both clean energy and abundant food more effectively. The necessary tools are there for the taking, as soon as we’re ready for them. In the meantime -- we may simply need to follow the lead of all those electric Pokémon telling us to move in the direction of nuclear power plants.

The Coming Baby Bust

In a world in which momentum from the twentieth-century baby boom will continue to drive population growth globally for decades to come, it is easy to forget that the global fertility rate peaked decades ago and has been falling consistently ever since. Population continues to grow globally, but in many places and among many populations, it is falling precipitously. Europe, presently 10 percent of global population, is projected to fall to 6 percent by 2050. Japan, a nation of 120 million today, will number fewer than 100 million by 2050.

But the baby bust, as Paul Robbins notes in a new Breakthrough Journal essay, is by no means limited to wealthy societies. Greater access to education and economic opportunity for women has resulted in falling birthrates all over the world. As Hindu birthrates fall in Ladakh, Hindu nationalists accuse minority Muslims of “love jihad.” Singapore has embarked upon a concerted effort to encourage reproduction among the native-born population while tightly controlling reproduction among its guest-worker population.

Continuing population growth will bring with it increasing environmental pressures. Barring a technological miracle, greenhouse gas emissions will continue to rise in the coming decades. Growing food demand and slowing growth in agricultural yields will increase pressure to convert forests and other valuable habitat to cropland in order to feed a global population that, under the best of circumstances, will top 9 billion people by the middle of this century.

But the spread of the demographic transition — and with it the baby bust — reminds us that global population growth, sooner or later, will come to an end. The future of human societies does not point toward endless growth in resource demands and, hence, environmental degradation. How quickly that transition proceeds, and how well we are able to manage it, will ultimately determine what the future climate looks like and how many of the earth’s ecosystems can be preserved.

But recent events like Brexit and the rise of Donald Trump suggest that the same dynamics that hold much promise for environmental protection may also be undermining many of the institutions that we might count upon to manage the transition. Rising nationalism and a retreat from globalization could mean a turning away from the multilateral institutions that we will need to manage twenty-first-century ecological challenges. A world with fewer people will not necessarily be one with lower environmental impacts if that future brings less innovation or investment in better infrastructure. Slowing population growth will likely bring slowing economic growth. Without the rising tide that has floated all boats over the last several centuries, social, economic, and environmental conflicts may become more zero-sum and less amenable to shared investments in a common future.

Both Malthusian pessimism and Cornucopian enthusiasm are legacies of two centuries of rapid demographic growth. Robbins reminds us that virtually all of our analytical and ideological frameworks for thinking about the environment, the economy, and society were born amidst a period of unprecedented population growth, one that is already coming to an end. To manage the challenges and opportunities that are already manifesting as demographic growth comes to an end in many parts of the world, Robbins argues, we will need new approaches for a world in which depopulation and slower rates of economic growth may henceforth be the norm.

Responses welcome.

Rationalia: We Don’t Simply Disagree About Facts and Evidence

Rationalia is, on its face, a silly concept, and has already been aptly taken apart by Vox, National Review, Popular Science, and Uproxx, among others.

But it's worth briefly reflecting on why it's silly, because there are implications for ecomodernism.

The "modern" in "ecomodernism" comes from our framework's embrace of Enlightenment values of democracy and pluralism, yes, but also science, technology, and rationality. When it comes to issues like climate change, nuclear power, and biotech, proponents (myself included) have a tendency to lean heavily on the authority that Science and Rationality imbue into, say, supporting nuclear power. And it's true, yes, that the overwhelming weight of the evidence tells us that climate change is real, nuclear power is one of the very safest forms of energy production, and GMOs are harmless and beneficial in all sorts of ways.

Neil deGrasse Tyson, who proposed Rationalia, would seemingly love to end the discussion there. But the discussion doesn't end there. Technologies occupy interesting spaces in our minds reserved for, variously, the sublime, the magical, the alien, the sacred, the repulsive, and so on. Most people do not approach a technology or a policy based initially on "the weight of evidence." Nor should they! People are busy. One of the other nice things about modernity is that it gives people time to do other stuff, like work, read, volunteer, climb rocks, or sit around and do nothing. Most of the time, those exploits are almost certainly more worth people's efforts than carefully parsing the evidence over CRISPR (clustered regularly interspaced short palindromic repeats).

All this is by way of saying that part of the mission of ecomodernism is to create space in society where we can embrace, not merely accept, the role that certain technologies can play in saving nature. It's a difficult task, given the pre-baked notion among so many people that nature and technology are at odds. Rationality is essential, but it alone will not accomplish this mission. We’ll also need open deliberation that allows for a recognition of our values and collective goals, for humanity and the planet.

Here are a few things that rationality alone can't solve:


107 Nobel laureates wrote an open letter to Greenpeace in an attempt to persuade the international green group to change its position on GMOs. Joel Achenbach at the Washington Post has the original story, and here's "An Ecomodernist Manifesto" coauthor Mark Lynas with his own plea. Brad Plumer's coverage is not to be missed either.


The debate over how to "feed 10 billion" continues, and it's a lot more than just a thumbs-up/thumbs-down over GMOs. Feeding a still-growing population, and preferably doing so using as little land as possible, will require an abundance of technologies and policies, some old and some new. The Economist is the latest out with an in-depth look at the debate, and it takes an ultimately optimistic angle on this massive challenge.

Also check out this blog post by Charlie Arnot, who -- like Pam Ronald and Raoul Adamchak before him -- aims to "bring everyone to the table" to discuss GMOs and conventional agriculture "with engagement rather than aggression."


Saving nature will in many cases require simply leaving it alone. But with many social and environmental pressures acting at once, saving certain ecosystems will sometimes require active intervention and management. Such is the case with coral reefs, which are acutely sensitive to carbonic acid concentrations in the ocean and are thus declining around the world as oceans absorb anthropogenic carbon emissions. Samantha Lee at Grist reviews a recent study in Nature locating 15 thriving coral ecosystems and reflects on how we might learn from their success.


Finally I would be remiss if I didn't mention the dumb decision last week to close California's last nuclear power plant, Diablo Canyon. I visited Diablo Canyon two summers ago and, for me, it is the epitome of the technological sublime and a symbol of the ecomodernist project. 

Other nuclear plants around the country have closed under pressure from lower-than-expected natural gas prices, but not Diablo Canyon. As Third Way's Amber Robson and Breakthrough's Jessica Lovering explain here, this closure is much more the result of pressure from the Natural Resources Defense Council (NRDC) among other groups that have for years been actively trying to shut down California's largest source of zero-carbon power.  


After the Great Transformation

At some point over the last decade, the human population crossed a remarkable threshold. Today, over half of humanity lives in cities and towns, up from one-third in 1960 and only 3 percent in 1800. By 2050, the United Nations estimates, two-thirds of the global population will live in urban settings.

The shift from rural to urban represents far more than a change in settlement patterns. It brings with it profound changes in social, political, and economic organization: the urbanization of the planet has been largely inseparable from industrialization and the rise of market economies.

Writing at the close of World War II, the sociologist and economic historian Karl Polanyi called that shift “the Great Transformation.” For most people over the past two centuries, the Great Transformation has entailed moving from subsistence agriculture to off-farm employment; from economic relations based upon barter, tribute, and reciprocity to those structured around markets and wages; and from economies in which arable land, and the amount of labor that could be applied to it, were the sole determinants of wealth and economic growth to economies in which capital and technology have untethered human well-being from brute physical labor.

In this, the sixth issue of the Breakthrough Journal, we consider the Great Transformation, if not fully in retrospect, then at least from deep within it. We live today on an increasingly urban and industrialized planet. The Great Transformation has solved old problems and created new ones. And while the nature of the shift from premodern economies to what Polanyi called “market society” has long been clear, what comes after has yet to be written.


In “Taking Modernization Seriously,” New America Foundation cofounder and long-time Breakthrough Journal contributor Michael Lind channels Polanyi, reminding us that “the Industrial Revolution remains the fundamental fact of our time.” Like Polanyi, Lind rejects the idea that modern economic relations represent either the natural state of human affairs or a spontaneous evolution of human relations.

For two centuries, he argues, modernization and industrialization “have been carried out by developmental states, not on behalf of consumers or humanity as a whole, but to secure their own position in global struggles for relative wealth and military power”—a sentiment with which Polanyi surely would have agreed. “The primary units in the world economy,” Lind argues, “are not private actors—workers, consumers, investors, firms—but states.”

But while Lind refuses to naturalize modern economic relations, he also refuses to reduce modernity to little more than markets, capitalism, and the creation of self-interested individuals, as Polanyi would have it. At its core, “modernization is mechanization,” Lind argues. “What drives prosperity in the long run is not markets, but the substitution of human and animal labor by machinery or software, powered by energy sources other than human and animal muscle and biomass.”

The distinction is a critical one. With the end of the Cold War, contemporary debates about human progress and modernization have largely become proxies for debates about capitalism. Tied up in pitched, ideologically driven arguments about economic relations and the environment, critics of capitalism have often conflated basic economic development and capitalism, and then proceeded to deny, or at least ignore, the extraordinary and broadly shared advances in material welfare that most humans on the planet have experienced over the past two centuries. And where those benefits have become too widespread to be deniable, critics have suggested that they simply can’t be sustained, largely on ecological grounds.

The latter claim, that capitalism “harbors the seeds of its own ecological destruction,” is the subject of economist and Breakthrough Institute Senior Fellow Harry Saunders’s new essay. The idea, Saunders observes, “owes its provenance to a most unlikely duo of canonical economic thinkers,” Thomas Malthus and Karl Marx. Ecological economists like Herman Daly and Robert Costanza found common cause with Marxist scholars such as Paul Sweezy, Fred Magdoff, and John Foster, combining Marx’s insight that capitalism required continual economic growth with Malthus’ warning that human demand for food and resources would inevitably run up against resource constraints.

The resulting mashup, Saunders argues, “made Malthusian arguments accessible to elements of the global left that had historically rejected them” and struck a chord in the popular environmental imagination. In recent years, charges of extractivism and calls for degrowth have moved from the fringes of the environmental discourse toward the center, as a new brand of climate activist demands a fundamental shift in modern economic and political arrangements. Incrementalism simply will not do.

In “Does Capitalism Require Endless Growth?” Saunders marshals a raft of economic evidence, along with neoclassical economic theory, to challenge this narrative. Perhaps it is true that capitalism would collapse without endless economic growth. But if so, that very fragility would seem to obviate the other charge: an economy doomed to collapse when growth ends cannot also be doomed to grow forever into ecological limits. “The long-term challenge for capitalist economies,” Saunders observes, “is too little growth, not too much.”

But Saunders also observes that the headwinds facing growth in advanced developed economies are primarily due to saturating demand for material goods and services. When most people achieve a certain level of material consumption, growth slows and leisure increases. That need not inevitably lead to economic collapse, Saunders argues. Households continue to save for retirement even when returns to their investment fall to zero, for the simple reason that people want to have some money around to live on when they stop working. And producers continue to invest in new capital because while, on average, returns to capital may begin to approach zero, that doesn’t mean that they will always be zero.  There will still be opportunities for profit in a zero-growth economy.

“A capitalist economy,” Saunders argues, “is as likely as any other to see stable and declining demands on natural resources and ecological services. Indeed, with the right policies and institutions, capitalist economies are more likely to achieve high living standards and low environmental impacts than just about any other economic system.”


Marx and Malthus, Paul Robbins sagely observes in “After the Baby Bust,” both wrote their foundational works during an extended period of unprecedented population growth across the United Kingdom and Western Europe. “All the major works of classical and contemporary political economy were written during an era of rapid demographic growth,” Robbins, a geographer and director of the Nelson Institute for Environmental Studies at the University of Wisconsin, writes. Today, however, fertility rates across the planet are plummeting. Even though virtually all advanced developed economies and many emerging economies now have fertility rates below replacement, “our habits of thinking, in economics, ecology, and politics, have hardly changed.”

Arguments about population growth and scarcity have so dominated political and economic discourse that little thought has been given to a world in which both might be absent. But the absence of population growth and scarcity may bring its own challenges. “Can prosperity be decoupled from demographic growth in a way that is just, equitable, and good for the planet?” Robbins asks.

The answers aren’t easy. Falling birth rates, migration, and the resulting shifts in ethnic composition have sparked rising ethno-national tensions around the world. An aging population and the inversion of the traditional population pyramid bring daunting intergenerational equity challenges, most obviously the question of who will support and care for an aging population when the old outnumber the young. The emptying out of the countryside as rural populations decline can bring land abandonment and the return of secondary forest and other wild landscapes. But with fewer people and communities in the way, depopulation can also lead to greater resource exploitation. Even when it doesn’t, unmanaged lands do not always lead to greater biodiversity and better ecological outcomes in the Anthropocene.

Robust demographic and economic growth, Saunders and Robbins remind us, may be more an artifact of the early phases of the Great Transformation than a permanent condition of modernizing societies.

Scarcity, too, science journalist John Fleck observes, is not inevitable. Contrary to the notion that economic growth throughout the American West is driving ever-higher water use, water use in California has fallen 40 percent since 1980.  Water use in the Colorado River Basin similarly peaked in the late 1990s and has been declining ever since.

In “High-Tech Desert,” Fleck tells the remarkable story of the decoupling of water use from economic output and population growth in the American West. Efficiency improvements, together with smart planning, have increasingly meant that there is plenty of water to go around for both farms and cities.

Greater water efficiency doesn’t always lead to better environmental outcomes. Acreage for water-hogging crops like alfalfa and cotton has fallen, but much of the water savings has been redirected to higher-value, water-intensive crops such as almonds. And the impacts of a century of water use on fish and wildlife have been enormous. Restoring habitat for fish and wildlife that has been sacrificed for water development will require continuing improvements in water productivity and better collaboration among users.

But the long-term trend offers tremendous opportunities for environmental restoration. Across the American West, aquifers have stabilized or are refilling, because both farmers and cities are using water much more efficiently. “Contrary to the narratives of apocalyptic doom or a need for ever-growing supply,” Fleck writes, “these communities have demonstrated an adaptive capacity that has allowed more people and economic activity using less water.”

Unfortunately, the old western water ethic dies hard. Throughout the American West, water planners continue to plan for growing demand, even as water use has been in decline for decades. Apocalyptic narratives, and the tensions between water users that those narratives create, don’t help, Fleck concludes, making “environmental policies harder, because water users afraid of shortfalls are less likely to be willing to cooperate in reallocating supplies to environmental restoration.”


Prognosticators of one stripe or another, of course, have been predicting California’s demise since the state’s earliest days. A life so comfortable and pleasant surely couldn’t last. In this, the state’s reputation for hedonism and easy living belies the remarkable engineering and labor that has gone into making California a habitable and prosperous place. Those efforts have made California and its population more resilient to environmental change, not less.

But the expectation that humans will be punished for their success is evergreen and has deep religious roots. The Genesis story is frequently taken as a cautionary tale against hubris and efforts of all sorts to remake the world. Humankind’s original sin is to have tasted from the tree of knowledge and fallen out of harmony with nature.

In “Modern Pope,” Catholic theologian Sally Vance-Trembath argues that this is a misreading of scripture. The Genesis story, rather, represents “an explicit rejection of the notions of the natural world in the cultures adjacent to Israel and then later, Christianity,” Vance-Trembath argues. “The great insight of Judaism is that humans experience both harmony with the natural world and alienation,” she observes. “Unlike trilobites or dinosaurs, human beings can transform their environment for the sake of their own future existence. The Judeo-Christian tradition understands this as a blessing.”

Laudato Si, Pope Francis’s recent encyclical on the environment, has more than its share of problems, Vance-Trembath acknowledges. The Pope reduces environmental destruction to nothing more than human greed, failing to acknowledge that humans have also transformed the planet in ways that have made it a vastly better place for humans to live. “Self-destruction may now be a feature of our situation,” Vance-Trembath writes, “but the people who first domesticated animals and built dams to support farming and reserved seed that they gathered for planting in more fertile places in the next year’s growing season did not do so as acts of greed or self-destruction.”

Yet despite its flaws, Vance-Trembath urges us to read Laudato Si in the larger context of Catholic social teaching and not simply present-day, overheated environmental debates. Laudato Si can only be understood in the context of Pope Francis’s efforts to reform the Church and drag it, once and for all, into the modern world. “One homily, one interview, one new appointment, and one request for retirement or transfer at a time,” Vance-Trembath writes, Francis is “shedding the Western, Roman, and feudal framework that had provided the superstructure of the institution for more than a millennium.”

The confusions in Laudato Si, Vance-Trembath argues, are eminently correctable, for the simple reason that Francis’s approach, in contrast with that of his immediate predecessors, is inductive, finding Catholic principles and values in the world as it is, not demanding that the world remake itself to better comport with Catholic dogma. His approach is one that seeks dialogue and engagement with the world and can evolve with new information, as it must if it is to offer the Catholic community useful guidance as to how to reconcile human well-being with environmental preservation.

Making peace with the past, and our former selves, animates Michael Zimmerman’s “Love and Vinyl Chloride.” Zimmerman, a philosopher and early theorist of Deep Ecology, writes about his alienation from his father, a chemical engineer, and the ways in which it shaped his environmental identity as a young man.

Zimmerman’s search for authenticity and emotional connection drew him away from his father and toward environmentalism. His study of Heidegger and human consciousness drew him to Deep Ecology.  Zimmerman proposed, in an influential 1976 talk, that Heidegger be read to provide the philosophical underpinnings of Deep Ecology. But after several years of attempting to reconcile the two, Zimmerman was forced to conclude that Heidegger’s work actually deeply undermined Deep Ecology.

“Heidegger maintained that Darwinism and other efforts to interpret humans as clever animals have the opposite effect to the one that many environmentalists hope they will,” Zimmerman concludes. “If humans are no different from other animals, and if animals seek to maximize their reproductive fitness, then humans are perfectly justified in giving free rein to their own reproductive Will to Power, leading to total domination of the planet by the human animal.”

Deep ecologists, at Zimmerman’s instigation, had made Heidegger’s famous call to “Let Being Be” a rallying cry. But for Heidegger, letting being be was actually a profoundly anthropocentric act. It is only human consciousness that allows an appreciation for the rest of creation. “Humans are animals plus,” Zimmerman writes.  “What they add is the awareness through which beings can reveal themselves in ways that they cannot do so within the awareness allotted to nonhuman animals.”

Reconciling himself with modernity and the Anthropocene led Zimmerman to reconcile with his father. “My father and his cohort were determined to materially improve the world and mostly did so with the best of intentions,” Zimmerman writes. “Reconciling with my father forced me to reconcile my aversion to the industrial aspect of modernity with my support for its emancipatory aspect…. Having long looked down upon industrial production, I came to see what a remarkable expression of humankind’s desire to re-create the world it is.”


The Great Transformation is far from complete. As Lind notes, at least a billion people have not even begun the journey. For those trapped in deep agrarian poverty, modernization is no certainty. Changing fashions in international development programs have “undermined traditional development pathways,” he writes. “In the early years of the twenty-first century, many international development agencies promoted a kind of synthesis of Washington Consensus neoliberalism and Green sustainability.”

But poor nations would be better served to reject the latter-day wisdom of rich nations and instead do what those nations actually did to become prosperous. “Since the Industrial Revolution began, the successful communities have been those in which governments and industrial and financial enterprises have worked together to promote the community’s relative share of global wealth and power,” Lind argues. “No country in history has ever become rich thanks to the efforts of small-scale peddlers taking advantage of free markets to sell each other low-value-added items in the equivalents of flea markets and garage sales.”

The case for free trade, Lind reminds us, originated in nineteenth-century Britain, then the dominant global manufacturing power, and has always been made by nations with already mature manufacturing sectors. The notion that poor nations might somehow skip or otherwise avoid the basic process of mechanizing and industrializing their economies is similarly held mostly by those who already take modernity for granted.

Polanyi was right that the rise of urban, industrial societies, like the rise of agriculture before it, leads to the creation of new values and ultimately new humans. But those processes did not end with the initial stages of urbanization or the rise of capitalist economies. New values and identities continue to proliferate. “The only way forward, for humans and nature,” Zimmerman writes, “is to integrate our modern commitment to abundance and liberation with our postmodern consciousness of the nonhuman world and our responsibilities to it.” In this, as in so many other ways, Paul Robbins reminds us, the Great Transformation does not mark “the end of history, economics, or politics. Instead, it is the beginning.”

All Pain, No Gain

Last week, California utility Pacific Gas & Electric (PG&E) announced it intends to close the state’s last nuclear power plant, Diablo Canyon, starting in 2024. Diablo Canyon, a 2,200-megawatt plant just north of San Luis Obispo, generates 8–10% of California’s electricity every year with zero air pollution and zero carbon emissions. The closure is explained in a proposal developed by the utility along with environmental and labor groups, which identifies a number of factors, including state policies and local grid conditions, that present challenges to continued operation of Diablo Canyon. In particular, the proposal notes mounting renewables over-generation and California’s energy goals, which require increasing reliance on renewables -- at least 50 percent by 2030. Higher levels of distributed generation are also having an impact. The decision to retire Diablo Canyon is, therefore, largely an effort to make room for growing levels of solar energy and other variable resources that have been driven by pro-renewables policies.

One noteworthy element of the joint proposal is a promise to replace the electricity generated by Diablo Canyon with only zero-carbon sources. The theory of the case for the environmental organizations behind the plan is that closing Diablo Canyon will be a net gain for the environment by making more room on the grid for renewables. Amory Lovins, cofounder of the Rocky Mountain Institute, claims that the closure will reduce emissions and electricity costs in California, and more broadly asserts that the same might be true for other nuclear facilities across the country. Representatives of the Natural Resources Defense Council (NRDC), which played a pivotal role in establishing the agreement, make similar claims regarding the benefits to Californians and the climate.

The problem with these optimistic conclusions is that they are rooted in some very uncertain assumptions, and they ignore strong evidence of the negative climate impacts of shuttering nuclear plants. In addition, the decision to shutter Diablo is largely a result of local policies and grid conditions that are unique to California, a scenario that is not transferable to other regions. 

When Nuclear Plants Shut Down, Emissions Go Up

The existing U.S. nuclear fleet is the largest source of clean electricity in the United States, providing 62.9% of the nation’s zero-carbon electricity in 2014. When a large plant like Diablo Canyon closes, it leaves a clean energy gap that needs to be filled by other low-carbon sources, or else total carbon emissions will rise. Diablo Canyon’s two reactors account for 20% of annual power production in PG&E’s territory, leaving an enormous clean energy gap to fill.

Although the intent of the Joint Proposal put forward by PG&E, NRDC, et al. is that Diablo Canyon be replaced by emissions-free sources, the document does not include a sufficient plan for how this will be accomplished. Diablo Canyon generated 17,027 GWh of electricity in 2014. PG&E estimated that demand reduction could replace anywhere from 20% to 125% of Diablo Canyon’s generation. Though this wide range indicates a good deal of uncertainty, PG&E has estimated that it will only need to replace half of the plant’s generation, or about 9,000 GWh. Even if these figures pan out, the proposal only provides a plan to source 4,000 GWh from GHG-free resources before 2024 and states that the full solution will emerge over the 2024–2045 period. That leaves at least 5,000 GWh still in question — nearly one-third of the plant’s annual generation.
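For readers who want the gap arithmetic in one place, here is a minimal back-of-the-envelope sketch using only the figures quoted above:

```python
# Clean energy gap left by the Joint Proposal (all values in GWh/yr,
# taken from the figures cited in the text).
diablo_output = 17_027      # Diablo Canyon generation in 2014
replacement_need = 9_000    # PG&E estimate: roughly half of output
planned_ghg_free = 4_000    # GHG-free procurement planned before 2024

gap = replacement_need - planned_ghg_free
print(f"Unaccounted-for generation: {gap:,} GWh")            # 5,000 GWh
print(f"Share of annual output: {gap / diablo_output:.0%}")  # 29%
```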

There just isn’t enough information to conclude that Diablo Canyon won’t be replaced, at least in part, with natural gas or other fossil fuel sources. In fact, PIRA, a global energy research firm, projects that with the expected closure of the Diablo Canyon plant, Northern California gas power generation will rise approximately 34% from 2023 to 2026, reversing a multi-year decline.

Recent history backs up this case, confirming that when nuclear plants close, total CO2 emissions go up as a mix of renewable energy and carbon-emitting natural gas power plants ramps up to replace the lost generation. There are several examples of this in the United States, the most relevant being the closure of the San Onofre Nuclear Generating Station (SONGS) in 2012. After the SONGS closure, annual statewide emissions of CO2 from electricity production increased by 24% as the plant was replaced by a combination of renewable and natural gas-fired sources. Similarly, after Vermont Yankee closed in 2014, CO2 emissions in the New England power grid increased 5%, reversing five years of steady reductions. In Wisconsin, emissions jumped more than 15% following the shutdown of the Kewaunee nuclear facility.

It’s important to note that in many of these cases, there were expectations that these plants would be replaced with carbon-free electricity; NRDC made such a claim in the case of SONGS. But good intentions don’t always hold up against the complex realities of the electric grid. Evidence strongly suggests that closing down a nuclear plant increases fossil fuel use and undermines climate efforts. Compare this to the guaranteed zero-carbon electricity generated by plants like Diablo Canyon, and the whole venture looks like a risky gamble.

A Flawed Economic Argument

More troubling than the questionable climate case that some environmental groups are making for Diablo Canyon’s closure is their desire to extend it to the rest of America’s nuclear fleet. NRDC President Rhea Suh has said that closing Diablo Canyon should be “a model for fighting climate change across the country.” Lovins has taken a similar stance, coupling the flawed climate argument with an equally questionable economic rationale to suggest closing additional nuclear plants.

To support his point, Lovins focuses on some of the highest-cost nuclear reactors. He references the average levelized cost of electricity (LCOE) of the most expensive quartile of the U.S. fleet at $62/MWh. There’s nothing objectionable about this so far. But it becomes misleading when he compares this $62/MWh LCOE with wind and solar using an entirely different metric — the average prices of power purchase agreements (PPAs) for wind and solar in the U.S., which he quotes at $30 and $50/MWh, respectively. This makes the economic advantage of renewables look like a no-brainer. But by using PPAs to demonstrate the advantage of wind and solar, Lovins fails to account for the significant federal and state subsidies these technologies receive. If we add the $23/MWh that wind receives from the federal production tax credit to the $30 average PPA used by Lovins, the advantage quickly erodes. And that doesn’t include state-level subsidies.

Lovins defends his use of the subsidized PPA, noting that the unsubsidized price for wind in Morocco (the cheapest in the world) has come in $6/MWh lower than the average U.S. reactor operating cost of $36/MWh. But again, this is an unfair comparison, as it does not reflect the regional factors that shape the cost of energy. For example, leasing land for a wind farm in California costs far more than in Morocco, and the same holds true for labor. The results of this comparison are not surprising — nor are they in any way relevant to the discussion.

Another flaw in Lovins’ economic argument for replacing nuclear with renewables is his choice to ignore the cost of the infrastructure, such as storage and transmission, required to back up each MW of intermittent wind and solar. Energy storage is necessary to balance intermittent wind and solar, and additional transmission will likely be needed to transport the electricity provided by new renewable resources. One rough estimate by the Ontario Society of Professional Engineers suggests that installing storage would quadruple the cost of wind and double the cost of solar.
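To see how quickly the apparent advantage erodes, here is a toy, hedged sketch that puts the figures above on a single per-MWh footing. Stacking the OSPE storage multiplier on top of the subsidy adjustment is our simplification for illustration, not a calculation from either source:

```python
# Illustrative equal-footing comparison ($/MWh), using only the figures
# quoted in the text. Applying the storage multiplier to the
# subsidy-adjusted price is a simplifying assumption.
nuclear_lcoe = 62             # most expensive quartile of US reactors
wind_ppa, solar_ppa = 30, 50  # average US PPA prices cited by Lovins
federal_ptc = 23              # federal production tax credit for wind

wind_unsubsidized = wind_ppa + federal_ptc  # 53, before state subsidies
wind_firmed = wind_unsubsidized * 4         # OSPE: storage quadruples wind cost
solar_firmed = solar_ppa * 2                # OSPE: storage doubles solar cost

print(f"Nuclear ${nuclear_lcoe} | wind ~${wind_firmed} | solar ~${solar_firmed} per MWh")
```

Even treated as rough upper bounds, the numbers make the point: the comparison looks very different once subsidies and firming costs are on the table.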

For a more accurate cost comparison, equal metrics should be used for all sources, and the full costs of the system should be considered. Recent work by MIT researchers, which considered aggressive deployment of grid-scale storage, showed that a balanced portfolio of nuclear and renewables is the lowest-cost option for achieving deep decarbonization of the power grid.

All Pain, No Gain

The future is uncertain. The cost of renewables and storage could drop significantly by 2024, increasing the share of Diablo Canyon’s generation that could feasibly be replaced by wind and solar. But even if every last electron of Diablo Canyon’s power is replaced by some combination of emissions-free sources, this is hardly a victory. Preliminary estimates by Bloomberg and the Breakthrough Institute suggest that replacing Diablo Canyon with wind and solar could cost as much as $15 billion. Then there’s the time and political capital that will be spent to achieve these goals. That’s an enormous amount of resources to devote to making zero progress toward emissions goals.

If decarbonization is the ultimate goal, policies should promote all zero-carbon technologies, instead of creating a situation in which a favored set of zero-carbon technologies (renewables) simply cannibalizes a less favored one (nuclear). PG&E has even stated that, were it not for the constraints of California’s renewable portfolio standard (RPS), it could have achieved a low-carbon generation mix that included nuclear for less money.

To a great extent, the challenges Diablo Canyon faces are the result of state policies that prioritize deployment of renewables over actual cuts to emissions. There is an important place for renewables, and their growth should be supported — but not at the expense of other low-carbon sources. The planned closure of Diablo Canyon should not be seen as a model for shuttering other nuclear facilities. It should be seen for what it is: an unforced error, the kind which we can no longer afford at this stage in the game.

This article is cross-posted at Medium.

Amber Robson is a policy advisor for the Clean Energy Program at Third Way, a centrist think tank. Jessica Lovering is the director of the energy program at The Breakthrough Institute, an environmental think tank.

Michael E. Zimmerman


Love and Vinyl Chloride

My father’s child-rearing methods were nineteenth century. Discipline came from the back of a belt, and compliments were few and far between. He rarely showed his feelings and spoke of them even less.

When I finally had enough fuzz on my face, I asked my father to show me how to shave. As a chemical engineer, he approached the issue methodically. It was strictly a technical matter, one that could be mastered with practice, not a rite of passage.

A conservative Republican, he worked in the chemical and plastics industry for B.F. Goodrich. For me, and for my mother, my father’s emotional distance was inseparable from his politics and profession.

My mother was intelligent, complex, and sensitive. Like so many women of her generation, she abandoned her personal ambitions to have a family. But she had flirted with communism in college and never abandoned her left-wing sensibilities. The problem with corporations was not only that they were engines of capitalism; they also required her husband, and thus her family, to uproot themselves repeatedly. Between 1944 and 1984, the family moved eight times.

My own relationship with my father was no less complicated. With eight children at home, there wasn’t much time for any one of us. All the more so as my father’s growing responsibilities took him from the industrial laboratory into sales and management, which required frequent business travel. I spent a lot of time outdoors, where I hoped to find emotional solace to compensate for what was missing at home. The love for nature that led me to environmentalism has its roots here. But the outside world could never substitute for satisfying emotional bonds with other people.

The political is, for all of us, always personal. My alienation from my father and my search for a life and a self-identity that felt authentic and meaningful would shape my radicalism and rejection of modernity as a young philosopher in the 1970s, when I helped theorize deep ecology. My reconciliation with my father in the 1980s was inseparable from a coming to terms with modernity and the unavoidably anthropocentric world that we all inhabit. Doing so did not require me to abandon my commitment to the environment. But it did require me to appreciate my father, with all his limitations, for his technical mastery, his rational mind, and for the modern world that he helped to build. Without his labors, and those of so many of his peers, my environmental consciousness could not have existed. In this, there is a lesson for environmentalists and modernists alike.


Born in Louisville, Kentucky, in 1919, Harry M. Zimmerman came of age during the Great Depression. He played French horn for the Louisville symphony orchestra as a high school student and won a full scholarship to the University of Kentucky, where he played for the marching band and studied chemical engineering. Upon graduating in 1940, he faced the enviable choice of studying music at Juilliard or chemical engineering at MIT. He chose MIT. Chemical engineering offered a more secure future for a child of the Depression.

A heart murmur had made him ineligible for military service, but in 1942 my father left MIT without completing his dissertation to contribute to the war effort. He went to work for the legendary Waldo Semon at B.F. Goodrich, joining a company at the forefront of rubber and plastics research.

In the 1920s, Semon had found a way to “plasticize” (soften) an otherwise brittle and thus commercially unviable synthetic compound known as polyvinyl chloride (PVC). Starting in the 1930s, PVC was used for a growing number of commercial products, including fabric coatings, wire insulation, and many other applications that proved important for the American military during World War II.

Semon’s contributions to the war effort did not end there, however. In 1940, Goodrich introduced a new form of synthetic rubber that Semon had developed known as Ameripol, which was higher quality and could be produced more efficiently than earlier forms of synthetic rubber. Before then, cars and trucks could only run on tires made from natural rubber, which was grown on vast plantations in Southeast Asia and would become unavailable to the US military if Japan invaded rubber-producing countries.

In June 1940—the same month that Goodrich began selling Ameripol—President Roosevelt created the Rubber Reserve Company (RRC) to conserve and to stockpile one million tons of rubber. Weeks after the attack on Pearl Harbor in 1941, Goodrich, Standard Oil of New Jersey, Firestone, Goodyear, and U.S. Rubber Company signed an extraordinary patent- and information-sharing agreement under the auspices of the RRC.1

During 1942, the rubber consortium, which my father went to work for when he joined Goodrich, ironed out conflicting approaches to synthetic rubber production, opening the way for mass production, which grew from a few thousand tons in early 1942 to about 800,000 tons per year by 1945. The collaboration between the federal government, private industry, and universities was unprecedented. Much has been made of the extraordinary effort to develop the atomic bomb through the Manhattan Project; but the little-known “Akron” project, such as it was (Akron was where the synthetic rubber effort was based), was arguably far more important to the war effort. Had the US not solved the problem posed by the lack of rubber, the war could not long have been sustained.

In the years after the war, my father earned three patents for his PVC research and helped pioneer a range of new uses for the compound, including a new kind of PVC that allowed for the manufacture of light, unbreakable plastic bottles, interior wall siding for the new Boeing 707, and the plastic used in credit cards. Upon retiring in 1985, he was second in command at the Georgia-Pacific plant in Port Allen, Louisiana, which at the time produced more PVC annually than any other plant in the world.


Coming of age in the 1950s and early ’60s, I was never far from the double-edged nature of modern industrial society. I remember being thrilled at the grainy black-and-white TV images of John Glenn becoming the first American to orbit the Earth (Glenn was a graduate of Ohio’s Muskingum College, and classes at Newcomerstown High School were dismissed that day so we could watch the launch). A few months later, the Cuban Missile Crisis reminded us that the same scientific and technical achievements could also destroy us.

That summer, The New Yorker serialized Rachel Carson’s Silent Spring. I can still remember my mother’s alarm at Carson’s grim warning. When my friends and I weren’t cooling off in the sweltering Midwestern summers by running behind trucks that dispensed dense white clouds of DDT for mosquito control, we spent countless hours playing on glacial cliffs and seemingly endless fields on the edge of our suburban town.

Around the same time, a family trip took us through Pittsburgh, then a hellish scene of lurid, stinking clouds of smoke spewing from the city’s enormous steel mills. People born in the 1970s have a kind of environmental amnesia about how bad pollution was at that time, but a trip to parts of today’s China will reveal what air and water pollution used to be like in the United States.

In college, I began to regard my father’s work with suspicion. Dow Chemical was manufacturing napalm to douse the North Vietnamese while promoting “Better Living Through Chemistry” in its advertisements at home. PVC, too, was in the news, having been linked to a rare liver cancer contracted by workers in PVC production. Between 1967 and 1973, four men working with the vinyl chloride monomer in Goodrich’s Louisville plant were afflicted with a rare form of liver cancer known as angiosarcoma.

I majored in philosophy in part because I increasingly associated the sciences and engineering with the control-of-nature mentality that was responsible not only for material abundance, but also for modern weaponry and industrial pollution. By the year 2000, I was quite certain, human activity would have turned the surface of planet Earth into a smoking ruin.

But my interest in philosophy and my environmentalism were about something else as well. Growing up Catholic, I was continually reminded that there is a transcendent domain beyond material reality. While I left my childhood faith behind when I went to college, my interest in transcendence, in the search for depth and meaning, remained.

The search for authenticity, an idea central to the early works of Heidegger and Sartre, drew me to philosophy and came to constitute for me the antithesis of the materialism, consumerism, and environmental destruction that I associated with modern industrial society. How could one be true to one’s humanity while at the same time heedlessly destroying nature? Reconnecting with our authentic selves and healing our split with nature became, for me, one and the same project.

In 1969 I enrolled in a philosophy PhD program at Tulane University, where I studied European thinkers such as Descartes, Kant, Hegel, Marx, Nietzsche, and Heidegger. On the side, I read Carlos Castaneda and dabbled in meditation, hatha yoga, and Asian religions, eventually finding my way back to the New Testament, in search of an authentic life that could reconcile the transcendent and immanent domains of human existence.

Ultimately, I focused my work on Heidegger, who maintained that modernity discloses everything—including human beings—as nothing but raw material or resources.2 My dissertation was titled The Concept of the Self in Heidegger’s Being and Time, and my first book was called Eclipse of the Self: The Development of Heidegger’s Concept of Authenticity (1981). Both documents examined Heidegger’s approach to how humans can exist in a way that is “authentic.”       

My ideas about authenticity, modernity, the environment, and progress were deeply influenced by a year I spent as a Fulbright Fellow in Brussels in the early 1970s. While researching and writing my dissertation, I read the two-volume Nietzsche lectures, which Heidegger had delivered at Freiburg University in the late 1930s and early 1940s.

Deconstructing the prevailing idea of political, scientific, and economic progress, Heidegger wrote that Nietzsche’s doctrine of the Will to Power was the culmination of the West’s long decline into nihilism. Modern science’s objectification of nature was allied with the drive to gain total control over nature, including the human animal.

Far from being “progressive,” techno-industrial modernity was the last stage in the decline from a noble beginning. Western humankind had reduced itself to the status of a clever animal bent on promoting its own power. For Heidegger, Nietzsche’s discourse about “the death of God” concerned not only the collapse of Platonic-Christian values, but also the erasure of any sense of transcendence, apart from the material “progress” enabled by growing control over the planet. In reducing all of nature to raw materials, we had reduced ourselves to clever animals, objects manipulating objects, or One-Dimensional Man, as Heidegger’s student Herbert Marcuse argued in a book popular with so many of my New Left peers.


In 1976, I presented a paper at an American Philosophical Association meeting, proposing that Heidegger’s call to “let being be” could provide the philosophical underpinnings for the then-emerging environmental ethic that nature, like humans, had inherent value. My paper drew the attention of George Sessions, who would, in 1984, publish the “Deep Ecology Platform” with the Norwegian philosopher Arne Næss, who had coined the phrase in the early 1970s. I proposed to Sessions that we read Heidegger as a forerunner of Deep Ecology, and Sessions and I spent many years exchanging ideas about how best to formulate Deep Ecology. Heidegger’s notion that we should “let beings be” would become a slogan for Deep Ecology in the 1980s, and his rejection of modernity and enlightenment notions of progress became a touchstone for a significant thread of post-60s environmentalism more broadly.

Yet even as I worked with Sessions to theorize Deep Ecology, I came to see the ways in which Heidegger created as many problems for Deep Ecology as he solved. Most Deep Ecologists embraced what would soon be called “biocentric egalitarianism,” according to which no member of the biosphere has greater value or standing than anything else. A termite is as good as a deer or a human being. The making of hierarchical distinctions among life forms was arrogant, anthropocentric, and subjective.

Putting aside the practical matter that flattening creation in this manner provides no guidance as to what one should do when preserving one organism requires harming another, the larger problem, from the Heideggerian perspective, is the failure to acknowledge that there is something special about human existence.

Only through human existence, Heidegger believed, can beings “be.” Without humankind, life on Earth could not reveal itself as being there and as having already been there. Understood as an animal organism, of course, humankind is kin with all other life and dependent on the biosphere. But humans are animals plus. What they add is the awareness through which beings can reveal themselves in ways that they cannot do so within the awareness allotted to nonhuman animals.

Heidegger maintained that Darwinism and other efforts to interpret humans as clever animals have the opposite effect to the one that many environmentalists hope they will. If humans are no different from other animals, and if animals seek to maximize their reproductive fitness, then humans are perfectly justified in giving free rein to their own reproductive Will to Power, leading to total domination of the planet by the human animal.

Why, then, should we expect humans to curb their Will to Power? The other animals don’t. White-tailed deer would be happy to cover the planet with their own kind, for example. If one replies that humans have a moral conscience and thus are obligated to show proper concern for other species, then this assertion once again singles out the human species as somehow “special.”

For Heidegger, “being” can only occur within the “clearing” constituted by human consciousness. In the name of ecocentrism, deep ecologists made the same error as the modernists they rejected, reducing humans to being mere animals in the same way that moderns reduced nature to raw materials. To the contrary, Heidegger maintained that the real desolation wreaked by techno-industrial nihilism is its effect on human existence. Our openness to being has become so constricted that entities—including humans—appear in highly limited, one-dimensional ways.

My break with Deep Ecology became clearer to me after reading Ken Wilber’s book Up From Eden, which helped me to better appreciate the benefits of modernity, as well as its limitations.3 Wilber, like Heidegger, recognized the ways in which moderns had, in dissociating themselves from traditional beliefs and religions, also too often dissociated themselves from the sacred and transcendent in the human experience. And like Heidegger, he rejected the ontological leveling called for by biocentric egalitarians.

Instead, following Heidegger, he retained a special place for humans in the cosmos. Human consciousness is necessary to tell the history of the universe, and human specialness is necessary to see the specialness in nonhuman nature. Greens, Wilber argued, rightly affirm the intrinsic value of the nonhuman world in a way that modernity failed to do. But in yearning for premodern, nonindustrial lifeways, they ignore the brief human lifespans and often-oppressive social practices that characterized such eras, and indeed the knowledge, science, and freedom from scarcity that allows contemporary environmentalists to appreciate nature in the ways that we do.


One summer day in 1984 I called my father and asked if he would give me a tour of the Georgia-Pacific PVC plant where he worked in Port Allen, Louisiana. Rebelling against one’s parents and their world may be a necessary step in growing up and creating one’s own identity, but creating a workable world of one’s own often involves integrating what was valuable about the previous generation, even while recognizing its shortcomings. If I were ever going to stand up for what is right about modernity, I was also going to have to embrace my father and his worldview.

After he gave me a tour of the plant, we sat down and talked. To my surprise, I discovered that he was in his own way an environmentalist! He complained about top-down EPA regulations that required his plants and others along the Mississippi River to dispose of “toxic wastes,” which were often valuable chemicals. At the time, companies were not allowed to make the more efficient move of trading or selling those chemicals. Instead, they disposed of them by injecting them deep beneath the earth’s surface. He argued that we have far too little knowledge of geological processes to trust that such toxins will not migrate, perhaps into aquifers used for drinking water and agriculture.

He was aware of the dangers involved in producing PVC. The deaths of the Louisville workers were a wake-up call for those involved in PVC production, and he had worked hard to decrease the dangers to his workers. He didn’t dismiss concerns about PVCs’ possible effects on consumers and the natural environment, but he strongly believed that the social and economic value of PVCs far outweighed the risks posed by them. Trained as an engineer, he could comprehend and assess the implications of technical claims and statistical findings in ways that most people cannot.

There are, of course, still those who argue that PVC production results in unacceptable environmental and public health costs. But however one weighs the costs and benefits, one thing of which I am certain is that my father would never have countenanced direct human harm arising from his work. Indeed, by several accounts, his professional career was shortened by his insistence on reporting things as he saw them, not as others may have wanted to see them.

In reconciling with my father, I was forced to differentiate between his human limitations, on the one hand, and the negative side of modernity, on the other. My father and his cohort were determined to materially improve the world and mostly did so with the best of intentions. To be sure, they possessed plenty of the usual personal failings and blind spots. But then, so too did my cohort. We baby boomers could aspire to be “different” only because of the economic foundation provided by our parents. In our own way, we were as smug as our parents. I am reminded of Emerson’s remark that a “self-reliant” young man is one who is “sure of his dinner.”

My early Green sensibilities, tied up as they were with my personal search for authenticity and transcendence, imagined the world and human societies as being in a sort of zero-sum conflict between authenticity and the inherent value of nature on one side and materialism and one-dimensional humanity on the other. These ideas created profound conflict with my father, who truly understood the hardships imposed by material deprivation and had dedicated his life to creating material abundance, even if that brought consequences for the natural environment.

Reconciling with my father forced me to reconcile my aversion to the industrial aspect of modernity with my support for its emancipatory aspect, including liberation movements that had proliferated in the 1960s and 1970s and provided the template for my ecological radicalism. And I came to better appreciate how “sweet” it is for engineers and scientists to solve technical problems. Having long looked down upon industrial production, I came to see what a remarkable expression of humankind’s desire to re-create the world it is, both as an end in itself and as a way of enhancing material well-being.

The modern environmental movement has introduced important new values—above all, respect for the intrinsic value of nonhuman nature—that were not found in earlier expressions of modernist thought. Greens deserve credit for bringing the spiritual and transcendent value of nature back into the clearing that is human consciousness.

But I cannot abide the “biocentric egalitarianism” favored by many deeper Greens. It reduces both humanity and nature to one-dimensionality. There is something strange, wonderful, dangerous, frightening, and unique about human beings. Our specialness and our responsibilities to the nonhuman world cannot be separated.

The eternal tension between our capabilities for creation and for destruction, our love of nature and our sometimes arrogant disregard for it, is fundamental to the human condition and to human consciousness. Modernization brings both greater appreciation and love for nature and greater capabilities to destroy it. But there is no going back to a premodern way of life or consciousness. The only way forward, for humans and nature, is to integrate our modern commitment to abundance and liberation with our postmodern consciousness of the nonhuman world and our responsibilities to it.

Modern Pope

If you want to make sense of the often coded and conflicting language of Laudato Si, Pope Francis’s recent encyclical on the environment, the place to start is not to compare it with the latest Intergovernmental Panel on Climate Change report or the United Nations Convention on Biodiversity, but rather to understand it in the context of the tradition known as Catholic Social Teaching.

For Laudato Si, the critically important preceding texts are Pacem in Terris (1963), Gaudium et spes (1965), and Populorum Progressio (1967). Pacem in Terris (Peace on Earth), written by Pope John XXIII, is, like Laudato Si, an encyclical; it was the first papal text addressed to “all people of good will” in addition to the Catholic community.

Gaudium et spes (The Pastoral Constitution on the Church in the Modern World) emerged during the Second Vatican Council in 1965 and is a revolutionary document, marking a fundamental shift in methodology for magisterial texts, abjuring fundamental truth claims and “natural law,” and establishing that the purpose of “the Church in the modern world” is to “render service” to the human community.

Pope Paul VI’s Populorum Progressio (On the Development of Peoples) (1967) is a masterpiece of reading the actual human global situation with an eye to how the Catholic Church might promote human development in a manner that uplifted human dignity and advanced the common good.

Like its predecessors, Laudato Si has internal contradictions and, in some cases, outright errors. But the particulars of the Pope’s environmental vision are not where the significance of the document lies. Rather, it can only be understood in the context of Francis’s broader effort to drag the Church, once and for all, out of its feudal traditions, authoritarian hierarchy, and hostility toward the modern world and into dialogue with the broader human community. Both the importance of the document and its failings and limitations can only be properly appreciated in that context.


During the first thousand years in the life of the Church, the role of the pope was not that of a religious emperor, as it became in the second millennium. The papacy began as a unifying function to create stability among the bishops who were the leaders of local communities. The “pope” was first and foremost the bishop of the most important community, the Roman community. Only secondarily would he function as the bridge-builder, the pontiff. That function was only used when it was necessary because of some specific challenge to unity. Otherwise the pope was just one of the bishops. The office was functional.

The medieval period brought with it the monarchical papacy. Encyclicals were historically addressed to the worldwide bishops in order to clarify or stabilize the teaching of a very centralized but global bureaucracy and emphasized a highly deductive method. The form of the teaching fit the form of the institution.

To many of these authors, the local bishops were not teachers in their own right but rather clerks of the central bureaucracy who were in charge of “the brand,” so to speak. When McDonald’s sends a directive to its local franchises, it is not looking for “dialogue.” The purpose of most texts prior to Pacem in Terris was to outline the principles that the Roman Church expected the local church to implement.

Popes John XXIII and Paul VI were actively trying to shed those regal features and to reinvigorate the original purpose of the office: helping unity to endure within a global institution in a pluralistic, and religiously diverse, world. By contrast, Popes John Paul II and Benedict XVI were quite comfortable with and, indeed, deeply committed to the categories of the previous era. In many regards, the papacies of John Paul and Benedict were a reaction to the opening of the Church, an attempt to reestablish the old hierarchies and the old traditions, and they very nearly destroyed the Church, saddling it with corruption, scandal, and diminishing numbers in many parts of the world. 

Pope Francis is an heir to Pope John XXIII, who broadened his audience to “all people of good will.” By directly engaging the entire human community, Pope John began the process of shedding the Western, Roman, and feudal framework that had provided the superstructure of the institution for more than a millennium. One homily, one interview, one new appointment, and one request for retirement or transfer at a time, Pope Francis continues that process and discloses that he is both listening and learning as well as making decisions and exercising executive judgment.

His is the style and manner of dialogue. The audience is no longer primarily the bishops as leaders of local churches, but rather all members of the Church as well as the entire human community. Robust commitment to dialogue rather than pronouncement has been evident from the very first moments of his pontificate.


When it comes to being the pope, one’s predecessors are more robustly present than, say, Washington and Lincoln are to any US President. In the Vatican, a new broom rarely sweeps clean; in the exercise of papacy, the stances of previous office holders are actively engaged and give shape to the current pontificate.

As a result, Catholic teaching texts are very often internally inconsistent. This is not because the popes have been weak thinkers but is instead an instantiation of the Catholic desire to include previously described concepts and ideas about the chosen topic, even ideas that the present author intends to supersede. This can at times look like doublespeak.

Take, for example, the Church’s teaching on religious freedom. Dignitatis humanae (The Declaration on Religious Freedom) reversed centuries of official teaching in 1965. Prior to Dignitatis humanae, official teaching held that only Catholics should be protected in their religious practice because they were the only ones whose beliefs were “true.” Other Christian communities and other religious traditions were “in error” about religion, so they had no rights.

But note how Dignitatis humanae does so: “Although, through the vicissitudes of human history, there have at times appeared patterns of behavior which were not in keeping with the spirit of the Gospel and were even opposed to it, it has always remained the teaching of the church that no one is to be coerced into believing.”

The text rejects the claim that Church teachings ever encouraged religious intolerance, even as it acknowledges that for many centuries, it was the doctrine and practice of the Church to coerce belief. This is a common pattern in Catholic teaching: find the thread of Gospel, even in doctrines that seem to undermine it, and pull the Gospel forward, leaving the desiccated or worn-out concept behind.

Old paradigms die hard. Both Pope Paul and Pope Francis display unreflective commitment to the previous paradigm in some of their work. Pope Paul is remembered as the author of the birth control document in which “artificial” birth control is condemned. Paul was willing to be creative with regard to social issues but not so open when it came to personal, sexual issues. In Pope Francis’s most recent Apostolic Exhortation on family life, Joy of Love, he describes family life as an “inexhaustible mystery” on the one hand, and then goes on to reaffirm the bankrupt notion of “the anthropological basis of the family,” on the other. (Read: body parts determine married life and parenthood.) So goes the internal tension that occurs in the middle of a paradigm shift.

In Laudato Si, Pope Francis is, as with so much else, groping his way, inductively, toward a new understanding of how the Church will engage environmental issues and concerns. On the one hand, his strong affirmation of the severe threats to the environment in an encyclical, exercising the most robust form of teaching that a pope can use, is very positive. On the other hand, there are many things in Laudato Si that are simply wrong. 


In Laudato Si, Francis suggests that human selfishness is the source of a large portion of human suffering. This is a distorted description of reality, of the Judeo-Christian teaching with regard to humanity’s place in the created world, and of the foundational Judeo-Christian theological anthropology, which holds that the human person is fundamentally good. That tradition says further that human persons remain so when they attend to those capacities that build right relationships with God, the created world, and with all human persons.

Tellingly, in these sections, Francis shifts back to the deductive style of his predecessors, citing Benedict explicitly on the parallel between crimes against the natural environment and crimes against “the social environment.” Benedict connects these crimes to the rise of moral relativism and human hubris, a holism that Francis largely affirms. “Both are ultimately due to the same evil,” Benedict claims, “the notion that there are not indisputable truths to guide our lives, and hence human freedom is limitless.”1 Francis continues this emphasis on human moral failing as a foundational source of climate change, describing it as a crime.

The analysis oversimplifies environmental problems in ways that don’t help us navigate them more effectively. It also locates the problem (and thus any solution) primarily in personal moral choices rather than in the complex development of human society and culture.1

While there is no doubt that criminal actions have occurred at various times, the long history of environmental degradation has more often been an unintended consequence of the human impulse to alter the environment in service of improving the human condition, acts which are a fundamental good in the foundational Christian narrative. While self-destruction may now be a feature of our situation, the people who first domesticated animals, built dams to support farming, and reserved seed that they gathered for planting in more fertile places in the next year’s growing season did not do so as acts of greed or self-destruction. They did so as an expression of their human capacity to analyze their context and to make informed decisions about it.

In this regard, Laudato Si in places falls prey to a kind of remnant theological anthropology, which over-physicalizes the negative features of the human person and over-spiritualizes the positive ones. In so doing, it associates human technology and modernization with our over-physicalized, selfish human “animal” nature rather than placing modernity and technological activities on the same spectrum as all human acts.

While observing that technology “sometimes solves one problem only to create others,” Francis too often fails to acknowledge the magnitude of the problems that technology has solved. Innovations in modern medicine, hygiene, food production, and safety have mitigated immeasurable amounts of human suffering.

Those technologies have been developed at the behest not only of businesses seeking profit, as Francis suggests, but also of governments, scientists, and humanitarians in the service of reducing human suffering, improving public health, providing economic security, and, yes, preserving the environment.

Francis further suggests that modern consumer societies, lacking proper respect for nature, propose to substitute “an irreplaceable and irretrievable beauty with something which we have created ourselves.” In so doing, Francis fails to recognize the ways in which contemporary environmentalism and appreciation for the natural world are in many ways products of material abundance and rising living standards.

He also fails to apprehend, in arguing that human intervention has diminished the natural world, how much of what we now think of as natural is in fact an artifact of earlier human interventions, a fact now well established by natural scientists. One wonders, moreover, what we are then to make of the innumerable historical and artistic treasures protected within the Vatican museums. Do these human creations too make “our earth less rich and beautiful”?

The Church’s roots in feudal, agrarian societies also continue to exert a powerful hold, as evidenced by Francis’s complicated view of cities. Francis describes cities as “unruly,” chaotic, and polluted, and as “huge, inefficient structures, excessively wasteful of energy and water.”

As the first pope from the Southern Hemisphere, Francis is no doubt familiar with the dehumanizing features of many cities in the global south. But he fails to recognize the ways in which cities are also liberatory. And he is simply wrong about the inefficiency of cities, which use energy, water, and natural resources in a reliably more efficient manner than agrarian societies.


Beyond the specific claims about environmental destruction and its solutions, however, there are important insights in Laudato Si which suggest that the ultimate evolution of Catholic social teaching on the environment may be quite promising. Francis explicitly recognizes the benefits of science, technology, and modernity. “We are the beneficiaries of two centuries of enormous waves of change,” he writes in Laudato Si. “It is right to rejoice in these advances and to be excited by the immense possibilities which they continue to open up before us, for science and technology are wonderful products of a God-given human creativity.”

Francis affirms the central Judeo-Christian insight that the human and the divine are united in each individual human person. The human activity of knowing presumes both growth in knowledge and intense attention to experience. And the place where all this happens, the Creation (to use biblical language), is a gift both for its own sake and for our human flourishing, since we are part of the Creation.

Laudato Si rejects the notion that the Genesis account grants humankind “dominion over the earth,” and acknowledges this as a misreading of Genesis that “has encouraged the unbridled exploitation of nature by painting him [sic] as domineering and destructive by nature.” Instead, Francis reads the biblical texts in their context, recognizing that they tell us to “till and keep the garden of the world (cf. Genesis 2:15).” “Tilling” refers to cultivating, plowing, or working, while “keeping” means caring, protecting, overseeing, and preserving. This implies a relationship of mutual responsibility between human beings and nature. Francis contrasts the misinterpreted language of domination with the language that the Jews intended. They were quite trenchantly repudiating adjacent cultural understandings of the relationship between human beings and the earth. “It is good for humanity and the world at large,” Francis writes, “when we believers better recognize the ecological commitments which stem from our convictions.”1

The great insight of Judaism’s Creation Story is that humans experience both harmony with the natural world and alienation from it. Plants and animals may be wholly at the mercy of natural forces and cycles, but we human persons can read and interpret our own experience and have the capacity to shape, to change, to respond to our own existence. Unlike trilobites or dinosaurs, human beings can transform their environment for the sake of their own future existence. The Judeo-Christian tradition understands this as a blessing. Humans experience God both in the gift that is the natural world and in their own capacity to enjoy and care for it. God asks us to care for the world the way God cares for both the world and us.

In this regard, the Judeo-Christian insight about our place within nature is an explicit rejection of the notions of the natural world in the cultures adjacent to Israel and then later, Christianity. Laudato Si places great weight on “The Story of the Beginnings” (Genesis) in the Hebrew Bible. That story rejects pagan religions’ view of “the gods” as either benign or malicious depending on their capricious character. Instead, that story posits an all-good Creator who is committed to the human community in history and who created the natural world as a gift to that community. Both the natural world and the human community are good, not neutral or negative; both are worthy of love and care for their own sake.

There are Judeo-Christian interpretations that do imagine that, in the Fall of the Genesis story, humans fell out of harmony with nature. But these notions of returning to harmony with nature, or their obverse, explanations of the “environmental crisis” as primarily a result of human hubris and sin, have more in common with the Western Rousseauian tradition than with the Judeo-Christian narrative and the truth claims that flow from it.

In the Judeo-Christian tradition, from Genesis onward, human self-consciousness is, as ever, double edged, opening the space for technological development and artistic expression and also egoism and self-indulgence, for tribalism and alienation but also for community and intimacy. Humans are both part of nature and alienated from it, immanent and transcendent, material and scientific as well as tactile and spiritual.


The internal strife and struggle within the Church, beginning with the Second Vatican Council, continuing through the retrenchment of Popes John Paul and Benedict, and onward today in the reforms of Pope Francis, are all reflections of the Church’s efforts to find its way in a global postcolonial and ultimately post-Christian world. We no longer inhabit a conceptual space where the overarching narrative of Christianity is presumed.

The monarchical papacy and its frequent distortions of the Gospel were a reflection of Christian hegemony throughout medieval Europe and later its colonial holdings. As the Church grew in power, reach, wealth, and influence, the Gospel increasingly served the institutional Church, and not the other way around.

Theologian David Tracy argues that the Church, having been stripped of this hegemonic position in the world, is still in the midst of “naming the present” as we navigate globalization, secularization, and rapid changes in technology and communication. Pope Francis’s return to the style of Popes John and Paul is a move toward not only a more decentralized and less authoritarian church hierarchy but also toward dialogue with the secular and non-Christian worlds.

The effort to “name the present” is an invitation not only to Christians and Catholics but also to all of us. As Tracy points out, this new “present” must continue to re-assess early modernity’s uncritical trust in technology, which led to eugenics; nuclear, chemical, and biological weapons; and environmental damage occurring not as an intentional result of human development but by way of failure to predict and protect against many forms of pollution and waste.1

Flawed as it is, Laudato Si is an invitation to examine the world, what Jesus of Nazareth called “The Kingdom of God,” and to consider what the world and the human community ought to look like, how they ought to function, how the world would function if all persons were allowed to flourish as free, intentional people. That commitment to the human community is the guiding purpose of the Catholic tradition.

Pope Francis uses the post-Vatican II historical-critical method, seeking to uncover the original intention of the authors of biblical texts and then trace the arc of the way the living community made use of those texts. By returning to the ancient narratives in Jewish and Christian Scriptures and reading them anew, in light of present circumstances, Francis is constructing a “name” for the present situation, one that honors both the intentions of the communities that crafted those texts originally and the realities of the communities that must apply them today.

The problems with Laudato Si are not a function of applying outmoded or inappropriate principles deductively, but rather reflect a failure to inductively apprehend the correct facts about the present environmental situation. For this reason, Pope Francis’s markedly inductive approach, in contrast to the prevailing deductive method of Catholic teaching, is eminently correctable, were Pope Francis to apprehend the facts that contradict sections of the encyclical.

Francis is going about the business of reconstructing a language that the entire human community can understand. His foundation is a return to the formative stories, the enduring way that most human cultures across time and space have “named” their own present times.

By inviting dialogue with actual practitioners working for the sake of the natural world and with equal energy for human development, he has begun an effort to bring more people into the circle of care for our sister, Mother Earth, and for our brothers and sisters who live in dehumanizing conditions. In places, his analysis misses the mark. But it is the broader effort, to bring the Church fully into the modern world, to begin to think about how the Genesis story might help us improve our relationship with the natural world in the Anthropocene, and most importantly, to begin a dialogue within the Church and beyond, that is the most important feature of Laudato Si.

Does Capitalism Require Endless Growth?

The modern notion that capitalism harbors the seeds of its own ecological destruction owes its provenance to a most unlikely duo of canonical economic thinkers. The Reverend Thomas Malthus claimed in the eighteenth century that a collision between the growing number of mouths to feed and the capacity to add productive agricultural land was inevitable. Karl Marx argued in the nineteenth century that technological change would bring with it falling wages, declining profits, and hence, ultimately, the collapse of capital formation.

The argument of Malthus was famously resurrected in the early 1970s in the Club of Rome report The Limits to Growth.1 Around the same time, ecological economists Nicholas Georgescu-Roegen, Herman Daly, Robert Costanza, Robert Ayres, and others advanced the idea that all human economic activity fundamentally relies on a limited planetary endowment of what they call “natural capital.” On the other side, Marxist scholars like Paul Sweezy2, Fred Magdoff, and John Foster3 have extended Marx’s insight, directing our attention to what they call the “growth imperative of capitalism,” by which they mean the indispensable necessity of capitalism to continually accumulate capital and generate a reserve of unemployed workers if it is to remain viable. Without continual economic growth, they argue, capitalism will collapse. Or, as Giorgos Kallis recently so succinctly put it, “Growth is what capitalism needs, knows, and does.”4

Taken together, the dilemma is evident: An economic system that requires perpetual economic growth on a spherical planet with finite resources simply cannot last.

Merging Marx and Malthus in this way has made Malthusian arguments accessible to elements of the global left that had historically rejected them. Capitalism and environmental sustainability simply could not be reconciled. Constraining the economy to keep it within a safe margin of ecological limits would only hasten capitalism’s collapse, while allowing capitalism to grow unconstrained would result in ecological collapse. Either way, the choice was clear: abandon capitalism or risk the end of the human project.

But Marx and Malthus are not so easily reconciled. Marx’s central insight was that capitalism would collapse of its own contradictions, including rising inequality and immiseration of labor that would ultimately destroy the market for the goods that capitalists produced. As it turns out, the mechanism by which this would occur, technological change driving greater economic productivity, was precisely the mechanism that Malthus failed to anticipate when he predicted that food production would fail to keep up with population growth. In Marx’s crisis lay precisely the mechanism that would prevent Malthus’ prophecy from coming true.

We see much evidence for this today. Improving technologies have driven a major expansion in food availability, along with continuing production efficiencies across the global economy more generally. The world faces no shortage of ecological challenges — species extinctions, collapsing fisheries, depleted aquifers, poisoned land, and, of course, the inexorable rise of global temperatures as atmospheric concentrations of greenhouse gases increase. And economists today concern themselves with the threat of “secular stagnation,” chronically low growth rates that threaten long-term prosperity.

But it is important to distinguish these challenges from the sweeping claims made originally by Sweezy, Magdoff, and Foster and repeated today by prominent intellectuals and activists such as Naomi Klein and Bill McKibben. In the pages that follow, I will demonstrate that both neoclassical growth theory and empirical evidence suggest that capitalist economies do not require endless growth but are rather much more likely to evolve toward a steady state once consumption demands of the global population have been satisfied. Those demands demonstrably saturate once economies achieve a certain level of affluence. For these reasons, a capitalist economy is as likely as any other to see stable and declining demands on natural resources and ecological services. Indeed, with the right policies and institutions, capitalist economies are more likely to achieve high living standards and low environmental impacts than just about any other economic system.


From the window of his Manchester home in the mid-1840s, Marx’s colleague and contemporary Friedrich Engels looked out on a horrifying microcosm of what was happening in England and throughout the newly industrializing world — a stark imbalance between the luxurious wealth of capital owners and the miserable poverty of the workers they employed. Marx himself had witnessed firsthand this same imbalance, and over several decades of intense study came to propose that a core flaw of capitalism resides in excessive claims placed by privately owned capital as against labor on the economic value created by their combination.

Herein lay the fundamental contradiction, in Marx’s view, which would bring an end to capitalism. As capitalists invested in ever-newer technologies, Marx predicted that their dependence on labor would decline. As this occurred, returns to labor in the form of earned wages would decline. If there were no return to households for their labor, there would be no income with which to consume goods produced by capital owners, nor savings that households might reinvest in new capital. An economic system in which declining returns to labor due to technological change immiserated most households was a system in which the market for goods sold by capital owners could not long survive.

Notably, Marx did not dispute the necessity of capital for producing what households need, only who in society should control this resource. The problem, as Marx saw it, was that the surplus value created by labor was being unfairly appropriated by capital owners.

In the first decades of the twenty-first century, a number of prominent analyses have suggested that Marx’s prophecy is perhaps coming true. MIT economists Erik Brynjolfsson and Andrew McAfee5 in recent years have suggested that continuing automation and rising labor productivity threaten mass unemployment, a problem foreseen by Keynes in 1930.6 Thomas Piketty, in his much-lauded book Capital in the Twenty-first Century7, finds that returns to capital have exceeded real economic growth in the industrialized world in recent decades, a dynamic that drives ever-greater concentration of wealth in the hands of the few.

The economist Robert Gordon8,9 finds that growth rates slow dramatically as societies become wealthier. The enormous rise in economic productivity and output that accompanied the transition from agrarian to industrial societies cannot be sustained as societies shift from industrial to post-industrial economies. Meanwhile, Paul Mason and others in the “post-capitalism” movement contend that “an economy based on the full utilization of information cannot tolerate the free market.”10 Mason’s argument is that capitalist corporations will not prove capable of capturing value from the technology they deliver, value adequate to sustain them over time.

Before considering whether these various challenges to advanced capitalist economies portend their collapse, it is important to note what none of these analyses suggest, which is that capitalism’s unquenchable demand for growth has run up against fundamental biophysical limits. If anything, these analyses suggest the opposite: that the limits to continuing growth in capitalist economies are social or technological, not biophysical. Brynjolfsson and McAfee, and Piketty, through technically different mechanisms, ultimately raise concerns that center on the immiseration of labor. Whether due to technological change, growing returns to capital, or both, all three centrally focus on declining wages and employment as the central challenge that threatens robust and equitable growth in capitalist economies.

Mason, conversely, projects that technological change threatens returns to capital. The commodification of everything — material goods, knowledge, and information — ultimately brings with it an end to profits and hence both capital accumulation and capital reinvestment.11 Gordon, meanwhile, observes that there is simply no further techno-economic revolution that can replicate the one-time boost in economic productivity that comes with the shift from agrarian to industrial economies.12 If there is a common theme in these challenges to capitalist economies it is that all find their way, to one degree or another, back to Marx, not Malthus. The long-term challenge for capitalist economies, these analyses suggest, is too little growth, not too much.


The headwinds facing advanced industrial economies — stagnant growth and rising inequality — tell us something about the prospects for low- or zero-growth capitalist economies. Gordon’s analysis suggests that industrialized economies in relatively short order achieve a “satisficing” level of household consumption. Once that level is achieved, and once societies have built out the basic infrastructure of modernity — cities, roads, electrical grids, water and sewage systems, and the like — the growth rates characteristic of the early stages of industrialization cannot be sustained by the knowledge and service sectors that increasingly dominate post-industrial societies.

World Bank data clearly show this. Economic growth rates decline as countries become richer. Growth in GDP per capita in OECD countries slowed from an average of about 3 percent per year in the period 1961–1985 to about half of that in the period 1986–2014.13 Gordon’s analysis is supported not only by the long-term slowing of growth in industrialized economies but also by saturating household consumption in those economies. According to the World Bank, OECD growth in real household consumption per capita (consumption of both goods and services) has shown steady decline each decade from around 3 percent per year in the 1970s to around 1 percent per year since 2000.14

Brynjolfsson and McAfee, and Piketty, suggest that declining returns to households from their labor will drive worsening inequality and stagnant or declining wages. But that does not imply a declining material standard of living. The same technology gains and capital mobility that have eroded the power of labor in developed world labor markets have also persistently reduced the real prices of goods and services, making them ever more affordable.

Even as nominal wage growth has slowed or stagnated in the US and other advanced developed economies, households are able to buy more with less of their incomes. This is because the cost of goods and services has grown even more anemically, with inflation nearly disappearing in these countries over the same time period, meaning wages have grown in real terms. OECD data show that real wages OECD-wide have grown by about 1 percent per year between 2000 and 2014, including real growth in the United States, the United Kingdom, France, and Germany.15 Growth in the Scandinavian economies (Norway, Denmark, Sweden, and Finland) has exceeded this.16
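
The real-wage arithmetic here is simple enough to make concrete. A minimal sketch in Python, using invented rates rather than the OECD figures above: nominal wage growth only slightly ahead of near-zero inflation still compounds into meaningful real gains.

# Real wage growth is nominal wage growth deflated by inflation.
# The rates below are assumed for illustration, not OECD data.
nominal_growth = 0.015  # 1.5%/year nominal wage growth (assumed)
inflation = 0.005       # 0.5%/year consumer price inflation (assumed)
years = 14
real_factor = ((1 + nominal_growth) / (1 + inflation)) ** years
print(f"Real purchasing power after {years} years: {real_factor:.2f}x")
# -> about 1.15x, i.e., roughly 1%/year growth in real terms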

This is true even at the bottom of the income distribution. Virtually all low-income homes in the United States today boast a refrigerator, modern heating and cooling, and electricity. Large majorities have dishwashers, washers and dryers, computers, cable television, and large-screen displays. Consumer goods and services once considered luxuries in the United States and other developed countries are today widely available and utilized by all citizens. That is mostly because home appliances and other goods today cost a small fraction, measured in the work time necessary to purchase them, of what they did thirty years ago.17,18

Of course, rising economic inequality raises a range of concerns beyond those related to access to goods and services. Higher rates of inequality may threaten social mobility, social cohesion, and perhaps even democratic governance. Even so, inequality appears to decline as nations industrialize and become wealthier. In rich Scandinavian countries (Sweden, Denmark), inequality has essentially halved since World War II.19 Recent declines have been less impressive in the United States, United Kingdom, and other parts of Europe20, but, nonetheless, inequality remains reliably lower than in most developing economies21, where aggressive but still insufficient capital formation in the presence of large labor forces tends to result in higher levels of inequality.

Moreover, increased capital mobility has driven declining inequality between countries, even as it may be worsening inequality within them. Thanks to global trade and international supply chains, firms have become increasingly able to locate production facilities in the developing world, where labor with the requisite skills can be employed at lower wages.

As might be expected, labor in industrialized countries is not happy with this turn of events. But the result has been a long-term convergence of wages between producing and consuming countries, declining inequality globally, and a dramatic decline in absolute levels of poverty. The ILO reports that between 2000 and 2011, real average wages approximately doubled in Asia.22 In Latin America, the Caribbean, and Africa they also rose substantially, well above the developed world average23, while in developed economies they increased by only about 5 percent, far below the world average24, leading to what leading ILO observer Patrick Belser has dubbed “the great convergence”25 — a dynamic that was incidentally predicted many decades ago on theoretical grounds by famed economist Paul Samuelson.26 Meanwhile, according to the World Bank, the global share of people living on less than $1.90 per day (the World Bank definition of extreme poverty) fell from 44 percent in 1981 to 13 percent in 2012.27

Taken together, then, the dynamics transforming the global economy, while not without challenges, paint an interesting picture of slowing growth, converging global incomes, falling cost, and saturating demand for goods and services. Should these dynamics hold, it is not hard to imagine a future in which the global economy gravitates toward a prosperous and equitable zero-growth economy placing relatively modest demands on the biocapacity of the planet. But getting from here to there will require a number of further conditions.


For a capitalist economy to flourish without economic growth, population stabilization is a necessary condition. This condition is, of course, not limited to the capitalist model. It is implausible that any alternative to the capitalist model could deliver zero economic growth with a persistently growing global population.

The second critical precondition for a steady state capitalist economy is that everyone must achieve a satisfactory level of consumption. This second precondition, in contrast to the first, is arguably unique to the capitalist model. The fundamental characteristic of capitalist economies, that which sets them apart from centrally managed socialist economies, is that capitalism, formally defined, is the economic system where the means of production resides in the hands of households.

By households, of course, I mean private individuals. Some households own more capital than others, but most households in capitalist economies own some capital, whether as shareholders in public corporations, owners or investors in privately held businesses, beneficiaries of pension funds, or holders of corporate bonds. Irrespective of which particular households own it, all capital formation in a capitalist economy originates from households, be it directly or indirectly. In a capitalist economy, households run the show, both as producers and — as we shall see next — consumers. All else being equal, households decide how much they spend and how much they save, how much they work, and how much leisure time they wish to have.

So long as there remains significant pent-up demand for work and consumption, those choices are limited. Households that don’t earn enough to consume all the goods they need don’t have a choice of whether or not to save or whether or not to work. But once those needs are met, work, consumption, savings, and leisure become a matter of choice. Households will work as much as they need to consume and save as much as they need or want.

In an economy wherein households have globally realized a level of physical consumption they deem “satisficing,” aggregate consumption will precisely match individual household preferences among consumption, savings, and leisure time. For this reason, a steady state economy cannot be achieved in a capitalist economy until such time as households in the aggregate have deemed themselves to have achieved sufficient goods and services consumption.

This is an outcome that in many parts of the world may not be so far away. In the rich Scandinavian countries, households today already forfeit significant added earnings in favor of increased leisure time. According to the OECD, the number of annual hours worked per employed worker declined steadily over the 14-year period from 1999 to 2013 (0.2%–0.3%/year in Norway, Denmark, Sweden, the UK, and the US; 0.5%/year in Germany) while real wages in these countries have increased by about 1.3%/year over the same period.29
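
Compounding those small annual rates makes the trade-off concrete. A back-of-the-envelope check in Python, taking the rates from the text (the arithmetic, not the underlying data, is mine):

# Compounding the OECD-cited rates over 1999-2013.
years = 14
hours_decline = 0.003  # 0.3%/year decline in hours worked (from text)
wage_growth = 0.013    # 1.3%/year real wage growth (from text)
hours_factor = (1 - hours_decline) ** years
wage_factor = (1 + wage_growth) ** years
print(f"Hours worked: {hours_factor:.2f}x (about {1 - hours_factor:.0%} fewer)")
print(f"Real hourly earnings: {wage_factor:.2f}x")
print(f"Real annual earnings: {hours_factor * wage_factor:.2f}x")
# Households take part of their rising productivity as leisure,
# yet real annual earnings still rise: 0.96 * 1.20 = 1.15x.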

So long as consumption is unsaturated, labor is in surplus, and capital is scarce globally, growth will be required to meet pent-up demand for goods and services. Once that demand is met, however, either because wages have risen sufficiently to achieve satisficing levels of consumption, or the cost of goods and services has declined sufficiently to achieve the same outcome, continuing aggregate growth is no longer required, or likely.


If there is a problem with this picture, it is the issue raised by Mason and the post-capitalists. Why would anyone continue to invest in capital when returns have fallen to zero? The answer brings us back to the central role of households in capitalist economies. Given the means to do so, most households in capitalist economies forgo some consumption in order to save, allowing some of the economy’s production to be directed toward the creation of new physical capital to replace or grow the existing capital in place instead of consuming all production in the present period.

Savings behavior by households, it turns out, is not driven by the returns that households expect, but rather by their desire to save for retirement. As demonstrated by Franco Modigliani and colleagues30 (work, not incidentally, that won Modigliani a Nobel Prize), households working toward retirement want financial resources available to carry them through their post-retirement lives without working, and so they save. Even if they receive no positive return on the savings they have set aside, they will nonetheless be able to draw on them.
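
A minimal sketch of that life-cycle logic, with stylized numbers of my own rather than Modigliani’s: even at a zero rate of return, a household that wants constant consumption across working life and retirement must save a substantial share of its income.

# Life-cycle saving at a zero rate of return (stylized example).
work_years = 40     # assumed years of working life
retired_years = 20  # assumed years of retirement
income = 1.0        # annual income while working (normalized)
total_years = work_years + retired_years
# Constant consumption c must satisfy c * total_years = income * work_years.
consumption = income * work_years / total_years
savings_rate = 1 - consumption / income
print(f"Constant consumption: {consumption:.2f} of working income")
print(f"Required savings rate: {savings_rate:.0%}")
# -> a 33% savings rate, purely to smooth consumption through
#    retirement, with no return on savings required.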

Why invest rather than just stuffing savings under a mattress? Because even in a zero-growth economy, returns to capital are not always zero. When the supply of physical capital supported by household savings declines, returns to capital rise, inducing households to raid their mattresses and invest their savings in the hope of getting more back for retirement than they put in. Eventually, the system adjusts itself and capital returns again fall to zero (on average, economy wide).

The result, then, is that in a zero-growth economy, households provide all the new capital needed for producers to replace capital stock that has deteriorated (depreciated) out of the system, but no more. There is no need to grow the capital stock in a zero-growth economy, just to sustain and refresh it. Household savings for retirement accomplish exactly this.

But what about producers? Why continue to build replacement capital if the expected returns are zero? Again, because expected returns to replacement capital will not always be zero, even when average returns are. When investment in new capital falls, capital stock in the production economy falls as well, and with that employment falls, because the productive capacity of the economy shrinks. Lower output and lower wages in turn create new opportunities for profitable investment in new capital stock and increased employment. In a zero-growth economy, household savings invested match replacement capital needs, economy-wide capital returns on average approach zero, and production and consumption continue apace, neither growing nor declining.
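
The mechanism can be sketched with a toy Solow-style model (a stylization of my own, not a calibration from the text): wherever the capital stock starts, it converges to the level at which household savings exactly cover depreciation, after which output neither grows nor shrinks.

# Toy zero-growth steady state: K evolves as K + s*Y - d*K, with Y = A*K**alpha.
A, alpha = 1.0, 0.3  # assumed technology level and capital share
s, d = 0.2, 0.05     # assumed savings and depreciation rates
K = 5.0              # arbitrary starting capital stock
for _ in range(500):
    Y = A * K ** alpha
    K += s * Y - d * K
Y = A * K ** alpha
print(f"Steady-state capital: {K:.2f}, output: {Y:.2f}")
print(f"Savings {s * Y:.3f} just covers depreciation {d * K:.3f}")
# At the steady state s*Y = d*K: household savings fund exactly the
# replacement capital producers need, and no more.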

Theoretically, then, there is no particular reason that capitalist economies must collapse without growth. Empirically, a range of trends suggest that growth rates slow as societies become wealthier, not because labor becomes immiserated but rather for the opposite reason, because demand for goods and services saturates.


There remains, however, the problem of Malthus. A zero-growth steady state economy is not necessarily a sustainable one. Writing in the eighteenth century, at the dawn of the industrial revolution when even the British economy was still overwhelmingly agrarian, Malthus viewed productive agricultural land as the natural capital of deepest concern. Today, Malthus’ argument is commonly cast more broadly in terms of our inattention to our planetary endowment (and the legacy we leave) in general.

There are clear constraints on planetary endowment. The economic system resides within, and draws upon, the munificence of the planet’s natural ecosystems and depends on it for its functioning. There is growing recognition that these natural capital “services” are coming under increased strain as population grows and carbon emissions and other pollutants from economic activity challenge the capacity of natural capital to absorb economic waste and replenish the supply of natural capital services taken from it.

To be truly sustainable, the natural capital of the planet must carry the economy indefinitely without suffering irreversible or catastrophic damage. Sustainability requires that natural capital have the capacity to absorb all waste products from human activities and to replenish itself sufficiently over time to maintain its stock, including, notably, its capacity to produce food. This requirement is as true of a steady state economy as of any other and applies as much to alternative economic systems as to capitalism.

Human societies today consume prodigious quantities of natural capital. With much of the global population still living in deep poverty, that level of consumption is likely to grow dramatically in the coming century. But even if consumption did not grow, steady state consumption over time might still deplete reserves of natural capital sufficiently that it would be no longer able to support prevailing levels of consumption.

For this reason, some have called31 not only for an end to economic growth but also for the even more radical step of “degrowing” the global economy. Degrowth of the economy can be achieved in one of two ways, reducing population or reducing per-capita consumption. These two levers are not unrelated.

Malthus imagined that humans, like bacteria in a petri dish, reproduced geometrically in relation to resource availability. In fact, the opposite is the case. As human populations move from subsistence agrarian economies to modern industrial economies, and become more affluent in the process, fertility levels decline. The human population over the past century has risen dramatically, but not because people were having more children. Rather, thanks to better public health and medical care, more children are surviving to adulthood, and adults are living much longer.
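
The distinction can be illustrated with a stylized calculation (the numbers are invented): in a stationary population, total size is roughly annual births times life expectancy, so longer lives alone swell the population even when childbearing does not change at all.

# Stationary-population identity: size ~ births/year * life expectancy.
births_per_year = 100        # assumed constant annual births
life_expectancy_before = 40  # assumed pre-transition life expectancy
life_expectancy_after = 70   # assumed post-transition life expectancy
before = births_per_year * life_expectancy_before
after = births_per_year * life_expectancy_after
print(f"Population before: {before}, after: {after}")
print(f"Growth from survival gains alone: {after / before - 1:.0%}")
# -> 75% more people, with no increase in births whatsoever.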

The distinction is an important one. Fertility rates in virtually all developed economies are at or near replacement levels, meaning that native-born population levels are either stable or falling. Many emerging economies are at or near replacement levels of fertility as well, largely due to rising societal wealth and incomes. Indeed, differences among demographic models as to when and at what level global population will stabilize are almost entirely attributable to different assumptions about economic growth rates in Africa and Developing Asia. Accordingly, efforts to stabilize global population and reduce per-capita global consumption are potentially at cross-purposes. A global population that continues to live in subsistence agrarian economies will likely be much larger than one that has fully made the transition to an urban and industrial economy, even as the latter consumes at significantly higher levels.

Advocates of degrowth propose to address this paradox through a process they call “shrink and share,” proposing not only to limit total global consumption but also to redistribute it equitably. A more equitable distribution of a smaller pie would presumably allow those remaining in deep agrarian poverty to make the leap to modern living standards and fertility rates.

Such a scheme might be possible theoretically but as a practical matter would appear to be unlikely, requiring some combination of voluntary austerity and redistribution at a global scale or, lacking that, some form of global government that would mandate limits on personal consumption and would forcibly redistribute wealth from richer precincts of the global economy to poorer ones.

Even if such a scenario were plausible, it is not clear that it would actually degrow the economy. Global per-capita income today is about $10,000–$15,000, depending on the accounting method.32 Even with perfect redistribution of wealth, this level of income would probably be insufficient to achieve something approximating modern living standards globally. Actually degrowing the economy significantly from present levels would probably leave large populations stranded in deep agrarian poverty. And to provide a first-world perspective, such a perfect redistribution would require — for global GDP to be maintained at its current level — High Income countries (World Bank designated) to reduce their present GDP by about 60 to 70 percent, again depending on the accounting method.33 This surely qualifies as exceptional austerity, voluntary or otherwise.
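
The arithmetic behind that estimate can be sketched with rough, assumed figures (round numbers of my own, in the spirit of the World Bank data cited above):

# Perfect redistribution at constant global GDP gives everyone the
# world-average income; the figures below are illustrative, not data.
world_income_pc = 15_000  # assumed global per-capita income (PPP-style), USD
high_income_pc = 42_000   # assumed high-income-country average, USD
cut = 1 - world_income_pc / high_income_pc
print(f"Required high-income reduction: {cut:.0%}")
# -> about 64% with these inputs; varying the accounting method shifts
#    the figure, but plausible inputs land in the 60-70 percent range.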

There is also the small matter of whether such a scheme could actually sustain itself economically or ecologically. It is not clear that such a scheme could work without large-scale collectivization, or that the incentives necessary to keep producers producing, savers saving, and investors investing in new or replacement capital could be sustained voluntarily.

The history of the collectivization of production has not been good for either living standards or the environment. Incomes stagnate as more work often does not bring greater income. Stagnant incomes bring lower savings and capital reinvestment, and lower capital reinvestment brings aging capital stock and infrastructure and ultimately stagnant or declining economic productivity. These problems become self-reinforcing. Incomes stagnate further with declining productivity, and with declining incomes comes less surplus either to redistribute or invest in new capital and infrastructure. Aging capital stock and declining productivity also bring greater calls on natural capital.

In short, natural capital is not the only endowment that societies erode at their own risk. Societal wealth, the product of economic surplus made possible by rising economic productivity, is also an endowment that grows as societies develop economically. Erosion of that endowment erodes the surplus necessary to reinvest in new capital that is capable of sustaining living standards while requiring declining calls on natural capital.


There is another path to stable and declining calls on the planet’s endowment of natural capital. Malthus erred not only because he failed to understand the relationship between fertility rates and food consumption but also because he underestimated the rate at which agricultural productivity would improve. By growing more food on every acre of land, human societies avoided mass starvation. More broadly, rising economic productivity due to technological advances raises incomes, creates economic surplus that can be reinvested in new capital and infrastructure, and produces more economic output from less natural capital input.

So long as there are large populations living in deep poverty, some or all of the efficiencies associated with productivity gains will be put toward greater production and consumption. But once everyone on the planet achieves a satisfactory level of consumption, consumption of goods and services should stabilize while calls on natural capital should stabilize and then decline.34

By satisfactory levels of consumption, what I mean is a standard of living that would be recognizable to the average citizen of an advanced developed economy — modern housing, an ample and diverse diet, sufficient electricity for run-of-the-mill household appliances, roads, hospitals, well-lit public spaces, garbage collection, and so on. The saturation of demand for goods and services in advanced developed economies in the latter half of the twentieth century provides a reasonable proxy for the point at which most people start to see diminishing utility from further household consumption.

In a zero-growth world, in which household consumption has saturated while labor- and resource-sparing technological change continues, leisure time grows continually over time while societal calls on natural capital decline.35 Given these conditions, how quickly a zero-growth economy is achieved, and calls on natural capital globally peak and then decline, depends upon three closely related phenomena: how rapidly global population stabilizes, how rapidly incomes among the global poor rise, and the rate at which resource-sparing technological change occurs.
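
Those three levers can be made concrete with an IPAT-style decomposition, in which total calls on natural capital R equal population times per-capita consumption times resource intensity. The trajectory below is a sketch under assumed rates of my own choosing, not a projection: R peaks once saturating consumption and falling intensity together outweigh population growth.

# Illustrative IPAT-style trajectory: R = P * g * r.
P, g, r = 1.0, 1.0, 1.0  # normalized population, affluence, intensity
g_sat = 4.0              # assumed satiation level of per-capita consumption
R_prev, peak = P * g * r, None
for year in range(1, 201):
    P *= 1 + 0.01 * max(0.0, 1 - year / 50)  # population stabilizes by ~year 50
    g += 0.05 * (1 - g / g_sat)              # consumption growth saturates
    r *= 0.99                                # 1%/year resource-efficiency gain
    R = P * g * r
    if peak is None and R < R_prev:
        peak = year - 1
    R_prev = R
print(f"Calls on natural capital peak around year {peak}")
# Faster efficiency gains or earlier population stabilization pull the
# peak forward; slower ones push it back.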


Getting to a zero-growth steady state economy with declining calls on natural capital will require, then, sustaining — or better yet, accelerating — two trends that capitalism has proven better able to advance than any alternative economic arrangement to date: lifting large agrarian populations out of poverty, and improving resource productivity through technological change. The former, as noted above, is also the key to stabilizing global population.

In these regards, standard neoclassical models and theory suggest an idealized and uniform expression of capitalism. The means of production is privately owned by households. Households are free to purchase whatever goods and services they wish in an open, free market. Households are free to allocate their budget between current consumption and saving for the future, and to allocate their time between work and leisure. Producers act to maximize profits and are constrained by perfect competition.

In reality, capitalism takes many hybrid forms in economies around the world. The trends elaborated above, together with the many imperfect (from the admittedly reductive and formalized view of economic theorists) expressions of capitalism and markets around the world, suggest that complete private ownership of all production, perfect competition, and minimal government intervention in markets are not necessary for the basic dynamics described above to sustain themselves; nor are such conditions likely to obtain for many generations in any case.

Scandinavian countries redistribute more income than other OECD economies. Japan, France, and South Korea have more actively engaged in industrial policy and centralized economic planning than, say, the United States or Great Britain. And while these, along with a range of other indigenous factors, appear to account for some significant variation in key trends associated with growth rates, wealth distribution, dematerialization, and calls on natural capital across national economies, the broader trends are robust. As economies develop and become wealthier, and as populations are integrated into the formal, market economy, productivity rises, calls on natural capital in relation to economic output fall, poverty is eradicated, and inequality, both within nations and between them, declines.36,37,38

What is important is that a set of processes is established and sustained: capital formation; the integration of everyone into the cash and wage economies; rising labor, capital, and resource productivity; and the generation of economic surplus for savings and reinvestment in new capital and infrastructure. These dynamics are not inconsistent with a range of state interventions in the private economy. Social insurance to reduce economic insecurity; public investment in infrastructure and the creation of public or heavily regulated private utilities to provide for basic services such as water, sewage, and electricity; antitrust and other measures to assure fair and competitive markets; public support for basic science, applied research and development, and commercialization of new technologies — all represent measures national governments in various contexts have implemented to good effect and that, depending on the circumstance, may even be essential to sustaining and accelerating rising incomes and resource productivity.

But we should also not overlook the underlying engine that has driven rising prosperity, slowing population growth rates, and increasing resource efficiency. Standing before the offices of the Federal Trade Commission in Washington, DC, is a sculpture depicting a heavily muscled man trying to restrain an even more heavily muscled workhorse. The horse represents the massive power of trade; the man represents the obligation of government to tame and bring into the service of society this wild and vibrant power. This sculpture could have been placed just as meaningfully in front of the Securities and Exchange Commission or the Environmental Protection Agency, or indeed any government entity in any country charged with harnessing and guiding the forces unleashed by capitalism. But the Federal Trade Commission sculpture also implicitly conveys a forceful warning. Tame it as you will. But don’t kill the horse we need to ride into the future.

After the Baby Bust

“Lazy workers.”

This, the owners of coffee and rubber estates in Karnataka, India, told us, was why they would tear out dense canopies of trees harboring wild hornbills and critically endangered frogs and replace them with more intensive and less wildlife-friendly crops. Compared to the days when their fathers ran these estates, and the workers required for the back-breaking tasks of weeding, coppicing, and harvesting were more pliable, today’s workers had become defiant and demanding. Laborers now insisted on smoke breaks, higher wages, and even electricity. Worse, farmers told us, they had little choice but to either give up labor-demanding crops or to comply with worker demands, lest their laborers vanish.

The shift in labor relations is striking given the locale. Karnataka is a place where the bargaining power of workers has always been notoriously poor, where rural poverty is crushing, and where generations of people have lived without access to modern amenities and education.1

Many factors have contributed to the shift: urbanization, labor outmigration, globalization, and an unprecedented aspirational culture that eschews rural farm labor where other opportunities exist. But one central reason, contributing to and accelerating all the others, is far more surprising: Karnataka is shrinking. As in most states throughout southern India, the fertility rate in the state has fallen to 1.8, and for many years has been well below the rate at which new births can replace those who naturally pass away.2 Population is getting smaller, influencing wages, farming practices, and habitat. Zero population growth has arrived in southern India: a Baby Bust.

As growth has ceased throughout Karnataka, across southern India, and in many other parts of the world,3 new social arrangements are evolving, new ecologies are coming into being, and new political and economic conflicts are emerging. What happens to an economy, anywhere in the world, when population stalls or declines? How are relationships between workers and owners reconfigured? What happens in families, when the demands for women’s labor and demands for reproduction come into conflict, especially in historically patriarchal contexts? When labor becomes scarce, do regions shift to land abandonment and incidental rewilding, or instead to increasingly mechanized and intensive agricultural systems?

A scarcity of people, or at least the end to a constantly increasing surplus of laboring bodies, in short, has an enormous influence on politics, economics, and ecology. Demographers have been observing falling fertility rates around the world for many decades.4 Our ideas about geo-politics, social relations, economics, and ecology, meanwhile, have scarcely evolved at all.


In January 2015, the Ladakh Buddhist Association (LBA) appealed to India’s new Prime Minister, Narendra Modi, accusing Muslims of what they called “Love Jihad.” They charged that the Muslim community was waging a campaign of “luring Buddhist girls,” in the words of LBA secretary Sonam Dawa, into marriage, in a bid to convert them to Islam.5

This phrase, “Love Jihad,” has become common across India since at least 2009 to denote a supposed campaign of demographic aggression on the part of India’s Muslims. The decadal release of the Indian census is in part inflaming the paranoia, insofar as an ongoing shift in religious composition is underway. India’s Hindu population, as a percent of the total population, fell from 80.5 percent in 2001 to 78.4 percent in 2011, while the percent of Muslims increased from 13.4 percent to 14.2 percent.6

The differential rates of population growth between Hindus and Muslims are driven by complex economic and social factors. But they are read by religious leaders in strictly sectarian terms. From their pulpits in Ladakh, Buddhist and Muslim leaders, all men, have for many years been calling for larger families. Geographer Sara Smith finds general agreement in her interviews with women in the region: the women believe there is an increasing need to drive fertility up, with members of each community fearing being outnumbered by the other.

And yet, when Smith counts heads across villages she sees family sizes descending across the board. Individual women all report the same thing. They have aspirations for their children to secure education and employment, and therefore reject increasing their own family size, preferring one child or two at most.7

What Smith is documenting is a growing schism between demographic anxieties and personal aspirations, between fixed notions of ethnonational identity and changing gender roles and responsibilities. Increasing education, economic opportunities outside the household, and access to health care and birth control in countries around the world are leading to greater autonomy, equity, and power for women within households and beyond. This, in turn, is leading to different fertility choices, and with that, a host of new challenges and conflicts, in a world where births are decreasingly common and, hence, increasingly political.

As in Ladakh, ethno-nationalists around the world decry the threats associated with demographic transition. A flurry of racial fears of a coming nonwhite majority has swept through the United States in recent years. In Europe, similar unease has mounted as birth rates have fallen, leading to hand-wringing about insufficient family size among the native majority, revealing what the Dutch geographer Luiza Bialasiewicz has called a “moral geopolitics of birth.”8

Beyond the North Atlantic, pronatalist policies are rampant. In Singapore, the fertility rate has been below 1.3 for years. Here, the government has attempted to create a new tradition, procreation as a national holiday. The ad campaign for “National Night,” cosponsored by the Mentos candy brand, is hilariously explicit,9 but the fact that Singapore’s government simultaneously monitors and manages the fertility of immigrant workers from South and Southeast Asia reveals the disturbingly racialized nature of this campaign.10

At the same time, indigenous groups like the Tawahka of Honduras and the Miskito of Nicaragua have experienced recent demographic turnaround after centuries of decline and have leveraged their expanding populations to advance territorial and environmental claims. Operating from what geographer Kendra McSweeney and anthropologist Shahna Arps have called a “communal memory of near-extinction,” these groups have developed a novel indigenous politics, observed elsewhere in Central America as well, emerging only where the demographic tide has otherwise turned.11

Declining fertility rates also bring aging populations. While the majority of the world’s population is young, an artifact of more than 100 years of growth, the balance is swiftly tilting. Most dramatically, the share of the global population over 60 years of age will rise to nearly 25 percent by the middle of the century (the historical average is less than 10 percent). In more developed countries, this transition has already occurred.

The shift in intergenerational equity that follows, and the basic problem of providing care, have profound social and political implications. The burden of the aging population increasingly falls on a steadily shrinking population of young people. The potential support ratio (PSR), the number of 15–64 year-olds for every person over 65, will fall to four by 2050, from a mid-twentieth-century level of twelve, roughly tripling the economic and care burden on younger generations.

Figure 1: Global potential support ratio (PSR), 1950–2050

Data source: Population Division, DESA, United Nations (2002)
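To make the arithmetic behind that tripling explicit, here is a minimal sketch in Python. It assumes nothing beyond the two PSR values cited above; the per-worker burden is simply the reciprocal of the PSR.

```python
# Potential support ratio (PSR): people aged 15-64 per person over 65.
# The 1950 and 2050 values are the ones cited in the text.
def care_burden(psr: float) -> float:
    """Fraction of one over-65 person's support carried by each worker."""
    return 1 / psr

psr_1950, psr_2050 = 12, 4
print(f"burden per worker, 1950: {care_burden(psr_1950):.3f}")  # 0.083
print(f"burden per worker, 2050: {care_burden(psr_2050):.3f}")  # 0.250
print(f"increase: {care_burden(psr_2050) / care_burden(psr_1950):.0f}x")  # 3x
```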

Add class and race to this mix and other problems emerge. Elder care skews dramatically toward poor, minority, and immigrant communities in the United States and Europe, and predominantly toward women. The most intimate and exhausting of work experiences, care-giving, nursing, and end-of-life support, all fall to a growing underclass of female immigrants.12

Though slightly delayed, similar population pyramid inversions are not far behind in the developing world, where hotspots in a global dementia epidemic are brewing. Cases of dementia are forecast to increase threefold in India, China, and their south Asian and western Pacific neighbors by 2040.13 Providing care, economic support, and labor in a world where the aging begin to outnumber the young is a challenge that is only beginning to take center stage as the implications of the inverted population pyramid start to become clear. Doing so in a manner that is humane, supportive, and economically and socially equitable for all will be more challenging still.

These shifts also hold enormous environmental implications. The tilting demographics of the countryside, in particular, can give rise to a range of contradictory outcomes. On the one hand, the slow emptying-out of some rural areas may logically lead to land abandonment, the succession of former fields into forests, and the rewilding of landscapes. On the other, the complex reality of agrarian and resource economies assures no such outcome. The absence of available labor can also lead in the reverse direction, to the acceleration of energy-based inputs and technologies, and more-intensive agricultural systems.

Returning to the coffee and rubber farmers of Karnataka, do our farmers choose to abandon production and shift into other livelihoods (like tourism) or, instead, replace lost labor with chemicals or ecologically simpler, less labor-demanding agricultural systems? Do they diversify and disintensify, allowing the land to shift into wilder ecosystems, or instead intensify production and industrialize the agricultural landscape?

The evidence from around the world is mixed. In many places, especially throughout Central and South America, areas heavily modified by long-term human occupation have sometimes been abandoned as land values, labor availability, and production have shifted elsewhere, leading to indigenous or invasive vegetative growth, shrub and forest recovery, and succession. This “forest transition” has been observed in a wide range of geographic contexts.14

Yet this outcome is by no means inevitable or easily predicted. State policy, remittances from migration, configurations of property, and systematic violence can either drive or retard such outcomes, regardless of demographic conditions. In the Yucatan, outmigration has indeed led to a decline in cropping, only to be replaced by ranching, a land use with an arguably heavier footprint.15

Even where land abandonment from the demographic transition does occur, ecological “recovery” is by no means assured. In both Mexico and India, declining human disturbance often coincides with decreases, rather than increases, in biodiversity. Abandonment of rice fields in rural Japan, where populations are plummeting, has similarly led to declining habitat heterogeneity, and impaired ecosystem function. As with geopolitics, health care, and sectarian strife, a world after population growth suggests diverse and surprising trajectories for global environments.16


For those paying attention, none of this should come as a surprise. Worldwide total fertility is 2.3, down from 4.95 in 1950. More dramatically, in 2014, a majority of nations in the world reported fertility lower than the replacement rate, the tipping point between a growing and a shrinking population (a fertility figure slightly more than 2.3). More countries are now shrinking than growing. In fact, national fertility rates are now at or below replacement in a huge range of countries that, until recently, were growing by leaps and bounds, including Tunisia, Iran, and Vietnam.17

Yet our habits of thinking, in economics, ecology, and politics, have hardly changed. Each of these fields remains rooted in scarcity as an organizing principle. This intellectual heritage is not a coincidence. All the major works of classical and contemporary political economy were written during an era of rapid demographic growth.

During the eighteenth and nineteenth centuries, just as the Enlightenment roared through European intellectual and political circles, populations also began to surge, owing to improvements in sanitation, health, and longevity. The great thinkers of the era, upon whose work the later edifice of modern political economic thought would be built, advanced theories of economy and politics rooted in the context of a sudden but sustained European demographic transition.

Consider Scotland, only one of several critical geographic touchstones for Enlightenment thinking. Throughout the early modern era, that country was a site of vexing enclosure policies, revolutionary property experiments, and emerging juxtapositions of starvation and abundance, making it a source of inspiration for many key thinkers. Adam Smith (himself Scottish) set off the classical revolution with Wealth of Nations in 1776, using Scottish birth rates, poverty rates, and interest rates as empirical fodder. Scotland would remain a source of intellectual inspiration precisely during its era of greatest growth, with Thomas Malthus writing at length about its population and Karl Marx meditating on its migrations and enclosures in the next century. During precisely this period, when the major works of classical political economy were written, Scotland went from a half-million people to almost five million, a product of the very economic changes the great thinkers of that era sought to explain.18

Figure 2: Demographic change and intellectual history, 1776–1899

Data source: Paul Robbins and Sara H. Smith, Baby bust: Towards political demography, Progress in Human Geography, © The Authors, 2016. Reprinted by Permission of SAGE Publications, Ltd. 

Such coincident development would carry on into the twentieth century, when global development models superseded the classical era and new thinkers emerged, from Schumpeter and Keynes to Rostow and De Soto. The intellectual infrastructure used to explain global development, now focused especially on the “Global South,” from India to Ghana, was inextricably embedded in a period of accelerating population growth in those same places. India alone provides a startling example: historically unprecedented rates of demographic growth were taken for granted within (and undoubtedly accelerated by) development economics throughout the twentieth century.

In sum, all the major conceptual innovations and debates of the modern era unfolded within the confines of two constants: first, a population growth rate well in excess of 1.5 percent, and second, an Enlightenment and colonial knowledge system rooted in questions either of resource shortage or of human growth and abundance (i.e., Malthus or his critics).

Figure 3: Demographic change and intellectual history, 1934–2005

Data source: Paul Robbins and Sara H. Smith, Baby bust: Towards political demography, Progress in Human Geography, © The Authors, 2016. Reprinted by Permission of SAGE Publications, Ltd. 


One cornerstone of twentieth-century development theory is emblematic: Arthur Lewis’s Economic Development with Unlimited Supplies of Labor. Lewis, arguably the father of modernization theory, was a visionary development theorist, the first (and so far only) black man to receive the Nobel Prize in economics, and the first economic advisor for the nation of Ghana as it emerged after independence.

Lewis understood population to be, whatever its other burdens, a central engine of capitalist growth. Proposing what would come to be known as the Dual Sector model (or the “Lewis Model”), he observed that an ever-present and ever-growing reserve army of rural migrants to the city, fueled by high rates of rural fertility, drove down urban wages in the developing world. This created a comparative advantage in industrialization, floated on an ocean of cheap labor, which lifted the boats of development throughout the poorest parts of the world.19

This insight, tied to Marx’s observation a century earlier that capital maintained an “industrial reserve army” as a “lever of capitalistic accumulation,”20 became a powerful cornerstone of investment and wage strategy in the postwar world. Even today there are debates about whether China has passed through the “Lewis turning-point.”21

Figure 4: Sir William Arthur Lewis, Nobel Prize–winning father of Modernization Theory and visionary observer of the conditions of surplus population and labor.

Lewis foresaw the end of population growth with concern, since its implications for developing economies were problematic. He postulated that such a shift would put upward pressure on industrial wages and lead to inflation of rural farm prices. Even so, the bulk of his work, echoing more than a century of intellectual history, modeled and projected economic history around an assumed, ongoing, and uncontrollable population boom.


What this all means is that most of our contemporary ideas and expectations are steeped in two centuries of population growth, a demographic condition that, however real, represents a comparatively brief historic anomaly.  The intellectual legacies we have inherited, even where they are not explicitly and problematically Malthusian, are still better suited to the period in which they emerged, one in which advances in medicine and health drove down death rates while birth rates remained temporarily high. That period, characterized by exponential population growth, has been overtaken by history and is coming to a rapid end.

The precise ending date for population growth is a matter of dispute; some models predict it for the middle of this century, while others suggest the final end may be several decades delayed.22 But even the most draconian Malthusians agree that the world population growth rate peaked in the early 1960s, that zero population growth is near (and in places already past) in both Latin America and much of Asia, and that populations are falling in a great many places. Judging from the trends worldwide, the decline in global and regional fertility and growth rates is unlikely ever to reverse direction.23 In most regions of the world, the end of demographic growth is already evident. Within our current lifetimes, it will be ubiquitous.

Figure 5: World population growth rates, 1950–2050

Data source: US Census Bureau (2011)

But as this era ends, a huge range of new problems, questions, and opportunities emerge, all of which demand new ways of thinking, and most of which are deeply political, insofar as they impinge on the control of polities, people, resources, and land, and will shape the trajectory of new environments in the Anthropocene.

And, of course, lurking behind all these smaller questions is the most interesting one of all: what will the global economy do without human demographic growth? On a planet arguably already plagued with overproduction, where will sufficient demand emerge to maintain the levels of surplus accumulation demanded by many political leaders, most investors, and every corporate CEO? Will demographic decline lead empowered laboring classes to leverage improved wages and rights or instead lead to harsher bargains for workers to squeeze still more productivity from fewer bodies? Can prosperity be decoupled from demographic growth in a way that is just, equitable, and good for the planet? Given the population luxury that capitalism has enjoyed for two centuries, this has been a question long deferred. But no longer.

Sadly, questions of population and demographics remain off the table in many quarters of the intellectual community, for fear of the taint of Malthusianism. And with good reason. Malthusian thinking has consistently predicted economic and ecological disasters that never arrived, even as it was used to bolster barbaric and authoritarian actions against women and the poor.

For those who faced the sterilization camps of Indira Gandhi’s Emergency and the draconian legacies of China’s now-abandoned one-child policy, the ghost of Malthus is more than a specter. Indeed, as David Harvey famously observed: “Whenever a theory of overpopulation seizes hold in a society dominated by an elite, then the non-elite invariably experience some kind of political, economic, and social repression.” Thinking about population has almost always meant inviting arguments for repression.24

But as the demographics of the globe begin to lurch toward stasis, it is clear that research and policy will need to begin grappling with the questions of demography and population raised here, however uncomfortably. No meaningful health policy can emerge in the absence of serious consideration of global aging and the intergenerational politics and resource transfers this implies. To confront the strident politics that has emerged alongside the demographic transition, we will need a far better understanding of the shifts in values, gender roles, and ethno-national composition when births become scarce. Guiding ecosystems to ecologically rich conditions and outcomes demands that we understand them under conditions where populations decline. A new kind of research and theory is required for a world of zero population growth, albeit one that both acknowledges and avoids the often-catastrophic legacies of past demographic thinking.

How do we engage the Baby Bust without reproducing the habits of Orientalism and patriarchy that have always attended discussions of population? A breakthrough in thinking about global economies, polities, societies, and ecologies requires that we brace for the oncoming demographic shift while admitting that all matters of population are inherently political. After two centuries of growth, something quite remarkable is happening. As suggested by our experience in Karnataka, however, this transition is not the end of history, economics, or politics. Instead, it is the beginning.

Taking Modernization Seriously

Can everyone on Earth live a modern life? Can most or all countries succeed in economic development? Is it possible over time for the entire human race to enjoy the living standards of most inhabitants of today’s advanced industrial economies?

These are urgent questions. The global population is expected to grow from a little over 7 billion today to 9.7 billion in 2050 and 11.2 billion in 2100, according to the United Nations.1 Roughly half of the growth will take place in Africa. Ensuring that a much larger global population enjoys a decent standard of living will be an enormous challenge. The United Nations predicts that by 2050 the human race will require 60 percent more food—100 percent more in the developing world.2 By 2040, the US Energy Information Administration predicts that global energy consumption will increase by 56 percent; more than half of this energy will be consumed by industry.3

But even as living standards rise in the newly industrialized, urbanizing middle-income countries of East Asia and Latin America, a sizeable minority of humanity, concentrated in rural regions of Africa and Asia, lacks access to basic modern infrastructure and amenities. Some 1.2 billion people lack access to electricity.4

According to the World Health Organization, around 3 billion people rely for home heating and cooking on open fires or stoves that burn coal or biomass (wood, animal dung, crop waste), which drive deforestation and cause high levels of mortality from the inhalation of particulate pollutants. Inhaled household soot accounts for more than half of pneumonia deaths among children around the world.5

For these reasons, it is profoundly important that the entire human race over time should acquire the ability to enjoy living standards like those of today’s middle-income countries and perhaps those of high-income nations. Achieving that outcome, however, is not guaranteed. Poor governance and changing fashions in international development programs have both undermined traditional development pathways. No less daunting, free trade, globalized supply chains, and rising manufacturing efficiency have reduced opportunities for employment in the manufacturing sector, the traditional pathway to modern economic life for large agrarian populations. In this essay, I consider the best strategies by which poor and less-developed countries and regions can catch up with richer societies in a twenty-first-century globalized economy.


As a description of global political and economic trends, modernization has fallen out of fashion. In part that is because the very concept of “modernity” has come under attack. Within the First World, an influential current of thought holds that modernism has given way to postmodernism. In some intellectual circles, “high modernism” is a pejorative term, redolent of technocratic and bureaucratic authoritarianism.

It is also the case that the concept of modernization has been plagued by competing definitions. At one extreme there are approaches that focus on what economists in the tradition of “evolutionary economics,” influenced by Joseph Schumpeter, call the “techno-economic paradigm”—how technologies, like the water wheel, the steam engine, and the internal combustion engine, combine with new institutions, like the large national or multinational corporation of the industrial age, to transform modernizing economies.

At the other extreme, there are theories that equate modernization with the adoption of particular social and political structures and even ethical values. Modernization has been equated with the movement from status to contract (Henry Maine), from Gemeinschaft (community) to Gesellschaft (society) in the thought of Ferdinand Tönnies, and has been identified with secularization and with political and economic liberalization.

As a catchall term to describe everything from industrialization to economic and political liberalism, then, modernization has perhaps been deployed too broadly. But whatever one chooses to call it, there remains the fact that human societies have undergone a series of profound transformations over the past three centuries. The Industrial Revolution remains the fundamental fact of our time. Industrialization, and the urbanization that it makes possible, represent the greatest transformations in human life in the last ten millennia.

Modernization, in this regard, is mechanization, and development is industrialization. The replacement of hand-looms by mechanical looms. Using tractors, harvesters, and other farm equipment along with artificial fertilizers to grow more food with less labor. Replacing equines and equine-drawn vehicles by automobiles, and replacing galleys or sailing ships by ships powered by steam engines, internal combustion engines, or nuclear engines, as well as by locomotives and aircraft. Whatever else may characterize it, a developed economy is one based on machinery and powered by modern energy technology.

The step-wise mechanization of modern economies brought with it a step change in the capability of economies to produce wealth and surplus. For most of human history, all industries had constant or diminishing returns to scale. An industry with constant returns is one in which the marginal cost of each additional unit stays the same; with diminishing returns, each additional unit costs more than the last. The restaurant industry illustrates constant returns: it costs as much to make the twentieth meal as it does to make the first.

Premodern manufacturing was more like modern restaurant work than like modern factory production. Just as the number of restaurant meals is limited by the number of cooks in the kitchen and the time required for each meal to be cooked, so the number of nails, horse-shoes, skillets, and knives was limited by the number of blacksmiths working in a labor-intensive smithy and the time that their tasks required. In contrast, in a modern highly robotic factory one human worker with the help of machinery might be able to produce hundreds or thousands of nails, horse-shoes, skillets, or knives a day. Each additional unit produced in a modern factory costs less to manufacture than the unit that preceded it.
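The contrast can be made concrete through average cost, which falls with output under increasing returns but not under constant returns. A minimal sketch, with hypothetical cost figures chosen only for illustration:

```python
# Illustrative only: the cost figures below are hypothetical, not from the text.
def average_cost(fixed_cost: float, marginal_cost: float, units: int) -> float:
    """Average cost per unit when a one-time fixed cost is spread over output."""
    return (fixed_cost + marginal_cost * units) / units

# Constant returns (the restaurant): negligible fixed cost, so the
# twentieth meal costs about the same as the first.
for meals in (1, 20):
    print(f"restaurant, {meals:>2} meals: ${average_cost(0, 12.0, meals):.2f} per meal")

# Increasing returns (the factory): a large fixed investment in machinery
# plus a tiny marginal cost, so average cost falls steadily with scale.
for nails in (100, 10_000, 1_000_000):
    print(f"factory, {nails:>9,} nails: ${average_cost(50_000, 0.01, nails):.4f} per nail")
```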

Mechanization and the birth of industries with increasing returns to scale allowed for an enormous expansion of the traded sectors of modern economies, which include goods and services that, while they can be consumed at home, can also be exported to customers in other countries.

In the premodern world, economic activity was dominated by nontraded goods. Agrarian societies could spare few able-bodied laborers to produce specialized and labor-intensive goods, such as metal goods, and the cost of transporting goods was high. Mechanization and increasing-returns production in the factory allowed workers to produce more manufactured goods than could be consumed domestically. Meanwhile, mechanization on the farm increased yields and lowered labor requirements, producing an agricultural surplus to feed growing urban populations and freeing up more labor to work in factories and other traded sectors.

Mechanization also increased labor productivity, allowing for rising wages and greater consumption of traded goods, and dramatically lowered the cost of transporting goods. Today, the technology of mass production allows countries to produce far more automobiles or phones or shoes than their own consumers want or can afford, and regional and global freight transportation costs have fallen dramatically thanks to container shipping and modern waterways and rail and road grids. Together, these developments have driven a large-scale shift in the structure of modern economies, which are characterized by much larger traded sectors than premodern economies.

Along with increasing returns to scale and larger traded sectors, modernizing economies are also characterized by higher-value-added production. More mechanized and technologically advanced production and complex supply chains add greater value to the goods that are produced. At each stage of production, additional value is added to the product. The farmer grows wheat, then the baker adds value by baking bread. Some stages of production add more value than others: extracting oil from the ground adds less value than refining it into gasoline or various other complex chemical by-products.

In almost every industry involving material production, most of the value—and most of the profit—results from design, processing, or manufacturing, not from the raw material inputs: the automobile factory, not the iron ore mine; the chemical refinery, not the oil well or coal mine; the textile factory, not the sheep farm; the diamond-cutting workshop, not the diamond mine.
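To see how value added accumulates along such a chain, here is a minimal sketch with hypothetical prices (none of these figures appear in the text); each stage’s value added is its sale price minus the cost of its inputs:

```python
# Hypothetical value chain for one loaf-equivalent of bread:
# (stage, cost of inputs bought from the previous stage, sale price).
chain = [
    ("wheat farm", 0.00, 0.20),
    ("flour mill", 0.20, 0.45),
    ("bakery",     0.45, 1.60),
]
for stage, input_cost, sale_price in chain:
    print(f"{stage:>10}: value added ${sale_price - input_cost:.2f}")
# The downstream, more processed stages capture most of the final price,
# the same pattern the essay describes across whole economies.
```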

At the risk of oversimplification, the development of an economy can be viewed as progressive movement along a spectrum from nontraded industries characterized by constant or diminishing returns and low-value-added production toward traded sector industries characterized by increasing returns and high-value-added production.

Consider the four poorest countries in the world: Malawi, Burundi, the Central African Republic, and Niger. Here are the top exports for each, respectively: raw tobacco, coffee, wood, and for Niger, a single category encompassing ores, slag, and ash. Now consider the top exports from the United States, Japan, and Germany, the three largest advanced developed economies: machinery, vehicles, and vehicles again, respectively.

The pattern is unmistakable. Rich countries export mostly high-value-added manufactured goods, like cars, planes, computers, machine tools, and complex drugs. Poor countries export mostly low-value-added commodities like tobacco, sugar, tea, and hides, most of which are harvested by hand using ancient, labor-intensive methods rather than modern machinery.

A few countries with relatively small populations and enormous natural resources have high per-capita incomes and wealth, most of them oil-producing countries like Qatar and Norway. But for two centuries the general rule has been that countries that specialize in manufacturing and manufacturing-related services are richer than countries that specialize in commodity exports or tourism.


Even before the rise of mechanized factories powered by modern energy sources, policy-makers in premodern city-states, empires, or nation-states understood the desirability of monopolizing higher-value-added production within their borders.           

Before the American war of independence, the British Empire outlawed most manufacturing in the British North American colonies that became the United States. The role of the colonies was to supply raw materials to the British Isles, where all processing and manufacturing would take place, followed by the sale of finished goods to the Anglo-American colonists, who were forbidden by law to purchase manufactured goods from other sources.           

Even in the absence of mercantilist laws controlling trade, modern machine-based mass-production industry by its nature could promote a monopoly of manufacturing in one or a few countries. Mechanized factories made possible the mass production of staple goods for ordinary people at very low cost. In turn, the export of machine-produced staple goods made possible large-scale international trade in consumer goods like clothing for ordinary people, in addition to old-fashioned upper-class luxuries like snuff.

As George Washington’s aide during the American War of Independence, Alexander Hamilton was struck by the dependence of the largely rural American states on French and European military and industrial supplies during the struggle with Britain. As America’s first Secretary of the Treasury, Hamilton urged Congress to promote the development of industrial manufacturing in the United States. In his Report on Manufactures of December 5, 1791, in addition to emphasizing the importance of manufacturing for national security, Hamilton emphasized how Britain had gained from the mechanization of its economy:

The employment of machinery forms an item of great importance in the general mass of national industry. It is an artificial force, brought in aid of the natural force of man; and, to all the purposes of labor, is an increase of hands; an accession of the strength, unencumbered too, by the expense of maintaining the laborer. . . . The cotton mill invented in England, within the last twenty years, is a signal illustration of the general proposition, which has been just advanced. In consequence of it, all the different processes for spinning cotton are performed by means of machines, which are put in motion by water, and attended chiefly by women and children; and by a smaller number of persons, in the whole, than are requisite in the ordinary mode of spinning. And it is an advantage of great moment that the operations of this mill continue, with convenience, during the night, as well as through the day. The prodigious effect of such a machine is easily conceived. To this invention is to be attributed, essentially, the immense progress which has been so suddenly made in Great Britain, in the various fabrics of cotton. . .6

Britain, the first industrial nation, built its global primacy in the nineteenth century on its mechanized textile industry. The effects abroad were sometimes benign, like falling prices for clothing. But the side effects of British textile manufacturing and export were sometimes malign, as well. They included the devastation of British India’s native textile craft industry, and the specialization of the American South in slave-picked cotton destined for British mills.

Indeed, a vision of monopolizing global industry, not through world conquest but through mass production combined with global free trade, inspired Britain’s intellectual and political leaders in the middle of the nineteenth century. Britain had successfully used protection and various kinds of subsidies and regulations to promote its own manufacturing industries, from the Tudor era to the Victorian era.

By the 1840s, however, Britain was the unsurpassed manufacturing power in the world. Supported by capitalists in the British manufacturing sector, British liberals argued that if Britain adopted free trade and the rest of the world followed suit, Britain would almost certainly monopolize global manufacturing. Britain began to preach and practice free trade, reducing its protectionist corn [wheat] laws unilaterally beginning in 1846. British free traders hoped to encourage Europe, the United States, and Latin America, in the name of free trade, to specialize in growing cotton for British textile factories and growing wheat to feed British industrial workers.

The British economist David Ricardo provided the rationalization for a permanent British monopoly of machine-based industry in the form of the principle of “comparative advantage”: “It is this principle which determines that wine shall be made in France and Portugal, that corn [wheat] shall be grown in America and Poland, and that hardware and other goods shall be manufactured in England.”

But in 1838, Disraeli warned his fellow Britons that other countries would not “suffer England to be the workshop of the world.” He was soon proved to be right. Global economic history, since the beginnings of the Industrial Revolution in the eighteenth century, has largely consisted of efforts of backward countries to catch up with more industrial nations, beginning with the attempt of the early American republic to catch up with industrial Britain.

In 1851, Henry Carey, a leading American economist in the Hamiltonian tradition, argued that the protection of their infant industries by the United States and other nations would “break down” Britain’s “monopoly of machinery” and create the great powers of the future: Germany, Russia, and the United States. Behind a wall of tariffs and national infrastructure investments in the decades after the Civil War, the United States became the world’s largest economy. Like Britain, the United States chose to adopt and preach free trade only after its own manufacturing industries no longer needed protection from foreign competition.7

Influenced by the tradition of Hamilton and Carey, called the “American School” of economics or the school of “national economy,” the German-American economic thinker Friedrich List urged his native Germany and other European countries to follow the example of the United States by uniting and pursuing import-substitution industrialization.

List restated the central insight of the developmental tradition: “The power of producing wealth is therefore infinitely more important than wealth itself… [T]he power of production…not only secures to the nation an infinitely greater amount of material goods, but also industrial independence in case of war.” List died in 1846, but his vision of a United States of Germany was partly realized following the unification of the German states outside of the Hapsburg empire by Prussia in 1871. Imperial Germany quickly became an industrial powerhouse, adding its own innovations to the arsenal of industrial policy, including social insurance and public support for R&D.

For its part, following the Meiji Restoration, Japan was determined to avoid formal or informal subjection to the European great powers. In addition to trying to carve out an empire of its own in its region, Japan emulated the German-American model of state-backed industrial capitalist development. Following its defeat in World War II, Japan lost its foreign policy autonomy and became a military protectorate of the United States. But it continued to promote its industries through civilian mercantilism—using nontariff barriers like regulations to reserve its domestic market for its own producers, while using currency manipulation to subsidize its exports.

Import Substitution Industrialization (ISI) is the name of the strategy undertaken in different ways by the United States, Germany, Japan, and other countries that sought to catch up with industrial Britain by protecting and promoting their “infant industries” until they were mature enough to compete in global markets. The success of developmental protectionism in the second wave of industrialization that included the United States inspired other countries in the twentieth century to adopt versions of the policy.

For over two centuries, the most successful examples of modernization-as-mechanization have been carried out by developmental states, not on behalf of consumers or humanity as a whole, but to secure their own position in global struggles for relative wealth and military power. The history of global industrial development is impossible to separate from the history of global great-power struggles since the eighteenth century.


In 1956, the economist W.W. Rostow identified five stages of growth that characterized the shift from premodern to modern economies: traditional society, based on subsistence agriculture; pre-conditioning, characterized by agricultural modernization, infrastructure, and the beginnings of industrialization; take-off, characterized by high investment and industrialization; the drive to maturity, based on economic diversification; and maturity, identified with mass consumption and a dominant service sector.

Rostow’s analysis was challenged by Alexander Gerschenkron, who argued in Economic Backwardness in Historical Perspective (1962) that late-developing countries could skip some of the stages that early leaders had gone through.8 Gerschenkron echoed many of the themes of the developmental statist tradition, including the idea that state capitalism could substitute for private capital in the financing of local infant industries.

Unfortunately, Gerschenkron’s work did not lead to a renaissance of the developmental statist tradition. Instead, the backlash against government associated with Reaganism and Thatcherism in domestic politics found parallels in development economics following the 1960s. The British economist P.T. Bauer mingled sensible criticisms of development policy with free-market fervor in his influential books Equality, the Third World, and Economic Delusion (1981) and Reality and Rhetoric: Studies in the Economics of Development (1984).9

In the 1980s and 1990s, the older emphasis on the transition from agrarianism to industrialism and service sector employment was replaced by “the Washington Consensus.” According to the Washington Consensus, the most important policies for developing nations included reducing fiscal deficits and controlling inflation, “structural adjustment” or the opening up of domestic markets and banking systems to international trade and international financial flows, and the privatization of as much of the public sector as possible.

At the same time, on the center-left, younger thinkers in the spirit of the New Left rebelled against the equation of modernization with large-scale industry and infrastructure. While mid-century American liberals, European social democrats, and Third World socialists alike had celebrated large infrastructure projects like hydropower dams, James C. Scott demonized “megaprojects” for wrecking rural peasant communities and denounced “high modernism” as technocratic and authoritarian in his influential book Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed (1998).10 Influenced by the counterculture, a newer American left abominated big government and big industry and took guidance from E. F. Schumacher in Small is Beautiful: A Study of Economics As If People Mattered (1973).11

The World Commission on Environment and Development, known as the Brundtland Commission, convened by the United Nations in 1983, popularized the term “sustainable development” in its 1987 report, Our Common Future. In the name of “sustainable development,” the western environmental movement has sought to move developing countries toward its own Rousseauian visions of a green utopia, based on wind and solar power and biofuels instead of fossil fuels or nuclear energy; labor-intensive, local agriculture instead of modern, capital-intensive, industrialized agriculture; and village life instead of modern cities and automobile-based suburbs and exurbs.

A third element was added to the mix by the United Nations Millennium Development Goals, which emphasized immediate reductions in disease, poverty, child mortality, and illiteracy, even in the absence of nation-building and economic development in the traditional sense. As billionaires like Bill Gates and Warren Buffett pledged parts of their fortunes to causes like combating malaria in Africa, private philanthropy by western tycoons came to play a role that it had not played since the days of European and American missionaries in the colonial era. While many of these efforts were noble and worthwhile, the entire point of development, in the traditional sense, was to free countries from dependence on the charity of benevolent foreigners.

In the early years of the twenty-first century, many international development agencies promoted a kind of synthesis of Washington Consensus neoliberalism and Green sustainability, using their bargaining power as lenders to encourage austerity and privatization on the one hand, and to discourage fossil fuel use or dam construction and nuclear power plant construction on the other. “Big push” industrialization based on hydropower dams and electric grids and highways was out. “Microfinance” was in. Large-scale foreign aid to governments for infrastructure construction was out. The Grameen Bank, a “microcredit” development bank in Bangladesh, was celebrated as an icon of a new age of globalization.

The nation-state as the intermediary between the global economy and the local economy was airbrushed out of the picture by the evangelists of neoliberal globalization. At the macro level there would be one single rule-governed global economy, with the same rules imposed on all countries—the “golden straitjacket,” in the words of the journalist Tom Friedman. At the subnational level, there would be micro-credit, micro-finance, micro-enterprise—and, perhaps, micro-development.

The delegitimatization of the state as an actor in development was symbolized by the replacement of the term “developing country” by “emerging market.” The phrase “emerging market” implied that successful development could result from the interaction of international private investors and multinational corporations with local entrepreneurs. The role of the state in the “emerging market” was reduced to that of an umpire, enforcing the rules of the Washington Consensus and strictly neutral in disputes among its own producers and foreigners.

Post–Cold War trade organizations like the World Trade Organization and the Trans-Pacific Partnership Treaty (TPP) sought to literally outlaw many of the techniques by which developing countries, including the United States, had helped their own industrial producers in the early stages of industrialization—not only tariffs but also subsidies, government procurement requirements, and other methods of infant industry protection.


Ironically, at the very moment American and European neoliberals were celebrating the triumph of micro-enterprises and global free trade, the most successful episode of rapid, large-scale industrialization in human history was occurring in East Asia, as a result of classic mercantilist and protectionist techniques that were the complete opposite of the Washington Consensus.

In How Asia Works: Success and Failure in the World’s Most Dynamic Region (2013), Joseph Studwell describes how Japan, South Korea, and Taiwan developed successfully on the basis of three key policies: promotion of small farmers as a market for domestic manufacturing goods; subsidies of internationally competitive domestic manufacturing industries; and postponement of financial deregulation that would interfere with national industrial policies.12

Unlike Japan and the Little Tigers, China did not use agricultural and social reforms to build up a domestic middle class as the basis for its traded-sector industries. Instead, China focused on production for export from coastal enclaves financed by foreign investment. But in its own ways, the Chinese model of state-sponsored industrialization violated the tenets of neoliberal economics.

Instead of liberalizing its financial markets, China used government banks to steer credit to subsidize its factories and invest in infrastructure. China used cheap labor, cheap land, government subsidies, and currency manipulation to encourage foreign multinationals to engage in industrial production on Chinese soil—and then pressured them to transfer their technology to Chinese nationals and Chinese companies.

Instead of promoting small business, the Chinese regime, inspired by similar policies earlier in Japan and South Korea, sought to encourage giant national champion corporations and groups in strategic industries. Having initially industrialized on the basis of foreign investment in low-value-added industries, the Chinese government sought to push the economy up the value chain by means of policies of indigenous innovation that western countries denounced as protectionist.

China’s version of developmental statism came at a high price, including high debt and asset bubbles as a result of easy credit, overinvestment, and overcapacity in steel and other industries, and a profound divide between the impoverished rural interior and the more developed coast. But the industrialization of Britain, the United States, and Germany, not to mention that of the Soviet Union under Stalin, had proceeded with similar fits and starts, inequities, and disruptions.

Today the Washington Consensus is widely considered to be discredited, and major developing countries like Brazil, Russia, India, and China (the so-called BRICs) all deviate from free-market orthodoxy in one way or another as part of their economic development plans. While the leading East Asian economies industrialized in defiance of the Washington Consensus, no developing country has succeeded by following the rules of neoliberal globalization. That should come as no surprise. Industrialization means that a country acquires its own share of increasing-returns industries, a sector dominated by large-scale manufacturing enterprises. No country in history has ever become rich thanks to the efforts of small-scale peddlers taking advantage of free markets to sell each other low-value-added items in the equivalents of flea markets and garage sales.


The very existence of already-industrialized great powers like those of North America, Europe, and East Asia presents a challenge to societies in the global South in their quest for what deserves to be called, without apology, modernization. The methods employed by those of today’s late-developers will not necessarily be the same as those used by countries and regions that industrialized earlier. But the broad lessons of several centuries of developmental statism must be learned and applied by the leaders of the developing world, if their countries are to escape poverty and geopolitical subordination.

The first of those lessons is that what drives prosperity in the long run is not markets, in themselves, but the substitution of human and animal labor by machinery or software, powered by energy sources other than human and animal muscle and biomass.

The second insight of developmental statism is that regions and societies do not develop in isolation from one another. This is a source of both risk and opportunity for less developed societies. On the one hand, late-developing societies risk military conquest or economic subordination to more technologically advanced communities. On the other, they can borrow technology from the more advanced, and they can skip stages of development rather than repeat the histories of the early industrializers.

The third insight is that the primary units in the world economy are not private actors—workers, consumers, investors, firms—but states. Today, the nation-state is the dominant form of polity, but developmental statism is relevant to city-states like Singapore and to multinational blocs like the European Union.

Like today’s advanced industrial nations at earlier stages of their history, today’s developing countries face two challenges. The first challenge is to move from an agrarian economy to an industrial economy. The second challenge is to make their traded-sector industries competitive in regional and global markets.

The first challenge is simple and straightforward, even if achieving it is difficult in particular poor countries because of political factors. Basic modernization requires moving from the agrarian era into the industrial era. It means building the essential infrastructure of a modern technological society—indoor plumbing and sanitation systems, electrical grids, paved roads, airports, and telecommunications infrastructure. And it means turning traditional manufacturing, agriculture, and mining into mechanized, industrial-scale sectors of the kind familiar in the developed nations of North America, Europe, and East Asia.

Great gains in living standards can be obtained by the initial deployment of technology even in the poorest countries. Relatively old and mature technologies like indoor plumbing, electrical grids, and asphalt roads can make a profound difference in the quality of life of the inhabitants of the poorest countries, like many in Africa. The installation of modern infrastructures—many of them the large-scale grids demonized as “high modernism” by romantic leftists like James C. Scott—must be the priority for reasons of humanitarianism as well as economic strategy.

But once a country has succeeded in achieving basic modernization, a second, harder challenge remains. That is the challenge to create a traded sector whose industries are efficient and internationally competitive.

Tariffs, subsidies, or other measures can create a national automobile industry, by keeping out imports and forcing the nation’s citizens to buy the locally made automobiles. But in the absence of severe competition within the protected national market, the national automakers may have no incentive to create quality automobiles that foreign customers would choose to buy. The most successful developing countries in the last few generations, as we have seen, have been those of East Asia, which protected their infant industries but also pressured them to sell their products in foreign markets.

Unfortunately for today’s developing nations, the favorable geopolitical conditions that enabled East Asia’s export-oriented industrialization (EOI) no longer exist. During the Cold War, in order to keep them in the anti-Soviet alliance, the United States was willing to let its East Asian protectorates like Japan, South Korea, and Taiwan protect their home markets for their own producers, while enjoying one-way access to America’s vastly larger consumer market. The United States ran permanent merchandise trade deficits, to the benefit of East Asian producers and to the detriment of many American manufacturers. The American market for East Asian imports was enlarged even further by the reliance of many American consumers on credit.

The geopolitical rationale for US toleration of East Asian mercantilism ended with the Cold War and the rise of China. And the American credit bubble collapsed with the Great Recession. Slow growth, high debt overhangs, and rising inequality mean that the US market will not be able to drive EOI strategies for other countries in the future. And the three largest industrial economies other than the United States—China, Japan, and Germany—prefer to maintain merchandise trade surpluses with other countries over buying imports. Unless one or more major economies is willing to run permanent merchandise trade deficits, EOI strategies by developing countries are impossible.

The irrelevance of export-driven development like that of Japan, the Little Tigers, and China prior to the Great Recession means that developing countries have only two major options for creating and expanding their increasing-returns traded sector industries: one old and familiar, import substitution industrialization (ISI), and one relatively new, global value chain-oriented industrial policy.   

Innovations in low-cost global freight transportation and communications, including container ships and satellite telephony, have made it possible for companies that once dispersed production among different regions within a single nation-state to decentralize aspects of production in a number of countries in global value chains (GVCs). As a result, about a third of what is counted as international trade is actually international production within a single transnational company or group of suppliers and contractors whose activities are coordinated by a transnational company known as an original equipment manufacturer (OEM).

Much cross-border trade now involves intermediate goods—for example, auto parts made in one country, which are incorporated into an automobile component in a second country, with final assembly then occurring in a third country. This pattern of regional or global industrial production was unknown to Alexander Hamilton, Friedrich List, and W.W. Rostow.

But the basic logic of developmental economics remains the same. Even if no single country manufactures the entire good, from the raw materials going into one door of a factory to the finished product rolling out of the opposite door, it remains important to specialize in higher-value-added parts of the transnational chain—generally the ones that involve the most complex processing or design—rather than in the lower links of the value chain that add little value.

Global value chains do not eliminate the need for national industrial policies, but they alter their details. While a developing country in the 1950s might have tried to create an entire vertically integrated automobile industry within its borders, today it might make more sense for the country to try to specialize in the most lucrative, high-value-added portions of an international automobile manufacturing supply chain.

Mexico, for example, has become an important link in the automobile production supply chain, because of its proximity to the US automobile market and the fact that automobile production tends to be regionalized rather than globalized. In South Africa, automobile manufacturing, primarily for African markets, is the largest part of the manufacturing sector, making up 7 percent of GDP in 2012.13 Tunisia has similarly benefited from its proximity to Europe in developing its textile and clothing sector and, more recently, electronics and engineering.14


Like the changing nature of global supply chains, the changing nature of global manufacturing presents new challenges to late-developing nations. Countries like Britain, the United States, Germany, and Japan went through a phase of mass manufacturing employment during the transition from an economy of farmers and farm workers to a service employment economy. The coming robotics revolution may sever the link between mass employment and the most technology-intensive, productive industries but will not render national participation in high-value-added production for both domestic use and exports any less important.

In the twenty-first century, the remaining countries with large, premodern agrarian sectors may skip the phase in which large populations of workers are employed in the manufacturing sector. Labor shed by an increasingly productive agricultural sector may go directly into a burgeoning service sector. But even if high-value-added production industries or lucrative piecework niches in global value chains employ relatively few citizens, they will continue to be important assets for national economies as a whole.

For one thing, manufacturing and other increasing-returns industries have positive spillover effects, generating growth in the domestic sectors of the jurisdictions in which they are located. Already in developed nations, thanks to technology-enabled productivity growth, most workers are employed in the non-traded domestic service sector.

For another, economies that specialize wholly in agriculture, mining, or tourism may find it hard to export enough low-value-added goods or services to pay for the costly manufactured goods and high-tech services that they seek to import in the quantities they desire. This is why, contrary to what one might expect, most world trade is not between countries that export manufactured goods and countries that export raw material inputs and energy, but rather among similar developed industrial nations, which both export and import high-value-added merchandise.

If societies are not to be polarized into rich minorities of investors and managers and poor service proletariats, some of the gains from the advanced industrial enterprises will have to be shared with the population as a whole, through methods other than the twentieth-century system of high wages for numerous industrial workers. This will probably be as true of advanced developed economies as of late-developing economies.

Highly automated enterprises and their managers and investors could be taxed to support public services like health care, education and child care, and elder care; to subsidize low-wage workers or their employers; or even to fund a universal basic income. A more speculative possibility is allowing citizens to own shares of the robot factories and other profitable enterprises, in some form of “universal capitalism.” And we cannot dismiss the possibility that in at least some countries there will be efforts to socialize the high-tech means of production, undeterred by the poor records of socialism and communism in the twentieth century.


While the context and global conditions continue to evolve, the lessons of two centuries of modernization are clear. Apart from city-states that specialize as financial or commercial entrepôts, and a few well-governed countries rich in natural resources, no country has joined the first ranks in prosperity without being represented in at least some high-value-added, increasing-returns industries. And no country, from the leader Britain through the United States and Germany to Japan and China, has made that transition without having a relatively strong and competent state promoting some kind of high-value-added production within its borders.

Countries with strong states and relatively large domestic markets such as China, India, and Brazil have more options than smaller countries. They can condition access to their markets by foreign producers on transfers of technology or requirements for local production. A large domestic market can also allow them to build up large “national champions” capable of competing in global markets. In contrast, countries that are both poor and small have little leverage with rich countries or rich-country corporations and investors. This is a problem that regional blocs can only partly overcome, given problems of collective action.

But competent governments can still help to diversify the economies of smaller and poor countries and try to move them into increasing-returns industrial sectors. Each nation will need to take stock of its own resource and human endowments, its proximity to global supply chains and markets, and its relations with the great global economic powers. Nations that are small and poor are the most reliant upon international finance and development institutions, which can play a constructive role when they are able to wean themselves off of neoliberal dogma and green post-modernism and remember the distinction between economic development and charity.15

The basic story remains the same. “There is no alternative,” British Prime Minister Margaret Thatcher insisted, so often that the slogan became known by its acronym, TINA. Thatcher was wrong that there is no alternative to small-government, free-market neoliberalism. But in the case of economic development, the generations-old developmental state tradition can take credit for every major case of successful modernization and industrialization in the world, from Europe to North America to East Asia. Strong developmental states drive economic modernization by mechanizing, increasing production of traded-sector goods, and shifting production toward higher-value-added goods and services. TINA.

High-Tech Desert

When Bart Fisher returned home from college in 1972, his family’s alfalfa fields outside Blythe in California’s southeastern desert produced 7 tons of alfalfa per acre. Today, the Fishers get 10 tons per acre from the same land. They do it with the same amount of water as a much younger Fisher and his family used four decades ago.
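
A quick back-of-the-envelope calculation shows what those figures imply about crop per unit of water. The sketch below, in Python, uses only the numbers cited above; the calculation is ours, not the Fishers’:

# Alfalfa yield per acre on the Fisher farm, with water use held
# constant, using the figures cited in the text.
yield_1972 = 7.0    # tons of alfalfa per acre in 1972
yield_today = 10.0  # tons per acre today, grown with the same water

gain = yield_today / yield_1972 - 1
print(f"Crop per unit of water is up about {gain:.0%}")  # ~43%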

Growing water-use efficiency on farms like Fisher’s is one of the salient features of the evolution of agriculture in the developed world. Nowhere is this more apparent than in Palo Verde and the desert agricultural valleys of southwestern North America. These regions challenge two common narratives about water. The first is that we are blind to a looming disaster, sucking down water and ignoring a reality that will, in the words of Charles Bowden, “slap us in the face and we will have to snap alert. And this slap may come from our kitchen faucet….”1 It is the narrative most famously captured by the journalist Marc Reisner in his polemic Cadillac Desert,2 often read as a prediction that we are on a path toward “an apocalyptic collapse of western US society”.3

The second narrative, common among the technocrats who manage our water systems and the engineering enthusiasts who design them, is that inexorable growth of our population and economy inevitably means that we need more water – bigger dams, new desalination plants, new canals across the continent from wet to dry places, even icebergs towed from the Arctic.4

Both narratives hang on a central premise – that more people and more economic activity inevitably mean we’ll need more water. Recent experience in the United States southwest suggests both narratives need to be revisited.

In California, farm water use has declined by 40 percent since the 1980s, even as irrigated acreage has risen.6 Farmers like Fisher are growing more food with less water, with some of the saved water being shifted to cities, and modest steps are being made toward returning some of that water to the environment.

In cities too, water use is going down. Every one of the region’s major urban areas that once depended on unsustainable groundwater mining has turned the corner. Thanks to conservation and shifts to more sustainable sources of supply, the aquifers beneath Los Angeles, central Arizona’s sunbelt, and Las Vegas have stabilized or, in some cases, are rising.5

Contrary to the narratives of apocalyptic doom or ever-growing supply needs driven by unsustainable water use, these communities have demonstrated an adaptive capacity that supports more people and more economic activity while using less water. This creates opportunity – to grow more food, to move water to cities, and to begin to reclaim some of the surplus for the environment.

But the needs of a growing population and the likelihood of increasing water scarcity caused by climate change still require coming to terms with our nation’s water use in agriculture. Agriculture continues to use the lion’s share of water taken out of the southwest’s aquifers and rivers for human use, four times as much water as the region’s cities.7 The growing efficiency seen in places like Fisher’s farm is therefore a central feature in understanding how to negotiate an uncertain future.


The Palo Verde Valley is harsh desert with an irrigated heart. Approaching from the east, Interstate 10 drops through a bluff to expose the Colorado River that the eclectic early twentieth-century art critic and travel writer John Van Dyke described as a boundary between wet and dry “drawn as with the sharp edge of a knife. Seen from the distant mountain tops the river moves between two long ribbons of green, and the borders are the gray and gold mesas of the desert.”8 Even in a wet climate, where there is no knife edge between brown and green, water is a landscape’s central characteristic. But in a desert like that of the Lower Colorado River, the impact is profound.

Like much of modern California, Blythe and the Palo Verde Valley as we now know them began their lives as a real estate investment scheme. San Francisco capitalist Thomas Blythe first hired an engineer named Oliver Calloway in the 1870s to drain a swamp and divert Colorado River water at a place called Black Point at the valley’s upstream end. In 1877 Blythe filed one of the oldest water rights claims on the Colorado River. The early ventures were a failure, and Blythe’s death left a legal tangle from which it took decades for valley agriculture to recover.9

But recover it did, and in a big way, because given the goals and policies of a growing nation, irrigated agriculture in this part of the world was inevitable. The combination of fertile desert soil, a year-round growing season, and abundant Colorado River water had by the mid-twentieth century turned nearly every flat patch of river valley floor from the rugged Colorado River canyon country to the Sea of Cortez into farmland.

Farming has a deep history here. The native people who lived here before the arrival of the Europeans had long ago established a stable existence, practicing “flood recession farming”, planting along the river’s margin as its spring snowmelt peak receded.10 That was far too modest an endeavor for the boisterous newcomers of the young United States expanding westward from its European immigrant roots. To them, aridity seemed an act of defiance against the vanguard of a sprawling empire.

French scholar Emile Boutmy in 1891 described the motivating forces behind the transition sweeping across the North American continent this way: “The striking and peculiar characteristic of American society is, that it is not so much a democracy as a huge commercial company for the discovery, cultivation, and capitalization of its enormous territory.”11

Pushing aside the native inhabitants, we were, wrote historian William Cronon, “bringing (the land) into the marketplace.”12 In the east and midwest, that meant felling timber and plowing prairie into farms. By the time the wave of new settlement reached the arid region west of North America’s hundredth meridian, the land could not be brought into the marketplace without heroic measures to manage the water. Or the lack of water.

With dam-building subsidies from a United States government bent on manifest destiny, every stretch of desert farm valley that could do so grabbed a portion of the Colorado River’s scarce water, such that today the river almost never reaches the sea. Sixty miles upriver from where Fisher and the other Palo Verde farmers take their water, Los Angeles dips its straw into the Colorado with a system of pipes, pumps, and canals that move water more than 200 miles to the growing cities of the coast. But the farms were here first. While the casinos of Las Vegas and the golf courses of Palm Springs and Phoenix stand as the icons of our modern desert civilization’s excesses, it is still agriculture that consumes 70 to 80 percent of the river’s water.

The history matters because it is what makes agriculture central to the communities that grew up here, even if Boutmy’s cultivation and capitalization of the territory has been elbowed aside by a twenty-first-century economy siphoning off water to coastal metropolises that dollar for dollar produce vastly more computer chips and software than food.

Agriculture today makes up just two percent of California’s economy and provides four percent of its jobs.13 But the institutions we built to control and distribute the water, the rules meant to support farming, and the communities and cultural traditions that grew up around it were all made for a very different world. It is not something easily unwound.


Fisher Ranch is famous for its honeydew melons, and melons are a lucrative, high-value crop for Palo Verde farmers. But in 2014 just 3 percent of the Palo Verde’s land was planted in melons. Alfalfa has always been the backbone of valley agriculture, taking up nearly two-thirds of the Palo Verde Irrigation District’s land in 2014. Alfalfa, used as feed for cattle and dairy cows, is what is often called a “low value crop”, by which we mean the dollars returned per acre of land and gallon of water are low relative to melons or the winter vegetables that now dominate the cash flow through desert agriculture. But with low labor and capital costs and a stable price compared to the ups and downs of the produce market, alfalfa to feed our desires for meat and dairy has always been a smart bet for desert farmers. Our love of hamburgers and ice cream ensures a reliable market.

Critics have long complained that it makes little sense to grow alfalfa in a desert, but such rhetoric ignores agronomic reality. With abundant sun and available water, places like Palo Verde are ideal places to grow alfalfa. In fact in terms of the environmental footprint of agriculture, the desert may be the best place to grow it.

Recall Fisher’s 10 ton per acre production. In a place like Idaho, another major alfalfa-producing state facing issues with water scarcity, it takes twice as much land to produce the same amount of crop. The year-round growing season, allowing farming 12 months a year with no need to slow down in the winter, makes places like Palo Verde and the other nearby desert valleys by some measures ideal – if they can find the water to irrigate the land. “We don’t take winter or summers off,” explained Tina Shields, water manager for the nearby Imperial Irrigation District.

But in terms of dollars into a farmer’s bank account per acre of land and gallon of water used, alfalfa falls short of other crops. That is why thirsty metropolitan Southern California came calling on the Palo Verde Irrigation District in the 1990s looking to make a deal. The law allocating Colorado River water to the farm district made it impossible for wealthy, rapidly growing Southern California to simply take the water it needed, so the Metropolitan Water District of Southern California (Met) came offering money instead.

One of the features of modern water management that sometimes baffles economists is the relative value, in monetary terms, of water in different uses. With Metropolitan willing to pay farmers more for their water than they could make by using it to irrigate land, a simple free market economic transaction would have moved all the water to coastal cities. But Palo Verde’s negotiating team, led by Fisher, was not willing to allow a deal like that. Instead, they negotiated a lucrative arrangement under which some of the valley’s land can be fallowed in a year when Met needs water, paying farmers for the lost crop revenue.

That matched Met’s variable supply needs. While annual municipal water use is relatively stable, Met’s supply comes from three different sources – Colorado River Water, an aqueduct from Northern California, and groundwater and annual runoff within Southern California itself. When one of those supplies runs short, as has happened in recent years with deliveries from drought-plagued Northern California, Met needs to call more on its other sources, especially the Colorado River.

Flexibility in drawing on diverse sources of supply was critical to Met’s resilience, its adaptive capacity. But the leaders of the Palo Verde Irrigation District had some resilience goals of their own. To protect the long-term viability of Palo Verde’s agricultural economic base (their ability to retain, in the words of resilience scholars, the “basic structure and function” of their community), the program was capped: no more than 28 percent of Palo Verde’s land could be fallowed in any year. The rest of the valley would remain in production. The agreement struck a balance between economic realities, the needs of growing city populations, and the desire of the desert farm community to retain its agricultural core. The result is a program that in 2015 fallowed 16 percent of Palo Verde’s least productive acres, moving the water that would have been used to irrigate them to Metropolitan.

The same basic story is being repeated again and again in Colorado River Basin agriculture – water moved from marginal lands to more economically valuable non-agricultural uses, while agricultural productivity on the land that remains continues to rise, through shifts to more lucrative crops, more efficient irrigation techniques, or both.


California’s water books are not being balanced on the backs of agriculture alone. Alongside the striking increase in agricultural efficiency is a parallel phenomenon in Southern California’s cities.

Between 2004, the year the Palo Verde deal was signed, and 2015, total water use in the greater Los Angeles-San Diego metropolitan area declined 25 percent, even as the region’s population grew by 6 percent.14 Drought and water supply constraints have pushed Southern California’s conservation performance beyond that of many modern cities. But it is by no means unique. The decoupling of water use from population and economic growth is even more stunning in the US municipal water sector than it is in agriculture.
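
The arithmetic of that decoupling is worth making explicit. A minimal sketch, using only the figures cited above (the per capita calculation is ours, not Metropolitan’s):

# Decoupling in the greater Los Angeles-San Diego area, 2004-2015.
total_change = -0.25  # total water use: down 25 percent
pop_change = 0.06     # population: up 6 percent

# Per capita use scales as (new total) / (new population).
per_capita_change = (1 + total_change) / (1 + pop_change) - 1
print(f"Per capita water use change: {per_capita_change:.0%}")  # about -29%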

When it comes to water, the cities of the western United States get the most attention. Aridity and the struggle to overcome it have long defined the place and the stories we tell about it. Chinatown is the cinematic definition of Los Angeles, and Cadillac Desert the region’s non-fiction backdrop. But the pattern seen in L.A. and the region’s other arid cities is not merely a series of localized responses to geographic constraints. Rather, it is part of a trend that extends not only across the United States but through much of the developed world.

In the first decade of the twenty-first century, water use in the United States passed a remarkable milestone. For much of the life of the nation, water use rose with the growth of the nation’s population and its economy, both in per capita and absolute terms. Individually and collectively, our water use kept going up, contributing to a dominant cultural narrative of scarcity and risk. But beginning in the 1980s, the relationship between water, population, and the economy began to break down, with per capita municipal water use flattening out. Sometime between 2005 and 2010, the decline in per capita use became so rapid that total municipal water use in the United States began to decline, even as the nation’s population and economy continued to grow.6,15

Government has played a central role. The Energy Policy Act of 1992, signed by Republican President George H.W. Bush in October of that year, imposed a federal mandate: after 1994, all toilets sold in the United States would have to meet a 1.6 gallon per flush standard, about half the norm at the time. It set standards for faucets, showerheads, clothes washers, and dishwashers. That made water conservation automatic in each new building and whenever an old appliance was replaced, and established a base on which communities like Southern California could build, with rebates to replace old plumbing fixtures. Amy Vickers, the engineer and water conservation expert who wrote the 1992 legislation, has estimated that it results today in savings of 20 gallons per person per day in the United States.

It is unclear how low the rich world’s municipal water use can go, but there is no reason to think that we are anywhere near the bottom. The World Health Organization offers a boundary – 25 gallons (100 liters) per person per day to meet basic indoor consumption and hygiene needs in a home with multiple water taps and continuous, 24/7 access to water.28 That suggests a boundary condition well below the current US level of 89 gallons (340 liters) per person per day.6 The WHO figure does not include outdoor water use for urban landscaping, which is critical in the long run because the water we put on our lawns and trees is generally fully consumed by evapotranspiration, and it does not include water used by businesses. So one should not expect total municipal use to fall quite that far.
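
To put those thresholds side by side, here is a small sketch; the unit conversions and the headroom figure are ours, and the headroom overstates what is achievable, since the US figure includes the outdoor and business uses that the WHO boundary excludes:

GAL_TO_LITERS = 3.785  # one US gallon in liters

who_floor_gal = 25  # WHO basic indoor needs, gallons per person per day
us_use_gal = 89     # current US municipal use, gallons per person per day

print(f"WHO floor: ~{who_floor_gal * GAL_TO_LITERS:.0f} liters/person/day")  # ~95, cited as ~100
print(f"US use: ~{us_use_gal * GAL_TO_LITERS:.0f} liters/person/day")        # ~337, cited as ~340
print(f"Theoretical indoor headroom: {1 - who_floor_gal / us_use_gal:.0%}")  # ~72%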

But survey data and operational experience suggest the American public is willing to cut back far more than it is currently doing if needed.17 California learned this lesson well in recent years, with state-imposed mandatory municipal water use reductions leading to substantial savings beyond those already achieved by municipalities that had long been pursuing conservation measures.18 Per capita water use within the Metropolitan Water District of Southern California’s service area declined 14 percent from 2014 to 2015.26


While the nation’s water conservation success is undeniable, the environment continues to suffer from a century of decisions to remove water from rivers and aquifers for human use. In few places is that more apparent than the tail end of the Colorado River, which for its last 100 miles through Mexico to the Sea of Cortez is routinely dry save a modest amount of agricultural runoff from the farms of the Mexicali Valley.

What is left of the Colorado River as it flows through the deserts between Arizona and California is largely an agricultural water delivery system. Token wildlife refuge lands flank it at several points in its final run toward the sea. But when the river hits Morelos Dam on the Arizona-Baja border, humans divert its entire flow for farms, drying up its channel completely. It is, in the words of the late Philip Fradkin, a “river no more”.19

Even as efficiency gains in upstream farms and cities have increased the system’s reliability for human users, the environment remains a largely ignored water user. But water management decisions built atop growing efficiencies suggest opportunities to begin reversing that trend, including a major policy initiative in a most unlikely place.

Since 2003, agricultural water managers in the Imperial Valley have been quietly carrying out the Colorado River Basin’s largest experiment in shrinking agriculture’s footprint to provide water for a vast once-dry basin called the Salton Sea. Located northwest of the Imperial Irrigation District, the Salton Sea fills a sink below sea level, a shallow bowl created by a quirk in the region’s geology. It was a dry salt flat when early Imperial Valley irrigation works failed in 1905 in the midst of a flood, and the entire Colorado River took a hard right turn. For more than a year the river flowed into the sink instead of the ocean. Human engineering turned the river back to its original course, but agricultural runoff soon came into balance with desert evaporation and the Salton Sea became a permanent feature on the landscape.

The Salton Sea gets little attention in part because the environment at risk is ugly and smelly, not the sort of pristine high mountain trout stream or riparian valley whose aesthetics draw support from traditional environmental constituencies. But with the Colorado River desert’s wetlands all but gone, the Salton Sea has become one of the last toeholds left for migrating birds along the Pacific Flyway.

Its “unnatural” origins and dependence on agricultural runoff rather than a native river complicate the Salton Sea’s place in environmental politics. But the birds don’t care, and the sea has become prime bird habitat, as waterfowl left with no natural delta moved to the sea instead.

The effort to protect the Salton Sea and its avian inhabitants demonstrates the potential and the perils of improving water use efficiency for the environment. Agricultural water management practices created the Sea. Changes to those practices have come to represent a major threat.

In the late 1990s and early 2000s, Los Angeles and San Diego came to Imperial farmers looking to buy water. California was living beyond its Colorado River water means, routinely taking 15 percent more than its minimum legal allotment under the rules that divide the water among seven states in the United States and two in Mexico. The rules gave surpluses to California, but a shifting climate and growing demands elsewhere had largely eaten up the surplus.

Within California, as supplies got tighter, the rules also ensured that the farmers in Imperial, Palo Verde, and elsewhere had first dibs on the remaining supplies, meaning any cuts would have to come from municipal water users in the San Diego-Los Angeles metropolitan areas. Faced with a threat to their supplies, the coastal cities of southern California came calling.

The lands of the Imperial Irrigation District are the largest single concentration of irrigated agriculture in the Colorado River Basin. Like Palo Verde, they are spread across a desert basin of rich soil left by millennia of Colorado River runoff, turned green by irrigation water now diverted from the river. As in much of the basin, alfalfa is the largest single crop.20,21 And as with Imperial’s desert neighbors in Blythe 80 miles to the east, the big money is in winter vegetables and melons.

The Southern California cities proposed to pay for irrigation system improvements, and pay farmers to fallow their land. The arrangement included canal lining to save water leaking from Imperial’s system into the sandy desert floor, and payments to farmers to tighten up the efficiency of water delivery on their farms through improvements like field leveling and installation of sprinkler systems to replace the traditional flood irrigation.

These are steps farmers frequently take on their own when water is expensive or scarce, but they require significant capital investment. With Colorado River water abundant, Imperial farmers had little economic incentive to invest in efficiency on their own. Money from Los Angeles and San Diego changed the dynamic. The resulting arrangement was like the agreement already signed with Palo Verde, but on a grander scale.

But while the deal represented a win-win for the farmers and the cities, it presented an existential threat to the Salton Sea. Fallowing land or on-farm efficiency measures like the installation of drip irrigation would reduce the amount of runoff flowing into the Sea and risked upsetting the balance between inflow and evaporation. The decline of the sea also threatened to expose miles of shoreline, worsening the valley’s already poor air quality when wind kicked up the potentially toxic dust left behind. So the deal included an important twist. Some of the conserved water, rather than being diverted to coastal cities, would be channeled directly into the Salton Sea.

To date, the largest agricultural-to-environmental water transfer in the Colorado River Basin has worked. Meanwhile, left to their own devices to decide how best to respond to having less water to work with, Imperial’s farmers have prospered by reducing the acreage planted in low-value crops like alfalfa and shifting their resources to big-dollar plantings like winter lettuce.29 And even as the acreage planted in alfalfa has declined, the increased productivity per acre has yielded a remarkable result. Total alfalfa tonnage produced in 2014 was essentially unchanged from the early 1970s.

The current agreement only runs to 2017, and local and state leaders are scrambling to come up with a permanent approach to protecting the Salton Sea. But the project to fallow farmland and use some of the saved water to preserve the Salton Sea provides an important case study in what it takes to exploit the potential environmental benefits offered by increased efficiency in desert agriculture.


As the effort to preserve the Salton Sea demonstrates, none of this is automatic or simple. Water efficiency in many situations can make farming more profitable. But that is no guarantee that water will be reallocated to cities or the environment. Much of the water saved in recent decades has stayed in agriculture, most notably in the expansion of the almond orchards of California’s Central Valley. Rather than moving water from low-value agriculture to cities, a big portion of the water has moved instead from low-value agriculture to higher-value agriculture.

The political and policy response to water scarcity has been equally important. Including water efficiency standards in the Energy Policy Act of 1992 had profound effects. Money from wealthy urbanized Southern California and complex government-to-government deals are what made the Palo Verde and Imperial water shifts possible.

But the direction of travel is clear and the opportunities for the environment are significant. Total agricultural water use in California peaked in 1980 and has been declining since.6,13 In the face of water scarcity and higher costs, total California acreage devoted to rice, cotton, and alfalfa declined from 2.4 million acres in the 1992 Census of Agriculture to 1.8 million acres in the 2012 census.26,27 In California, just 21 percent of irrigated acreage used high-efficiency targeted irrigation techniques like sprinklers and drip in 1985. By 2010, that had more than doubled to 45 percent. As a result, water use per irrigated acre in California has declined 43 percent since 1980.6

Yet what happened in Imperial demonstrates that reaping the benefit of agricultural efficiency for cities and the environment requires more than just market signals and better technology. It requires overt intervention to assure that society’s changing structure and values are matched with the opportunities provided by agricultural efficiency.

Unfortunately, a deeply entrenched water management paradigm continues to stand in the way. Consider the Los Angeles Department of Water and Power, one of the nation’s largest municipal water retailers, serving 4 million people. In a major 2005 planning effort, LADWP managers projected that over the coming decade their water demand would rise by 7 percent. In reality, it dropped by 18 percent.14,31 Yet despite steady declines, the 2016 version of the agency’s draft water management plan again projects that the long-term decline in water use, dating back to the 1990s, will reverse itself, with use rising again from its current low levels.

This pattern of overestimating future demand and underestimating consumer conservation is widespread, and is the major impediment to capturing the benefits that decoupling offers.

In the largest study of its kind ever done, the US Bureau of Reclamation in 2012 published a tome looking in detail at water and future demands, both municipal and agricultural, across the seven states that depend on water from the Colorado River Basin. The study drew headlines because of its projection of an inexorably rising need for more water for the region’s cities and farms.

The study’s conclusions and the underlying assumptions on which it is based have contributed to growing tensions between water users across the region as communities fear a future spent trying to cope with what conventional wisdom suggests is sure to be a gap between water demands and supplies. Significant money and energy are being spent both on lawyers preparing to fight over water and on expensive engineering schemes to augment the region’s supplies with ideas as extreme as a major new North American aqueduct to bring water from the Midwest.

The old water paradigm, and the tensions it creates, also make environmental policies harder, because water users afraid of shortfalls are less likely to be willing to cooperate in reallocating supplies to environmental restoration. But contrary to the study’s projections and the conventional wisdom, in the three years since it was published, water use in the Colorado River Basin has declined. In fact, total Colorado River Basin water use peaked in the late 1990s and has steadily declined since.4,33


In 2014, the United States and Mexico conducted an unprecedented experiment that suggests what is possible if we embrace decoupling. In the midst of a deepening drought and dwindling reservoir supplies, the two nations released water from Morelos Dam, the last major dam on the Colorado River, into the desiccated Colorado River delta. A river channel that is usually dry was wet for a few months, triggering new growth of riparian habitat and, equally important, celebrations in Mexican communities like San Luis flanking the usually dry riverbed. The water was pushed down the channel in a pulse designed to mimic, on a tiny but ecologically important scale, the spring floods that once ran wide in the old river delta.

The international agreement that allowed the experiment was built in part on the promise of increased agricultural efficiency in Mexico, funded by environmental groups and governments on both sides of the border.34 It was the promise of decoupling, made real for the Mexican communities along the river’s path and the environment flanking the once dry river channel.


Welcome Breakthrough Generation 2016

Each summer, the Breakthrough Institute welcomes a new class of Breakthrough Generation fellows to join our research team for 10 weeks. Generation fellows work to advance the ecomodern project by deepening our understanding of energy, environment, technology, and human development.

Breakthrough Generation has proven crucial to the work we do here. Past fellows' research has contributed to some of our most impactful publications, including Where Good Technologies Come From, Beyond Boom & Bust, How to Make Nuclear Cheap, Lighting, Electricity, Steel, and Nature Unbound.

Over 70 fellows have come through Breakthrough Generation since its founding in 2008. We are delighted that the following scholars are joining their ranks:

Diana Kool

Diana Kool has a BSc in Public Administration and Governance from Utrecht University and holds a master’s degree in Environment, Politics and Globalisation from King’s College London. While completing her master’s, she worked at the Green Economy Coalition as a part-time researcher. More recently she worked for the Director-General for Enterprise and Innovation at the Dutch Ministry of Economic Affairs to stimulate innovative European solutions to global challenges through the TEDxBinnenhof conference. With a background in the social and political sciences, she is keen to understand the processes through which climate science, climate action, and environmental policies have become highly politicized and contested.

James McNamara

James McNamara is a conservation biologist interested in understanding the relationships between people, wildlife, and the environments they share. He works to achieve this through scientific research, film, and photography. Although his specialist area is the wildlife trade in West Africa, his professional interests are broad, and at their core is a desire to deepen our understanding of how, as a society, we can work towards safeguarding our natural heritage through informed and sustainable development. His work and personal travel have taken him to a variety of locations, from Papua to Afghanistan. More recently he drove overland from London to Mumbai with a close friend, documenting the work of conservation projects in central and southern Asia.

George Livingston

George Livingston holds a B.S. in Biology from the University of Michigan and a Ph.D. in Ecology from the University of Texas. He recently completed a postdoc at the University of California, Davis. George has led diverse research projects on ecosystem resilience, biodiversity, agriculture, and science policy. He is the author of over ten publications. With the support of fellowships, he has spent time living and working alongside scientists in Mexico, Costa Rica, France, and Japan. He has also worked with the California Dept. of Pesticide Regulation and citrus growers. In the fall, George will be an AAAS Science & Technology Policy Fellow at the Dept. of Energy in Washington D.C. He grew up in the wilderness of Idaho and enjoys fishing, whitewater rafting, and cooking.

Emma Brush

Emma Brush received her B.A. from Dartmouth College in 2013 and her M.A. in the Humanities from the University of Chicago in 2015, where she specialized in environmental literary studies and ecocriticism. Her research is positioned at the intersection of the textual and the environmental; specifically, she is interested in the ways in which our literary practices inform, and are shaped by, our understanding and treatment of varied species, landscapes, and ecosystems. Emma's current interests center on notions of interdependence and hybridity as they pertain to conservation efforts and the rehabilitation of biodiversity.

Mark Nelson

Mark Nelson holds an MPhil in Nuclear Energy from Cambridge University, where he studied at Pembroke College as a W. W. Allen Scholar. Prior to Cambridge, he took degrees in mechanical and aerospace engineering and in Russian Language and Literature at Oklahoma State University. He has studied and lived in St. Petersburg, Russia; conducted mechanical engineering research at Los Alamos National Laboratory; and worked in a pants factory in Tajikistan, making pants. Mark's studies in nuclear engineering led him to the broader problems of electricity production and carbon emissions. His current research interests include tracking electricity system emissions, particularly for France and Germany, and speculating on the role of energy returns in society: how do changes in energetic limitations manifest themselves in political and social life? When not thinking about electricity data, Mark is a middle-distance runner, abuses his long-suffering camera, and thinks about electricity data.

Synthetic Abundance

We often talk about how bountiful nature is. But in reality, without engineering and enhancement by humans, natural ecosystems are very sparse in their supply of material goods.

For centuries and millennia, humans have innovated and adopted new technologies in an effort to overcome nature’s scarcity, and to create more abundant material welfare. This has so far been accompanied by an ever greater footprint on the environment, causing great loss of nonhuman life. Yet the means by which humans create abundance also hold the key to shrinking the footprint. Creating abundance and sparing nature, often seen as clashing values, are in fact two sides of the same coin.

Before the invention of agriculture, Earth was only able to support a few million people. It didn’t take long for our Pleistocene ancestors to find out that the supply of meat from wild animals was very limited. Ever since, humans have repeatedly faced constraints on nature’s supplies of material goods. The amount of nitrogen fertilizer one can get from natural processes, such as through slash and burn or planting legumes, is restricted by the amount of land available. The same is true for energy from fuelwood, and draft power from horses. Relying on whales for lighting fuel, as people did in the 19th century, meant relatively high prices and limited amounts of lighting. Today, we’re coming up against the limits of how much fish we can get from wild stocks.

Two processes have been essential to overcoming this scarcity: substitution and intensification.

Substitution tends to go in the direction of increasingly artificial means of providing material goods. This often happens in three stages. It starts with harvesting wild plants and animals, like bushmeat, capture fisheries, fuelwood from unmanaged forests, or lamp oil from whales. The second stage we might call farming, where things are grown in engineered ecosystems, like crops, tree plantations, pastured meat, and extensive aquaculture in coastal or inland waters. The third and final stage is the factory, where we produce goods in closed systems, often using non-renewable resources. Examples of factory production are synthetic fertilizer, nuclear power, closed-loop aquaculture, LED-powered vertical greenhouses, and in-vitro meat.

Along with these cases of substitution, food production has intensified, in order to grow more food on less land. Over centuries, farmers have gone from one harvest every 10 years to two or three harvests per year, using up to 30 times less land to produce the same amount of food. New seed varieties, together with fertilizers, pesticides, and irrigation, allowed cereal yields to triple in the last 50 years even as the land area remained stable.
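
A minimal sketch makes the land-sparing arithmetic concrete. The numbers here are illustrative placeholders, not data on any real farming system; the point is that harvest frequency alone accounts for the thirtyfold difference mentioned above:

def land_needed(food_tons, yield_per_harvest, harvests_per_year):
    """Acres required to grow food_tons of food per year."""
    return food_tons / (yield_per_harvest * harvests_per_year)

# Illustrative: 300 tons of food per year at 1 ton per acre per harvest.
fallow_system = land_needed(300, 1.0, harvests_per_year=0.1)  # one harvest per decade
modern_system = land_needed(300, 1.0, harvests_per_year=3.0)  # three harvests per year

print(fallow_system / modern_system)  # 30.0 -- the "30 times less land" above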

As intensification and substitution created abundance for humans, they also vastly reduced the environmental harm of producing a given good.

Agricultural intensification often comes at the expense of farmland biodiversity, but this tends to be more than offset by the reduced need to convert new land to farms and the attendant habitat loss. The technologies associated with the Green Revolution cut global farmland expansion by half since the 1960s, sparing an area the size of Western Europe from conversion.

Synthetic fertilizer allows farmers to grow crops without land-intensive legume cultivation to fix nitrogen. With kerosene, we could get lighting without killing whales. Synthetic rubber and fiber allow for abundant supplies of these materials with a minimal land footprint. While we are coming up against limits of how much fish we can catch in the wild, aquaculture, which now supplies more than half of fish for human consumption, can continue to increase supply while taking pressure off the oceans.

There is a pattern to these instances of substitution. Going from harvesting to farming allows us to stop killing the things we’re trying to save, like whales, wild fish, and terrestrial mammals. Going from farming to factory production tends to radically reduce the amount of land needed to produce any given good. In these cases, humans have spared nature not by using it sustainably or more efficiently, but by relying less on it for our material welfare. Bioenergy can be produced sustainably, strictly speaking – harvesting no more than the annual regeneration – from crop-based biofuels or woody biomass. But if it covered a very large area, the biodiversity and habitat loss would be devastating.

To date, the savings associated with greater land use and resource efficiency have been overwhelmed by increases in population and consumption, in part enabled by those very efficiencies. But there is good reason to believe that we are now at an inflection point. At a certain level of material abundance, people’s consumption of material goods begins to saturate. And greater material abundance and security – associated with well-paying jobs in services and manufacturing, as well as the education and healthcare that wealthier societies can afford – tends to result in lower fertility rates, further lessening aggregate long-term demand for land and resources.

Of course, technology can be a double-edged sword. More powerful equipment made whaling a ruthlessly efficient industry in the 20th century, capable of killing tens of thousands of whales per year, even as substitutes eventually made whaling unprofitable. Agricultural intensification in the tropics spares land globally but can lead to farmland expansion locally. Most forms of substitution have required large amounts of energy, which, as long as it comes from fossil fuels, contributes to climate change. The direction and nature of substitution, and technological change more broadly, matter. Nor do substitution and intensification eliminate all environmental impacts. As a rule, they spare nature by trading more benign impacts for worse ones.

A world where environmental impacts peak and decline doesn’t have to be a world entirely devoid of organic farming, grass-fed beef, wild-caught salmon, or wood stoves. Communities and societies will continue to choose to rely in part on these more natural goods, sometimes for biodiversity reasons, and sometimes to preserve cultural landscapes or because they enjoy the taste and feeling of wild foods. Nor will everyone live in high-density cities. 

But with 7 billion people going on 9 or 10 billion, all striving towards higher material standards of living, if we are to leave space for nonhuman species, most of our food, energy, and materials will have to come from more artificial and concentrated sources. Intensive farming, feedlots, fish grown in tanks, nuclear power, and cities are the way to environmental salvation, not damnation. Substitution and intensification may finally break the bonds between human welfare and environmental destruction, ushering in abundance for humans as well as nature.

Energy Access Without Development

Below are my remarks from the event. In the coming months, Breakthrough will be releasing more research on the role energy plays in human development. 

In recent years, a growing cohort of scholars, journalists, and policy-makers have recognized the connection between energy consumption and human well-being. As a general rule, societies that consume more energy score better on most measures of human development, including life expectancy, health outcomes, educational attainment, and economic security. 

As a result, efforts to alleviate poverty have increasingly focused on improving access to electricity and modern cooking, heating, and transportation fuels. In 2010, the United Nations declared 2012 the Year of Sustainable Energy for All, with the goal of ensuring universal access to modern energy services. In 2013, the Obama Administration launched the Power Africa initiative with the goal of doubling access to power in sub-Saharan Africa. 

Unfortunately, these efforts too often conflate access to modern sources of energy with consumption of modern energy. It is the latter, not the former, that is strongly correlated with better human development outcomes. The International Energy Agency defines energy access as consuming 100 kWh of electricity per year, about enough to power a single lightbulb. The United Nations Sustainable Energy for All initiative doesn’t explicitly call out a specific threshold, but the four key technological pathways identified by the initiative give some sense of the scale of its ambitions: clean cookstoves, biomass-based mini- and micro-grids, decentralized solar generation technologies, and energy-efficient solar lanterns and lighting.
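
A quick sanity check on that threshold (the arithmetic is ours, not the IEA’s):

HOURS_PER_YEAR = 365 * 24  # 8,760 hours

# 100 kWh per year, expressed as continuous power in watts.
watts = 100 * 1000 / HOURS_PER_YEAR
print(f"{watts:.1f} W continuous")  # ~11.4 W -- roughly one efficient lightbulb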

Consistent with that focus, other international aid groups, governments, and multilateral development agencies have distributed tens of millions of clean cookstoves to reduce the smoke from fires and make wood and dung burn more efficiently. Millions more solar lanterns have been donated or sold to the energy poor as a replacement for candles and kerosene lamps.

Now there is nothing per se wrong with these efforts. But the question we are raising in our work is, what exactly can we realistically expect from them? Is this an exercise in charity, trying to make deep agrarian poverty a bit more bearable? Or is it an exercise in economic development, focused on creating development pathways that can move large numbers of people out of agrarian poverty? Can economic development and growth proceed without very dramatically raising the amount of energy consumed, an order of magnitude or more beyond the threshold that IEA defines as constituting access? Can higher energy consumption, consistent with modern life expectancies, health outcomes, educational attainment, and social and economic mobility be achieved without moving most of a nation’s population out of the agricultural sector and rural social and economic arrangements? And can small scale interventions at the household level, cookstoves, mini-grids, and solar lanterns support economic enterprise at the scales necessary to expand off-farm economic activity and raise incomes sufficiently that large populations are able to consume energy, or energy services if you will, at levels that are recognizably modern?

Our conclusion is that there are no large-scale precedents for what is being proposed at UNSE4All and elsewhere. My interest is not to pick on the United Nations. The idea that these efforts can lead to something approaching modern levels of energy consumption is widely held and, at best, ahistorical. 

More broadly, the track record of these kinds of micro-interventions is not good. Whether one looks at micro-credit programs such as those initiated by Muhammad Yunus and the Grameen Bank, or Jeffrey Sachs’s Millennium Villages, or major efforts to distribute clean cookstoves and solar lanterns, none of these initiatives has demonstrated much success at moving large populations out of poverty. Again, they have helped some individuals and families improve their lives to varying degrees. But that is different than achieving broad human or economic development objectives.

So what does work? Having spent a number of years looking at how nations around the world have achieved universal access to modern levels of energy consumption, I can tell you that there are a number of consistent features in just about every nation that has succeeded.

The first is that no nation has achieved universal access to electricity and modern fuels without moving most of its population out of agriculture and into cities. Subsistence farmers can’t pay for electricity and can’t afford appliances to convert electricity into useful energy services. Universal access to modern energy services requires very major growth in off-farm employment and incomes.

Second, household electrification has generally proceeded as a side benefit of energy development for large scale economic enterprise and infrastructure. Even in the United States, the provision of electricity from Pearl Street Station to wealthy households was a sideshow. The first widespread applications of electricity were for trolleys and factories. With large loads and initial generation and transmission facilities in place, the cost of extending connections to households was relatively low.

Third, most people throughout history have gotten access to electricity by moving to the city, not by having electricity connections come to them in the country. Rural electrification, where it has occurred, has occurred as the last step toward universal electrification, after most people have achieved access in cities.

Fourth, there is no substitute for functioning public institutions. That doesn’t mean that there can be no tolerance for corruption or inefficiency. Energy infrastructure development has proceeded under a variety of institutional circumstances and with more or less transparency to reasonably good effect. But there is no shortcut for basic infrastructure development and governance. 

The past, of course, is only prologue. But as we consider contemporary efforts to end energy poverty, there are a number of lessons from that historical record that we might apply today: 

1. Prioritize energy development for productive economic enterprise, industry, mining, agriculture and manufacturing.

2. Focus on extending grid access in cities. Huge populations already live in close proximity to electrical grids and centralized generation capacity. We are currently undergoing the most rapid rates of urbanization in human history. The best way to increase energy consumption for the energy poor is to work with those trends, not against them.

3. Where energy access programs do target rural communities, target applications that raise agricultural productivity and on-farm incomes.

4. Plan for the future as well as the present. Developing world economies are evolving rapidly. Investments in energy technology, particularly off-grid technologies, need to anticipate continuing energy development, infrastructure, and integration. 

Low-Carbon Portfolio Standards

Expanding existing state Renewable Portfolio Standards (RPS) into Low-Carbon Portfolio Standards (LCPS) would more than double the statutory requirements for clean energy in the United States. Such a policy shift would prevent the premature closing of many of America’s nuclear power plants and assure that nuclear power plants will be replaced with low-carbon electrical generation when they are retired.

Read the full report here.

In aggregate, the existing RPSs require a total of 420 terawatt-hours of annual renewable generation across 30 states and Washington, DC by 2030. If nuclear were included in new low-carbon standards in all states that currently have both RPS policies and operating nuclear plants, the mandated amount of clean energy would increase to 940 terawatt-hours of annual carbon-free electricity. Assuring that these additional 520 terawatt-hours of electricity remain low-carbon would prevent 320 million metric tons of carbon dioxide emissions, a 17% reduction relative to what would otherwise be the case.
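
As a quick sanity check, the emissions rate of the displaced generation can be backed out from these figures; a minimal sketch in Python (the per-MWh rate below is derived from the report’s numbers, not stated in it):

```python
# Back-of-envelope check on the LCPS figures above.
rps_twh = 420     # existing RPS mandates by 2030, TWh/yr
lcps_twh = 940    # mandated clean energy if nuclear is included, TWh/yr
avoided_mt = 320  # million metric tons of CO2 avoided

extra_twh = lcps_twh - rps_twh                              # 520 TWh/yr
implied_t_per_mwh = (avoided_mt * 1e6) / (extra_twh * 1e6)  # t CO2 per MWh

print(f"Additional low-carbon generation: {extra_twh} TWh/yr")
print(f"Implied emissions rate of displaced power: {implied_t_per_mwh:.2f} t CO2/MWh")
# ≈ 0.62 t/MWh, consistent with a mix of gas- and coal-fired replacement power
```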

Replacing carbon-based sources of energy with low-carbon sources remains the most reliable indicator of long-term progress toward emissions reduction goals. In the power sector, Renewables Portfolio Standards have been a driving force behind deployment of renewable energy technologies in the United States. However, nuclear generation in the United States has stagnated over the last 20 years and may decline substantially in the coming decades. If existing nuclear plants close prematurely and are replaced with natural gas, as is likely, much of the gain in low-carbon share of US electricity generation associated with renewables deployment will be lost.

The loss of clean and reliable nuclear power will make the challenge of decarbonizing the US power sector much more difficult. This loss of clean electricity will threaten US climate commitments made as part of the Paris Agreement under the United Nations Framework Convention on Climate Change in December of 2015. This is true even if wind and solar replace a substantial share of the lost generation from retiring nuclear plants, since every megawatt-hour of nuclear replaced by renewables is a megawatt-hour renewables can’t replace from fossil fuels. As such, transforming RPSs into LCPSs would move the ball significantly farther downfield towards full power-sector decarbonization by midcentury.

In this report, we recommend expanding state Renewable Portfolio Standards into more ambitious Low-Carbon Portfolio Standards to include existing nuclear power plants. An LCPS would assure that premature retirements of existing nuclear plants do not erode some or all of the carbon and clean energy benefits of continuing deployment of wind and solar power in the short-term, while significantly raising the statutory requirement for deployment of low-carbon electricity generation over the long-term. LCPSs would also include new nuclear power, hydroelectric power, and fossil fuels with carbon capture and storage. 

No. 5 / Summer 2015

Passion and Pragmatism

Along with Stewart Brand and Michael Lind, David was among the first people to bluntly tell me that if I were really serious about climate mitigation, I would need to learn to utter the N-word: nuclear energy. For someone who had only recently shed his environmentalist identity, supporting nuclear was the final frontier, the thing that would mark my final break with the contemporary environmental movement. It was one thing to suggest that environmental regulations and energy efficiency wouldn’t be enough to achieve deep reductions in global carbon emissions, quite another to suggest that splitting the atom, the existential threat that arguably gave birth to the modern environmental movement, would be the key to addressing the present existential environmental threat.

It would take a couple more years for the message to fully sink in. Again and again, as my colleagues and I looked at where and how modern economies had succeeded in decarbonizing at rates consistent with meaningfully mitigating climate change, nuclear energy kept showing up as the central technological driver.

In late February 2011, Michael and I gave a speech at the Yale School of Forestry and Environmental Studies entitled “The Long Death of Environmentalism,” in which we finally came out as explicitly pro-nuclear. Two weeks later, Fukushima happened.

Despite Fukushima, or maybe because of it, the last five years have seen growing recognition that nuclear energy will need to play a critical role in the effort to power a planet of 10 billion people while mitigating climate change. The accident, and the sometimes hysterical reaction to it, illuminated as perhaps nothing else could the deep disconnect between the vastly overstated risks of nuclear energy and the very real and existential threat of climate change. In this too, David led the way. At our best, those of us who believe that nuclear energy must play a role in any serious effort to deeply reduce emissions approached the problem as David did – analytically, with equanimity, and without the hot air.

So when I learned last summer that David had been diagnosed with terminal cancer, it was obvious who we would give the Breakthrough Paradigm Award to this year. Michael and I spent an afternoon with David in Cambridge last September and told him we’d be giving him the award.

David was thrilled by the news. But mostly he wanted to talk about his latest project, the Global Carbon Calculator, which allows anyone to go online, make choices about technology and consumption, and understand the consequences for global emissions, atmospheric carbon concentrations, and temperatures. It is a valuable learning tool, one that allows anyone with a computer to get online and do the math of climate mitigation themselves.

This winter, it became clear that David was unlikely to survive long enough to collect the award at our annual Dialogue in June, and almost certainly would not be able to travel. So we asked Mark Lynas, who has come to know David very well, to present the award in person.

Last month, just a few weeks before his passing, Mark did just that. What followed was a remarkable conversation, which Mark videotaped. It is classic David, eminently reasonable, pragmatic, and matter of fact and, at the same time, passionate and animated.

To the end, David cared most of all that we have an honest conversation about the scale of the problem and options we have to address it. With some prodding, Mark got David to lay out what he considered the optimal pathway to emissions reductions in the UK. But David always understood that arithmetic could not tell us what to do, only better inform our choices.

It is a lesson that all of us, whatever our political and technological predilections, would do well to take to heart. There are not many free lunches when it comes to climate and energy. Virtually all paths come with trade-offs: between global poverty alleviation and climate mitigation, decarbonization and land use, and planning and decentralization, to name just a few. Our values and politics, not physics or spreadsheets, will determine what we do. Negotiation, compromise, and engagement with multiple perspectives and multiple stakeholders will be the keys to making progress. In this, we will all do well to be skeptical of both claims that climate change can be solved at little or no cost, economic or otherwise, and claims that the only way to solve the problem is to vanquish those we disagree with. That, for me, was David’s most important contribution of all.

Lightbulbs, Refrigerators, Factories, and Cities

How much energy do people need to rise out of subsistence poverty? What do we mean by the phrase ‘modern levels of energy consumption’? The answers to these questions have important consequences not just for human development, but also for climate change, infrastructure investment, and governance.

Fortunately, this complex discourse is gaining coherence. To wit: the Center for Global Development’s (CGD) latest policy brief "More Than a Lightbulb: Five Recommendations to Make Modern Energy Access Meaningful for People and Prosperity." The report explores important questions relating to the definition of energy access, the tools used to measure it, and the policy implications that come with it. "More Than a Lightbulb" is the product of conversations among the Energy Access Targets Working Group, which in addition to CGD includes representatives from the World Bank, the ONE Campaign, the Clean Air Task Force, the African Development Bank, the Breakthrough Institute, and many others. CGD released a short video explainer along with the paper.


The CGD brief is just the latest promising development towards a better understanding of energy for human development. The recent shift toward addressing energy as a development goal, with the UN Sustainable Development Goals (SDGs), Sustainable Energy for All (SE4All), and the Power Africa initiative, is a positive indication of the growing recognition of the role energy and electricity play in development overall. In 2013, Breakthrough and the Consortium for Science, Policy & Outcomes published Our High Energy Planet, which detailed the importance of electrification and high levels of energy consumption in the movement toward a globally modern and equal planet. The discourse shift signals an embrace of this thinking.

However, in order to meaningfully achieve electrification goals, international efforts must seriously reconsider current data, tools and targets in play. CGD’s report shines a light on the present targets and data, and provides constructive solutions for the way forward.

In the overall goal of achieving sustainable development, the energy access targets identified by the International Energy Agency (IEA) are largely insufficient. The IEA’s definition of “modern energy access” is currently 100 kilowatt-hours per person per year (kWh/cap/yr) for urban areas and half that for rural areas, an amount that only allows for “a city dweller to power a single light bulb for five hours per day and charge a mobile phone.” While this amount of energy might improve life in the short term, it does little to guarantee upward mobility or to secure greater access to opportunity. As such, CGD recommends re-labeling this current threshold as the “extreme energy poverty” line. Not only would the new label be a more accurate representation, but it might also encourage countries and international development banks to invest more in electrification to raise their populations above it.
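
That threshold is easy to sanity-check. A rough sketch, assuming a 50-watt incandescent bulb and roughly 5 watt-hours per phone charge (both are illustrative guesses, not figures from the brief):

```python
# What 100 kWh/person/year buys, under assumed appliance loads.
bulb_watts = 50      # assumed incandescent bulb
hours_per_day = 5
phone_wh_per_day = 5  # roughly one smartphone charge

bulb_kwh = bulb_watts * hours_per_day * 365 / 1000  # ≈ 91 kWh/yr
phone_kwh = phone_wh_per_day * 365 / 1000           # ≈ 2 kWh/yr

print(f"Bulb: {bulb_kwh:.0f} kWh/yr, phone: {phone_kwh:.0f} kWh/yr, "
      f"total: {bulb_kwh + phone_kwh:.0f} of the 100 kWh/yr budget")
```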

In the same vein of improving the way data is represented, CGD identified a muddled landscape of data collection methods, which produce uneven and inaccurate representations of energy access across countries. The World Bank, for example, collects an aggregate that fails to distinguish between household-level consumption and other consumption, such as at the commercial enterprise or industry level. It “measures electricity consumption per capita by measuring a country’s total power generation, minus estimates of distribution losses, divided by population.” Inequalities between urban and rural areas, or unequal consumption by small pockets of elites, also remain hidden. Pooling together household and other types of consumption makes it difficult to understand what per-capita household electricity consumption, and consequently the life of an average citizen, looks like.

Though it’s somehow often ignored in ongoing conversations about how to electrify poor countries today, we actually have lots of success stories in the recent and distant past. Countries as diverse as Indonesia, Brazil, Tunisia, China, South Korea, and the United States have achieved universal or near-universal electrification and high rates of electricity consumption. Lessons from these case studies, in terms of investment, governance, regulation, and technology, can prove very instructive. (Keep an eye out for more research on universal electrification from Breakthrough in the coming months.)

Arbitrarily low targets and thresholds for energy access discourage countries from investing in the large-scale infrastructure that has traditionally been required to electrify economies and bolster economic growth, and encourage them instead to focus on small-scale energy sources, such as renewable micro-grids.

To illustrate data more accurately, CGD recommends the widespread adoption of the SE4All Global Tracking Framework, which provides a five-tier system that correlates with the attainment of different levels of energy services. Currently, however, the tiers are communicated ineffectively, e.g. “move from ESMAP Tier 3 to ESMAP Tier 5”. CGD recommends adding the following categorizations in association with the SE4All five-tier system in order to translate the tiers into more concrete terms that resonate with governments and citizens, and will encourage countries to invest in electrification.

In addition to the extreme energy poverty line, the CGD paper recommends adding the following two measurements:

  • Basic Energy Access at 300 kWh/person/year, which would enable running basic appliances, such as a fan, a shared refrigerator, or a television, that are commonly in demand once families have modest additional income
  • Modern Energy Access at 1,500 kWh/person/year, a level of consumption consistent with the label “modern” that includes on-demand usage of multiple modern appliances, including air conditioning

And, the categorization of countries as one of the following (a short classifier sketch follows the list):

  • Extreme low energy (national average of less than 300 kWh/person/year)
  • Low energy (300–1,000 kWh/person/year)
  • Middle energy (1,000–5,000 kWh/person/year)
  • High energy (greater than 5,000 kWh/person/year)
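
To make these categories concrete, here is a minimal classifier using the thresholds above (the function name and structure are mine, not CGD’s):

```python
def energy_category(kwh_per_person_year: float) -> str:
    """Classify a national average per-capita electricity consumption."""
    if kwh_per_person_year < 300:
        return "Extreme low energy"
    if kwh_per_person_year < 1000:
        return "Low energy"
    if kwh_per_person_year < 5000:
        return "Middle energy"
    return "High energy"

for kwh in (100, 300, 1500, 12000):
    print(kwh, "->", energy_category(kwh))
```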

Incorporating these new thresholds and more specific data collection tactics, such as remote sensing of electric utilities or mobile phone surveys, into the monitoring and evaluation schemes of multilateral development institutions and national governments would make them better suited to track and incentivize progress. All of this is in service of establishing a more nuanced, and more appropriate, picture of energy consumption in society.

The report makes the important point of correlating increased energy access with other development outcomes such as health and education. Energy consumption as a development priority exists within a context of broader developments – including, importantly, the empowerment of women. The issue of energy access, as touched on by the report, is a gendered one. Lack of electricity tends to disproportionately affect women and girls who are usually charged with the collection of biomass to use as fuel. “The efficiency gains from electricity access would allow girls more time for education and women to participate in the labor force at higher rates, earning more income and gaining a more empowered position within the household and society,” the CGD report concludes. Countless studies discuss the added benefit, both economically and socially, gained through the incorporation of women into the formal economy. Measuring increased energy access alongside the gender development index will likely yield important insights into the success of the overall development agenda, and the relationship between energy access and human development.

Not Dead Yet

Last year the success of wind and solar power made headlines as installations of new turbines and PV panels soared. Meanwhile, “nuclear is dead” think pieces mushroomed in the press as old plants closed and new projects floundered in delays and cost over-runs.

But while the “rise of renewables” is indeed reason to celebrate, the “death of nuclear” storyline has been greatly exaggerated. Far from being moribund, in 2015 the global nuclear sector quietly had its best year in decades. New reactors came on line that will generate as much low-carbon electricity as last year’s crop of new wind turbines or solar panels. The cost of building those reactors was less than one third the cost of building the wind turbines and solar panels, and typical construction times were under six years. The conventional wisdom that nuclear projects must be decade-long, budget-busting melodramas proved starkly wrong last year. In crucial respects the nuclear renaissance has hit its stride and is making a fundamental contribution to decarbonization—one that will accelerate if the industry gets recognition and support for what it is doing right.

Last year ten reactors—eight in China, one in South Korea, and an experimental fast reactor in Russia—connected to the grid with 9.4 gigawatts (GW) total capacity, twice as much new capacity as in 2014 and the most since 1990.1 That pace should accelerate in 2016, with at least eleven more reactors expected to come on line in China, South Korea, Russia, India and the United States.2

How does that stack up against wind and solar? At first glance it seems like last year’s worldwide addition of 64 GW of wind power3 and 59 GW of solar power4 dwarfed the new nuclear capacity. But that raw count can be misleading, because a wind or solar gigawatt is not the equal of a nuclear gigawatt in productivity and longevity. Wind turbines have an average capacity factor—the electricity they generate divided by what they could generate if they ran at full capacity all the time—of about 25 percent,5 while solar power clocks in at about 15 percent on average.6 Those low capacity factors reflect the productivity deficit from the fickleness of wind and sun. Nuclear reactors are not weather-dependent and can produce at maximum power most of the time, so they have much higher capacity factors: close to 90 percent for the Chinese and South Korean pressurized water reactors, and about 75 percent for the Russian reactor.7 Nuclear plants also last longer. Most reactors in the US have received license extensions to 60 years, and applications for 80-year extensions are in the offing; wind farms last only 20 to 30 years while solar plants last 25 to 40 years. With a higher capacity factor and longer service life, a nuclear gigawatt can generate six to eight times as much electricity over its lifespan as a wind or solar gigawatt.

Last year’s crop of nuclear reactors therefore has a productive potential comparable to that of the new wind or solar capacity. Those 9.4 GW of nuclear power will generate 71 terawatt-hours (trillion watt-hours, TWh) each year, close to the 78 TWh produced by 59 GW of solar power. The 64 GW of wind capacity will generate twice as much electricity per year, 140 TWh, but over a 60-year lifespan the reactors will produce the same amount, about 4,200 TWh, as the wind turbines will produce during a 30-year service life—and 35 percent more than the 3,120 TWh the solar capacity will produce in 40 years.  Last year’s new reactors will thus make as large a total contribution to the world’s low-carbon energy supply as the new wind turbines, and substantially more than the new solar panels.
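
These generation figures follow directly from the stated capacities, capacity factors, and lifetimes. A quick sketch using the article’s round numbers (the 86 percent nuclear figure is the capacity-weighted average implied by the 71 TWh total; the chart note below uses 85 percent):

```python
HOURS_PER_YEAR = 8760

def annual_twh(gw, capacity_factor):
    """Annual generation in TWh for a given capacity and capacity factor."""
    return gw * capacity_factor * HOURS_PER_YEAR / 1000  # GWh -> TWh

nuclear = annual_twh(9.4, 0.86)  # ≈ 71 TWh/yr
wind = annual_twh(64, 0.25)      # ≈ 140 TWh/yr
solar = annual_twh(59, 0.15)     # ≈ 78 TWh/yr

print(f"Annual: nuclear {nuclear:.0f}, wind {wind:.0f}, solar {solar:.0f} TWh")
print(f"Lifetime: nuclear {nuclear * 60:.0f}, wind {wind * 30:.0f}, "
      f"solar {solar * 40:.0f} TWh")
# Lifetimes of 60, 30, and 40 years give ≈ 4,200 TWh for nuclear and wind
# and ≈ 3,100 TWh for solar (the article rounds the latter to 3,120)
```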


New Global Wind, Solar and Nuclear Capacity in 2015

Assumes capacity factors of 25 percent for wind, 15 percent for solar and 85 percent for nuclear
Cost assumptions in footnotes 10, 11 and 12

And the nuclear capacity cost much less. Wind and solar generators have seen sensational drops in construction costs to as low as $1,300 to $1,400 per kilowatt for large-scale installations in some places. But because of its greater productivity, last year’s new nuclear capacity was cheaper. “Overnight” construction costs, excluding financing costs, for reactors in both China and South Korea ran about $2,300 per kilowatt; adding financing costs brings that to about $3,100 per kilowatt.8 (The Russian reactor cost about $4.8 billion for 789 megawatts of capacity.)9 So while last year’s increment of new wind capacity cost about $109 billion10 and new solar over $92 billion,11 the ten new reactors cost about $31 billion12—one third the cost or less, for nuclear plants that can equal or outstrip the life-cycle electricity production from last year’s wind or solar additions. A dollar invested in new nuclear reactors thus yielded, on average, more clean energy than one invested in wind or solar power.
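
Dividing each technology’s capital outlay by its lifetime output makes the comparison explicit. A rough sketch, setting aside operating costs and discounting as the article’s comparison does:

```python
# Capital cost per unit of lifetime generation, using the totals above.
costs_billion_usd = {"nuclear": 31, "wind": 109, "solar": 92}
lifetime_twh = {"nuclear": 4200, "wind": 4200, "solar": 3120}

for tech, cost in costs_billion_usd.items():
    usd_per_mwh = cost * 1e9 / (lifetime_twh[tech] * 1e6)
    print(f"{tech}: ${usd_per_mwh:.0f} of capital per lifetime MWh")
# nuclear ≈ $7/MWh versus ≈ $26/MWh for wind and ≈ $29/MWh for solar
```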

That comparison is especially telling in China, which is deploying both renewables and nuclear on a massive scale.

New Chinese Wind, Solar and Nuclear Capacity in 201513

Assumed costs of $1,300 per kilowatt for wind and solar, $3,100 per kW for nuclear.
Assumed capacity factors of 22 percent for wind, 15 percent for solar and 86 percent for nuclear.

These numbers show that the huge preponderance of nominal wind and solar gigawatts in China is outweighed by the greater output and lifespan of nuclear gigawatts. In 2015 the reactors China completed cost a little over one third as much as the new wind and solar generators, but can produce over twice as much electricity each year as the solar panels and almost as much as the wind turbines. Over a 60-year service life the reactors will produce 20 percent more electricity than the new wind and solar capacity combined over a shorter 30- to 40-year lifespan. Despite adding four times more wind capacity than nuclear capacity, China saw its nuclear generation grow more than its wind generation in 2015, 38 terawatt-hours for nuclear compared to 32 terawatt-hours for wind. The country got almost as much electricity from its nuclear sector as a whole as it did from its wind sector, 171 TWh to 185 TWh,14 even though the total grid-connected wind capacity, 128 GW, was almost five times larger than the country’s 27 GW of nuclear capacity.

Those disparities in productivity yield a cost advantage for nuclear power that is reflected in Chinese electricity prices: new nuclear plants received a feed-in tariff of 0.43 yuan (6.5 cents) per kilowatt-hour in 2015,15 while wind capacity got 0.49 to 0.60 yuan and solar 0.9 to 1.0 yuan.16 China isn’t an anomaly; in South Korea, nuclear power is the cheapest electricity on the grid—cheaper even than coal-fired power.17 These per kilowatt-hour price comparisons do not count the additional grid costs of intermittent renewables, like the billions of dollars being spent on transmission lines to relieve curtailments of wind and solar surges, which wasted 15 percent of China’s wind output and 9 percent of its solar output last year.18 Nor do they capture the higher quality and reliability of nuclear electricity.

Nuclear Costs, High and Low

Last year’s experience shows that nuclear can often be the most economical way to bring clean electricity onto the grid. That’s nothing new. Although the nuclear industry has a reputation for ever-escalating costs, budget blowouts have been more the exception than the rule outside of the United States. A peer-reviewed study by the Breakthrough Institute’s Jessica Lovering, Arthur Yip and Ted Nordhaus shows that in France, Germany, Canada, Japan and India nuclear construction costs have been fairly stable—and low.19

China and South Korea are following that pattern by playing from a familiar playbook: leveraging economies of series and expertise through large-scale, systematic deployments of mature reactor designs with long production runs. Seven of last year’s new reactors were the Chinese CPR-1000 model, an updated version of the workhorse Generation II light-water designs that make up most of the world’s reactor fleet.20 China has now built 14 units of this design. South Korea’s reactor was the tenth unit of its OPR-1000 model, another updated Gen II design.21 In both countries, the nuclear industry is a partnership between state-owned utilities and construction companies with decades of continuous experience. With proficient project managers and construction crews and well-developed supply chains, the average construction time for these reactors was 5 years and 8 months.22 (The Korean project was delayed a year to yank out and replace electric cables that were discovered to have fake safety certifications.) The epic length and cost of nuclear projects in the West aren’t typical of the global industry.

These successes illuminate what’s gone wrong in the United States and Europe. The great model change-over to Generation III+ reactors with novel design features has gotten mired in delays and over-runs in Western countries that haven’t built new reactors for decades. The EPR reactor, designed by the French company Areva, is the poster-child of nuclear dysfunction: construction of the Flamanville unit will take eleven years and its price has tripled to 10.5 billion euros ($7,200 per kilowatt).23 The EPRs proposed for the Hinkley C nuclear plant in Britain are budgeted at a whopping GBP 24.5 billion, or $10,542 per kilowatt (financing included).24 Many explanations for these huge costs and delays, both plausible (high wages and regulatory red tape) and far-fetched (a “negative learning curve”), have been proposed. But the deeper problems are basic stumbling blocks to industrial efficiency: immature reactor designs that haven’t had the bugs worked out; inexperienced managers, workers and suppliers; and sporadic, piece-meal deployments that don’t let builders develop expertise.

Not all Gen III projects are fiascos. South Korea grid-connected its first APR-1400 reactor, a Gen III model, in January after seven years of construction (including a two-year delay to replace faulty cables and valves); overnight construction costs for the reactor are $2,450 per kilowatt.25 Japan built four Gen III ABWR reactors in the 1990s and 2000s, on schedule and for an overnight cost averaging $3,000 per kilowatt.26 These projects were successful in part because the models are incremental developments of familiar designs, and because the South Korean and Japanese nuclear construction industries had not rusted during a decades-long hiatus, as happened in the West.

The flagship Western Gen III+ designs, Areva’s EPR and Westinghouse’s AP-1000, are a different story. These designs feature more radical departures from previous models. The EPR is a study in complex redundancy that includes two nested containment walls. The AP-1000 has an innovative passive safety system that uses gravity, convection and cycles of evaporation and condensation to cool the reactor in an emergency. These models were supposed to be cheaper and faster to build than Gen II designs. Unfortunately, all the EPR and AP-1000 projects, in the West and China, have struggled with setbacks and overruns.

The Vogtle project in Georgia, where two AP-1000s are being built beside the plant’s two existing reactors, shows how excruciating the teething pains can be for a new model. The plant’s novel design has caused innumerable delays, gloomily chronicled in oversight reports from the Georgia Public Service Commission’s staff.27 They began even before construction officially started, when the AP-1000’s license approval was delayed for months while the Nuclear Regulatory Commission vetted the redesign of the shield wall for the containment building that houses the reactor.

Indeed, the AP-1000 design was so new that in important respects it was not even finished when they started building it. The “engineering packages”—the detailed drawings and specs needed to build the plant’s parts and structures—hadn’t been completed by Westinghouse when construction began in 2012, and delays in furnishing them continue to impede contractors [39]. When they were available they sometimes proved faulty. Flaws in the specs led to a seven-month delay in the pouring of one reactor’s concrete base mat foundation while improperly installed rebar was torn out and replaced. Some of the sub-modules—the pre-fabricated assemblies that comprise much of the plant—had to be redesigned because the original plans proved “impossible to physically construct,” according to GPSC staff.28 New NRC licensing procedures were expected to eliminate difficulties encountered on previous builds when mid-construction design changes imposed by regulators forced lengthy delays; unfortunately, the immaturity of the AP-1000 design itself has brought similar problems to Vogtle and other projects.

Design missteps have been matched by pitfalls in manufacturing and construction. A centerpiece of the AP-1000 design is its modular construction philosophy: factory-built submodules would be shipped to the site, assembled into modules and hoisted into place. This method is supposed to reduce cost and construction time by replacing much on-site construction with more efficient factory construction; to implement it, a sub-module factory was built in Lake Charles, Louisiana. But practice fell woefully short of theory. Workmanship at Lake Charles was poor, inspection and documentation of quality standards sloppy (the NRC doesn’t take kindly to that), and time was wasted when defective submodules got sent back for re-work. Production lagged way behind due dates. [40] The problems grew so bad that Westinghouse outsourced some of the submodule fabrication to other contractors, who in turn struggled with slow pace and quality issues. Fabrication of the shield wall’s panels, a novel design consisting of a sandwich of steel plates with concrete filling, also fell behind schedule. Even basic on-site construction tasks at Vogtle—welding and concrete-pouring, which is what people mainly do when they build a nuclear plant—have been plagued by poor quality and re-work, shortages of skilled tradesmen and low productivity. America’s long-dormant nuclear construction industry—the last plant went on line in 1996—wasn’t yet fully awake for the project.

With all this muddle, the construction time for the Vogtle reactors has swollen to almost eight years and the budget has risen 29 percent to $17.2 billion,29 putting it up in EPR nose-bleed territory at $7,700 per kilowatt (financing included). More delays may follow. As the GPSC staff puts it, that’s what happens when “the design is new, the modular construction of nuclear power plants is new, the regulatory environment…is new and most of the people involved are new to new nuclear construction.”30

But the travails of Vogtle and other builds shouldn’t toll the bell for nuclear power. For one thing, the project’s huge expenses still aren’t all that expensive. GPSC staff estimate the two new units’ lifetime “revenue requirement”—the money needed to recoup all the expenses, both capital costs and annual operating and maintenance costs over the plant’s service life—at $65 billion.31 That sounds like a lot, but spread over their 60-year output, a prodigious 1,056 terawatt-hours, it comes to 6.2 cents per kilowatt-hour. That’s not the cheapest power around but it’s not outrageously expensive. (Vogtle’s parent utility Georgia Power paid 4.33 cents per kilowatt-hour to buy wholesale power last year.)41 South Carolina’s V. C. Summer project, with another two AP-1000s, is coming in considerably cheaper at $5,800 per kilowatt (financing included),32 while recent estimates for the first Chinese AP-1000 plant at Sanmen put overnight costs at about $2,700 per kilowatt.42
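
The 6.2-cent figure is a straight division of the lifetime revenue requirement by lifetime output; a one-line check (ignoring discounting, as the article’s comparison appears to):

```python
revenue_requirement_usd = 65e9  # GPSC staff estimate for both new units
lifetime_twh = 1056             # estimated 60-year output

cents_per_kwh = revenue_requirement_usd / (lifetime_twh * 1e9) * 100
print(f"{cents_per_kwh:.1f} cents/kWh")  # ≈ 6.2 cents/kWh
```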

And it is important to acknowledge Vogtle’s challenges as what the industry politely calls “first-of-a-kind” issues. Many of the mistakes and unpleasant surprises that tripped up Vogtle should be resolved once a few AP-1000s have been completed. There won’t be another licensing delay while the NRC ponders the shield wall. The builders now know how to do the base mat rebar. Lake Charles, GPSC staff monitors recently noted, has gotten its act together with up-to-snuff submodules and paperwork. Westinghouse engineers will fine-tune a buildable set of blueprints.

The lessons learned at Vogtle and other Gen III+ projects will thus help subsequent builds go smoother, faster and cheaper—maybe even as well as last year’s Gen II projects. But that will only happen if there are subsequent builds. China is planning to build many more AP-1000s along with its own upsized clone, the CAP-1400; that deployment will support a well-oiled supply chain, sub-module factories and an experienced construction workforce. The AP-1000’s future is iffier elsewhere: Westinghouse is pursuing projects in Britain and India, but has no firm orders in the United States. If none materializes, Lake Charles could close, experienced project managers and skilled workers will disperse to other industries, suppliers in the United States will abandon the nuclear sector and perfected blueprints will sit on the shelf.

Avoiding that future will require smart industrial policy. Large-scale, systematic nuclear construction programs run by state-owned utilities have proven their worth in the past in France—still the most successful example of decarbonization—and now in China and South Korea. That approach isn’t fashionable anymore in the deregulated electricity markets of the neoliberal West. But less dirigiste industrial policy, in the form of subsidies and mandates, lends crucial support to renewable energy in the United States and Europe. Nuclear power could receive the same supports. One approach being explored in some states is a low-carbon electricity portfolio standard that includes both renewables and nuclear and lets utilities decide what mix of energy sources will best meet decarbonization targets.

Accelerating the global nuclear renaissance will also require a robust international supply chain that lets countries with a thriving domestic industry export their know-how. Modularization, though it faltered at Lake Charles, will be a key element of that, allowing standardized factory production to pre-fab the bulk of a plant instead of inexperienced onsite labor. China will likely be the modular epicenter. Nurtured by a huge domestic deployment, Chinese factories could produce cheap sub-modules and parts for nuclear projects around the world, just as they now make the world’s solar panels. China has also designed a reactor based on an Areva design, the Hualong 1, for domestic builds and export. South Korea is ahead of China in nuclear exports; it is supplying four APR-1400 units for the Barakah plant in the United Arab Emirates, now half built. Russia is also building several plants in foreign countries.

Growing an international supply chain for both parts and foreign-designed reactors will require flexibility on nuclear trade and regulation. Domestic-content strictures may have to be relaxed. Standards among national nuclear regulators could be harmonized to make it easier for reactor vendors to get licensed in foreign countries, which would spur competition and lower costs. One reason Britain’s Hinkley C project will build the hugely expensive EPR is that it’s the only model currently licensed by the UK’s Office for Nuclear Regulation; the AP-1000 and the ABWR aren’t yet approved even though they have already been licensed by the gold-standard US Nuclear Regulatory Commission. The South Koreans have applied for a US license for the APR-1400, which beat out the EPR for the Barakah project; it could make an attractive option for cost-conscious nuclear projects in the West.

Another much-disputed but valuable principle—bigger is better—should inform nuclear planning. The advantages of size accrue to whole plants as well as individual reactors. Studies find that multi-reactor builds see sizeable declines in construction costs on later units. Big plants also have lower operating costs because payroll and overhead get spread over more terawatt-hours. The R.E. Ginna nuclear station, a small, money-losing plant in upstate New York that’s likely to close soon, has a 581-megawatt reactor and 600 workers. That is twice the staffing-to-output ratio of New York’s profitable Indian Point plant, which generates 2,070 megawatts from two reactors with a staff of 1,050. Inefficiencies like that help push Ginna’s production costs to 5.6 cents per kilowatt-hour, twice the average for US nuclear plants.34 Economies of scale are why China and South Korea are going big, building enormous plants with up to six gigawatt-size reactors apiece. Construction projects on that scale are hard for private utilities to finance—another reason to try public financing and ownership in an era of cheap government borrowing costs.
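
The staffing comparison can be verified directly from the plant figures quoted above; a minimal sketch:

```python
# Workers per megawatt at the two plants cited above.
ginna = 600 / 581           # ≈ 1.03 workers/MW
indian_point = 1050 / 2070  # ≈ 0.51 workers/MW

print(f"Ginna: {ginna:.2f} workers/MW, Indian Point: {indian_point:.2f} workers/MW")
print(f"Ratio: {ginna / indian_point:.1f}x")  # ≈ 2x, as stated
```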

The most important lesson to take from last year’s accomplishments in the nuclear industry is that it’s not really what you build, it’s how you build it. Gen II reactors have experienced many epic cost blow-outs, delays and cancellations, but 2015 showed (once again) that they can be built cheap and fast with help from supportive industrial policies. Those policies can benefit the new Gen III models as well. Cheap nuclear power doesn’t require technological leaps, just steady optimization of the designs we have.

How Much Radiation Is Too Much?

Everyone knows that the dose is critical when you are taking a prescription medication: a small amount can provide significant benefit, but a large dose can kill you. This “non-linear” effect is taken for granted in pharmaceuticals, but is not generally adopted for regulating the risks of radiation. Dr. Edward Calabrese is a professor and toxicologist at the University of Massachusetts Amherst's Department of Environmental Health Sciences. He has spent his career studying non-linear effects of different carcinogens. From hundreds of studies, he has concluded that radiation should be treated more like pharmaceuticals, and regulators need to change how they think about radiation risks and harm.

What’s the history of the linear-no-threshold, or LNT, framework, and how did it come to be the standard?

The rise of LNT theory was really the result of a political motivation by a group of radiation geneticists. I’m sure they believed that dose response was linear, but they also wanted to scare the hell out of society to increase their stature and grant funding for their research, and they wanted people to think they were the only ones who could save the world from the harms of atomic weapons testing, etc. It was a paternalistic behavior.

We know it’s the dose that makes the poison, but is it also the dose that makes the benefit? Or are some substances simply benign or have no effect at low doses?

It depends on how you define “low” and how you define “benign.” Essentially everything has a dose response relationship, meaning everything is harmful at high enough doses, but then we get into some complexity. A lot of things are essential, like minerals and vitamins, and it may well be that radiation is essential at low doses. Most agents are going to be toxic at high doses, but we’ve found that most agents have a biphasic relationship.

How do regulatory bodies like the Nuclear Regulatory Commission or International Atomic Energy Agency set dose limits for radiation?

They have a long history, and they make use of their long history. When you regulate carcinogens, you assume that the dose response is linear. If it’s a non-cancer endpoint, you assume there’s a threshold. If you’re dealing with radiation, you have much more stringent controls. The buy-in to linear dose response has a huge effect on industry. And this decision to go linear, which was made in the 1950s, was the most impactful environmental regulatory change in the modern world.

What’s the evidence that LNT is not the right model? Can you give a concrete example?

From 1985 on, I began to study this and tried to take a look at alternative models, including the hormetic dose response model, which was marginalized and neglected. I found that this phenomenon occurs quite widely. The EPA’s default models, for example, are arbitrary. Numerous studies are cited in my publications that support the hormesis model. The reader simply has to read the literature and get educated. While this is not an easy exercise, it is necessary in order to understand the breadth and significance of the hormesis database and its impact on toxicology and risk assessment. Such an assessment will show that the hormesis phenomenon is far more common than previously recognized and outcompetes the LNT and threshold models in head-to-head competition. This is documented in the peer-reviewed literature.

How do you determine the threshold between beneficial and harmful?

You have to do this through study, and that’s what we’ve done. There’s a lot of historical clinical research, which we have now summarized and evaluated in the biomedical literature. If people wanted to use this, the dosages are pretty well worked out. This country needs to overcome its often irrational fear of radiation to take a more informed approach to exposure and use the newly assessed information for clinical benefit.

What’s been the experience of others who have tried to replicate your results?

I have a long experience conducting my own lab experiments. But I have also done a large amount of research assimilating the work of others. My laboratory work is very replicable. In the general world of hormesis, these studies are more challenging than most. In the traditional work of toxicology, you’re looking for the dose that produces harm. In hormesis, the beneficial effects are very constrained. You have to have enough doses and replications of your findings to ensure that your results are reproducible and real.

If regulators shifted from relying on an LNT model to a hormetic dose model, how much would limits actually change?

Getting rid of regulations based on the LNT theory would result in an increase in the acceptable dosage by at least several hundred fold. And that would have a huge impact. This would have a positive effect on human health as well as save billions and billions and billions of dollars. The regulatory agencies are kind of a cult, but they don’t know they’re part of a cult.

One challenge with low-dose radiation is that certain radioisotopes accumulate in the food supply or in certain organs. If we had higher limits for radioisotope releases, would we need to pair that with better monitoring of food or personal doses, making sure that individuals don’t exceed the threshold and go into the high-dose range?

Well, I think that that would be true. They’d have to look at every situation. The optimal dose is very close to the level where it crosses over into being harmful. And there are personal differences as well, what’s optimal for you might be different for another person. But it’s an important challenge, you have to solve the problem differently now. We can’t just keep saying “lower is always better.”

For vaccines we know the mechanism for how they work; what’s the mechanism for the benefits of low-dose radiation?

Our understanding of hormetic mechanisms has really exploded in the last decade. We’ve learned a lot about signaling pathways, and so many different components of the mechanisms have been identified. I can’t say that the mechanisms are fully understood, but we can tell which receptor and cell signaling pathways are involved in mitigating or promoting a certain effect.

Finally, it seems like there has been some movement at NRC to revisit and maybe revise LNT, have you been involved in this effort and what’s your prognosis?

I submitted comments during the review process. I never thought I’d live to see the day where a regulatory agency would take our efforts seriously. I hoped they would, but I never imagined it would happen. They opened it up for four months of comments. I read all of them, most comments weren’t serious. About 60% of those that were serious were in support of the change. There’s little question that the science is on the hormesis side. You know, we use hormesis in the whole pharmaceutical industry, and if the NRC is taking any of these medical drugs they are already relying on hormetic science.

Ammonia is Everest Base Camp for Clean Energy

In September 1987, twenty-four countries signed the Montreal Protocol, beginning the phaseout of chlorofluorocarbons (CFCs) and other materials that destroy the ozone layer. The international community decided the impact of a small group of industrial chemicals was simply too dangerous, and outlawed them.

Perhaps it is time to take a hard look at another industrial chemical with dangerous global warming impacts — ammonia. Specifically, ammonia that is produced from fossil carbon, with high CO2 emissions. Fossil ammonia.

A phaseout of fossil ammonia would do more than cut CO2 emissions from the fertilizer industry.  It is in fact an innovation policy in disguise. The real effect is to drive the technological innovation we need to take on the main game — the decarbonization of energy.

We are what we eat, and what we eat is ammonia

Ammonia is a vital source of nitrogen fertilizer. Perhaps half of all nitrogen delivered to crops comes from synthetic ammonia. About 80% of all the nitrogen in our bodies comes from synthetic ammonia. Without it, the global population would be halved.

Ammonia is indispensable, and I am not suggesting a phaseout of ammonia itself, but rather ammonia made from a very specific industrial process — the greenhouse intensive Haber-Bosch synthesis using natural gas.

Fossil ammonia

Ammonia, NH3, is a simple molecule made up of nitrogen and hydrogen. In the Haber-Bosch process nitrogen from the air reacts with hydrogen to make ammonia. So far so good. The problem is the source of the hydrogen.

The Haber-Bosch process makes ammonia from nitrogen and hydrogen

Hydrogen is widely conceived of as a ‘clean’ material. It could be, but in practice it is made from natural gas, which is methane, or CH4. The methane is combined with hot steam, releasing hydrogen but also producing carbon dioxide. For every tonne of hydrogen, 5.5 tonnes of CO2 are released. Though there are clean alternatives, almost all hydrogen comes from natural gas. Today’s hydrogen is a fossil fuel byproduct, the result of mining or fracking.

Fossil hydrogen, made from natural gas, liberates fossil CO2

Made with this fossil hydrogen, ammonia is a large greenhouse emitter. In 2010 world ammonia production was 157m tonnes, with CO2 emissions of 300m tonnes, about 1% of world greenhouse emissions.
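
Both the 5.5 ratio and these aggregate figures follow from simple stoichiometry. A short sketch (attributing the gap between the stoichiometric floor and the reported total to process energy is my inference, not a figure from the text):

```python
# Steam reforming overall: CH4 + 2 H2O -> CO2 + 4 H2, i.e. 44 g of CO2
# per 8 g of hydrogen.
CO2_MM, H2_MM, NH3_MM, H_MM = 44.0, 2.0, 17.0, 1.0  # molar masses, g/mol

t_co2_per_t_h2 = CO2_MM / (4 * H2_MM)
print(f"{t_co2_per_t_h2:.1f} t CO2 per t H2")        # 5.5

ammonia_mt, reported_co2_mt = 157, 300               # 2010 figures from the text
h2_demand_mt = ammonia_mt * 3 * H_MM / NH3_MM        # 3 H atoms per NH3; ≈ 28 Mt
stoich_floor_mt = h2_demand_mt * t_co2_per_t_h2      # ≈ 152 Mt CO2

print(f"Stoichiometric H2 demand: {h2_demand_mt:.0f} Mt")
print(f"CO2 floor: {stoich_floor_mt:.0f} Mt of the reported {reported_co2_mt} Mt")
# the remainder plausibly reflects fuel burned for process heat and compression
```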

Fatima Fertilizer ammonia plant.

Clean ammonia

The good news is there are other ways to make hydrogen, with practically no emissions. Electrolysis of water using renewable energy or nuclear power is one way. This is quite inefficient and expensive, and is not carried out at large scale.

But the efficiency could be doubled by performing the electrolysis at high temperatures, up to 850 ºC, using nuclear or renewables for heat and electricity, which could produce hydrogen at a cost competitive with fossil sources.

We can also use thermochemical cycles, such as the sulphur-iodine cycle, that use a sequence of chemical steps to split water. All the intermediate chemicals are reformed within the cycle. There are no waste products; only water is consumed, producing hydrogen without emitting any greenhouse gases.


High temperature chemical water splitting with the sulfur-iodine cycle

Then there are hybrid processes that combine thermochemical cycles with electrolytic steps, such as the copper-chlorine cycle which runs at a more accessible 530 ºC. This cycle is expected to be cheaper than current steam reforming methods at larger scales, above 10 tonne per day.

Methane cracking

One particularly intriguing clean hydrogen process actually continues to use fossil methane as the hydrogen source. Methane ‘cracking’ breaks CH4 into hydrogen and solid carbon. A remarkably simple process has recently been described — bubbling methane through a column of molten tin splits off hydrogen, leaving pure carbon floating on top, which can be removed and buried. This is precombustion carbon capture and storage. The resulting hydrogen, though fossil in origin, produces zero CO2 emissions.
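
The appeal of cracking shows up in the stoichiometry: the carbon leaves as an easily buried solid rather than as CO2. A short sketch:

```python
# Methane cracking overall: CH4 -> C + 2 H2, i.e. 12 g of solid carbon
# per 4 g of hydrogen.
C_MOLAR_MASS = 12.0   # g/mol
H2_MOLAR_MASS = 2.0   # g/mol

t_carbon_per_t_h2 = C_MOLAR_MASS / (2 * H2_MOLAR_MASS)
print(f"{t_carbon_per_t_h2:.0f} t solid carbon per t H2")
# 3 t of buriable carbon, versus the 5.5 t of CO2 per tonne of hydrogen
# released by steam reforming (noted earlier)
```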

Carbon produced by methane cracking; Karlsruhe Institute of Technology.

Methane cracking could be a game changer for clean ammonia because it keeps most of the existing infrastructure. Ammonia plants are already supplied with methane. Cracking could replace the conventional steam reformation process while keeping the balance of plant, and does not have to wait for the development of a new clean hydrogen supply chain.

Solid state ammonia synthesis

It’s even possible to bypass hydrogen and Haber-Bosch altogether, and make ammonia directly from water, air and electricity, using a modified fuel cell technology.

In Solid State Ammonia Synthesis (SSAS) water electrolysis and ammonia synthesis are combined in a single step. A hot ceramic membrane separates water vapour on one side and nitrogen on the other. An electric current strips protons from the water, which pass through the membrane and react with nitrogen to make ammonia.

There is no need for hydrogen at all in this process. And provided the electricity and heat come from nuclear or renewables there are no CO2 emissions.

None of these new clean ammonia processes has been deployed commercially. The high temperature electrolytic and chemical processes await the availability of nuclear or solar heat sources, and methane cracking and SSAS require much process development. But all have been demonstrated and could be realized. In the meantime, carbon capture and storage (CCS) can be used to decarbonize existing plants.

Ammonia is Everest Base Camp for clean energy

Ammonia is only 1% of world greenhouse gas emissions. Eliminating these emissions would by itself have no impact on global warming. Why then focus on ammonia? One percent of emissions won’t get people marching in the street.

But there is larger game afoot. Most of our CO2 emissions come from burning fossil carbon, to make electricity, to power transport, heavy vehicles, agricultural machinery and aircraft, and provide heat and raw materials to the petrochemical and process industries. Nearly all these processes can be substituted by the same technologies that are required to clean up ammonia.

Success with ammonia means we will have developed and commercialized, at scale, with viable economics, infrastructure and supply chains, the following new technologies: CCS, SSAS, methane cracking, conventional and high temperature electrolysis and thermochemical water splitting for hydrogen production, nuclear heat sources and small modular reactors, and solar heat sources and renewable electricity of sufficient reliability to be integrated into high volume must-run industrial processes.

These are all nascent technologies that must be developed if we are to move beyond carbon—further proof that we do not actually have the technologies we need to meet the COP21 climate goal of limiting global warming to 2 ºC.

Ammonia production is a much narrower technical context in which to develop these technologies than fossil fuel replacement. It is one input (fossil hydrogen) to one process (Haber-Bosch) in one kind of plant (ammonia) for one product (fertilizer) for one market (agriculture). Technology development is hard, and limiting variation, impacts and stakeholders weights the odds in favour of success. Ammonia is a ‘sandbox’ development environment for clean energy technologies.

Success with fertilizer would bring one more surprising benefit: ammonia is itself a zero emission vehicle fuel. Unlike hydrogen, it is readily liquefied, and can be burnt in internal combustion engines and jet engines. This makes it suitable for powering the heavy transport sectors that can’t be electrified — diesel powered prime movers for road freight, mining trucks, combine harvesters and tractors, and aviation — that don’t otherwise have good alternatives.

The ammonia-fueled Toyota GT86-R Marangoni Eco Explorer

Ammonia is not the end; it’s the beginning. Ammonia is Everest Base Camp on the way to the summit of fossil fuels. It’s only one percent of emissions, but success creates the technologies needed to clean up everything else.

Beating the Iron Law

Why treat ammonia as a special case, instead of simply another sector subject to an economy-wide carbon reduction scheme? If we simply priced carbon, surely ammonia producers would heed the price signal and mobilize technologies to decarbonize, in line with an increasing carbon price?

The short answer is, this hasn’t worked. The Paris Agreement now includes no explicit reference to carbon trading, offsets, or other forms of carbon pricing. After twenty years of experimentation, carbon pricing has failed to produce anything more than marginal changes in the cost structure of production. And marginal cost changes do not drive step changes in the underlying production technologies.

The reason for this failure is well explained by Roger Pielke’s “Iron Law of Climate Policy”: When policies for economic growth collide with emissions reduction policies, economic growth always wins. Virtually all the energy input into the economy derives from carbon. Pricing carbon is hard because carbon is, essentially, the entire economy. The Iron Law ensures that any carbon price will remain low, which means it will only drive marginal improvements in technology.

Ammonia is not the whole economy. Like CFCs, it is only one small sector, albeit a very important one. Economically, the scope is much more limited than a whole-of-economy carbon price. Instead of bearing the cost of decarbonizing the whole economy, we would decarbonize a small sector. But we’d do it completely. Even more importantly, we’d gain a fully developed commercial supply chain for the complete set of technologies needed to decarbonize fossil energy.

A strategic priority for Mission Innovation and Breakthrough Energy Coalition

After years of failure at global climate talks, a new direction emerged from COP21 in Paris: the need to develop new technologies to replace fossil fuels. A phaseout of fossil ammonia is an innovation policy disguised as an emission reduction initiative, and should be seriously considered at COP22 this year in Marrakech. The government-led Mission Innovation and the private sector’s Breakthrough Energy Coalition should adopt it as a strategic priority.

CFCs were phased out in a decade. Ammonia will take longer because the necessary technologies are more complex and less mature, and there can be no compromising the food supply.

But an early start could be made with CCS and water electrolysis in locales with cheap hydro- or nuclear electricity. A number of ammonia plants using hydroelectric electrolysis operated in the last half century, but most have closed, being supplanted by cheaper methane reformation. This technology could be revived, with more modern and cheaper electrolyzers, and the advent of methane cracking and small modular reactors would eventually allow ammonia production anywhere in the world.

Efficiencies can also accelerate the phaseout. The global nitrogen efficiency of cereal crops fell from 80% in 1960 to 30% in 2000, so there is a lot of room for the agricultural sector to respond to any short term restriction in fertilizer supply. A shift to diets that use less fertilizer, such as vegetarian diets, or genetically modified organisms like C4 rice, can also reduce pressure on the supply chain while the technology is in transition.

We’ve been here before

Agriculture has been through this process many times. Many fertilizers and pesticides once thought indispensable have been banned for health or environmental reasons. A notable precedent was the banning of DDT and other organochlorine insecticides under the Stockholm Convention, due to their unacceptable ecological impacts.

The US EPA and similar organizations around the world police the use of chemicals in the food supply. The EPA also recently regulated CO2 emissions from power plants, as it views climate change as an environmental impact within its purview. It is wholly within precedent that the EPA would regulate agricultural inputs with high CO2 emissions, such as dirty ammonia.

Consumer activism was critical to the CFC phaseout. As early as 1975, the Natural Resources Defense Council urged consumers to boycott CFC-based aerosols. Consumer pressure led McDonald’s to eliminate CFCs from its packaging, and campaigns around the world ensured the success of the Montreal Protocol. Consumer groups concerned with global warming, the integrity of the food supply, and natural gas extraction might well get behind the elimination of dirty ammonia.

Onward to Marrakech

Deep decarbonization of the whole economy should start with deep decarbonization of a single sector, not shallow decarbonization across the economy. If that sector is ammonia, we will have clean ammonia for food, the beginnings of a clean liquid fuel supply, and a suite of technologies that can take us into a clean future.

Evolving Toward a Better Anthropocene

This article is cross-posted with permission from Future Earth

Humans have now transformed Earth to such a degree that a new epoch of geologic time, the Anthropocene, may soon mark the emergence of humanity as a “great force of nature.” The big question is why? Why did humans, and no other single multicellular species in the history of the Earth, gain the capacity to transform an entire planet? What is the nature of this new global force? Can we guide this force to create better outcomes for both people and nonhuman nature?

To answer these questions, we must go beyond geophysics, geochemistry and even biology. The principal cause of the Anthropocene is social, rooted in the exceptional capacities of Earth’s first ultrasocial species: behaviourally modern humans. Here I review a new transdisciplinary theory, called sociocultural niche construction, that explains why human societies gained the unprecedented capacity to transform an entire planet and how this relates to the challenges and opportunities of the Anthropocene.

Map showing when humans began to intensively use the land in various parts of the world, through heavy farming and other practices, dating from thousands of years ago (brown and orange) to just a few centuries or less (purple). Map: Erle Ellis

In prior work, I’ve focused on the ecological consequences of human actions, not their causes, such as in the rapidly changing village landscapes across China. I’ve also worked globally to map “anthromes,” the ecological patterns that emerge when humans transform landscapes around the globe. But I always wanted to go deeper, to explain why humans so profoundly transform Earth’s ecology. One motivation was my envy of biogeography — where classic “natural” biomes are shaped by global patterns of climate. Tropical woodlands form under warm and moist conditions, for instance, and tundra appears in colder and drier regions. I wondered: what are the analogous global forces of humanity that shape and sustain the anthromes? How do these forces of “human climate” reshape the biosphere over the long term, from centuries to millennia? These questions require an approach fundamentally different from efforts to understand the day-to-day or even year-to-year dynamics of social-ecological systems: the “human weather.”

It took me years of intensive and extensive transdisciplinary learning — standing on the shoulders of giants (see references below) — to develop a theory explaining why human societies transform ecology over the long-term. I call it anthroecology theory.

A new evolutionary synthesis

At the core of this theory is sociocultural niche construction, a new evolutionary synthesis, which weaves together existing theories of niche construction, the Extended Evolutionary Synthesis, cultural evolution, ultrasociality, and world systems theory. As implied by the name, sociocultural niche construction holds that the ecological niche of humans is social, cultural and constructed. Human societies have transformed Earth because their social capacities to construct the human ecological niche have scaled up and intensified through long-term processes of evolution by natural selection. To understand these changes, we begin with theory on niche construction and cultural evolution.

Classic ecological theory holds that species adapt to live within environmental constraints over which they have no control, such as heat or moisture. A species’ adaptations within these constraints form its ecological niche. Niche construction theory replaces this “one-way street” with the observation that many species, especially those known as “ecosystem engineers,” also alter their environments. That includes the building of dams (beavers) and nests (ants and many other animals) or the release of toxic chemicals that inhibit the growth of competitors (many microbes and plants like the creosote bush). When these environmental alterations affect a species’ ability to thrive, or the survival of other species sharing their environment, niche construction theory describes this as an ecological inheritance — a heritable consequence of environmental alteration with beneficial, detrimental or neutral consequences. In addition to ecological inheritances, many species also receive adaptive benefits when individuals learn behaviours from each other, such as the song dialects of birds and whales, which can be critical to reproductive success. These socially-learned adaptive behaviours are known as cultural inheritances.

The Extended Evolutionary Synthesis (EES), first called for in the 1980s by Stephen Jay Gould and formalised early this century by Massimo Pigliucci, Russell Bonduriansky, Étienne Danchin and others, melds cultural and ecological inheritances together with genetic, epigenetic and parental inheritances. This synthesis holds that these inheritances evolve together to produce evolutionary changes in the phenotype — the expressed traits of a species. The EES also incorporates the concept that inheritances can move not only “vertically,” as genetic traits pass from parents to progeny, but also “horizontally,” or among unrelated individuals within a single generation — as do genetic traits in microbes and cultural traits in birds or whales. Inheritance can also pass “obliquely,” or from older to younger generations.

The EES has huge implications for understanding the rise of cultural and ecological inheritances in the evolution of human societies and their transformation of ecology. When environments change rapidly, within the span of a single generation, the EES predicts that adaptive traits inherited horizontally, like cultural traits spread through social learning, can enable more rapid responses to environmental challenges than is possible through genetic adaptations, where the spread of adaptive traits takes generations. For this reason alone, cultural traits can be favored over genetic traits in rapidly changing environments.

Many species are social ecosystem engineers, including the social insects and naked mole rats.  Yet behaviourally modern humans take sociality to the next level: ultrasociality. The capacity of behaviourally modern humans for social learning is unrivaled among species — even human sociality itself is socially learned. The very organisation of human societies is determined by culturally inherited traits, and human individuals commonly depend completely on relationships with non-kin individuals for survival.

In behaviourally modern human societies, sustenance and other necessities may be gained through complex social relationships among unrelated and even unknown individuals — by sharing, gifting, bartering, buying or just ordering online using a credit card. The need for foraging, farming or even shopping at the supermarket may be optional. The human ecological niche is thus largely sociocultural, constructed and enacted within, across and by individuals, social groups and societies based on socially learned behaviours. Long-term changes in the structure and functioning of human societies and their transformation of environments is the product of evolution by natural selection acting on these processes of sociocultural niche construction.

In the more than 50,000 years since behaviourally modern humans spread across the Earth out of Africa, human societies have evolved a tremendous diversity of complex cultural forms, all with profoundly different effects on their environments. This rapid diversification is partly explained by the observation — originally made by Darwin — that cultural traits can evolve far more rapidly than genetic traits. Human societies have often experienced runaway bursts of sociocultural niche construction, in which one change leads to such great social and environmental consequences that they must be adapted to by other, even more transformative changes in niche construction. As humans began to cultivate soils to grow crops, for instance, the fertility of soils dropped off. People compensated by harvesting and using manures to replace nutrients taken up or lost from the soil, a practice altering the entire social and ecological system of farming.

This runaway effect has marked human history: people began as hunter gatherers, using traps and projectiles and clearing vegetation with fire to enhance their ability to obtain food. Later peoples built on these cultural and ecological inheritances to form agricultural societies to produce even more transformative ecosystem engineering regimes. These included domesticating species, tilling and irrigating land and exchanging food and other needs and wants through barter and in early marketplaces. In doing so, agricultural societies grew larger and more complex with ever more specialised, diverse and unequal social organisation, from urban dwelling craftsmen to traders, artists and taxmen. These societies were reliant on ever more powerful and productive technologies for cooperative ecosystem engineering and the use of domestic livestock, wind and water power to supplement human energy.

As people built increasingly complex agricultural, then industrial, societies, and developed new ways of producing energy and transforming the land, human evolution was driven less by genetic inheritance and more by social factors like cultural inheritance and sociocultural niche construction. Graphic: Ellis 2015

Over more than 50,000 years of evolution in sociocultural niche construction, socially-learned strategies for cooperative ecosystem engineering have evolved from sustaining up to 10 hunter gatherers per square kilometer in a good year to sustaining more than 1,000 industrial citizens on the same area of land every year. As humans exploited new sources of energy, first from biomass and now mostly fossil fuels, per capita energy use has increased by more than an order of magnitude. Flows of material, energy, biota and information across human societies have become essentially global. Human lifespans today average nearly twice those of hunter gatherers and early farmers.

All of these trends have evolved through complex and convoluted trajectories in response to many different pressures, and different human societies continue to transform ecology in different ways. Nevertheless, the emergence of such a diversity of societal forms, and the rise of large-scale human societies capable of acting as a global force in the Earth system, is best explained as a process of evolution by natural selection acting on human sociocultural niche construction at the level of individuals, social groups and societies.

Are societies evolving towards a better Future Earth?

In recognising the Anthropocene as a new epoch of geologic time, we are confronted with the reality that our societies now directly shape Earth’s functioning. For better and for worse, our planet now changes with us, not apart from us. While for most people times may never have been better, the opposite is true for most other species. There are also strong indications that anthropogenic global changes in climate and biodiversity might derail future prospects for societal development. For example, the costs of climate change adaptation and the loss of wild pollinators might be unbearable burdens for some societies. We shouldn’t forget that evolutionary processes produce both adaptations and extinctions. Even the most successful large-scale societies will not last forever.

The processes that enable societies to adapt and to thrive in the face of long-term social and environmental challenges are not fully understood — a prime area for greater research investment. However, existing archaeological evidence suggests that societal capacity to anticipate, sense, interpret and respond to both external and internal challenges through adaptive processes of social change is essential to societal resilience. Some societies readily adapt to massive environmental challenges that overwhelm the adaptive capacities of others; some build irrigation systems, while others collapse. The archaeological record has repeatedly falsified the hypothesis that hard environmental limits, such as climate, alone determine the fate of human societies. The capacity of societies to thrive sustainably is a social capacity.

Behaviourally modern human societies have always engineered ecosystems to sustain themselves. Human societies are not sustained by the “balance of nature” but by a sociocultural niche constructed through cooperative ecosystem engineering and the social exchange of food and other needs and wants. Hunger is not caused by environmental limits to food production but by social limits to food distribution. That is why today, despite adequate food production, some still remain unfed.

Erle Ellis during a recent talk on the Anthropocene. Photo: Erle Ellis

For these reasons, we must focus on the social and cultural capacities of society to understand and address the societal challenges of the Anthropocene. Humans are a sociocultural species living in a sociocultural world on a used planet. “Getting back to nature” is not going to help. The focus must be on building strategies to shape nature more beneficially for both humans and nonhumans. Yet it remains to be seen whether societal efforts to intervene in the human climate system can generate better long-term outcomes for humanity and nonhuman nature.

The accelerating resource demands of increasingly wealthy urban populations are driving potentially disastrous changes in the Earth system from global climate change to mass extinctions and other unprecedented environmental changes. Yet these same global trends have brought benefits, too: we are using land more efficiently to produce food, and our energy systems now rely less on combusting biomass. We have even begun the process of replacing fossil fuels with more sustainable energies. Moreover, global communications have led to an acceleration in social learning, knowledge sharing and other interactions, potentially generating new solutions for global problems. Urbanisation and societal upscaling in general — the economies of scale — offer real planetary opportunities to spare more of the biosphere for nonhuman species while improving quality of life for humans.

Humans have always been far more than “destroyers of nature.” Human societies have reduced and eliminated pollution, have protected and restored endangered species and their habitats and may even now be undergoing a massive shift in energy systems that could prevent catastrophic global climate change. The boom in Anthropocene discussions might itself indicate that societies are waking up to the realities of becoming a global force in the Earth system.

Without intending to, human societies evolved the capacity to force Earth into the Anthropocene. Sociocultural strategies for avoiding the worst consequences of anthropogenic climate change, mass extinction and social inequality might already be evolving. Or perhaps societies will just adapt to living in more and more unequal societies on a hotter, less biodiverse planet. Either future is plausible, as are others. But it should be clear — to everyone — which future is better.

Human societies and their cultures of nature will shape the future of life on Earth for the foreseeable future. By engaging with, not against, the processes of sociocultural niche construction, we might guide this new “great force of nature” toward better outcomes for both humanity and nonhuman nature. It is time to embrace what makes us human, ultrasociality, and turn it towards the grand challenges of the Anthropocene — to intentionally build better societies and cultures of nature.

Towards Peak Impact

In the past few years, decoupling – breaking the link between economic growth and environmental impacts – has become the new catchword in environmental debates. The OECD has declared it a top priority, and UNEP’s International Resource Panel launched a report series on the topic in 2011. And last year, interest in the idea shot up after the publication of An Ecomodernist Manifesto which declared decoupling a central objective of ecomodernism.

Some observers, though, have questioned whether decoupling has occurred, and whether it is even possible. The Guardian columnist George Monbiot declared that humanity can’t consume more and conserve more at the same time. Responding to An Ecomodernist Manifesto, Jeremy Caradonna and seventeen co-authors described decoupling as a “myth” and a “fantasy.” In this essay, I’m going to review the evidence for relative decoupling (impacts growing more slowly than the economy) and absolute decoupling (impacts being stable or declining in absolute terms). I’ll look at what trends are positive, what challenges remain, and what the future might look like with the right policies and technologies.
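The distinction between the two can be made concrete in a few lines. Below is a minimal sketch that classifies a trajectory as relative or absolute decoupling; the index series are hypothetical, not real data.

```python
# Minimal sketch of the relative/absolute decoupling distinction.
# The index series (base year = 100) are hypothetical, not real data.

def classify_decoupling(gdp, impact):
    """Classify decoupling by comparing growth over the full period."""
    gdp_growth = gdp[-1] / gdp[0] - 1
    impact_growth = impact[-1] / impact[0] - 1
    if impact_growth <= 0:
        return "absolute decoupling: impact stable or falling"
    if impact_growth < gdp_growth:
        return "relative decoupling: impact grows slower than GDP"
    return "no decoupling"

gdp    = [100, 110, 121, 133, 146]   # hypothetical GDP index
impact = [100, 103, 105, 106, 107]   # hypothetical impact index
print(classify_decoupling(gdp, impact))
# -> relative decoupling: impact grows slower than GDP
```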

Trends in consumption of materials are often used as an indicator of whether or not decoupling is happening. By this measure, a landmark analysis by Thomas Wiedmann and others found that “achievements in decoupling in advanced economies are smaller than reported or even nonexistent.” By allocating all raw material extraction to the places where the final consumption of goods occurs, they were able to account for the fact that most developed countries increasingly buy their goods from abroad rather than producing them domestically. Wiedmann et al. concluded that the total per-capita consumption of material resources (including fossil fuels, metal ores and industrial minerals, construction minerals, and biomass) had kept pace with or even exceeded growth in GDP in most developed countries between 1990 and 2008.

However, the study’s findings are not quite as discouraging as they first appear. As Chris Goodall has pointed out, most of the rise in developed economies’ resource consumption was in the form of indirect use of construction materials, especially in China. A big part of these construction materials went into built structures that will last for decades; linking these on a year-to-year basis to consumption in developed economies likely exaggerates their material footprint.

More importantly, trends in material consumption are not necessarily good indicators of trends in environmental impacts. Decoupling, as Ester van der Voet, an industrial ecologist at the University of Leiden, observes, “is not just a matter of weight.” Wiedmann et al. are clear on this too, acknowledging that the material footprint “does not provide information on actual environmental impacts of resource use but only on the potential for impacts.” Without factoring in the technologies that are used to extract raw materials and turn them into goods, we know very little about trends in environmental impacts like greenhouse gas emissions, land-use change, water depletion, or pollution.

There is no better illustration of this fact than the case of mined resources like metal ores, industrial minerals, and construction materials. These make up the lion’s share of the material throughput of most countries, yet rank low among global environmental threats, especially with respect to biodiversity loss.

Mining of iron, bauxite, copper, gold, and silver – which together account for the vast majority of metals used globally – occupies about 0.007% of Earth’s ice-free surface, according to a recent study. Metals and minerals make up more than half of the raw materials consumed in the EU, but account for less than 10% of global warming potential and land-use competition (although they are estimated to represent about one-third of human toxicity).

In terms of overall human impacts on the environment, the burning of fossil fuels and consumption of biomass loom much larger than mining for minerals and ores. Biomass consumption, in the form of crops, meat, and wood, is the world’s major source of habitat and biodiversity loss, as well as a big contributor to greenhouse gas emissions and pollution of air and waters. Yet despite increasing per-capita demand for crops and animal products, the total amount of biomass harvested per capita worldwide has remained nearly constant over the last century according to a 2013 study by Fridolin Krausmann of the Institute of Social Ecology in Vienna.

The total biomass harvest has remained constant because while food demand has risen, the use of biomass for energy has declined dramatically. Starting in the 19th century, coal replaced wood as the leading source of primary energy globally. Around the same time, tractors, trolleys, trains, and automobiles began to displace draft animals for motive power, reducing demand for animal feed. And a variety of technological improvements raised the efficiency with which biomass is converted into final goods.

Looking more closely at wood, total global consumption has been stable since the late 1980s, suggesting absolute decoupling in this regard. The actual impacts of wood production – especially land-use change – are harder to measure, both because declining wood harvesting from natural forests has been partly offset by expanding tree plantations and because it is entangled with cropland expansion and other drivers. Nevertheless, the plateauing demand for wood combined with a larger share of plantations – which tend to have far higher yields – suggests that the total area impacted by wood harvesting may have declined in absolute terms.


With declining demand for biomass for energy and more efficient wood production, the environmental impacts of biomass consumption have shifted toward food production. Consumption of crops and meat, the main driver behind farmland expansion, has increased dramatically in the last half century, with global food calorie consumption roughly tripling and per-capita consumption going up by about one-third since 1960. But that doesn’t mean that the land footprint of agriculture has risen proportionally.

The total area of cropland and pasture grew by about 10% between 1960 and the mid-1990s. But since then, FAO data indicate little further growth. (For the last five to ten years, the aggregate trend reported by FAO may not be reliable, as there has been an uptick in harvested cropland area, and the decline of pasture in the same period may be an artifact of reporting.) Meanwhile, the amount of farmland used in food production per capita has declined by about half since 1960, in spite of much richer diets, thanks to higher yields and cropping frequencies as well as improved efficiency in the conversion of feed to meat.


Thomas Kastner and others have shown how this pattern holds for every world region, where the amount of cropland needed for final consumption – including crops grown for livestock feed – has declined over time, on a per capita basis.*

Source: Kastner et al., 2012

In large parts of Europe, this decline has been so significant as to offset even growth in population, meaning that the amount of cropland needed to meet food demand in these regions has fallen in absolute terms. What’s even more interesting is that the cropland requirement is not necessarily higher in rich countries than it is in poor countries. For instance, the per-capita cropland requirement is about the same in Western Europe as it is in West Africa. This is explained by the combination of low yields and meager diets in West Africa, and high yields and rich diets in Western Europe.
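The arithmetic behind that comparison is simple: the per-capita cropland requirement is roughly per-capita crop demand divided by crop yield. A small sketch, with illustrative round numbers rather than values from Kastner et al.:

```python
# Per-capita cropland requirement ~= per-capita crop demand / yield.
# All numbers below are illustrative round figures, not values
# from Kastner et al.

regions = {
    # region: (crop demand incl. feed, tonnes/person/yr; yield, t/ha/yr)
    "West Africa":    (0.4, 1.2),   # meager diets, low yields
    "Western Europe": (1.0, 3.0),   # rich diets, high yields
}

for region, (demand, crop_yield) in regions.items():
    print(f"{region}: ~{demand / crop_yield:.2f} ha of cropland per person")
# Both come out near 0.33 ha/person despite very different diets.
```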

Much of the improvement in farming efficiency has gone into richer diets rather than into farming less land. In particular, the larger share of meat in diets as countries get richer has further increased the amount of crops required for the average diet. Yet with further economic growth, pressures tend to ease somewhat. Food calorie consumption stabilizes by the time countries reach high-income status, and while meat consumption continues to grow, per-capita demand for resource-intensive beef has declined markedly in developed countries.

While many of the trends are promising, it’s too early to declare “peak farmland.” Forecasts suggest the world will need to produce 70-100% more crops by 2050. Doing so without expanding cropland area will require a rate of crop yield improvement on a par with that seen during the second half of the 20th century, if not faster. Maintaining historical yield trends may not be enough to stop farmland area from growing in the next few decades. Accelerating the pace of crop yield gains is therefore among the chief environmental and conservation challenges of this century. (The extent of pasture, on the other hand, might continue to fall as it is intensified and beef production systems become increasingly grain-based.) And it’s important to remember that even peak farmland might not stop deforestation in the tropics, since agricultural production is gradually shifting from temperate to tropical countries.

The agricultural intensification that has enabled the yield improvements to date has been associated with other environmental impacts, like greenhouse gas emissions and nutrient pollution. But here too, there are encouraging trends. While intensive agriculture uses more energy and other inputs, which generate carbon emissions, this is more than offset by the lesser amount of deforestation that results from higher yields. Jennifer Burney et al. estimate that agricultural intensification over the past several decades has substantially reduced emissions compared to a counterfactual scenario without intensification. Astonishingly, a study published in January 2016 finds that total greenhouse gas emissions from farming – from operations as well as land-use change – peaked in the early 1990s. If these results hold up to further scrutiny, they constitute the most important and underreported instance of absolute decoupling to my knowledge.

Another recent study, by Xin Zhang et al. and published in Nature, also suggests cause for optimism when it comes to nitrogen pollution on farmland. They analyzed trends in nitrogen surplus – the fraction of applied nitrogen that is not taken up by crops and thus contributes to runoff and eutrophication – in 113 countries over five decades. In 56 of these countries – representing 87% of global nitrogen fertilizer consumption – the rate of increase in nitrogen surplus has slowed or leveled off, and in half of these 56 it has even started to decline, following the classical Environmental Kuznets Curve pattern. The remaining countries, mostly in the developing world, are yet to reach the peak, but are likely to do so as technology and regulation catch up.

Source: Zhang et al., 2015. Adapted by permission from Macmillan Publishers Ltd: Nature 528.7580 (2015): 51-59, copyright 2015
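For readers unfamiliar with the metric, here is a minimal sketch of the accounting, using invented field-level numbers rather than Zhang et al.’s data:

```python
# Nitrogen surplus in Zhang et al.'s sense: nitrogen applied to
# cropland minus nitrogen removed in the harvested crop. The
# field-level numbers below are invented for illustration.

n_applied = 150    # kg N per hectare (fertilizer, manure, fixation)
n_removed = 90     # kg N per hectare carried off in the harvest

surplus = n_applied - n_removed    # available for runoff and leaching
nue = n_removed / n_applied        # nitrogen use efficiency
print(f"Surplus: {surplus} kg N/ha, NUE: {nue:.0%}")
# -> Surplus: 60 kg N/ha, NUE: 60%
```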

Trends are not as encouraging in other areas. While relative decoupling of carbon emissions from economic growth has been the norm in developed economies, and some countries have dramatically decarbonized their power sectors, absolute decoupling has not occurred on a broad, sustained basis. In a study by Glen Peters and colleagues that accounts for emissions embodied in imported goods, greenhouse gas emissions in developed countries continued their upward trajectory between 1990 and 2008. A similar analysis for the United Kingdom showed that emissions associated with domestic consumption kept rising until the recession.

Source: BP. Historical Data Workbook. Stat. Rev. World Energy, 2014

There are, of course, many other important types of environmental impacts, such as overexploitation of wild animals, other forms of pollution, and water extraction. Including these only makes it clearer that there is no single metric that can tell us whether decoupling overall is happening now, or whether it could ever happen. Trends differ greatly between types of environmental impact, and their interpretation depends on the timeframe you look at and the phase of economic development in the country or region in question. Still, among the most important forms of environmental pressure, relative decoupling is the norm. Absolute decoupling is happening in some, but far from all, cases. (For more detailed discussion of decoupling trends and drivers, see Breakthrough’s publication Nature Unbound: Decoupling for Conservation.)

What about the future? The fact that many impacts have continued to grow in absolute terms over the last few decades does not necessarily mean that they will continue to grow forever. Population growth is slowing down and we may (or may not) reach peak population in the second half of this century. As countries become wealthier, demand begins to saturate for at least some categories of goods, like food and construction materials. And while continued technological advances should not be taken for granted, there is every reason to think they could continue, given large gaps in global crop yields, the carbon intensity of energy, and so on. Taken together, these trends suggest that relative decoupling has a good chance to turn into absolute decoupling this century. How soon and at what level peak impact occurs will depend greatly on policies, investments, and other choices made by governments, corporations, and civil society. Decoupling is possible, and for now, I remain cautiously optimistic that human development and a flourishing natural world can coexist.


*Kastner et al. use the same consumption-based accounting as Wiedmann et al.

Adaptation for a High-Energy Planet

Download a PDF of the report here. 

Even as adaptation has more recently gained mainstream acceptance as an unavoidable response to rising global temperatures, it continues to be a sideshow to the main event of limiting greenhouse gas emissions through international climate negotiations. This misses enormous opportunities for effective action to reduce human suffering due to climate and weather disasters, and to lay a stable foundation for cooperative international efforts to address both climate adaptation and mitigation.

With global population growth, more accumulated wealth, and other socioeconomic changes, the number of people and the amount of property exposed, and thus potentially vulnerable, to climate risk will continue to increase, regardless of anthropogenic climate change and how well (or poorly) we address it. Societies can do a much better job of maximizing their resilience to climate-related risks.

With this in mind, we propose, as an animating goal for an adaptation agenda, the progressive and continual reduction of the average number of deaths each year from natural disasters, including those disasters that will be exacerbated by a changing climate. With an agenda for action that is attentive to peoples’ well-being, equity, and livelihoods, adaptation policies focus on opportunity rather than cost, promising benefits that are near-term and certain.

To bring adaptation to the fore, we emphasize two strategies. The first is to adopt progressive and decisive reductions in loss of life from disasters worldwide as a direct measure of adaptive success. The empowering lessons of regions as socioeconomically distinct as eastern India and the Netherlands show that such a goal can be within reach for all nations and people. The second strategy is to put adaptation at the center of the climate change policy agenda, along with energy access and innovation. “Low-regret” and “win-win” efforts to directly improve peoples’ lives while supporting mitigation efforts offer attainable objectives for creating a more prosperous and resilient world that resonates with a diversity of values and worldviews.

Success in preserving human life in an often-capricious and frequently harsh environment is the result of innovative adaptations. Far from being a new activity undertaken in response to anthropogenic climate change, innovation-led adaptation is something humans excel at; it has lessened vulnerability to climatic and other challenges and allowed humans to flourish in an incredible diversity of climates.

We look at innovative adaptations that address challenges including food security, rising sea levels, and public health, in places as different as Nepal, the Netherlands, and inner-city Chicago. Two key lessons emerge from these examples:

  1. To adapt for rising exposure to climate change, innovate toward a range of possible futures: Dealing with uncertainty requires flexibility and foresight to keep pathways open.

  2. High-energy adaptation means reduced vulnerability: Reducing vulnerability to natural disasters depends on prioritizing socioeconomic development through modernized energy systems and other pragmatic initiatives. Successful adaptation will occur only on a high-energy planet.

In the following report, we evaluate opportunities to increase climate resilience through standard concepts used broadly to assess and mitigate risk; consider successful adaptations in a variety of different global contexts in search of key lessons for international climate adaptation; and consider what an alternative climate adaptation framework that takes adaptation as its primary objective might look like.

Download a PDF of the report here. 

Bill McKibben’s Misleading New Chemistry

The Harvard study, in fact, suggests nothing of the sort. The Harvard researchers measured atmospheric concentrations of methane across North America and compared them with levels over the Pacific Ocean. They concluded that methane emissions in North America had risen significantly since 2002. But they also concluded that while the United States has seen a 20% increase in oil and gas production since 2002, “the spatial pattern of the methane increase… does not clearly point to these sources.”

While the Harvard study doesn’t clearly point to a source of the increase in atmospheric methane concentrations, another prominent and widely covered study—one that McKibben surely must have been aware of—does. Using a novel method to trace atmospheric methane measurements back to their source, that study, published this month in the journal Science, concludes that the rise in atmospheric concentrations of methane since 2006 is most likely attributable to agricultural sources, not oil and gas production.  A pause in the rise of methane concentrations between 1999 and 2006, the authors conclude, was attributable to “diminishing thermogenic emissions, probably from the fossil-fuel industry.” Renewed increases in atmospheric methane concentrations after 2006 are “predominantly biogenic, outside the Arctic, and arguably more consistent with agriculture than wetlands.”

In reality, if America has a methane problem, it doesn’t have much to do with the shale gas revolution. Most natural gas in the United States is not produced from shale. Despite dramatic expansion of shale production, it still constitutes less than half of total natural gas production, according to the US Energy Information Administration. Most leakage associated with natural gas production does not appear to be associated with hydraulic fracturing. And most leakage associated with natural gas infrastructure isn’t associated with the use of gas in the power sector.

With regard to the actual rates of methane leakage, McKibben cherry-picks studies, several of them from avowed fracking opponents, that find leak rates much higher than EPA estimates, in order to argue that methane leakage erodes much, if not all, of the climate benefit of switching electricity production from coal to gas. These studies are outliers. What the balance of evidence shows—including studies by EPA, the Environmental Defense Fund, and the University of Texas—is that leak rates, while in some cases higher than EPA estimates, are well below levels at which they would begin to significantly erode the benefits of switching from coal to gas.

But even if one accepts the higher estimates that McKibben relies upon, the impact on the climate is marginal. As my colleague Alex Trembath demonstrated in a literature review last summer, modeling of the overall warming impact of a large scale shift from coal-to-gas generation in the power sector (as opposed to simply using theoretical global warming potential conversions) consistently finds the contributions from methane to be a marginal factor in determining overall warming impacts.

Some modeling finds the climate benefits of a coal-to-gas shift to be very marginal; other modeling finds them to be quite significant. But the assumed rate of methane leakage is largely irrelevant compared to other factors. What determines the climate benefits of switching from coal to gas are the assumed thermal efficiency of future coal plants and whether the switch to gas is a bridge to zero-carbon generation in the second half of this century or a final destination.
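To see where break-even leak rates come from, here is an illustrative static calculation. It is not the lifecycle modeling discussed above, and all emission factors are round-number assumptions of mine: a 50-percent-efficient gas plant and a coal plant emitting about 1 kg of CO2 per kWh. If a fraction L of produced gas leaks, the leaked methane per delivered kWh is the burned amount times L/(1-L).

```python
# Illustrative static break-even calculation for methane leakage,
# NOT the power-sector modeling discussed above. All factors are
# round-number assumptions: a 50%-efficient gas plant burns
# ~0.144 kg CH4 per kWh; coal is assumed at ~1.0 kg CO2 per kWh.

CH4_PER_KWH = 0.144                 # kg CH4 burned per kWh of gas power
CO2_GAS = CH4_PER_KWH * 44 / 16     # ~0.40 kg CO2/kWh from combustion
CO2_COAL = 1.0                      # assumed kg CO2/kWh for coal power

def breakeven_leak_rate(gwp):
    """Leak fraction at which gas power loses its CO2e edge over coal.
    Solves CO2_GAS + CH4_PER_KWH * gwp * L/(1-L) = CO2_COAL for L."""
    x = (CO2_COAL - CO2_GAS) / (CH4_PER_KWH * gwp)   # x = L / (1 - L)
    return x / (1 + x)

for label, gwp in [("100-year GWP ~30", 30), ("20-year GWP ~86", 86)]:
    print(f"{label}: break-even leakage ~{breakeven_leak_rate(gwp):.1%}")
# -> roughly 12% and 5% respectively, well above the leak rates
#    of a few percent that most US measurement studies report.
```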

From a climate perspective, what all of the modeling of a large-scale shift from coal to gas has demonstrated is that while natural gas can be a bridge to a 500 or 550 ppm world, it can’t get you to 450 ppm, much less lower. Natural gas is a fossil fuel. Its carbon intensity is significantly lower than coal’s, but it is nonetheless substantial. This alone is a legitimate reason to oppose increased natural gas production and combustion.

But switching to natural gas can also reduce emissions substantially in a world in which, despite increasingly desperate entreaties from climate activists, emissions continue to rise. It is, along with nuclear and hydroelectric power, one of the only energy sources that has ever succeeded at decarbonizing a large, modern economy at rates that even begin to approach those necessary to mitigate climate change. But it won’t get us all the way there.  

One can decide which side of this perfect-versus-good debate one wishes to be on. But the willful misrepresentation of the evidence on natural gas production, like the longstanding misrepresentations of the costs and benefits of nuclear energy, isn’t helping anyone make particularly well-informed decisions.

McKibben deserves credit for bringing climate change to the public’s attention three decades ago, and for building a climate movement that demands far-reaching action to address the problem. But I’ll say now publicly what I have said to him privately for many years. So long as the climate movement is limited to NIMBY fracking opponents, anti-nuclear greens, and renewables fabulists, it is unlikely to achieve either the broad social consensus that will be necessary to advance aggressive action, or action likely to deliver the levels of carbon reduction that will be necessary to significantly mitigate climate change.

Exceptional Circumstances

This excerpted essay is reprinted with permission from Issues in Science and Technology. Click here to read the full and original essay. 

The threats to democracy in the modern era are many. Not least is the risk posed by the widespread feeling among different segments of the public in contemporary democracies that no one from the political class is listening. Such discontent reaches from the Tea Party in the United States and the UK Independence Party (UKIP) in the United Kingdom to the Alternative for Germany (AfD) Party in Germany and the National Front in France. But worryingly, similar sentiments can be found in the climate science and policy community.

The well-known climate researcher James Hansen, who has been publicly sounding the alarm on global warming since his influential 1988 testimony before the U.S. Congress, summarized the general frustration when he asserted in 2007 that "the democratic process does not work." In his 2009 book, The Vanishing Face of Gaia, James Lovelock, another long-time scientific voice of warning, compares climate change to war, emphasizing that we need to abandon democracy to meet the challenges of climate change head on. To pull the world out of its state of lethargy, "nothing but blood, toil, tears, and sweat" is urgently needed.

Dale Jamieson, professor of environmental studies, philosophy, and law at New York University and author of Reason in a Dark Time (2014), exemplifies the skeptical view of our present political order’s ability to cope with the consequences of global warming. He warns that climate change presents us “with the largest collective action problem that humanity has ever faced, [but] evolution did not design us to deal with such problems, and we have not designed political institutions that are conducive to solving them.” He adds: “Sadly, it is not entirely clear that democracy is up to the challenge of climate change.”

I do not disagree with Jamieson about the enormous challenge global warming likely poses. But I do disagree strongly about the implicit medicine, the rationale for which is beginning to come from scholars in diverse fields. The historian Eric Hobsbawm’s long-time skepticism toward democracy extends, in his 2008 book, Globalisation, Democracy, and Terrorism, to strong doubts about the effectiveness of democratic states in solving complex global problems such as global warming. And Nobel Laureate Daniel Kahneman says: “the bottom line is that I’m extremely skeptical that we can cope with climate change. To mobilize people, this has to become an emotional issue. It has to have immediacy and salience. A distant, abstract, and disputed threat just doesn’t have the necessary characteristics for seriously mobilizing public opinion.”

Climate scientists, social scientists concerned with climate change, and the media refer to a future of “exceptional circumstances.” However, the same groups also assert that no one is listening to their diagnosis of potential incomparable dangers. An elite of climate scientists believes they are reading the evidence that others fail to acknowledge and know truths that others lack the courage to fully confront. In light of the extraordinary dangers to human civilization posed by climate change, democracy quickly becomes in their eyes an inconvenient form of governing.

In the past, warlike conditions and major disasters typically were seen to justify the abolition of democratic liberties, if only temporarily. The term “exceptional circumstances” refers to conditions often invoked to grant governments additional powers to avert or tackle unforeseen but threatening political, economic, or environmental problems. The present appeal to exceptional circumstances echoes this sentiment, demanding the elevation of a single socio-political purpose—specifically carbon emissions reductions—to ultimate political supremacy.

The implication of the position is that democratic governance of society must be subordinated to the defeat of the exceptional circumstances. The single purpose of defeating the exceptional circumstances legitimizes the suspension of political rights and liberties. But for how long can one defer liberties? At least in the case of war, in democratic societies the answer is that, in economist Friedrich Hayek’s words, “it is sensible temporarily to sacrifice freedom in order to make it more secure in the future.” However, is any massive absorption of powers in the hands of the state and its representatives easily reversible? And, are the potential consequences of climate change the equivalent of (abrupt) warlike conditions? How can one pinpoint the onset of exceptional circumstances? Or, perhaps even more troubling, their endpoint?

The deficiencies of and the short-term as well as long-term challenges faced by democratic governments are many and go far beyond the problem of climate change and its societal consequences. What alternatives do these impatient scholars have in mind? After all, authoritarian and totalitarian governments do not have a record of environmental accomplishments; nations that have followed the path of “authoritarian modernization” such as China and Russia cannot claim to have a better record, despite the high status of scientists and engineers in their societies.

To those who see climate change as a uniquely overwhelming threat to human well-being, democracy itself seems inappropriate, its slow procedures for implementation and management of specific, policy-relevant scientific knowledge leading to massive risks and dangers. The disenchantment with democracies continues to advance as the democratic system designed to balance divergent interests appears to have failed in the face of these future threats.

The discussion in the climate science and policy community about the shortcomings of democratic governance resonates, at least superficially, with assessments coming from the social sciences of the present and future state of democracy, which have reached similar discouraging conclusions about the efficacy of democratic governance in many nations. So, for example, political scientist and former UK Member of Parliament David Marquand sees “a hollowing out of citizenship; the marketization of the public sector; the soul-destroying targets and audits that go with it; the denigration of professionalism and the professional ethic; and the erosion of public trust.” Many social science observers see contemporary democracy—whether by design of self-interested actors such as large corporations, or as an unintended outcome of structural economic, political, and moral changes—as tending toward increasingly autocratic forms of governance.

But social scientists and climate scientists diverge profoundly in their analyses of the necessary remedy. Social scientists such as political historian Pierre Rosanvallon and sociologist Colin Crouch see the need to restore the vitality of the core function of democracy through more active participation of large numbers of citizens in shaping the agenda of public life. Climate scientists and others whose chief concern is climate change seem instead to believe democratic governance to be inherently incapable of coping effectively with large-scale environmental problems.

What should be the role of climate science knowledge and climate scientists in political deliberations about climate policy? Can science, and thus should scientists, tell us what to do? For the Massachusetts Institute of Technology historian and philosopher of science Evelyn Fox Keller, the answer is clear: “where the results of scientific research have a direct impact on the society in which they live, it becomes effectively impossible for scientists to separate their scientific analysis from the likely consequences of that analysis.” To Keller, this seems to then add up to a compelling case for an immediately effective, practical political role of climate science, given the seriousness of the problem of global warming:

There is no escaping our dependence on experts; we have no choice but to call on those (in this case, our climate scientists) who have the necessary expertise.… Furthermore, for the particular task of getting beyond our current impasse, I also suggest that climate scientists may be the only ones in a position to take the lead…. [G]iven the tacit contract between scientists and the state which supports them… I will also argue that climate scientists are not only in a position to take the lead, but also that they are obliged to do so.

Complementing the expectation that scientists must lead is the conviction that citizens are unprepared to act. We have already seen how some leading academics believe that the public is not cognitively capable of coming to the right conclusions about climate change’s urgency. Robert Stavins, director of Harvard’s Environmental Economics Program and an IPCC lead author, notes that a “bottom-up demand, which normally we always want to have and rely on in a representative democracy, is in my view unlikely to work in the case of climate change policy as it has for other environmental problems.... It’s going to take enlightened leadership, leaders that take the lead.”

But the idea that science and scientific leadership offer some sort of alternative to democracy has, to put it mildly, major weaknesses. To begin with, scientific knowledge does not and cannot dictate what to do. One of the fundamental flaws in the portrait of an inconvenient democracy is the failure to recognize that knowledge of nature must always enter society through politics (whether democratic or authoritarian)—through decisions about, as Harold Lasswell famously put it, “who gets what, when, how.” Knowledge about how such decisions are best made is not particularly available to scientists. Indeed, such knowledge is inherently and necessarily contestable.

The vision of a scientifically rational and beneficent authoritarian regime is thus incoherent because it treats a simple technical goal—the reduction of greenhouse gas emissions—as if the very fact of its articulation should automatically illuminate an optimal pathway for transforming the complex global energy system on which modern societies depend for their survival. But as stressed by Mike Hulme, a climate scientist who has come down clearly on the side of democracy, such notions may be favored by those “who are more likely to conceive of the planet as a machine amenable to control engineering.”

The pessimistic assessment of the ability of democratic governance to cope with and control exceptional circumstances seems to bring with it an optimistic assessment of the potential of large-scale social planning. Yet all evidence suggests that the capacity not only of governments, but of societies, to plan their future is rather limited, perhaps non-existent. The problem is not one of democracy, but of the complexity of social change. From this perspective, the claim that the key uncertainties about the behavior of natural climate processes have been eliminated does nothing whatsoever to address the uncertainties associated with the social and political processes for taking effective action. Consensus on the evidence of natural science, it is argued, should motivate a consensus on political action. The uncertainties of social, political, and economic events, and the difficulty of anticipating the future, are treated as minor obstacles that can be managed by the experts. But contemporary societies show no evidence that these uncertainties are even comprehensible, let alone manageable.

Indeed, this is precisely why democracy, inconvenient as it may be, is not only necessary but, for a challenge of the magnitude and complexity of climate change, essential. To a far greater extent than authoritarian governance, democratic governance is flexible and capable of learning from policy mistakes, which are inevitable when trying to deal with something as complex as climate change. Democratic governments’ ability to learn allows them, as David Runciman explains in The Confidence Trap, his 2013 study of democracies in crisis, “to keep experimenting and adapting to the challenge they encounter, so that no danger becomes overwhelming.” Democracies “have the experimental adaptability and they have the collective resilience under duress.” But Runciman offers a cautionary note, because “the knowledge that democracies have of their long-term strengths does not tell them how to access those strengths at the right moment. That is why climate change is so dangerous for democracies.” Dangerous because the impatience of the climate science community leads it to imagine that other, less open forms of governance might do better than democracy.

An alternative model is therefore needed, and I submit that it will be found only through revitalized democratic interaction in which alternative perspectives can be presented and tested. Climate policy must be compatible with democracy; otherwise the threat to civilization will be much more than just changes to our physical environment. The alternative to the abolition of democratic governance is more democracy—making not only democracy and solutions more complex, but also enhancing the worldwide empowerment and knowledgeability of individuals, groups, and movements who work on environmental issues. As the world gradually transitions toward further denationalization of governance, democracies will produce new, multiple forms of social solidarity and obligations, strengthening local and regional capacities to respond to climate change, and enhancing the awareness of social interdependence. Examples include the widespread community and regional support of renewable energy in Germany—and the success of wind energy in Texas.

Now is the time to commit to democratic complexification that fosters creativity and experimentation in the pursuit of multiple desired goals. For those who think that there can be only one global pathway to addressing climate change, the erosion of democracy might seem to be “convenient.” History, of both recent decades and centuries, tells us that suppression of social complexity undermines the capacity of societies to solve problems. Friedrich Hayek points out a paradoxical development: As science advances, it tends to strengthen the observation shared by many scientists that we should “aim at more deliberate and comprehensive control of all human activities.” Hayek pessimistically adds, “It is for this reason that those intoxicated by the advance of knowledge so often become the enemies of freedom.”

Nico Stehr (nico.stehr@zu.de) is the Karl Mannheim Professor of Cultural Studies at Zeppelin University in Friedrichshafen, Lake Constance, Germany.

The Fifth Anniversary of the Tōhoku Earthquake

The combined damage from the earthquake and the tsunami reached $235 billion, making Tōhoku the most expensive natural disaster in world history (in terms of economic impact, if not human toll). 

And yet, as we know, the Tōhoku earthquake was overshadowed immediately and permanently by the meltdown at the Fukushima Daiichi power plant. This despite the fact that the radiation released at Fukushima is expected to have effectively zero impact on human health. Still, a Google search on March 10, 2016 yielded 18,100 results for "Tōhoku earthquake" and 1.2 million results for "Fukushima nuclear." It's clear which event the world is remembering this week.

Obviously the meltdown had impacts. People were forced to leave their homes, turning villages like Okuma into ghost towns. Many Japanese died in the process. And that's precisely the point. As the BBC asked last week, "is the Fukushima exclusion zone doing more harm than radiation?"

"In my opinion yes it has," radiation expert Dr. Geraldine Thomas told BBC reporter Rupert Wingfield-Hayes. "The radiation has not been the disaster. It's our response to the radiation, our fear that we've projected on to others, to say this is really dangerous. It isn't really dangerous and there are plenty of places in the world where you would live with background radiation of at least this level."

It's a difficult epidemiological and psychological terrain to traverse. As Dr. Thomas relayed in another interview this month with John Humphrys, evacuation wasn't necessarily the wrong response to Fukushima. "I think anybody in that situation would have done exactly what happened post-Fukushima," she says. "But with hindsight we can look back and say: do you know what, we overestimated the damage we were doing by staying there and actually it would have been far better to treat this as if it was like a chemical toxin."

There are lessons we should learn from Fukushima, especially for an ecomodern vision of a high-energy, low-footprint planet. One lesson is to design and deploy nuclear reactors with an even lower risk of melting down than the already very low risk of current designs. Another is to better communicate the benefits of nuclear power and the real dangers of radiation, which have been mythologized and exaggerated for decades.

But it would be a mistake, I think, to let this anniversary pass by focusing on the bombastic nuclear event and ignoring the much larger losses endured by the people of Japan. The earthquake and the tsunami caused massive direct damage in death and destruction, in addition to less direct damage in the form of trauma, displacement, and loss. As America panics over minuscule radiation leaks thousands of miles from our shores, the people of Japan are still rebuilding their infrastructure and adjusting to the changing landscape of their home country.


Photo credit: Christopher Johnson - Flickr: IMG_0166, CC BY-SA 2.0

March 31: Ecomodernism Debated: American Association of Geographers to Host Panel on Ecomodernism

Frequently Asked Questions About Population

Download this FAQ as a PDF here.

Q: Why is population relevant to the environment?

Human economic activities impact the environment through land use, freshwater consumption, pollution, and so on. A larger population can increase these pressures, but not always in a linear manner. Environmental impacts depend not only on the size of the population, but also on how wealthy those people are, the nature of their consumption, and how those products are produced.1

For example, the average Northern European consumes more food and a larger variety of foods than the average West African.2 However, those two regions actually require a similar amount of cropland, per capita, for food production, because they use very different agricultural technologies.3 Ultimately, population size is an important, but not the only, factor determining human impacts on the environment.


Q: How fast is the population increasing today compared to in the past?

For all of human history until the 19th century, the world population was less than 1 billion people. Today, it is more than 7 billion (see Figure 1). The 20th century saw an unprecedented growth in the world population.



The rate of population growth peaked in 1970 at 2.06 percent per year, but has fallen from that peak to 1.18 percent today. Although the total population continues to grow, the rate of growth is slowing (see Figure 2).


Falling fertility rates help explain the slowing of population growth. The following maps compare global fertility rates in 1950 and 2015, and show the current population growth rates by country.






Maps courtesy of Martin Lewis (read more)


The total fertility rate shown in the maps above describes the number of children born to each woman, on average. The replacement fertility rate describes how many children, on average, each woman would need to have to keep the population stable. The intuitive answer is that two children are needed to replace two parents. At the population level, however, mortality has to be factored in: unfortunately, not all children live to childbearing age, especially in poor countries.

In developed countries, most children live to adulthood, and the replacement rate is about 2.1 children per woman. In poor countries, however, higher mortality means that fertility rates as high as 3.5 children per woman are needed to keep the population stable.4 This means some fertility rates are less dramatic than they first appear: a fertility rate of 6 in a very poor country may be only 2.5 children above the replacement rate.
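As a minimal worked example of this arithmetic (the numbers are the illustrative ones from the paragraph above, not data for any particular country):

```python
# How far a fertility rate sits above replacement depends on mortality.
tfr = 6.0               # total fertility rate in a hypothetical very poor country
replacement_poor = 3.5  # replacement rate there, given high child mortality
replacement_rich = 2.1  # replacement rate where most children reach adulthood

print(tfr - replacement_poor)  # 2.5 children above replacement
print(tfr - replacement_rich)  # 3.9 -- the same rate would read very differently
                               # in a low-mortality country
```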


Q: What factors influence population trends?

Population is ultimately determined by the rate of births and deaths. But birth and death rates are driven by other factors, like education, economic development, access to healthcare and family planning, and cultural norms.

The rapid population growth of the 20th century, for example, was largely due to improvements in health, nutrition, and sanitation that lowered mortality rates. Beginning with the Industrial Revolution in Europe, these improvements started to spread around the world, and mortality rates, especially among infants, declined rapidly.5 With people living longer, the overall population grew. Today, people around the world are much more likely to die in old age than in infancy (see Figure 3).



Industrialization, economic growth, improved education systems, and greater contraception use helped bring about rapid fertility declines in southeast Asia in the last half-century. Today, rich countries have an average fertility rate below the replacement level of 2.1 children per woman. The countries that still have high fertility rates in the 21st century tend to be poorer ones, concentrated especially in Sub-Saharan Africa (see maps above).6 The widely observed shift toward lower fertility and mortality rates as countries develop economically is called the demographic transition. Figure 4 shows that higher GDP per capita is associated with lower fertility rates.


There are several reasons why economic growth can lead to lower fertility. In developing countries where subsistence agriculture is the norm, children can be a source of labor and insurance, making larger families desirable.7 In places without access to modern healthcare, child mortality rates are also higher. This leads women to choose larger families in order to ensure that some of their children survive to adulthood.8 Education rates, especially among women, also tend to be lower in poor countries. With more education, women tend to marry later and are better able to access birth control, allowing them to invest more of their resources in fewer children.6

With something as personal as family size, cultural norms also play an important role. In Niger, for example, surveys found that even women with secondary schooling want 6 children on average, due to a cultural preference for large families.6 At the same time, some poor countries can see cultural shifts toward smaller family sizes even without major economic development.6 An interesting analysis in India, for example, found that TV ownership was associated with lower fertility rates. The author argues that parents were influenced by the modern, smaller families they saw portrayed as happy and successful on TV.

Public policy can also influence fertility rates, both directly and indirectly. Programs to promote family planning information and access to contraception can change fertility rates directly. A recent study of 40 high-fertility countries found that half of the difference in birth rates could be attributed to family planning efforts.9 However, since fertility is also related to other factors like healthcare and education, public policies to improve those outcomes can impact fertility rates as well.


Q: Isn’t the world overpopulated?

There have been social observers throughout history who have warned that the world is headed for catastrophe due to overpopulation. In 1798, the Reverend Thomas Robert Malthus wrote his Essay on the Principle of Population to warn British elites that unchecked human procreation would lead to resource scarcity and eventual collapse. More recently, in 1968, the biologist Paul Ehrlich published his bestselling book, The Population Bomb, warning that overpopulation would lead to environmental catastrophe and mass starvation. Ehrlich’s book was published at a time when the global population was growing at its fastest rate, and many shared his fear.

Concerns about overpopulation ultimately stem from the question of how many people the Earth can support. However, the answer to that question is highly dependent on what kind of lifestyles those people lead, what technologies they use, and what social and economic institutions are in place.10 For example, the first Dutch settlers in what is now New York City would never have imagined that Manhattan Island would one day be home to 1.6 million people. That’s because the technologies that enable it, like skyscrapers and refrigeration, did not exist in the 17th century.

In his book How Many People Can the Earth Support?, Joel Cohen argues that his titular question “has no single numerical answer, now or ever.” He explains: “Because of the important roles of human choices, natural constraints and uncertainty, estimates of human carrying capacity cannot aspire to be more than conditional and probable estimates: if future choices are thus-and-so, then the human carrying capacity is likely to be so-and-so” (emphasis original).

Several studies have demonstrated the malleability of this question. One study from a group of biologists, for example, estimated that using all the Earth’s land, we could support a population of 282 billion people.11 Even in their “save the forests” scenario, the population maximum was 150 billion. These are rather extreme thought experiments, and would create an Earth very different from today’s, but they help highlight that the Earth itself does not provide a single answer for how many people can live on it.

In general, it is safe to say that population growth increases resource pressures and leads to environmental harm, but there is no single “tipping point” or threshold that balances the human population and Earth’s resources.


Q: What would be the significance of a peak in population?

Although most human impacts on the environment continue to grow in absolute terms, many are declining at the per-capita level. For example, although total global freshwater consumption is increasing, it is increasing more slowly than the population, meaning each person is using less water today than thirty years ago. Efficiency gains have partly offset the growth in population.12 This decrease in per-capita consumption is called relative decoupling; if total water consumption begins to decline, that would be absolute decoupling.

A peak in human population offers the potential to achieve absolute decoupling, and with it major reductions in human impacts on the natural world. If the human population is no longer increasing, gains in per-capita efficiency will translate to reductions in total consumption. This “peak impact” could still occur alongside a growing population, but it would require larger efficiency gains.  
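To make the distinction concrete, here is a minimal numeric sketch in Python; the figures are made up for illustration, not actual water-consumption data:

```python
# Illustrative (made-up) figures for relative vs. absolute decoupling.
population = [5.3, 6.1, 7.3]         # billions, at three points in time
per_capita_use = [1.00, 0.90, 0.82]  # per-person water use, indexed to the first period

total_use = [round(p * u, 3) for p, u in zip(population, per_capita_use)]
print(total_use)  # [5.3, 5.49, 5.986] -- per-capita use falls, total still rises:
                  # relative decoupling only

# If population peaked while per-capita efficiency kept improving,
# total use would begin to fall: absolute decoupling.
```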


Q: How is the world population expected to grow this century?

The United Nations and other organizations publish projections of how the world population could change in the coming decades. The UN’s median projection shows the world population growing to 9.7 billion by 2050 and 11.2 billion by the end of the century (see Figure 5). The high and low ends of their prediction interval suggest population is likely to be somewhere between 9.3 and 10.2 billion by 2050.


The UN and IIASA (the International Institute for Applied Systems Analysis) arrive at different population projections because of the different assumptions underlying their statistical models. In both models, most of the population growth this century will occur in less developed countries, mostly in Sub-Saharan Africa, where the demographic transition has not progressed very far.

The UN projections are slightly more pessimistic about the pace of the demographic transition in Sub-Saharan Africa, which produces a higher projection. They base this assumption on evidence that fertility declines in Sub-Saharan Africa are occurring more slowly than they did in other countries at similar stages of development.13 They concede, however, that their median projections may be too high “should massive efforts to scale up family planning information, supplies and services be realized.”13

IIASA’s projections assume a slightly faster pace of fertility decline in Sub-Saharan Africa. They argue that the stalls in fertility decline observed there are likely to be temporary, delaying the demographic transition by only 5 or 10 years. The IIASA model also includes education as a third demographic dimension alongside age and sex. They argue that including this additional dimension, which has strong predictive power for fertility and mortality levels, increases the accuracy of their projections compared to the UN’s.


Q: Can we expect peak population this century?

Although the general drivers of fertility declines are known, demographers cannot know for sure if today’s high-fertility countries will follow historical patterns.6 However, governments and civil society can play a very active role in creating policy that impacts fertility rates, both directly, through family planning programs, and indirectly, by promoting economic development more broadly. Lowering fertility rates is far from the only reason to advocate for poverty reduction, access to education, and the empowerment of women.

There is a lot of uncertainty about how the world population will change this century, but there is plenty of cause to be optimistic. Both the UN and IIASA median projections see fertility rates declining in all high-fertility regions by 2050. Indeed, the only regions with any projected growth in fertility rates are wealthy countries where birth rates are much lower than replacement. Whether at 9 or 11 billion, continued efforts to promote human development could help the 21st century see the world population peak and start to decline.



Photo Credit: Stylepantry.com

Zero-Carbon in the 50 States

Your education is in environmental science and regional planning; how did that turn into an interest in energy?

I like thinking about complex problems in a holistic way, and about how people can coordinate to solve them. Two problems requiring massive coordination are climate change and poverty. “Energy” (and the lack of it) plays a key role in both. Energy is the life-blood of civilization. The circulatory system. I confess, I was interested in energy before I was concerned about global warming.

Really? You started off with fusion, didn’t you?

Well, my interest in fusion came from a desire to short-circuit energy dependency. As an Iranian-American, I became aware of the “resource curse”: the phenomenon of developing countries rich in natural resources being paradoxically undermined by that very resource (oil, in Iran’s case). Often, rather than benefiting the general population, an easily controlled resource serves to enable some minority to gain undue control (think dictators, corporations, foreign powers). Corruption reigns. Democracy is suppressed (think Mossadeq, 1953). One day I saw an article by a plasma physicist on fusion energy innovations that were not being funded. I didn’t pay much attention to it until my cousin enlisted in the US Army and died in Iraq. In memory of my cousin, I vowed to do everything I could to end our dependence on fossil fuels. Per the physicist, it seemed a fusion breakthrough was imminent and could supply the muscle to get off of fossils and end the resource curse. So I started to get involved in fusion.

Along the way, I became aware of how little we spend on energy research and development in general, and how much energy as a whole suffers from “energy tribalism.”  Coincidentally, it was at this time that I came across The Breakthrough Institute via the insightful Post-Partisan Power report. 

Do you think fusion is the ultimate goal or salvation of the energy system?

Fusion is fascinating and holds a lot of promise. However, we don’t have a working fusion power plant delivering energy to the grid yet. It’s a tough problem to solve. Our best play is to make sure the fusion endeavor has sufficient, reliable funding, and let it roll. Be patient. Check back in from time to time. Keep a look out for the “pleasant physics surprise.”  In the meantime, don’t use the dream of fusion as an excuse to avoid working through our present energy problem set. We need to roll up our sleeves and get busy: replace fossil fuels with renewables, nuclear fission and lifestyle change.

Of course, most people don’t use fusion as an excuse to avoid action. Rather, they dismiss it because it’s not commercial yet. That’s a narrow view. Innovation and experimentation have value. I agree with Bill Gates. We’re best served by a diversified portfolio of energy sources and a steady stream of funding for innovations and alternatives. At present, fusion suffers from yo-yo funding and a premature down-selection of approaches. We have barely scratched the surface of the parameter space of fusion. People say we have spent “unfathomable” sums on fusion research, but Americans have spent much more on Halloween candy.  What does the Halloween candy get you? Diabetes. What does fusion research get you? A profusion of spinoff discoveries and applications (like space propulsion). And it gets you closer to unlimited energy. Imagine the day we can run our civilization by firing up miniature stars to order, here on Earth. Real stars are so inefficient. All that mass and power, just to twinkle.

I see you spent some time in Hollywood; how did that affect you?

Yes. At one point, I went to Hollywood to pursue the dream of being a screenwriter. My genre is Redemptive Environmental Science Fiction (RES Fiction). That’s science fiction with an environmental theme that has a happy ending. This period of my life was an escape. A way to fantasize about happy global outcomes that I feared were not possible, given my understanding of human nature and technology at the time. But the more I researched my screenplays, tracked emerging developments and developed my social skills - the more I realized a redemptive outcome is actually possible. It may not be likely, but it is possible. If we act. Fast. So I decided to stop fantasizing and start living the story: to embark on the quest to achieve climate stability and knock out poverty along the way.

I learned a lot from Hollywood. The key takeaway from screenwriting 101 is this: we all hear the Call to Action. And we all start out by refusing it. My escape to Hollywood was a refusal of the call. Right now, most of the world is refusing the call. This puts our planet somewhere in Act 1 of “the Quest for Sustainability.” This part is also known as the debate beat. There is no way around the debate beat.

Another takeaway: in a Quest (for example, the quest for a sustainable civilization), it’s not so much about the object of the quest (sustainability), it’s really about the people on the quest (us!), how they (we!) change and grow. Ultimately, you cannot achieve the object of the quest without the internal growth and improved relationships. Does that hold for sustainability?

Speaking of relationships: the Divestment campaign sets up a hero/villain dichotomy, which is too simplistic, isn’t it?

More notes from screenwriting:  A hero and villain are not so different. They make different choices. Also, sometimes the person you think is a hero is not actually the hero, and someone you thought was the villain isn’t the villain. And both hero and villain can find redemption. We all have the hero and the villain within us. Action defines character.

I haven’t looked closely at the divestment campaign, so I’m not sure how it’s being framed. It’s possible some are using a simple “we’re the hero, the oil company is the villain, let’s end them” frame.  The trouble is that the oil company sells its products to all of us. We’re the ones driving cars and heating our homes with gas. It’s easy to vilify a corporation. It’s harder to have the awkward conversation with your neighbor (or spouse) in which you say: “Hey! Are you still driving a combustion engine? Yo! How about community geothermal? Say! Who’s up for a windfarm off the coast or a new nuclear power plant?”  (Quick call to action: Have the conversations! Tell us how it went in the comments).

The “Divestment Play” (not “play” as in theater, but “play” as in “moves to win a game”) is basically a Defense play (as is the “Ban Fracking Play”). Its purpose is to block fossil fuels, in this case by taking funding away. If you take funds away as an investor, but continue to fund them as a customer, you have unresolved conflicts within yourself. To go all the way, you need to combine the Divestment Play with Offense plays, like the “Electrification Play.” You need to inspire individuals to actually trade in their gas cars for electric cars, their gas heaters for geothermal or electric, and to demand zero carbon energy sources to run it all. Of course, these plays are much harder than the Divestment Play. They open up a host of other external and internal conflicts to work through, anything from mass financing of new cars and heating systems, to lithium shortages (for batteries) to the NIMBY that will arise when you go to set up the wind farm or nuclear power plant. And then there’s the “Grid-dle.” Still, it’s the offense plays that move the game forward.

Back to “Team Divestment.” If this is your game, another thing to consider is how best to coordinate your efforts with teams and plays within the oil company: The Shareholder Activist, Corporate Sustainability and Whistleblower plays.

This brings us to “Team Fossil.” Players on this team have some crucial plays. They have to reorganize their companies to set them up for sustainability. They have to actively retreat from the energy field, and transform into stewards of petroleum resources. They have to “go long” (term). What some people haven’t fully taken into account is that petroleum is used in a lot of things other than fuel. You name it, there’s probably petroleum in it. It’s a key component of our civilization. We need it for fertilizer, chemicals, products. It’s insane that we are burning through it as quickly as we can. Basically, we’re burning up the food of the future, the plastic, a ton of other critical stuff. We’re recklessly limiting our future options. Would you burn your groceries to keep warm? Would you burn your smart phone to drive to the grocery store - which doesn’t have much food now that you burned it all? That’s basically what we’re doing. Going out in a blaze of glory.

Fossils are valuable. We need to protect them and make their higher value uses available for generations to come. Not for decades. For thousands of years, and more. I was listening to Adam Frank on NPR talking about how “Climate Change is Not Our Fault”. I like his take on our relationship to fossil fuels, and the way he questioned the narrative of guilt surrounding climate change.  SPOILER ALERT: Climate change may not be our “fault,” but it is our responsibility.

The screenplay metaphor seems to have had a big impact on you. Do you see a role for creative works from Hollywood influencing climate change policies and action?

There’s a lot of potential. The trick is that movies primarily exist to entertain and make money. Climate change movies come across as depressing, cynical, simplistic, and unrealistic. The apocalyptic ones (“Mad Max: Fury Road”) might be fun box office – but what kind of action can they inspire? I think most screenwriters don’t understand the subject well enough to tell compelling stories about it. It’s the same with music about climate change.

Another problem is something you touched on earlier, the simplified hero myth. Legend has it that George Lucas wrote “Star Wars” after reading Joseph Campbell’s “Hero with a Thousand Faces.” Star Wars turned out to be box office gold. Now everyone sticks to the hero myth. As you see from Star Wars, it’s perfectly acceptable to blow up a planet in the course of the story. That’s just disposable backdrop to the hero’s journey.  It’s all about the hero (and his family dynamics). Climate change is more complex, values the planet above the individual, and can’t be resolved by blowing up a death star. (Speaking of Star Wars, if you want to help restore balance to the Force, please retweet this. Thanks!) How to deconstruct the hero myth? Here is some classic dialogue from “The Other Guys”:

Holtz: This city's dying for a hero.

Gamble: Is it?

Holtz: Yeah.

Gamble: What about nine million socially-conscious and unified citizens, all just stepping up and doing their part?

It’s like this. We are each the hero of our own story AND all our journeys have to add up to the planet’s journey. This movie is “Climate Inside Out”, with all of us as The Emotions, and Earth as Riley.

Is there any book or movie that inspired you to think differently about the future?

Yes. The book: “Year Zero.” It’s hilarious, features aliens and copyright law, and is a great metaphor for the energy problem. I also liked the movie: “Tomorrowland.” I love the montage where the main character keeps raising her hand and asking, “what can we do to fix it?” And don’t miss Michael Moore’s documentary “Where to Invade Next”, an uplifting look at quality of life options.

Back to the challenge of making a movie that captures the complexity of climate change, yet inspires, empowers and directs action. One narrative approach that might work is a sequel to “The Blob” done as a miniseries.

 “The Blob”?

Of course! The original is a 1958 horror film about how the people of a town pull together, overcoming skepticism and mistrust to battle an amorphous enemy from space. Initially, the people that raise the issue aren’t believed (sound familiar?) But then the town rallies. When all seems lost, they discover the Blob can be stopped by (dramatic pause) FREEZING IT. At the end of the movie they discuss what to do with the frozen Blob: “I think you should send us the biggest transport plane you have, and take this thing to the Arctic or somewhere and drop it where it will never thaw.”

Ha Ha Ha! Joke’s on us. Now it’s “never.” Who knew climate change could thaw the poles?  Maybe that’s why the Blob chose our planet. Cue the sequel. Now the planet has to come together to solve climate change – to keep the Blob frozen. It should be a miniseries so that each episode can showcase different options for decarbonization and explore the dramatic conflicts that they bring up. The series can be participatory. Invite fans to film their own episodes in their own locale. After all, the whole planet has to act to stop the Blob’s prison from melting. Whenever an episode gets too preachy or overwhelming, you punch it up with scenes of the Blob breaching containment and terrorizing a town or ship or what have you. (The Blob lends itself to this plot device. Any time a tiny piece of it breaks off and melts somewhere, it consumes things and grows. People are always chipping off bits of it and taking it to study in the lab, so there are many opportunities for this to happen.) The new Blobs have to be captured and sent to the poles as well. More Blob, less ice. The clock is ticking.

Of course, for those who don’t need the whole personification (or in this case, “blobification”) of climate change in order to focus on the challenge; for those who don’t want to passively watch a miniseries; for those who want to take the direct approach and deal with climate change ASAP, there is an even better narrative device. Games. Games have much more flexibility to tell that more complex story and engage people in actively pursuing coordinated solutions. That’s what our new initiative takes advantage of.

You founded an organization called Footprints to Wings, which is a great name, but what is the goal of your organization and what motivated you to start it?

Footprints to Wings exists to illuminate and coach “The Race to Zero Carbon” - the massive multi-player real world game we are all playing, whether we realize it or not.  We seek to formalize the race and create an institution that brings people from all walks of life together to win it as fast as possible.

The Race to Zero Carbon is on. At stake are the peace and prosperity of humankind, the ecological vitality of our planet and our classification as an “intelligent species” worthy of a planet. At present, Team Doom is pounding us in our civilization end zone. The Energy Supply Field is dominated by fossil fuels, the Atmosphere Field is swarming with carbon emissions. The clock is in overtime. We may have already lost. However, there may still be a chance for Team Humankind to seize possession of its senses and run the foot(print)ball to Doom’s end zone and sequester it.  Our job as coaches is to empower the Players, clarify the plays, and whip the teams into shape for the win.

The rules of the race are simple. The first state to achieve a net zero carbon economy with the best quality of life, wins. We’re all going to get to zero – but one state is going to get there FIRST.

The first state? Is this for the US or the whole world?

Other countries are welcome to join. As it grows, we will set up regional and international divisions and compete for the “World Carbon Cup.” We’re starting with the US because that’s our country. And if we can get to zero carbon in the US, it proves any country can have a great American quality of life with zero carbon emissions.

How about the developing world? Do they need to modernize their energy first? Are they in the race?

They’re definitely in the race. And they need to modernize and get their quality of life up to par. As Hans Rosling says – so many of them are below the “Wash Line”. That won’t do. This is why we include “best quality of life” in the criteria to win the race. The challenge of a race to zero carbon is to figure out how to provide a great life for everyone without the carbon emissions.  If you’re discounting other people’s quality of life, if you’ve got double standards, you’re disqualified from our game.

Why “States”?

It’s State v. State because the STATE is the ideal unit of competition. We have 50 of them, all unique. There can be red state solutions, blue state solutions, wild card solutions. We can give each a handicap (based on mild weather, hydro, urban advantage, for example). There is a lot of data available, aggregated at the State level.

Speaking of metrics, some states have carbon intensive industries but then they export those products to other states. How will you account for that?

You should join the Referee and Scoreboard Committee! That’s a great accounting question. We do need to account for the energy embedded in imported stuff. David MacKay does a great job of attempting to quantify this for the UK. We need to follow suit for each state of the US. Another area of dispute is emissions from agricultural activity. At present, the information on our website is simply the carbon emissions reported by the EIA. We’re working on a more complete picture.

Since it’s State v. State, do you mostly work with policy makers at the state level?

We work with individuals at every level. The framework is this: Your Team is your State as a Whole.  But a State is made up of the PLAYERS - individuals, voter-consumers - the core, indivisible units of the team. Players coordinate to form SPECIALIZED TEAMS within the state team to achieve certain objectives - to execute specific Plays. Specialized Teams can be anything: a corporation, a club, a family, the aforementioned divestment campaign, a senator’s staff. The efforts of individual Players in Specialized Teams add up to the State score.

It’s important to realize: all the actual work comes from the Players. The effectiveness and scalability of the play comes from their teamwork. The race to zero carbon requires attention to detail, from every member of the team. To quote Bill Walsh, one of the greatest coaches in NFL history:

"From the start, my prime directive, the fundamental goal, was the full and total implementation throughout the organization of the actions and attitudes of the Standard of Performance...I had no grandiose plan or timetable for winning a championship, but rather a comprehensive standard and plan for installing a level of proficiency - competency - at which our production level would become higher in all areas, both on and off the field, than that of our opponents.  Beyond that, I had faith that the score would take care of itself."

Standard of Performance? How does this apply to the Race to Zero Carbon? What’s your strategy?

As they say in Screenwriting 101, “Show, don’t tell.” It might be best if you come see the strategy for yourself. Footprints to Wings would like to invite you to our First Annual Race to Zero Carbon 5K, 10K and interactive festivities on Saturday, May 21, 2016, from 9 am to 2 pm.  The event will be held outside (rain or shine) at Duke Island State Park in Bridgewater, New Jersey. Runners welcome: Register here! Organizations with a zero carbon play welcome: Apply here to showcase it! And of course, sponsors also welcome: Email us!

In addition to the 5 and 10K, the event will feature a walk-through Zero Carbon Coaching Clinic (ZCCC). The ZCCC will allow participants (the Players) to visualize the many roles they can play in the Race to Zero Carbon; the many options and actions available to move toward zero; and how THEY NEED TO ADD UP for their state to get ALL THE WAY to zero carbon. Yes. All the way. This is the majors. No token effort. We will be there with math, maps and improv to help you see how it all connects.

As for the organizations, in our framework, each is there as a “Specialized Team” that has several “Plays” to execute to move us toward zero carbon. This event is an opportunity for each organization to put on its coaching cap (literally, we will provide coaching caps); to explain how its “Play” will make a difference in the race to zero carbon (if you need help with this, we will coach you); to explore how best to coordinate with other Specialized Teams; and to recruit Players. It’s OK: Players can join multiple teams!  Our goal is to get the Players to understand the whole game, to commit to going all the way to zero, to connect with all the Specialized Teams (like your organization!) that resonate with them. Let’s inform & pump everyone up to execute the plays & get to zero ASAP!

For those who can’t make it to New Jersey on May 21, hold tight, the race will be coming to you. Maybe not today. Maybe not tomorrow. But soon. And for as long as it takes the US to get to Zero Carbon. This is just the first of such Zero Carbon Coaching Clinic events around the country. They will be replicated and tailored to each state. Join us to help make it happen!

Don’t you think zero-carbon is unrealistic? Why not advocate for less carbon, or more clean energy?

If you fight for your limitations, you get to keep them. I know some people say the only practical way forward is “Progressive Incrementalism.” That may be as good as it gets. That’s certainly better than “Token Incrementalism.” But it doesn’t stir my soul. I believe in us. I think we’re capable of extraordinary things. The planet has gone to great trouble and expense to generate sentient beings that are capable of consciously adjusting the climate. Maybe we’re supposed to stop making excuses and level up to this responsibility. Excuses or excellence. We are free to choose.

To be clear, the goal is “net” zero. If you can’t cut the carbon emissions in some area (like airplanes), you can offset them through carbon removal (the “Reforestation Play,” “Soil Remineralization Play,” and so on). Looking at the question again, there are three questions packed into it:

  1. Is a single-minded focus on carbon the right way to go?
  2. Do we really need to go all the way to (net) zero to stop climate change?
  3. Are we capable of going all the way?

With regard to the first question: It’s not just net “carbon,” it’s anthropogenic greenhouse gas emissions measured in carbon dioxide equivalents (CO2e). True, climate change is more complex than CO2e emissions. However, at this stage of our knowledge, they are a key factor to control, and provide an easily measured goal around which to hold a contest.

With regard to the second question: If we need to go all the way, we need to go all the way. My understanding from reading James Hansen is – we need to go all the way.  Yesterday.

The third question has two dimensions, technical and social. Do we have the technology? Do we have the will? Pep talk aside, the only way to get a genuine answer to this question is for all the players to understand all the plays, and to choose the ones that work for them and coordinate with each other to execute them. That’s the reasoning behind our community based Zero Carbon Coaching Clinics. Many people blithely advocate this or that, without grasping the implications of what they are advocating. Ask a 100% renewable energy enthusiast how many wind turbines they think will be required in their state and where they should be located. Most have no idea. Ask a nuclear enthusiast how many more nuclear power plants are required and where they should be located. Ask people what they know about the mining requirements of their preferred solution. It’s all vague. Without a clear, common understanding of the options, we can’t make informed choices. We can’t get through the “debate beat”. And until we do, we’re not going anywhere. Some of the information you’ll uncover through our process is depressing. Sobering. But stick with it, and you start to see much more on the field. Other possibilities. Your own values and true preferences. Friends in the stands, anxious, cheering. A team member, on the field, waving. The clouds part. You see the play, shining. And you realize what you need to do.

Getting back to incrementalism, let’s invoke coach Marie Kondo, who wrote “The Life-Changing Magic of Tidying Up.” Her philosophy is to systematically tidy up your life once and for all. Discard everything you don’t need. Keep only the things that spark joy.  What we’re doing is creating a system to help people systematically go through and declutter the excess carbon in their lives, once and for all. A joyful switch.

Finally, in “The Climate Fix,” Roger Pielke Jr. repeatedly observes: “No one knows how fast a major economy can decarbonize.” To which Footprints to Wings says: let’s find out. I’ll race you there.

What do you think are the biggest challenges to rapid decarbonization?

Having the conversation. Getting people to stop refusing the call. Debate. Decide. Do. 


Main image: Alex Caranfil

March 15: Jessica Lovering Speaking at Climate One


Gates Continues Push for Energy Innovation

Last week Gates was interviewed by Andy Revkin at DotEarth and Ezra Klein at Vox. He also answered my question on electrification in poor countries, which was pretty cool:

What role should centralized and decentralized technologies play in powering emerging economies, especially in sub-Saharan African countries? —Alex Trembath

Gates: The lowest cost of power when you have density of usage, comes with a large grid. But that requires the utility to have the capital and to get people to pay. And so even though that’s the end goal for urban areas, you want to be able to start out house by house, then create micro-grids and mini-grids and as they grow have technical standards that can coalesce them and lead to that most efficient solution. It’s been great seeing in Kenya, for example, how solar energy at the household and micro-grid level really is coming in for modest energy needs like cell phone recharging, charging lights at night. And there are interesting technologies for micro-geothermal, micro-hydro type solutions that don’t have some of the complexities of large-scale hydro. Creating micro-grids, which are then integrated, will be part of the process of getting Africa electrified.


CityLab's Julian Spector had a great piece on why the costs of nuclear power plants have risen in the United States. Spector's piece focuses on new research from Breakthrough's Jessica Lovering and Ted Nordhaus and Carnegie Mellon's Arthur Yip. He pretty much nails it:

Nuclear won’t make sense everywhere. It is, however, one of the largest sources of low-carbon energy currently operating worldwide. When governments weigh the costs and benefits, they should learn from the American example but recognize that it’s something of an anomaly. The rest of the world has lessons to share, too.


Lovering also wrote about her paper in Greentech Media last week, emphasizing the contexts where nuclear power has seen cost declines over time. For instance, of South Korea, Lovering writes:

There are many potential explanations for the positive learning in South Korea’s nuclear industry. Most notably, South Korea usually built reactors in pairs, often with four to eight reactors at a single site. South Korea also has a single utility that owns and operates all the nuclear power plants. It also happens to design and construct all the plants.


Joe Fassler wrote last week at the Smithsonian on why palm oil is not quite the environmental scourge we all imagine, though the choices we make about where to produce it can have major environmental impacts. Breakthrough's Marian Swain wrote similarly a few years ago.


I appreciated this quick video explainer on reviving mammoths, a major project of the Long Now Foundation:

How to Clone a Mammoth by Beth Shapiro from Princeton University Press on Vimeo.


Michael Shellenberger and Rachel Pritzker are in India this month, and have a great note at India's Observer Research Foundation on why energy transitions are essential for environmental progress. 


Shellenberger could also be found last week on stage at UCLA talking renewable energy with Stanford's Mark Jacobson and Ken Caldeira, NRDC's Dale Bryk, and the Economist's Oliver Morton. Check it out.


With Oscar season ending this week, I enjoyed this piece by Sarah Goodyear on why doomsday scenarios have become so prevalent in modern cinema.

Why are apocalyptic storylines striking such a nerve at this particular point in history? “I agree with a lot of scholars who say 9/11 is the incision in the American consciousness that changed everything,” says Karen Ritzenhoff, a professor in the department of communication at Central Connecticut State University and the co-editor of the recent book, “The Apocalypse in Film: Dystopias, Disaster, and Other Visions About the End of the World.” “Before 9/11, even if there was a Godzilla who was taking over New York, or there was a tidal wave that broke the Statue of Liberty, in the end you could survive it. There would be some kind of heroes. But since 9/11, there is no resolution, there is no happy end.”


Ecomodernism is, clearly, quite taken with the concept of modernity. That being the case, I greatly enjoyed this essay on modern civilization by Mike Lind at The Smart Set, in which he takes issue with The Stanford Review:

What the editors of the Stanford Review are calling “Western civilization” is really not Western civilization — that is, the civilization of ancient Greece and Rome or medieval European Christendom. What they have in mind is liberal modernity, which is not much older than the Enlightenment and the Industrial Revolution. The history of modernity goes back only three centuries or so, not three millennia. Isaac Newton, John Locke, and James Watt are its founders and culture heroes, not Homer or Aristotle or Aquinas.


'Manifesto' coauthor Erle Ellis has a great piece explaining his new theory of sociocultural niche construction.


Finally, the ecomodernist tweet-of-the-week goes to Breakthrough's own Linus Blomqvist, for dinging Bernie Sanders' strange nostalgia for farming:


Getting Better

The whole thing is great, with observations focusing on the decline in violence but also including long-run progress on education, health, and technology. This bit struck me:

Our new crises of invention are so challenging because the bads are so tightly bound with the goods. Breaking the world’s slave chains was a moral triumph; breaking the world’s supply chains is not an option. Climate change is a crisis of invention. So many more people, living longer, eating better, traveling more to see the world and one another — is it not poignant that these human goods are engendering a mortal danger?


Mark Lynas continues his invaluable thread on the facts, controversies, and dangerous rumors surrounding the Zika virus. 


Over 1,400 scientists signed an open letter in support of the American Society of Plant Biologists' position in favor of genetic modification (GM) research and technology deployment.


This Human Progress piece by Chelsea German and Marian L. Tupy, on why capitalism isn't "starving humanity," references Jesse Ausubel's 2015 article in the Breakthrough Journal on how modernization allows for the return of nature. 


Will Davis at the American Nuclear Society has a fantastic post on where/why nuclear costs have risen over the decades, featuring recent Breakthrough research:

[Breakthrough's Jessica] Lovering tells us that she found two major surprises when researching the global, historic costs of nuclear energy.  First was that every country experienced lowering costs in the early years of nuclear plant construction. Second was that South Korea continues to experience reducing costs, even now.  She attributes South Korea’s continued cost reduction in part to the focus on standardized (in fact, duplicate) nuclear plants being built at various locations. (This was realized and implemented in the US, in the SNUPPS program and also in Duke Power’s ‘Project 81’ program.)


Over at Future Tense, Arizona State's Brad Allenby argues that we shouldn't rush to formalize the Anthropocene as a new geological epoch. 'An Ecomodernist Manifesto' coauthor Erle Ellis made a similar argument in an interview last month, suggesting that it might be too early to understand, let alone perfectly describe, humanity's impact on Earth. Allenby seems to agree with that, and takes the argument a step further:

Indeed, our planet is today increasingly populated by complex adaptive systems that integrate human and natural components. And as humans increasingly integrate with the technology around them, and as the evolution of that technology continues to accelerate, it is questionable that what we will have in 50 or 100 years will still be anything like “anthro.” We are trying to tie geologic time to a windstorm.


The excellent journal Issues in Science & Technology is featuring a great retrospective by Andy Revkin on his decades as an environmental journalist. Revkin continues to be an indispensable voice in the environmental discourse, and I found it fascinating to read him retrace his path.


At National Review, Robert Bryce covers the new Save Diablo Canyon campaign being spearheaded by 'Manifesto' coauthor Michael Shellenberger:

To be sure, the clash between the New Guard Greens and the Old Guard involves technology and the belief that technological advances are essential in the effort to help bring people out of poverty. Technological progress is a fundamental tenet of ecomodernism. 


Rebound Critics Misfire

In a recent duo of blog posts, energy economists Danny Cullenward and Jonathan Koomey broadly challenge the notion that so-called rebound effects are likely substantial and should be dealt with as such. The basis for their claim is their recently published peer-reviewed article, which challenges my 2013 analysis finding consistently large long-term rebound effects across 30 sectors of the US economy between 1980 and 2000.

Cullenward and Koomey argue that, due to fundamental issues in the dataset used there, the large rebound magnitudes reported by Saunders are “wholly without support” and, from this, suggest that a 2011 survey of the rebound literature by The Breakthrough Institute, in which my research was first referenced, should not be taken seriously.

Before addressing the specific criticisms of my 2013 analysis, it is important to note that Cullenward and Koomey greatly exaggerate the importance of my analysis. The Breakthrough Institute review cited over 100 peer-reviewed articles, which all pointed to rebound magnitudes significantly larger than most energy efficiency advocates have been willing to acknowledge. Groups including the International Energy Agency, the Intergovernmental Panel on Climate Change, the European Commission, and the United Kingdom Energy Research Centre have conducted similar literature reviews and reached similar conclusions.

Notably, Cullenward and Koomey do not actually demonstrate that the issues they find with the dataset I used significantly affect the reported rebound magnitudes in my paper. Their only claim is that, were these data issues somehow corrected, the reported magnitudes would likely be different, and indeed they are agnostic as to whether rebounds would be systematically larger or smaller, let alone by what magnitude. 

Absent any analysis suggesting that the problems they have identified systematically overstate rebound magnitudes in my analysis, Cullenward and Koomey simply complain that they raised concerns about problems with my dataset at a Carnegie Mellon workshop in 2011. This is indeed the case. I subsequently published my analysis anyway, and it passed peer-review muster, because, as I will demonstrate below, there is no evidence that those concerns are particularly material to the conclusions of my analysis.

Theory Misapplied

To their credit, Cullenward and Koomey take a deep dive into the dataset (which was developed by Harvard professor Dale Jorgenson) and the methods used in my 2013 analysis. Regrettably, they misunderstand them. While the theory-based objections that Cullenward and Koomey raise against my analysis – and against the underlying Jorgenson et al. dataset and analyses – might appear to rest on sound microeconomic principles, they turn out to rest on the wrong microeconomic principles.

At bottom, Cullenward and Koomey’s criticism rests upon the supposition that because absolute levels of energy prices vary substantially across regions, national averages of those prices cannot be used to accurately estimate rebound. Energy prices in, let’s say, Texas are different from energy prices in California, and hence, national average prices cannot be used to estimate how individual firms will respond to changes in prices.

This problem may appear to be of serious concern intuitively but is actually of minor consequence analytically. What Cullenward and Koomey misapprehend in their critique is that the rebound magnitudes reported in Saunders – and the energy and other economic forecasts put forward by Jorgenson et al. – all rely at their foundation on measurements of so-called substitution elasticities, which in turn depend for their measurement on observed changes in input prices.  What matters to rebound estimates is not the absolute level of energy prices in any given locality but the variation in those levels across the time series.

The physical intuition for why variations rather than absolute magnitudes dominate in measuring rebound magnitudes is this: a producer contemplating changing her production technology in the face of an energy price increase will look to the incremental value of the energy quantity reductions to be had from deploying the technology – the difference in her profits at the new energy price were she to stick with today’s energy quantity compared to her profits at the new energy price when moving to the new quantity resulting from the different technology.  “Do my profit gains from reduced energy use outweigh the cost of changing the technology?” will be her decision criterion.  Her “substitution response” is then related to how much her energy quantity changes given the change in energy price.  Notably, her behavior depends not on the absolute energy price, but on how much it changes over time.  The same holds true for her competitors in other locations who may face somewhat higher or lower absolute energy prices.

The same dynamic works for energy efficiency gains in the absence of a price movement, but in reverse. Even with no change in energy price, it may make sense to invest in new energy efficiency technology that becomes available.  But such a gain acts just like an energy price reduction, reducing the effective price of energy services (thereby invoking rebound in physical energy use).  Again, there is a substitution response that depends not on the absolute price of energy, but the change in the effective price of energy.
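To make the producer’s decision criterion concrete, here is a minimal Python sketch; the function and numbers are hypothetical illustrations of the logic just described, not anything taken from the paper or the Jorgenson dataset:

```python
def should_switch(new_price, current_quantity, new_quantity, switch_cost):
    """Sketch of the criterion described above: do the profit gains from
    reduced energy use, valued at the new energy price, outweigh the
    cost of changing the production technology?"""
    value_of_energy_saved = (current_quantity - new_quantity) * new_price
    return value_of_energy_saved > switch_cost

# A producer weighing a technology that cuts energy use from 100 to 80
# units after a price move to 12 per unit, against a switching cost of 150:
print(should_switch(new_price=12.0, current_quantity=100.0,
                    new_quantity=80.0, switch_cost=150.0))  # True
```

The sketch captures only the incremental comparison; what the econometrics then picks up, as argued above, is how the chosen quantity responds to changes in the (effective) price over time.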

While this shows that differences in absolute energy prices seen by producers matter little to the measurement of rebound, note also that the possible difference between average and marginal price that Cullenward and Koomey worry about is likewise of little or no consequence.  In a microeconomic sense, it is true that producer decisions depend on marginal prices rather than average prices, and that marginal prices tend to be higher than average prices (though they can be lower), but – whether higher or lower – we have seen that the absolute price levels matter little: it is differences in the prices over time, whether average or marginal, that matter.

Empirical Evidence

To highlight the distinction elucidated above, we can look at regional differences in natural gas prices as reported by the EIA, shown below:

As can be seen, while commercial enterprises in different areas of the US paid different absolute prices for their natural gas supply, the price changes they experienced are highly comparable.  Energy being a fungible commodity, energy prices generally are locked together across locales by the underlying movement of energy prices at reference locations, so changes are likewise linked. 

Technically, the role of energy prices in measuring rebound magnitudes is somewhat subtler than this intuitive picture.  The econometric measurement relies on four equations whose form indicates that what matters to the econometric measurement is variation over time in the natural logarithm of the energy price and, related to this, variation over time in the relative change in energy price.  These are the proper comparisons to be made when asking whether regional differences in energy price might significantly affect rebound measurements. 

Without regional Input-Output data, one cannot run this kind of rebound analysis regionally.  But we can look at variations in the natural logarithm of regional gas prices to ascertain whether those changes are likely to significantly alter rebound estimates based on national averages. To illustrate, we can create a historical plot of the natural logarithm of the national average commercial natural gas price vs. the natural logarithm of the California commercial gas price, using EIA data (available for the years 1967-2014):

As seen, over 48 years there has been an extremely tight correlation between (the log of) the national average price of natural gas delivered to commercial customers and (the log of) the gas price delivered to California commercial customers.  What matters to econometric measurement is the variation.  Variations in one translate directly into variations in the other.  The correlation is 0.98; one rarely encounters a correlation coefficient this high.  This dynamic applies to the first three econometric equations referenced above, which carry the greatest weight in the analysis.
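The comparison being described can be sketched with synthetic data; the series below are stand-ins (a shared random walk in logs, with the “state” series held at a persistently higher absolute level), not the EIA series used in the figures:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for 48 years of annual prices: the state series sits
# roughly 40% above the national series in absolute terms, but the two
# share the same year-to-year movements.
log_national = np.log(4.0) + np.cumsum(rng.normal(0.03, 0.08, 48))
national = np.exp(log_national)
state = 1.4 * national * np.exp(rng.normal(0.0, 0.02, 48))

print(state.mean() / national.mean())                      # large gap in absolute levels...
print(np.corrcoef(np.log(national), np.log(state))[0, 1])  # ...near-perfect correlation in
                                                           # logs, which the measurement uses
```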

The fourth econometric equation fits variations in relative (percentage) price changes over the horizon of the data series.  The relevant plot is shown below:

Again, annual relative (percentage) price changes for California are highly correlated with national averages, and the deviations appear to show little consistent bias.  Furthermore, when analyzed across several states (two reported in this post), the errors are not systematically biased one way or the other.

Importantly, the deviations that matter to the econometric analysis are small in percentage terms – a very far cry from the large absolute magnitude deviations (hundreds of percent) Cullenward and Koomey would have us believe discredit the econometric measurements.

To be complete about it, account must be taken of the other energy sources beyond natural gas.  And while it would be sufficient to undercut the Cullenward/Koomey claims to establish that regional price differences in one state make little difference in measuring rebound (if rebound magnitudes in California are as estimated in Saunders using national averages, such rebound magnitudes become generally credible), we add Texas for comparison.  The table below shows the relative energy use by fuel type and sector for the two states (the electric power sector is excluded, so coal is not included).

Table 1. Relative energy use by fuel type and sector, California and Texas

With this in mind, we can summarize the true impact of regional energy price differences on the econometric measurements of rebound magnitudes in Saunders (where a and b are the coefficients in the regression equation ln(state price) = a*ln(national average price) + b):

Table 2. Econometric indicators for the log(price) equations

For California, the coefficient a of the log(price) equations is very close to unity across all fuel types for commercial customers, and across all fuel types for industrial customers with the exception of electricity. This means variations in California prices track variations in national average prices very closely over the domain of the data – certainly not divergent enough to substantially affect an econometric measurement. The fit is tight, with high correlation coefficients, and the small variations in the b parameter show no systematic bias. Texas prices likewise trend closely with the national average.
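As a sketch of how the a, b, and correlation entries in Table 2 could be estimated, an ordinary least squares fit of ln(state price) on ln(national price) suffices; the series here are again placeholders:

```python
# Minimal sketch: OLS fit of ln(state price) = a*ln(national price) + b,
# yielding the a, b, and correlation indicators of the kind reported in
# Table 2. Placeholder series, not the actual EIA data.
import numpy as np

national = np.array([2.10, 2.45, 3.10, 4.80, 7.90, 9.40, 8.10])
state = np.array([2.40, 2.80, 3.60, 5.30, 8.60, 10.20, 8.90])

a, b = np.polyfit(np.log(national), np.log(state), 1)   # slope, intercept
r = np.corrcoef(np.log(national), np.log(state))[0, 1]
print(f"a = {a:.3f}, b = {b:.3f}, r = {r:.3f}")
```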

While it is true that industrial electricity prices are not so closely aligned with national averages, the previous table shows that electricity forms a small portion of industrial energy use (less than 20% in both states), so its impact is significantly attenuated.

Table 3 below reports comparable results for the fourth econometric equation (where a and b are the coefficients in the regression equation state relative price change = a*(national relative price change) + b):

Table 3. Econometric indicators for the relative price equation

Again, we see close correlations between national and regional relative price changes, sustained over long periods and through large swings in magnitude, with the exception of electricity. But this is only one of the four econometric equations (and so carries less weight in the measurement), and again, industrial electricity consumption in both California and Texas was less than 20% of total industrial energy use.

From these results it is evident that the national and regional energy price dynamics that matter to the econometric measurement of rebound are in close alignment. Commercial and industrial customers alike, in both California and Texas, experienced energy price dynamics remarkably similar to those of national average energy prices. And what differences remain are far smaller than the large absolute energy price differences Cullenward and Koomey claim are determinative in their critique.

It takes some experience running hundreds of econometric analyses to develop a sense of what matters and what does not (experience neither Cullenward nor Koomey can claim). That experience, together with the above analysis, tells this analyst that regional differences in absolute energy price levels, when analyzed correctly against empirical evidence, are of little consequence and in no way undermine the estimates of rebound magnitudes in my 2013 analysis.

Broader Implications

The logic used by Cullenward and Koomey, were it valid, would invalidate much more than my 2013 analysis of rebound magnitudes in the production sectors of the US economy. The models relied on by the IPCC to forecast future energy use (and associated emissions) suffer from the same ostensible shortcomings Cullenward and Koomey allege afflict my rebound analysis, but in far greater measure. These global models aggregate energy efficiency dynamics not merely at the sectoral level, as in Saunders, but at national, regional and, ultimately, global levels. The models used in the latest IPCC report are not reported to be calibrated with econometric measurement; rather, they assume values for key parameters (such as the substitution elasticities central to energy forecasting), and to the extent they are otherwise calibrated to approximate observed data, those data are aggregated to a high level – national, regional, and sometimes global averages.

The logic offered by Cullenward and Koomey then leads to the necessary conclusion that these analyses are “wholly without support.” In other words, in such a world one cannot claim that finding a pathway toward a 2°C world constitutes an urgent task, since we could draw from these models no trustworthy conclusion about the future path of global energy use. In this light, how can Cullenward and Koomey even know the task is urgent?

Rebound, of course, makes the situation even more urgent than their scenarios of projected energy use depict. As shown elsewhere, a review of 25 models designed to inform climate change mitigation policy, including those relied on by the IPCC, reveals methodological limitations in virtually all of them that prevent proper accounting for rebound effects and produce systematic understatement of their magnitude. The presence of large rebound effects means we have less time than is commonly believed to devise climate change mitigation solutions – less time than one would infer from IPCC forecasts.

Rebound means greater urgency, not less – provided, of course, one does not follow the line of reasoning explicit in the Cullenward and Koomey critique, which would supply ample reason to reject the IPCC forecasts altogether.


In many ways, the Cullenward/Koomey critique of the Saunders article is reassuring.  They have plainly taken a deep look at the analysis and, finding no methodological issues to criticize, were reduced to challenging the Jorgenson et al. dataset used in the rebound analysis.  But we see that their critique dissolves under careful examination owing to their misapplication of microeconomic theory to the data in question.  One can only feel gratitude in knowing that the Saunders analysis is robust enough to weather such a determined assault on its findings.

It has been known for over 20 years that rebound effects are welfare-enhancing.  The greater the flexibility of the productive economy to respond to energy price movements and new energy efficiency technologies, the greater will be this economic welfare gain – and rebound.

Perhaps the time has finally come to face the issue squarely: rather than devoting concerted effort to finding supposed flaws in studies showing large rebound effects, direct that effort toward taking advantage of rebound’s welfare-enhancing aspects while limiting its impact on energy savings. Researchers such as Karen Turner at the University of Glasgow are leading the way.

My 2013 analysis is not perfect, as indicated by what Steve Sorrell notes are “3 pages devoted to listing ‘cautions and limitations’” in that article. The Cullenward/Koomey critique can be added to that list, but it does not rise to anything resembling a fatal flaw.

Much work remains to be done to measure rebound magnitudes on the productive side of the economy. Far too little attention has been paid to this topic, which is puzzling given that the vast majority of energy worldwide is consumed in production sectors. Along the way, it would undoubtedly be useful to upgrade the models used by the IPCC and other organizations to properly account for rebound, and so enhance the credibility of the global energy use forecasts relied on by policy makers.



Cullenward/Koomey posts: http://www.koomey.com/post/136611184100; http://www.koomey.com/post/137309412653

Cullenward/Koomey article: http://www.sciencedirect.com/science/article/pii/S0040162515002541

H.D. Saunders, “Recent Evidence for Large Rebound: Elucidating the Drivers and their Implications for Climate Change Models,” The Energy Journal, 36(1), 2015.

H.D. Saunders, “The Khazzoom-Brookes Postulate and Neoclassical Growth,” The Energy Journal, 13(4), 1992.

H.D. Saunders, “A View from the Macro Side: Rebound, Backfire and Khazzoom-Brookes,” Energy Policy, 28(6-7), 2000.


Apples to Apples to Atoms

Future energy scenarios depend on assumptions about the prices and scalability of energy sources, often relying on historical learning curves to predict the future costs of various fuels or generation technologies. But the academic literature has become overly focused on comparing learning curves across energy technologies, often in an attempt to divine intrinsic economic qualities of each. In particular, it is common to contrast solar PV panels, often described as following Moore’s Law, with nuclear power, whose costs appear only to increase over time. But as new data show, the metric that matters most – the cost of generating electricity – follows no guaranteed trend for either technology.

Solar power has achieved substantial cost declines over the past decade, with module prices falling more than 70%. But the policy cost has been substantial: deployment programs like feed-in tariffs in Spain and Germany and net metering in the US have encountered serious opposition, and deployment of new solar in Germany has fallen in each of the last three years.

Nuclear power, meanwhile, is widely assumed to be too expensive to build, even exhibiting ‘negative learning.’ On the other hand, nuclear power plants provide large-scale, steady ‘baseload’ electricity once they are built. And when looking at actual costs of generating electricity, nuclear power remains among the cheapest sources, and countries that rely heavily on nuclear power have below-average retail electricity prices.

In our new paper published in Energy Policy, we present a more detailed set of historical costs for nuclear power around the world, which shows there is no intrinsic cost escalation. Nuclear exhibits a variety of cost trends, including recent experiences of cost stability and decline.

Historically, nuclear power plants have been constructed, not manufactured – built like airports or highways. It seems nonsensical to expect learning-by-doing in a country like the US, where dozens of different utilities contracted with dozens of different construction firms to build an ever-changing fleet of reactor designs under ever-changing regulation. Yet this is precisely what many studies assume when they compare the learning curves of nuclear power projects with, for instance, modular solar technologies. The problem is compounded by the fact that most studies rely on costs from a single country, the United States, and compare them to global solar panel prices.

For example, a 2006 paper by Jessica Trancik compares the per-watt capacity price of solar panels globally to that of nuclear power plants in the United States. There are several problems with Trancik’s analysis. One is that the solar costs examined are not installed costs but module prices alone, whereas the nuclear costs are full installed costs. Another is that a watt of nuclear capacity provides roughly three times as much energy over a year as a watt of solar capacity, so a chart comparing the levelized cost of electricity for the two technologies would be more enlightening. Lastly, solar’s decline is attributed to learning, but could also reflect factors such as commodity prices, market dumping by Chinese manufacturers, or increased market competition. (See Greg Nemet’s work for a breakdown of the drivers of cost declines in solar photovoltaics.)
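To make the energy-per-watt and levelized-cost points concrete, here is a minimal sketch of the arithmetic. All capacity factors and costs below are assumptions for illustration; none of the numbers come from Trancik’s paper or from ours:

```python
# Minimal sketch of energy-per-watt and a simplified LCOE calculation.
# All capacity factors and costs below are assumptions for illustration.
HOURS_PER_YEAR = 8760

def annual_kwh_per_watt(capacity_factor):
    """kWh produced per year by one watt of capacity."""
    return capacity_factor * HOURS_PER_YEAR / 1000.0

def simple_lcoe(cost_per_watt, capacity_factor, lifetime_years, discount_rate):
    """Very simplified LCOE ($/kWh): annualized capital cost divided by
    annual generation. Ignores fuel, O&M, and everything else."""
    # Capital recovery factor annualizes the upfront cost.
    crf = (discount_rate * (1 + discount_rate) ** lifetime_years
           / ((1 + discount_rate) ** lifetime_years - 1))
    return cost_per_watt * crf / annual_kwh_per_watt(capacity_factor)

# Assumed capacity factors: ~90% nuclear, ~25% solar in a sunny location.
print(annual_kwh_per_watt(0.90) / annual_kwh_per_watt(0.25))  # ~3.6x energy per watt
print(simple_lcoe(5.00, 0.90, 40, 0.07))  # hypothetical nuclear: $5/W, 40 years
print(simple_lcoe(1.50, 0.25, 25, 0.07))  # hypothetical solar: $1.50/W, 25 years
```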

Our new data shed light on nuclear learning, showing that even this complex technology can experience learning in certain circumstances. The problem with a single global nuclear learning curve is that there are hundreds of reactor types of different sizes, so trends among specific reactor models get lost in the aggregate. Isolating a single reactor type – say, gas-cooled reactors – reveals a robust learning trend across several countries. Gas-cooled reactors are simpler designs and inherently safer, which means they require fewer redundant safety systems.
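For readers unfamiliar with the mechanics, a one-factor learning curve has the form cost = c0 * Q^(-b), where Q is cumulative installed capacity, and the learning rate is the fractional cost drop per doubling, 1 - 2^(-b). A minimal fitting sketch with made-up data (not the reactor cost series from our paper):

```python
# Minimal sketch: fitting a one-factor learning curve cost = c0 * Q**(-b)
# by linear regression in log-log space. Data are made-up placeholders,
# not the historical reactor costs from our paper.
import numpy as np

cumulative_gw = np.array([1.0, 2.5, 5.0, 11.0, 24.0])        # hypothetical
cost_per_kw = np.array([4000., 3600., 3300., 3000., 2750.])  # hypothetical

slope, intercept = np.polyfit(np.log(cumulative_gw), np.log(cost_per_kw), 1)
b = -slope
learning_rate = 1 - 2 ** (-b)   # cost reduction per doubling of capacity
print(f"b = {b:.3f}, learning rate = {learning_rate:.1%}")
```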

Single countries can show consistent cost declines as well; witness the South Korean experience below. There are many potential explanations for the positive learning in South Korea’s nuclear industry. Most notably, South Korea usually built reactors in pairs, often with 4-8 reactors at a single site. South Korea also has a single utility that owns and operates all of its nuclear power plants, and that same utility designs and constructs them.

But even with truly comparable costs for different energy technologies, the comparison still isn’t apples-to-apples, because different technologies provide different qualities of energy. Even if the levelized cost of solar is comparable with that of nuclear or coal, utilities may place a higher value on reliable baseload power, and a higher value still on peaking power such as natural gas turbines. Papers like Trancik’s argue that large baseload technologies cannot achieve the fast learning rates of smaller, modular technologies. While this may be true for component costs, renewables may well experience escalation in installed costs as their market share expands, since utilities will begin to demand higher-quality energy from distributed renewables. Both nuclear and coal power saw costs rise in the United States in the 1970s as they were required to meet stricter safety and environmental standards. As renewable generation becomes a significant share of the mix, renewables may face more stringent demands of their own: load balancing, transmission siting limitations, curtailment to protect migratory wildlife or prevent grid overload, or phase momentum balancing.

What all this tells us is that power sector system costs are the most important consideration. Solar and wind have gotten much cheaper in recent years, but their value to the grid declines as they scale up. Nuclear power provides cheap, reliable electricity, but has proven difficult to finance and construct, especially in deregulated markets. How can we adopt a systemic approach and make sure all our promising zero-carbon power technologies fit together effectively?

Most experts agree that renewables and nuclear both have roles to play in a clean energy future. The question is how to drive progress in different technology categories, and how far the cost declines can continue. While learning rates across energy technologies vary widely, an apples-to-apples comparison adds little insight to the energy system questions that matter most.

The most pressing question is how to drive down the costs of reliable, low-carbon power. How do we share best practices across diverse technologies and global industries? And most importantly, how do we ensure that rapidly industrializing countries can develop their domestic energy industries at the end of the learning curve rather than at the beginning?


References and Further Reading:

Farmer, J. D. & Lafond, F. How predictable is technological progress? Res. Policy 45, 647–665 (2016).

Jamasb, T. & Kohler, J. Learning Curves for Energy Technology: A Critical Assessment (2007). http://www.dspace.cam.ac.uk/handle/1810/194736

Junginger, H. & Lako, P. Technological learning in the energy sector. Report (2008). http://igitur-archive.library.uu.nl/chem/2009-0306-201752/UUindex.html

Lovering, J. R., Yip, A. & Nordhaus, T. Historical construction costs of global nuclear power reactors. Energy Policy 91, 371–382 (2016). http://www.sciencedirect.com/science/article/pii/S0301421516300106

Neij, L. Cost development of future technologies for power generation—A study based on experience curves and complementary bottom-up assessments. Energy Policy 36, 2200–2211 (2008).

Nemet, G. F. Beyond the learning curve: factors influencing cost reductions in photovoltaics. Energy Policy 34, 3218–3232 (2006).

OECD Nuclear Energy Agency. Projected Costs of Generating Electricity 2015. (OECD Publishing, 2015).

Trancik, J. E. Scale and innovation in the energy sector: a focus on photovoltaics and nuclear fission. Environ. Res. Lett. 1, 014009 (2006).
