
Biotech and Pharma

In their early research and development phases, the modern pharmaceutical and biotech industries look very similar to the nuclear reactor industry. Both rely on early research and development in the public sector—at research universities and, in the case of nuclear, national laboratories. To bring a new product to market, both industries spend significant amounts on development and on navigating stringent licensing processes.

However, the market structure and business models for the two industries are quite different. For pharmaceuticals, while the upfront development costs are large, manufacturing costs are almost trivial, leading to significant and immediate profits once a drug is approved. For nuclear, the development costs are similar, but having a design approved is only the beginning. The real proof of concept comes in the construction and operation of reactors, a process that can take decades. Intellectual property also plays a much larger role in the pharmaceutical industry, with larger firms frequently buying out start-ups for their patents. Smaller firms, as a result, can focus on proving the science of their product without worrying about longer-term business models.

The regulators of both industries—the Food and Drug Administration (FDA) and the Nuclear Regulatory Commission (NRC)—each increased their stringency in response to major failures in their respective sectors: the thalidomide crisis for the FDA, and Three Mile Island for the NRC. Increased regulation at the FDA following the thalidomide crisis caused significant consolidation across pharmaceutical firms, as only the largest firms could afford the newly required staged clinical trials. There is, however, a significant philosophical difference between the two regulators. While pharmaceuticals must be proven safe and effective, the FDA also recognizes the large benefits to public health that new drugs bring, and it plays a secondary promotional role for the industry as a result. For nuclear reactors, the technology is regulated purely with the aim of mitigating harm, and nuclear power is treated as a commodity with no recognizable benefits to public health. If there were a single agency regulating the public health impact of coal, gas, and nuclear, this outcome might be different.

The major lessons that the pharmaceutical industry has to offer nuclear apply at the intermediate stage of development, after basic research, when small start-ups are developing their products and undergoing the first stages of licensing. For the nuclear industry, there should be more support for taking new technologies from the university lab to start-up companies. Nuclear also likely needs a staged licensing process, or at least more transparency and finite timelines for decisions. Smaller firms might need to focus more on intellectual property as well, to make them more attractive for acquisitions by large incumbents with the capital to move designs through development and licensing. But most importantly, the agencies overseeing nuclear development—the Department of Energy (DOE) and the NRC—need to more explicitly recognize and promote the benefits of nuclear power compared with other energy sources—namely, clean air and low-carbon, reliable, affordable power.


Read more from the report: How to Make Nuclear Innovative

Brief History of the Biotechnology and Pharmaceutical Industry

The pharmaceutical industry, not unlike the nuclear industry, emerged as a byproduct of World War II. Most of America’s large pharmaceutical companies today originally grew out of the postwar boom, particularly as a result of the state’s demand for penicillin.1 After experiencing rapid growth in the 1950s, the industry was upended by the thalidomide crisis in 1962. The drug was developed in Germany and marketed as a cure for a wide range of conditions, but was particularly useful for treating morning sickness and sleep problems in pregnant women. Prescribed in 46 countries, the drug was consumed more commonly than aspirin in some places. Unfortunately, it took several years to discover that thalidomide was the cause of severe birth defects in over 10,000 children, more than 40% of whom died before their first birthday.2 The United States was one of the only developed countries that did not approve the drug, thanks to a pharmacologist at the FDA who was skeptical of the drug’s safety and repeatedly asked the manufacturer for better evidence and more studies.3

Although the drug was never approved in the United States, the existing regulatory review process was incredibly lax, with no clear methodology for evaluating supporting evidence. To remedy this situation, Congress passed the Kefauver-Harris Amendment in 1962, which included such common-sense measures as a “proof of efficacy” requirement that is still the basis of the drug approval process today.

The immediate effect of this regulatory change was industry consolidation. With the massive investments and long lead times required to bring new drugs to market, only larger firms such as Pfizer, Merck, and Johnson & Johnson had the necessary capabilities to compete.

Concurrent with industry consolidation, the seeds of the biotech industry were sown. Biotechnology, at least in pharmaceuticals, is distinguished by its reliance on genetic engineering to produce new compounds. A series of breakthroughs throughout the 1970s culminated in the first genetically engineered human insulin in 1982, the fruit of a collaboration between Caltech and the first biotech firm, Genentech. This paved the way for the emergence of other small start-up biotech firms in the 1980s, typically composed of only a few successful scientists.4

While the number of pharmaceutical companies remained constant during the 1970s, the 1980s was a period of rapid growth driven by the emerging biotech field.5 Though the industry itself was evolving, little changed in terms of output, not only because drug development takes a long time, but also because biotech research was still a relatively small part of total pharmaceutical R&D. Roughly 20 new molecular entities (NMEs) are approved each year (though year-to-year variation is large),6 and by 1988, only 5 had actually come out of biotech research.7 By the end of the 1990s, however, the FDA had approved more than 125 biotech drugs.8

As biotech grew, the dominance of the largest pharmaceutical firms fell. From 1950 until the early 1980s, the 15 largest pharmaceutical companies were responsible for roughly 75% of NMEs per year; today, their share has stabilized around 35%.9 Generally, large pharma has been good at generating successful follow-on approaches (70% of follow-on approaches come from large pharmaceutical firms) but much less successful at developing novel treatments. Biotech has taken up much of this slack. From 1998 to 2008, biotech companies were responsible for nearly half of new drugs with novel mechanisms (a subset of “especially innovative” NMEs) and 70% of orphan drugs in the pharmaceutical industry.10 Biotech had also increased its share of blockbusters—drugs whose annual sales exceed $1 billion—from 8% to 22% by 2007.
 

The biotech-pharma networked model suggests that smaller firms can play a key role in industry innovation.


For the last few years, anywhere between 5 and 10 biotech drugs have been approved annually,11 and perhaps more significantly, biotech has been growing at roughly twice the speed of the pharmaceutical industry as a whole (roughly 10% per year since 2009 versus 6%).12

The emergence of biotech also led to a steady increase in inter-firm collaboration, whether in the form of joint research projects, strategic alliances (where one firm does one part of the process, and another firm does another), or mergers and acquisitions. By 2000, roughly 25% of corporate-financed pharmaceutical R&D came out of joint ventures, 3 times as much as in 1990 and 20 times as much as in 1980.13 From 1990 to 2010, merger and acquisition deals increased fivefold. As a subset of the industry, biotech is especially reliant on external forms of collaboration, with nearly half of biotech R&D funding coming in the form of partnerships (since 2009 at least).14

Since most biotech firms are small and rarely have the means to take their drugs all the way from preclinical trials to commercialization, collaboration is a necessity for them. For larger pharmaceutical companies, the growing interest in external collaboration can be traced back to a paradigm shift in research methodology.

With the tremendous advancements in genetic engineering, chemistry, and computational biology, the 1980s opened up the possibility of rationalizing the drug development process.15 Prior to that period, companies principally relied on “random screening” to identify promising compounds for drug development.16 This process involved testing a large number of chemicals to determine how they interacted with the targeted disease. By the 1980s, this process had run into diminishing returns, and the industry switched to a “guided search” model,17 thanks to the advances in computation and genetics. As a consequence of this shift, large research labs and capital were no longer as important to drug research, and the value of genetic expertise increased, bolstering the comparative advantage of highly specialized biotech firms.

According to Gottinger and Umali (2011), this paradigm shift wasn’t in itself sufficient to push large pharma toward more collaboration. Initially at least, pharmaceutical companies were convinced they could do much of this guided research themselves.18 But the early and remarkable success of Genentech changed this perception. Its first two blockbuster drugs, Humulin and hGH, were developed and commercialized in close partnership with larger pharmaceutical companies. Genentech started working with Eli Lilly in 1978 and jointly created Humulin, approved in 1982 and a blockbuster a few years later.19 Genentech also worked with the Swedish firm Kabi, starting in 1977, on hGH, which was approved in 1985. Genentech’s success helped shake up the pharmaceutical industry and encouraged other big players to seek out partnerships with biotech firms, which they did in much larger numbers from the early 1990s onwards.

While there is no set model for how these partnerships work, some broad trends can be observed. One common pattern is for biotech to focus on the early stages of new drug development, especially preclinical trials. If the early signs are promising, they will enter a licensing agreement with a larger firm. Another option is for the larger firm to simply acquire the start-up, a pattern that is very common. Just last year, large pharmaceutical firms spent a few billion dollars buying the rights to biotech drugs or the companies themselves.20 At this stage, it is rare for any single company to have invented, tested, and commercialized an NME solely internally.

Though again the setup varies from one case to the next, many biotech firms preserve much of their independence even once they have been bought. They maintain their separate offices, research direction, and internal structure, taking on the role of “centers of excellence.”21 Generally, biotech firms boast human capital better equipped to harness the cutting edge of research, as the majority of their employees have PhDs and strong ties to leading research universities.22
 

The nuclear industry needs more support in turning new technologies into start-up companies.


From the perspective of biotech firms, there is little doubt that this collaboration has been beneficial. Apart from Genentech and a few others, very few biotech firms have actually succeeded in bringing their inventions to market. Reviewing the evidence from the 1990s, Owen-Smith and Powell (2004) find that network ties are a significant predictor of performance in biotech.23 More recently, Munos (2009) finds that acquisitions lead to a 120% increase in NME output for small companies.24 Interestingly, increased collaboration or consolidation between large firms isn’t as strongly associated with increases in NME output.

Since the approval of NMEs is relatively rare, most partnerships and external collaborations don’t actually lead to a new drug approval. In most cases, they are formed in the hope of achieving the next milestone on the long road toward drug approval. Indeed, the average R&D alliance in biotech lasts less than 4 years, whereas the drug development process takes closer to 10–12 years.25 The fact that large pharma has been willing to bet substantial sums on biotech ventures still far from commercialization has played a key role in driving the growth of the latter, and has allowed smaller firms to secure funding (often from VCs) despite having a product that is still many years from market.26

In sum, there has been a very clear shift toward a more networked innovation model over the last couple of decades in the pharmaceutical industry. This shift has been driven by three factors: the relative fall in the dominance of large pharma, a shift in the research paradigm, and the emergence of small research-focused biotech firms. However, this three-part explanation is incomplete, as it gives insufficient credit to one of the main drivers of this transition: the state.


Nuclear and Pharmaceuticals: An Industry Comparison

The parallels to the nuclear industry are obvious. In both sectors, development of new products stretches over multiple years (or even decades) and requires significant upfront investment. Both commercialization processes also require significant investment. However, pharmaceuticals are a much larger industry than nuclear in the United States, comprising 23% of all private R&D in 2013. Pharmaceuticals add over $1 trillion to the US economy every year.27 But where the pharmaceutical industry spends over $40 billion on R&D annually,28 the US nuclear industry spends under $500 million. The pharmaceutical industry releases new blockbuster drugs every year, while the nuclear industry is struggling to deploy its first new designs in 30 years.


Role of the State

The pharmaceutical industry has always been closely interwoven with the state. The state’s wartime demand for penicillin created the industry, the Kefauver-Harris Amendment drove consolidation, and the DNA and genetic research conducted in university labs across the world in the 1970s kick-started the biotech revolution. While these contributions are widely recognized, the state’s role after 1980 is perhaps less well known, but it was no less critical, especially for the development of biotech.

First, the state enacted a suite of legislation to boost the industry. The widely known Bayh-Dole Act of 1980 allowed research sponsored by the National Institutes of Health (NIH) to be patented. The slightly less-well-known Stevenson-Wydler Act of the same year required publicly funded research institutions to form technology transfer offices and to do more to make their research available to businesses. In 1983, the Orphan Drug Act was passed; its aim was to encourage the development of drugs for relatively rare diseases through generous tax credits, funded research, extended IP, and fast-track FDA review. The Orphan Drug Act played an especially important role in supporting the nascent biotech industry. By the early 2000s, 90% of the revenue of the four biggest biotech firms—Genentech, Biogen, IDEC, and Serono—came from drugs that benefited from the Act.29

Through the NIH, the state also provided direct research funding for breakthrough drugs. NIH funding grew significantly between the mid-1980s and the mid-2000s, at an average of 2.9% per year; average annual spending in the 1980s was $10 billion, compared to $35 billion today.30 By 2000, NIH funding accounted for over half of nondefense public R&D, up from 30% in the mid-1980s.31 This research proved remarkably effective for the biotech industry—Vallas et al. (2011) estimate that 13 of the 15 blockbuster biotech drugs on the market in 2007 benefited from NIH funding in their early stages. Equally influential was the Small Business Innovation Research (SBIR) program; set up in 1982, it was tasked with funneling federal dollars to R&D in small businesses, many of which were in the biotech industry.32

The combination of direct funding, subsidies, and supportive regulation was clearly a major factor in explaining the growth and success of the US biotech industry. In fact, the broad range of policy support gave the US biotechnology industry a unique advantage over its international competitors. While other countries also provided direct funding for genetic research, they lacked complementary institutions like the Orphan Drug Act or SBIR, which inhibited the growth of their own biotech industries.33


Drug Development Costs

Estimates of the cost of new drug development aren’t easy to pin down. Certain drugs can cost little more than a hundred million dollars to develop, whereas others can exceed the billion-dollar mark. What’s more, there is no universally agreed method for estimating drug development costs.34 One widely cited study out of Tufts University estimated the average cost of drug development to be $1.4 billion in 2016, a number that includes the cost of failures as well as successes. Tufts estimates the opportunity cost of investing in drugs to be an additional $1.2 billion, an estimate of the forgone return during the 10–12-year investment period. Tufts’s estimate has been the subject of considerable controversy, especially since the center hasn’t released the raw data from which these numbers are derived. Still, a review of these debates strongly suggests that most drugs end up costing in the hundreds of millions of dollars to develop. As a comparison, NuScale will spend about $45 million to have the NRC review its license application, and the process will take 3 years.35 However, NuScale has spent closer to $450 million and taken 10 years to get to the point of submitting a license application to the NRC.36
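To see how an opportunity-cost figure of that size can arise, here is a minimal back-of-envelope sketch. It assumes (these are illustrative assumptions, not the Tufts methodology) that out-of-pocket spending is spread evenly over a 12-year development period, and it compounds each year’s outlay forward to the approval date at an assumed 10.5% cost of capital:

```python
# Back-of-envelope sketch, not the Tufts methodology: spread out-of-pocket
# R&D spending evenly over the development period, then carry each year's
# outlay forward to the approval date at an assumed cost of capital.
def capitalized_cost(out_of_pocket, years=12, cost_of_capital=0.105):
    annual_spend = out_of_pocket / years
    return sum(annual_spend * (1 + cost_of_capital) ** (years - t)
               for t in range(1, years + 1))

out_of_pocket = 1.4e9  # illustrative out-of-pocket figure, in dollars
total = capitalized_cost(out_of_pocket)
print(f"Capitalized cost: ${total / 1e9:.2f}B "
      f"(implied opportunity cost: ${(total - out_of_pocket) / 1e9:.2f}B)")
```

Under these illustrative assumptions, the implied opportunity cost comes out on the order of a billion dollars, which is how an opportunity-cost figure roughly as large as the out-of-pocket spend itself can arise.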

The Tufts study is perhaps more useful to get a sense of how the cost of drug development has evolved over time. The center has been estimating the cost of drug development since 1987, and estimates have been on a consistently upward trajectory, even after adjusting for inflation. This increase in costs can be explained by two main factors: the increasing number of failures, and the increasing cost of post-approval R&D (phase IV), from complying with foreign standards or testing for previously unobserved side effects.37
 

Drug Development Timeline

In terms of timeline, the range is narrower than it is for cost, with most NMEs taking between 8 and 12 years to develop. Tufts estimates it takes roughly 128 months to get a drug from synthesis to approval but “only” 96 months from the beginning of clinical trials to FDA approval (provided the drug gets that far). Each phase of the clinical trials typically takes one to two years (and gets a little longer with each phase). The majority of drugs go through three pre-approval phases: phase 1 tests the drug for adverse effects on humans (50–100 participants), phase 2 tests the drug for actual effectiveness (100–300 participants), and phase 3 tests its effectiveness on a much larger pool of patients (1,000–3,000). If a drug successfully passes each of these phases (and the probability that it will increases with each), the firm will submit a New Drug Application (NDA) to the FDA. If the drug is classified as a priority (like an orphan drug), the roughly 100,000-page application will be reviewed in 6 months. If it isn’t, the process takes between 10 and 12 months.

However, the FDA isn’t involved solely in the final approval of the drug; it also works with drug companies to create a schedule for the trial phases, as well as the set of criteria used to evaluate the drug after each trial phase. To even begin a phase-1 trial, firms have to submit an Investigational New Drug Application (IND) to the FDA. This IND must contain detailed information on animal pharmacology (a large part of preclinical work involves testing the drug on nonhuman species), the process of drug manufacture, and proposals for the clinical protocols to be followed. This initial FDA review typically engages a range of external experts (often in universities) to help review and refine the trial protocols. This period is also when a firm will typically receive its patent (and when its 20-year exclusivity period begins).

In contrast, the NRC’s review of license applications can take several years—3 years in the case of NuScale. What then follows is a 5–10-year construction process before the plant starts generating revenue.
 

Importance of Intellectual Property

If a drug successfully clears each of these hurdles, firms can finally commercialize it and use their patent protection to charge far above the marginal cost of production. While some new drugs are priced at little more than a few hundred dollars, others can easily cost thousands, if not hundreds of thousands, of dollars per prescription. Since the marginal cost of drug production is low,38 firms stand to make a substantial profit and recoup much of their investment.

Clearly, this profit is almost entirely contingent on regulation. The majority of drugs are easy to copy, and were it not for patent protection, other firms would quickly offer cheaper alternatives. The last few years have provided a stark illustration of this fact, as many drugs lost their exclusivity. For instance, Pfizer’s Lipitor, an anti-cholesterol drug, was the world’s top-selling drug for eight years but lost market protection at the end of 2011. In 2012, its sales ranking dropped to 14th, a nearly 61% decline in revenue in one year—from $12.9 billion to $5.1 billion. Bristol-Myers Squibb and Sanofi’s Plavix, a blood thinner that was number 2 in sales in 2012 at $9.5 billion, dropped to number 12 the next year, at $5.2 billion. Patents on Plavix expired in several European countries in 2011 and in the United States in 2012. Reviewing a range of drugs in the early 2000s, Conti and Berndt (2016) estimate that prices fall between 30% and 50% in the first year following the loss of exclusivity, and even more after that.39 Though sales of the branded drug fall, total sales of the drug (i.e., branded plus generic) tend to increase, as does revenue, consistent with a typical supply-demand story.

While IP is the main profit driver, pharmaceutical companies also spend substantial sums on marketing. In fact, frequent media reports suggest they spend more on advertising than on R&D. Those figures, however, should be taken with a grain of salt, since advertising spending is often bundled with management and operational expenses.

A meta-study from Johns Hopkins found that total marketing expenditure for the industry was $31 billion in 2010.40 Total marketing expenditure actually peaked in 2004 but stayed fairly consistent at 8–10% of sales throughout 2000–2010. Most of that marketing money is directed toward physicians, but direct-to-consumer spending has increased in recent years, to around $4–5 billion per year. To put these numbers in context, FierceBiotech estimated that the industry spent $67 billion on R&D in 2010, roughly twice what it spent on marketing. And indeed, Statista’s 2015 estimates of US pharmaceutical R&D as a percentage of revenue between 1990 and 2010 hovered around 16–20%.41
 

Regulator Comparison

The FDA’s 2014 budget was $4.7 billion, $2.6 billion of which came from the public purse, and the remaining $2.1 billion came from user fees. This 50-50 split has been fairly consistent ever since the Prescription Drug User Fee Act of 1992, which allowed the FDA to charge fees to drug manufacturers.42 By contrast, the NRC has a budget of $1.1 billion, 90% of which is funded by user fees (the change was part of a wider effort by then-President George H. W. Bush to cut the government deficit).43

The FDA charges a fixed fee for regulatory review: an IND cost $449,500 in 2013, and an NDA for an NME cost $5 million (if a new drug is a “me too” drug rather than a new molecular entity, the fee is reduced to $1.5 million). As mentioned above, the FDA’s review process rarely exceeds a year and is often only six months. By contrast, the NRC has no fixed fee—it charges between $170 and $270 per hour of staff review time—and no fixed timeline for handling new applications. According to the NRC’s own estimates, a new reactor design certification takes five years (and an early site permit review takes three), but its most recent approvals suggest the process takes a lot longer than that for innovative designs.
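As a purely illustrative back-of-envelope exercise, using only the figures quoted above, an hourly-fee model of that size implies a very large volume of billed review work:

```python
# Illustrative arithmetic only: convert NuScale's estimated $45M NRC review
# cost into billed review hours at the hourly rates quoted above.
review_cost = 45_000_000       # estimated NRC review cost, in dollars
hourly_rates = (170, 270)      # quoted NRC hourly fee range, dollars/hour
HOURS_PER_STAFF_YEAR = 2_000   # rough convention, an assumption here

for rate in hourly_rates:
    hours = review_cost / rate
    print(f"At ${rate}/hr: {hours:,.0f} hours "
          f"(~{hours / HOURS_PER_STAFF_YEAR:,.0f} staff-years)")
```

At the quoted rates, a $45 million review corresponds to roughly 165,000–265,000 billed hours of staff time, which helps explain why review timelines stretch into years.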

Two designs recently approved by the NRC are the AP1000 and the ESBWR. The AP1000 was submitted in 2002 and approved in 2011; similarly, the ESBWR was submitted in 2005 and only approved in 2014. Though reliable estimates are difficult to find, the process seems to have cost in the region of $500 million in both cases. NuScale has reported that it has already spent $130 million preparing its license application for the NRC.
 

The agencies overseeing nuclear development need to more explicitly recognize and promote the benefits of nuclear power.


Another key difference between the FDA and the NRC is their mission. While the NRC’s primary objective is public safety—“to ensure adequate protection of public health and safety”—the FDA has a twin mission: to protect human health by ensuring the safety of new drugs and to advance public health by speeding up the drug innovation process. In other words, part of the FDA’s mandate is to accelerate the commercialization of new technology.

In fact, some go so far as to say that the FDA is too amenable to new drug proposals; indeed, the FDA approved 90% or more of submitted drugs last year, a rate that has been growing steadily from a low of 60% in 2008. The Senate is currently debating the Republican-sponsored 21st Century Cures Act, which would both lower the bar for approval and speed up the process. Opponents of the bill are concerned it will lead to a lax regime and a repeat of past scandals, as when two drugs approved in the late 1990s (Vioxx and Avandia) later had to be withdrawn or restricted following the post-approval discovery of serious side effects.

These debates notwithstanding, it is clear that the FDA is far more amenable to new technologies than the NRC has ever been. Striking the right balance between diligence and support for innovation is a continuous process, but comparing the NRC and the FDA suggests the former has the balance wrong.


Lessons Learned for Nuclear

At a superficial level, the parallels between the nuclear industry and biotech are obvious: the timelines are long and the development costs are very high. One key difference, though, is marginal cost. Building a new nuclear plant is like building a cathedral: the second one is only marginally cheaper than the first, unlike a pill. In this respect, the development of a blockbuster drug is actually more similar to the construction of a single plant, since once it is built the O&M costs are low. Because the marginal cost of each new build is very far from zero for nuclear developers, the appetite for innovation is weakened. If the reward for innovation is approximated as the difference between marginal benefit and marginal cost, then, ceteris paribus, a higher marginal cost reduces the size of the reward and thus the motivation to pursue it.
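Put as a simple sketch of that argument (our notation, not a formula from the report): if a developer expects to sell N units at a marginal benefit MB and a marginal cost MC per unit, then

\[
R \;\approx\; N\,(MB - MC), \qquad \frac{\partial R}{\partial MC} \;=\; -N \;<\; 0,
\]

where N is the number of units sold (pills for a drug, plants for a reactor design), MB is the marginal benefit (roughly, the price each unit commands), and MC is the marginal cost of producing each additional unit. For a drug, MC is close to zero, so nearly all of MB flows to the innovator; for a reactor design, the cost of building each plant is close to MB, leaving a thin margin to reward the upfront development effort.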

A related difference between pharmaceuticals and nuclear is that, for nuclear, older designs’ value doesn’t collapse as soon as the IP expires. A proven design, even if it is fairly antiquated, can still be a winning strategy, especially in countries with underdeveloped technical infrastructure. Rosatom building four of its VVER-1000s in Kudankulam, India, is a case in point. For drugs, there is no significant benefit to using a brand instead of a generic, and indeed, certain drugs will only ever make it to developing countries once a generic version exists. Even if marketing can help expand the life and profitability of drugs, its effectiveness is limited. Again, the imperative for innovation is less critical to survival for nuclear developers than for pharma companies. 

Keeping this in mind, the biotech-pharma networked model does suggest that smaller firms can play a key role in industry innovation, despite being incapable of fully realizing that innovation on their own. It’s also worth noting that large corporate players initially took a lot of convincing to adopt this model. It was only when a few genetically engineered compounds gained commercial viability that larger firms realized that (a) genetic engineering would be key and (b) smaller biotech firms were best equipped to handle these new advancements.

To get to that point, it’s important to recognize the critical role of policy. Whether it was in the form of direct NIH support to genetic research, the Orphan Drug Act, or favorable regulation, Genentech and other early biotech companies benefited from a tremendous amount of public support. And even today, the FDA’s systematized, transparent, and relatively efficient process plays a critical role in facilitating collaboration: every time a biotech company successfully completes a different stage of the drug development process, it boosts its chance of getting acquired.

While the market structure of building nuclear power plants will never look like the high-profit pharmaceutical industry, its early-stage RD&D could shift to be more innovative like the pharmaceutical industry’s did in the 1970s and ’80s. Universities should create more support mechanisms to spin off research into private companies, such as incubators, seed funding, and tech transfer programs. Small nuclear firms should focus more on intellectual property to make their companies more attractive for acquisition. Following the pharmaceutical sector, nuclear companies should invest more in joint ventures and collaborative R&D to solve shared challenges and demonstrate emerging technologies. The NRC may need to develop a staged or phased licensing process. But more importantly, the NRC should incorporate more transparency and offer strict timelines for application review and decisions. Finally, DOE and the NRC should explicitly acknowledge the benefits of nuclear power as compared to the alternatives. DOE should make the case for investing in commercialization of new nuclear designs from a public health perspective, the same way the FDA does for orphan drugs.

Breakthrough Welcomes 2017 Generation Fellows

Each summer, the Breakthrough Institute welcomes a new class of Breakthrough Generation fellows to join our research team for 10 weeks. Generation fellows work to advance the ecomodern project, by deepening our understanding in the fields of energy, environment, technology, and human development. 

Breakthrough Generation has proven crucial to the work we do here. Past fellows' research has contributed to some of our most impactful publications, including Where Good Technologies Come From; Beyond Boom & Bust; How to Make Nuclear Cheap; Lighting, Electricity, Steel; and Nature Unbound.

Over 80 fellows have come through Breakthrough Generation since its founding in 2008. We are delighted that the following scholars are joining their ranks:

Christopher Gambino

Chris Gambino is an expert in reactive nitrogen cycling and a product of the Nitrogen Systems: Policy-oriented Integrated Research and Education (NSPIRE) program, which trains a new generation of scientists capable of spanning the boundary between scientific research and public policy decision-making. He earned a PhD in animal sciences from Washington State University.

 

Aurelia Hillary

Aurelia Hillary is a graduate of Imperial College London, where she received an MS in environmental technology. Her undergraduate research focused on biomass utilization from agricultural waste, and today her interests are in agriculture and water. She and her research partner recently published a paper on reducing nutrient leaching using biochar.

 

Emmanuella Maseli

Emmanuella Omaro Maseli graduated with a Bachelor of Laws (LLB) from the London School of Economics and is currently completing a dual master’s degree in public policy between Sciences Po Paris and the University of Tokyo, specializing in energy, resources, and sustainability. She is fascinated by the relationship between energy, resources, and sustainability, an interest sparked by the situation in her home country of Nigeria.

 

Jameson McBride

Jameson McBride recently graduated from Columbia University with a BA in political science, economics, and sustainable development. He has previously worked as a research intern at the Center on Global Energy Policy, the Council on Foreign Relations, and the Earth Institute.

 

Abigail Eyram Sah

Abigail Eyram Sah recently earned her MA from the Energy Science, Technology & Policy Program at Carnegie Mellon University. As a native of Ghana, she has always been very passionate about bringing energy to poor areas and has worked on projects in her home country to design tailored energy systems aimed at increasing energy access. 

 

Aishwarya Saxena

Aishwarya Saxena is pursuing a Master of Laws at the University of California, Berkeley. She is a graduate of the International School of Nuclear Law and has earned a university diploma in nuclear law from the University of Montpellier. Her research has focused on civil liability for nuclear damage, nuclear liability insurance, and the establishment of a global nuclear liability regime.



What We Consume Matters. So Does How We Produce It.

I enjoyed reading Linus Blomqvist’s recent essay on wildlife and food production, and the response by Claire Kremen. Blomqvist provides a good overview of the reasons that high-yielding farming—even if it is organic or based on agroecological principles—is unlikely to provide habitat for more than a handful of the original species present in an area. When humans make maximum use of space, light, and water to divert as much as possible of the primary production in an area for human consumption, populations of many other species will suffer. Kremen acknowledges this, but argues that it would be more productive to focus on tackling demand-side issues including meat consumption, population growth, and food waste.

Both perspectives have considerable merit. I agree with Blomqvist’s conclusion that trade-offs between agricultural yields and biodiversity are the norm. While there are opportunities to improve both yields and biodiversity outcomes in many places where farmland currently has little value for either, maximizing both yields and conservation value on the same plots of land is likely to be unattainable in most (if not all) contexts.

I also agree with Kremen that many of the most important challenges revolve around consumption. Where I disagree is with the implication that reductions in demand will make choices about how to produce food along the land sharing-sparing continuum obsolete. Even if current trends are reversed, and global consumption of food declines, land sparing will be important in creating space for habitat restoration and rewilding,1 helping to reduce extinction debt. We must explore the value of both supply-side and demand-side interventions, as recent studies have begun to do.2,3 As part of those explorations, a continued focus on the implications of different strategies for allocating and managing land use is warranted, not least because it matters what land is spared, and where.4 If we believe that other species have intrinsic value5—a point on which I expect Kremen, Blomqvist, and I are all in agreement—then we conservationists have an ethical obligation to understand what we can do to minimize the harm inflicted on them by how our species uses land.

What has the land sharing-sparing framework told us about this question? We have learned that the majority of species, specialists and generalists alike, are negatively affected when their habitats are converted to agricultural use, even to apparently benign uses such as diverse landscape mosaics with agroforestry plots and fallows. A few species do well, including some not found in the original native vegetation types. We have learned that species richness—the only biodiversity metric used in many studies arguing for land sharing—is a poor way to measure these changes, because it fails to detect the replacement of restricted-range species and forest or grassland specialists by widespread generalists, and because it fails to detect dramatic declines in population abundance.

We have learned that the farming systems that hold the greatest conservation value, such as the lightly grazed semi-natural grasslands of the South American pampas,6 are so low yielding as to make little meaningful contribution to food supply. If we wish to preserve these systems, we are better to do so primarily for their ecological and cultural values, rather than positioning them as a model of production and conservation in harmony. We have also learned that while on-farm biodiversity can make an important contribution to food production, and can in many cases support sustainable increases in yields, actions to promote such biodiversity are insufficient to conserve those species that have little direct service value.8 Such species—probably the majority of life on Earth—will benefit if we can produce the same amount of food on less land, while conserving and restoring native vegetation elsewhere.

The land sharing-sparing framework does not tell us that high-yield farming will result in land being spared for nature: it is an analytical framework for understanding the “biophysical option space,” to borrow a term,3 not a framework for predicting land-use change. The basic framework is a starting point, not the last word, and has already been modified to accommodate objectives beyond biodiversity and food production, and to incorporate more complex land-use scenarios. However, it can tell us how species might respond if we can find ways to make land sparing happen, and it exposes the substantial limitations of land-sharing strategies for reconciling conservation and production objectives. These various conclusions seem to be consistent, so far, across all studied taxa and different geographies (for a review, see endnote 8).

These observations require us to think very differently than before about the role of wildlife-friendly farming in conservation. Their implication is that the priority for conservationists must be to minimize the land area devoted to production, and to increase that devoted to conservation. The other millions of species we share the planet with need space, and if we are to limit the losses underway in what some have termed the “sixth extinction,” we must conserve and restore large areas of native vegetation around the world. There are undoubtedly sound reasons to promote some biodiversity on farmland, for its functional and cultural roles, but those are mostly about the needs of our own species. When we look at the needs of other species, especially those at greatest risk of extinction, what most of them need from us are bigger, higher-quality, better-connected areas of their natural habitats, and protection from additional threats such as hunting, logging, and invasive species.9 If proponents of alternative agriculture really want to help biodiversity, it is not enough to provide on-farm resources for a few (typically widespread and generalist) species. They too must think beyond the farm, to how their activities can support the objectives of halting and reversing habitat loss and degradation in the wider landscape.

It is also clear from studies that have applied the land sharing-sparing framework that efforts to reduce consumption and minimize the amount of land needed for food production can enlarge the “biophysical option space,” and help with this objective of making more space for wild species. Here, Kremen and I are in full agreement. For example, in our 2011 paper on land sparing and sharing in Science, one of the conclusions my co-authors and I reached was as follows:

Measures to reduce demand, including reducing meat consumption and waste, halting expansion of biofuel crops, and limiting population growth, would ameliorate the impacts of agriculture on biodiversity.10

Today, I would add reducing dairy consumption to that list.11 The production of virtually all animal products is more land demanding than their plant-based counterparts, and while there is an important role for livestock in subsistence societies, the imperative for reducing consumption of livestock products in wealthy countries could not be clearer.12 Reversing support for crop-based biofuels in Europe and North America offers another good opportunity to reduce global agricultural expansion, because it is a use of land that was largely created by government policies, with few beneficiaries and dubious environmental benefits.13

Because of what I have learned in the course of my work on land sparing and sharing, I have become mostly vegan, and have been involved in advocacy to halt the use of crops for biofuels. Shifting to more plant-based diets, and reducing the use of land for producing fuel, are two of the most promising ways for our species to increase food yields on a smaller land base.14 While influencing human behavior and changing policy are complex and take time, both can help to make more space available for other species.

Looking to the future, I have a vision that differs in some respects from Kremen’s. I would like to see more landscapes where agriculture is confined to a few productive zones surrounded by a matrix of natural vegetation types—a mosaic of forests, wetlands, grasslands, and shrublands. It is a vision in which humans see themselves as one species living alongside others, embedded in a larger ecosystem, rather than seeing nature as something permitted to continue only within island-like protected areas and in the interstices of human-dominated land use. For now, the closest thing to such landscapes might only be found in a few indigenous territories, with low population densities and often a culture of respect for nature. However, if we take the best of agroecological and agronomic knowledge, our growing ability to restore native vegetation in different parts of the world, and most importantly, our increasing recognition of humanity’s ethical responsibilities towards other species, I believe we could replicate it in other parts of the world with higher population densities. It is a vision that aligns with ideas expressed by E. O. Wilson in Half-Earth15 and George Monbiot in Feral,16 and is consistent with an ecocentric ethic that values all life, not just that of humans.17

What is encouraging is that much of what needs to be done to achieve any of these visions is the same: developing alternatives to the paradigm of constant economic growth; changing what (and how much) we consume; building on existing policies, regulations, and incentives (and developing new ones) to end agricultural expansion; expanding protection for native vegetation and wild species on public, private, and customary lands; improving and scaling up habitat restoration techniques; and developing methods for systems of high-yielding agriculture that respect both human communities and the ecosystems of which they are part.

 

Acknowledgments

I thank Andrew Balmford, David Williams, and Erasmus zu Ermgassen for comments on a draft. The opinions expressed, and any errors, are mine.

America’s Role in Global Nuclear Innovation

A new report from the Global Nexus Initiative reminds us of the serious security and geopolitical implications if America does indeed forfeit leadership in the global nuclear power market. Their report, Nuclear Power for the Next Generation, concludes a two-year project to study the intersection of climate, nuclear power, and security. GNI’s four broad conclusions are:
 

  1. Nuclear power is necessary to address climate change
  2. Nuclear governance needs significant strengthening
  3. Evolving nuclear suppliers impact geopolitics
  4. Innovative nuclear policy requires “break the mold” partnerships


One striking graphic from the report is this map of countries that are currently building new nuclear power plants, and countries that are pursuing new nuclear power programs:

Notice that the countries that have expressed interest in developing nuclear power programs tend to be in more geopolitically tense regions like the Middle East, Africa, and Southeast Asia. Maria Korsnick, CEO of the Nuclear Energy Institute, highlights why this matters for US foreign relations: when the US, or anyone else, builds a nuclear reactor in a new country, they’re establishing a 100-year or more relationship with that country, from siting and building the plant through operation, servicing, and decommissioning.
 

When the US, or anyone else, builds a nuclear reactor in a new country, they’re establishing a 100-year or more relationship with that country.


In our recent report How to Make Nuclear Innovative, we also highlight the long-standing decline of North American and European leadership in nuclear R&D, as measured by patenting. You can see this geographic shift by looking at nuclear R&D funding by country over the last few decades as well.

A recent report from the think tank Third Way offers several concrete solutions to this downturn, including creating a senior-level position in the White House to oversee nuclear exports, reauthorizing the US Export-Import Bank and filling its vacant board seats, and increasing and sustaining funding for nuclear innovation through the Gateway for Accelerated Innovation in Nuclear (GAIN) program and the Nuclear Regulatory Commission.

Finally, the GNI report concludes that innovative nuclear policy will require novel, “break the mold” partnerships. Our 2014 report High-Energy Innovation, published with the Consortium for Science, Policy & Outcomes at Arizona State, makes a similar recommendation, calling on the US and European countries to expand their international collaborations on energy RD&D to both accelerate commercialization and to ensure that these technologies are developed where demand is highest.

If the world is to meet the twin goals of reducing poverty through significantly expanded energy access along with drastically reducing greenhouse gas emissions, we’re going to need a lot more nuclear in a lot of new countries. The US isn’t currently positioned to lead on this kind of global nuclear development, but could, with the right policies, investments, and partnerships, regain such a role.

Demons Under Every Rock


The tendrils of the conspiracy slowly seem to reach into all corners of the community, culminating with the girls announcing the interrogators themselves to be part of the cult that had abused them. As the case begins to unravel, a social psychologist from Berkeley is brought in to investigate what had gone wrong. The “false memories,” he concludes, had been manufactured through group pressure and persuasion, building an increasingly elaborate—and increasingly social—narrative far removed from the events on the ground.

This disturbing and memorable story has kept coming back to me over the last few years, as a cadre of climate activists, ideologically motivated scholars, and sympathetic journalists have started labeling an ever-expanding circle of people they disagree with as climate deniers.

Climate change, of course, is real and demons are not. But in the expanding use of the term “denier,” the view of the climate debate as a battle between pure good and pure evil, and the social dimensions of the narrative that has been constructed, some quarters of the climate movement have begun to seem similarly unhinged.

Not so long ago, the term denier was reserved for right-wing ideologues, many of them funded by fossil fuel companies, who claimed that global warming either wasn’t happening at all or wasn’t caused by humans. Then it was expanded to so-called “lukewarmists,” scientists and other analysts who believe that global warming is happening and is caused by humans, but either don’t believe it will prove terribly severe or believe that human societies will prove capable of adapting without catastrophic impacts.

As frustration grew after the failure of legislative efforts to cap US emissions in 2010, demons kept appearing wherever climate activists looked for them. In 2015, Bill McKibben argued in the New York Times that anyone who didn’t oppose the construction of the Keystone pipeline, without regard to any particular stated view about climate change, was a denier.

Then in December 2015, Harvard historian and climate activist Naomi Oreskes expanded the definition further. “There is also a new, strange form of denial that has appeared on the landscape of late,” Oreskes wrote in the Guardian, “one that says that renewable sources can’t meet our energy needs. Oddly, some of these voices include climate scientists, who insist that we must now turn to wholesale expansion of nuclear power.”

Oreskes took care not to mention the scientists in question, for that would have been awkward. They included Dr. James Hansen, who gave the first congressional testimony about the risks that climate change presented to the world, and who has been a leading voice for strong, immediate, and decisive global action to address climate change for almost three decades. The others—Kerry Emanuel, Ken Caldeira, and Tom Wigley—are all highly decorated climate scientists with long and well-established histories of advocating for climate action. The four of them had travelled to the COP21 meeting in Paris that December to urge the negotiators and NGOs at the meeting to embrace nuclear energy as a technology that would be necessary to achieve deep reductions in global emissions.

So it was only a matter of time before my colleagues and I at the Breakthrough Institute would be tarred with the same brush. In a new article in the New Republic, reporter Emily Atkin insists that we are “lukewarmists.” She accuses us of engaging in a sleight of hand “where climate projections are lowballed; climate change impacts, damages, and costs are underestimated” and claims that we, like other deniers, argue “that climate change is real but not urgent, and therefore it’s useless to do anything to stop it.”

None of these claims are true. For over a decade, we have argued that climate change is real, carries the risk of catastrophic impacts, and merits strong global action to mitigate carbon emissions. We have supported a tax on carbon, the Paris Agreement, and the Clean Power Plan, although we have been clear in our view that the benefits of these policies would be modest. We have supported substantial public investment in renewables, energy efficiency, nuclear energy, and carbon capture and storage.

Atkin’s story initially simply linked to our Wikipedia page. When I pointed this out to TNR executive editor Ryan Kearney and asked for a correction, he instead added further links that he claimed showed us to be “lukewarmists.” Of those, two are links to criticisms of our work on energy efficiency rebound. One links to two footnotes in a book by climate scientist Michael Mann, neither of which is material to the claim. Another links to a blog post that criticizes our view that An Inconvenient Truth contributed to the polarization of public opinion about climate change. The last makes the demonstrably false claim that the George and Cynthia Mitchell Foundation is our primary funder.1

These sorts of attacks, backed by multiple layers of links that never actually substantiate the claims being made, used to be the domain of a small set of marginal activists and blogs. Atkin herself cut her teeth at Climate Progress, where her colleague Joe Romm has spent over a decade turning ad hominem into a form of toxic performance art.2

But today, these misrepresentations are served up in glossy, big-budget magazines. Climate denial has morphed, in the eyes of the climate movement, and their handmaidens in the media, into denial of green policy preferences, not climate science.

“The ‘moral argument’ for fossil fuels has collapsed. But renewables denial has not,” McKibben wrote in Rolling Stone last January. “It’s now at least as ugly and insidious as its twin sister, Climate Denial. The same men who insist that the physicists are wrong about global warming also insist that sun and wind can’t supply our energy needs anytime soon.”

“We can transition to a decarbonized economy,” Oreskes claimed in the Guardian, “by focusing on wind, water and solar, coupled with grid integration, energy efficiency and demand management.”

This newfangled climate speak rests on newfangled energy math. Oreskes and McKibben, like much of the larger environmental community, rely heavily these days on the work of Mark Jacobson, a Stanford professor whose work purports to show that the world can be powered entirely with existing renewable energy technologies. Jacobson’s projections represent an extreme outlier. Even optimistic outfits like the National Renewable Energy Laboratory conclude that reaching even 80% renewable energy would be technically and economically very difficult.

Advocates, of course, will be advocates. But the fact that those claims are now uncritically repeated by journalists at once-respectable publications like the New Republic speaks to how far our public discourse has fallen, and how illiberal it has become. Fake news and alternative facts are not the sole province of the right wing. Inserting links to unhinged bloggers3 now passes for fact checking for a new generation of hyper-aggressive and hyper-partisan journalists. The righteous community of self-proclaimed climate hawks is now prepared to meet the opposition, exaggeration for exaggeration and outrage for outrage.

The continuing escalation of rhetoric by climate advocates, meanwhile, is unlikely to do much to solve climate change. After eight years of excoriating hard-fought efforts to make headway on the issue by President Obama and candidate Clinton (McKibben in recent years labeled both deniers), we can thank provocateurs like McKibben and Oreskes for helping to put an actual climate denier in the White House.

More broadly, the expansion of the use of denier by both activists and journalists in the climate debate, a word once reserved only for Holocaust denial, mirrors a contemporary political moment in which all opposing viewpoints, whether in the eyes of the alt-right or the climate left, are increasingly viewed as illegitimate. The norms that once assured that our free press would also be a fair press have deeply eroded. Balanced reporting and fair attribution have become road kill in a world where all the incentives for both reporters and their editors are to serve up red meat for their highly segmented and polarized readerships, a dynamic that both reflects and feeds the broader polarization in our polity. It is a development that does not bode well for pluralism or democracy.

Wide-Body Aircraft

Wide-Body Aircraft

In this case study: 
 


The modern commercial aircraft industry, specifically medium and large wide-body aircraft, bears a striking resemblance to the civilian nuclear power industry in both market structure and technological complexity. Both industries are segregated between the vendors that provide the technology—aircraft manufacturers and reactor developers—and the companies that use and operate these products—airlines and electric utilities. The operators in both industries, airlines and utilities, have very small profit margins: roughly 3% for airlines1 and 10% for investor-owned utilities in the United States. Both markets are heavily concentrated, with fewer than ten major firms. Aircraft production in particular is one of the most concentrated markets in the world (and Paul Krugman argues that the market is really only big enough for one firm2,3).

Both industries can also be strongly affected by exogenous events. With airlines, accidents and security concerns reduce air travel, along with disease outbreaks, blizzards, and volcanic eruptions. A decline in air travel either from an accident or an economic recession greatly reduces orders for new aircraft. Similarly, a major nuclear power accident leads utilities to cancel planned projects and even prematurely close existing plants. Even an unrelated event like a terrorist attack can reduce the demand for nuclear power plants, as they are seen as more prone to risks in general.

Both nuclear power and aircraft manufacturers, finally, share large entry costs for new firms, gradual innovation, imperfect competition, and the fact that many countries consider them “strategic” industries,4 which usually coincides with substantial state support.

And yet commercial aviation appears significantly more innovative and successful than commercial nuclear power, with miles traveled increasing every year and costs per mile and per passenger falling since the 1970s. The lessons that the nuclear industry can learn are simple but not easy. Market consolidation was a key factor in the success of new aircraft designs, but it only worked because firms had significant state support. While aircraft benefited from learning-by-doing on the assembly line, it takes thousands of aircraft before firms see the return on their investments. Therefore, nuclear reactors will need to get a lot smaller to take advantage of similar economies of multiples. Lastly, aircraft can be built in the United States or Europe and flown by most airlines and in most countries around the world. Can the global nuclear regulator, the International Atomic Energy Agency, develop a similar level of international regulation and licensing?
 


Read more from the report: 
How to Make Nuclear Innovative

 

 

Brief History of the Commercial Aircraft Industry

Commercial aviation took off after the Second World War, as excess military aircraft were converted to transport passengers and cargo. Turbojet aircraft were independently invented in the United Kingdom and Germany in the late 1930s, but it wasn’t until 1952 that the first commercial jet service was launched, by the state-owned British Overseas Airways Corporation (BOAC). While the introduction of jet aircraft affords a very useful study in disruptive innovation,5 this case study will focus on the more recent system of innovation for commercial wide-body jet aircraft, as it bears the most similarities to today’s nuclear reactor industry.

The British dominated the commercial turbojet market through the 1950s with their Comet jetliner. But a series of fatal crashes caused BOAC to ground the entire Comet fleet, and this period opened up space for Boeing to enter the jetliner market with their 707. The novel design of the 707 placed the jet engines underneath the wings, which remains the practice today across all commercial jetliner designs. The 707 proved significantly safer and more fuel-efficient, and led to Boeing and the Americans coming to dominate the commercial aviation industry for the next thirty years. In the 1970s, American aircraft manufacturers had 90% of the free world’s market (excluding the USSR).
 

The modern commercial aircraft industry bears a striking resemblance to the civilian nuclear power industry.


Today, there is a relative duopoly in the market for large wide-body aircraft (over 100 seats) between the American Boeing and the European consortium Airbus, each with ~40-45% of the market, depending on the year.6 There’s a tie for third largest market share between Brazil’s Embraer and Canada’s Bombardier. China’s Comac, currently with the fifth largest share of the market, is starting to make gains with heavy state subsidies. This market duopoly means that Boeing and Airbus are constantly fighting to gain an advantage over the other in terms of aircraft sales.

In the regional jet market (planes with 30-90 seats), Embraer and Bombardier dominate. Embraer was originally state-owned but was privatized in 1994, though the state still owns 51%. New entrants in the regional jet market include Russian, Japanese, Chinese, and Indian firms, all receiving substantial state aid. Even Bombardier is receiving state aid for its CSeries.7
 

Market Trends

From the public’s perspective, the aircraft industry appears very innovative because the cost of flying has declined so dramatically over the last few decades. However, most of these cost declines have resulted from the way airlines were operated rather than the aircraft technology employed. The major factor was airline deregulation, which began in 1978 in the United States. This forced airlines to streamline their businesses and compete for passengers. Deregulation also led to the bankruptcy of several major airlines. Aircraft represent a major investment for airlines, and they compete to get the lowest prices and then plan to operate their aircraft for decades.

After deregulation, costs were cut sharply through improved operations and better methods for filling seats. Today, the major remaining cost to airlines is fuel, which now represents up to 50% of airline operating expenses. Therefore, upgrading an airline fleet to more fuel-efficient aircraft is the simplest way for an airline to reduce costs, although it is also the most cash intensive. Innovation in aircraft design over the last two decades, as a result, has tended to focus on fuel efficiency.

Airlines have also been able to reduce costs by fine-tuning the size of the aircraft in their fleet and on particular routes. In general, increasing the size of an airplane reduces costs through greater efficiency in terms of fuel per seat and seats per flight. On the other hand, larger planes lose out on flexibility in terms of routes, flight frequency, and passenger preference.

For a long time, the trend was toward bigger and bigger jets. In the early ’90s, Boeing and Airbus formed a consortium to explore a very large aircraft; they were hoping to jointly produce the aircraft to share the very limited market. Boeing eventually pulled out, and the story of the Airbus A380 has become a cautionary tale.8 More recently, airlines (and aircraft manufacturers) have been converging on aircraft with 160 seats as the optimal size. Finding the right size for an aircraft is a strategy that the nuclear industry should learn from, although there might be different optimal sizes for different markets.

To give some perspective on the size of the aircraft industry, Boeing has estimated the demand for aircraft (from all manufacturers) over the next fifteen years.9 The last column shows the average catalog price of a plane in each category. Boeing estimates the total value of planes demanded over the next fifteen years will be $5.2 trillion.


 

R&D Budgets

Over the last decade, Boeing’s annual R&D budget has been between $3 billion and $7 billion, or between 3% and 10% of its total annual revenue. From 2000 to 2004, Airbus outspent Boeing on R&D by 100% ($8 billion compared to Boeing’s $4 billion).10 More recently, though, Boeing has been outspending Airbus on R&D, with Boeing spending $3-$7 billion annually11 and Airbus spending only $2.7 billion in 2013.12 It’s suggested that Boeing draws on R&D funded as part of its defense contracts, and that Airbus may rely more on R&D from universities and national labs. In 2014, revenue from Airbus’s commercial aircraft division was $46 billion,13 while Boeing’s revenue was about $86 billion.


Nuclear and Aviation: An Industry Comparison

Compared with commercial jets, the nuclear industry is less concentrated. While there are only seven reactor developers building around the world today, the largest market share is only 28%, tied between Rosatom and the China General Nuclear Power Group. Following these big two, Westinghouse Electric Company and the Korea Electric Power Corporation each have around 12% of new builds.14 Areva NP, the Nuclear Power Corporation of India, and GE-Hitachi, finally, each have around 6% of the reactors under construction globally.
 

While Boeing and Airbus each spend around $3 billion annually on R&D, nuclear companies invest significantly less.


While there are more competitors, the market for nuclear power plants is much smaller. Boeing delivered 748 jets in 2016, at a market value of $94 billion.15 In comparison, only ten nuclear reactors came online in 2016. In the last ten years, only about five nuclear reactors have come online each year, with a price tag of $2-$5 billion.
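To put those figures side by side, the back-of-envelope sketch below simply multiplies the numbers quoted above; it is purely illustrative, and the per-reactor prices are the $2-$5 billion range cited in this paragraph, not market data.

    # Rough, illustrative comparison using only the figures quoted above (USD).
    boeing_deliveries_2016 = 748
    boeing_delivery_value = 94e9                        # quoted value of Boeing's 2016 deliveries
    reactors_online_2016 = 10
    reactor_price_low, reactor_price_high = 2e9, 5e9    # quoted per-reactor price tag

    nuclear_low = reactors_online_2016 * reactor_price_low
    nuclear_high = reactors_online_2016 * reactor_price_high

    print(f"Boeing alone: ~${boeing_delivery_value / 1e9:.0f}B across {boeing_deliveries_2016} jets")
    print(f"New reactors: ~${nuclear_low / 1e9:.0f}-{nuclear_high / 1e9:.0f}B across {reactors_online_2016} units")
    # Boeing alone: ~$94B across 748 jets
    # New reactors: ~$20-50B across 10 units

On those rough numbers, a single aircraft manufacturer delivers roughly two to five times the annual dollar volume of the entire global market for new reactors.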

While Boeing and Airbus each spend around $3 billion annually on R&D, nuclear companies invest significantly less. Rosatom invests 4.5% of its annual revenues into research, or about $360 million in 2013. The latest figure for Areva is from 2007—they invested about $644 million into R&D, but that was across their products and services. Across all of the OECD, total spending on nuclear fission R&D was less than $1 billion in 2015 (that includes public and private R&D). However, this figure excludes Russia, China, and India, all of which are investing heavily in nuclear.
 

Major Setbacks in Innovation

While Boeing has been the global leader in jet aircraft production for almost sixty years, its business fortunes have followed a roller coaster. Aircraft production is very capital intensive, and demand follows broad economic trends in addition to responding to exogenous events like terrorist attacks, high-profile crashes, and even volcanic eruptions. Boeing had to significantly cut payroll in the 1920s, during the Great Depression, after the Second World War, in the 1960s, and in the 1970s. After 9/11, Boeing nearly halved payroll, as the entire aviation industry struggled. While Boeing took several business risks that ultimately paid off, other innovative firms were not so successful.
 

The Concorde (and the Tupolev Tu-144)

Since the 1950s, many countries have expressed interest in supersonic transport jets. However, the costs to develop such an aircraft were expected to be huge. The major US aircraft manufacturers at the time—Boeing, McDonnell Douglas, and Lockheed—decided it was too great an investment and didn’t pursue the technology. However, a British and French alliance formed in 1961 to develop the world’s first supersonic passenger jet, and it successfully brought the plane—known as the Concorde—to commercial operation in 1976, at a joint cost of $1.3 billion. The Anglo-French team had to overcome significant technical challenges in metallurgy, structural integrity, and the cooling of the heated airframe. But eventually a commercial aircraft was produced that could travel at twice the speed of sound, and at such high altitudes that passengers could see the curvature of the Earth. However, while the plane succeeded on technical grounds—it cut the flight time from London to New York almost in half—it failed on economic and political grounds.

Many accused the Americans of intentionally trying to thwart the success of the Concorde, since they abandoned their own supersonic transport program, but the truth is that the Concorde faced many self-inflicted problems. Despite its supersonic speeds, the plane had a short flying range and poor fuel economy, meaning that it had to stop frequently for refueling. For example, the Concorde had to make two refueling stops between London and Sydney, but the subsonic Boeing 747 could make the trip nonstop, which meant it actually got there faster overall. And the Boeing 747, a new plane itself, cost 70% less per passenger-mile to fly.16 Noise was a nontrivial issue, both during takeoff and from the sonic boom created when the aircraft went supersonic. Several countries would not allow the Concorde to fly over their airspace, which limited the available routes primarily to those over oceans. Over 100 Concordes were originally ordered, but only 20 were ever built, and only 14 were actually delivered to British and French airlines. In 2000, the Concorde suffered its first and only crash, killing all 100 passengers, 9 crew members, and 4 people on the ground. Following the crash and a market-wide slump in air travel following 9/11, British Airways and Air France announced the retirement of the Concorde in 2003.

The Russians developed their own supersonic transport, the Tupolev Tu-144, that was mostly a copy of the Concorde, but after it crashed at the Paris Air Show in 1973, it was only flown for cargo transport within Russia.


Role of the State

Because of aviation’s strategic importance to state militaries, the industry has always benefited from strong state support in one form or another. Boeing got a big boost from the government in 1929, when Congress passed a bill requiring the US Postal Service to fly mail on private planes between cities. Boeing’s R&D has also benefited from decades of defense and NASA contracts, where it can shift profits from its military programs to fund development in its commercial programs.

While European aviation firms started out ahead of the Americans, their many small national companies produced only limited runs of most aircraft lines. To compete with the Americans, a consortium of British, French, and West German aviation companies formed Airbus Industrie in 1970. In the late 1990s, an even larger group of European civil and defense aerospace firms merged to form the European Aeronautic Defence and Space Company (EADS). Airbus receives research and development loans from various EU governments on very generous terms, often not requiring repayment unless the new jet is a commercial success.

Both Boeing and Airbus accuse the other of taking illegal subsidies and violating World Trade Organization (WTO) rules. A bilateral EU-US agreement from 1992 was meant to rein in state support for aviation by laying ground rules. However, in 2010, the WTO ruled that Airbus had received improper subsidies by receiving loans at below-market rates. Just a year later, the WTO also ruled that Boeing had violated rules by receiving direct local and federal aid, including tax breaks.


Importance of Intellectual Property

Because of the strong competition between Boeing and Airbus, intellectual property protection is very important. Boeing decides whether to patent a technology based upon how visible it is and how easily it can be reverse engineered. If a certain technology is not visible on their aircraft and is difficult to reverse engineer, they won’t apply for a patent, but instead keep the technology as a trade secret.17 Boeing also doesn’t patent any technology they develop for military applications. Boeing actively licenses its patents to noncompeting industries, such as the automotive sector, and considers its patent portfolio a valuable asset.

One of the major technological innovations that defines the Boeing 787 is the use of carbon composite materials. These composites consist of carbon fibers embedded in an epoxy resin and are very difficult to manufacture. Because they have joint applications in missile fabrication, Boeing will not share the technology.18 Due to the limited number of potential suppliers for novel materials, and the large demand expected, Boeing entered into a twenty-plus-year contract with the world’s largest producer of carbon fiber. Such long-term supplier contracts are common in aviation to guarantee quality and maintain regulatory compliance. These contracts also help protect IP, and suppliers are precluded from contracting with other firms.


Regulator Comparison

The federal body that regulates aircraft manufacturing and operations, the Federal Aviation Administration (FAA), is actually quite similar to the Nuclear Regulatory Commission (NRC) in many ways. The FAA primarily certifies aircraft designs for “airworthiness” by issuing rules on aircraft design and production. Additionally, the FAA certifies aircraft production facilities (and performs quality control over time) and component and materials production facilities. Unlike the NRC, the FAA also spends a lot of time certifying those who operate aircraft: airlines, pilots, aircraft mechanics, repair stations, air traffic controllers, and airports.19 While some may ask how nuclear could be regulated more like aviation, such that innovation is encouraged, others argue that aviation should be regulated more like nuclear to place a greater emphasis on reducing fatalities.20 But there are many intrinsic differences between the two industries that require different kinds of regulation.

One of the major differences is that up until 1998, the FAA was in charge of both regulating and promoting air travel. In contrast, the Atomic Energy Commission (AEC) lost this dual designation for nuclear power in 1974. Because of this dual role, the FAA was more concerned with the cost-benefit analysis of new safety regulations, and whether they would impose unnecessary financial burdens on airlines.21

In a 1997 New York Times analysis, Matt Wald argues that the main reason for these differences is consumer choice.22 You can choose which airline to fly and even which aircraft you fly on, but you can’t really choose where you get your electricity. And while utilities can choose whether or not to build a nuclear power plant, they usually can’t opt to get their power from a different nuclear plant down the road if their local reactor is underperforming. Hence, nuclear power operators have collaborated much more in setting standards and sharing best practices.
 

Modern wide-body aircraft encompass a diversity of innovations.


Since airlines are publicly traded companies, the FAA has a strong incentive not to highlight accidents, safety violations, or underperformance, as this would negatively affect the reputation—and stock price—of the specific airline, whereas the NRC routinely publishes even minor incidents, safety violations, and the performance records of all plants.23 There is also a much larger bureaucracy for regulating nuclear: for every federal employee of the FAA there are 71 employees regulated in the airline industry, whereas in the nuclear industry there are only 6 employees for each federal employee at the NRC.24

Another major difference is that the airlines were deregulated in 1978. While some utilities started deregulating in the 1990s, there are still over 20 states with regulated energy markets. Before and after airline deregulation, many worried that deregulation would lead to a moral hazard with regard to safety. Indeed, airlines that spent less per flight on safety had a higher frequency of accidents, and this problem was even more noticeable among airlines in financial trouble.25 Nuclear power, on the other hand, seemed to improve in performance and safety under deregulation,26 although deregulation has made it more difficult to build new nuclear power plants. In contrast, the competition among airlines has seemed to spur innovation among aircraft manufacturers, as airlines demand the newest and most efficient aircraft models to stay competitive.

However, this lax regulation by the FAA should in theory be offset by a certain amount of self-regulation by the airlines and aircraft manufacturers, precisely because airlines are publicly traded and consumers have so much choice in air travel. If an airline has an accident, this is almost instantly reflected in a drop in stock value. In practice, though, both airlines and nuclear power operators see fairly muted market responses to high-publicity accidents, at least compared with industries overseen by less rigorous regulatory bodies like the Food and Drug Administration, the Occupational Safety and Health Administration, and the Mine Safety and Health Administration. But there is a big difference in how much each industry is willing to pay to prevent fatalities. A 1986 report estimated that many of the recent FAA safety regulations would cost airlines about $700 (1986 US dollars) per life saved.27 In comparison, in 1995, the NRC set a new value of $1,000 to prevent one person-rem of radiation exposure,28 a dose that will not result in a fatality.

At the international level, air travel is regulated by the UN agency the International Civil Aviation Organization. Flights and aircraft are governed by a set of standards and best practices referred to as ETOPS (extended operations), based on a mix of current FAA policy, industry practices and recommendations, and international standards. Most interestingly, ETOPS is, to some extent, a performance-based standard. Originally, airliners were tested and certified to an ETOPS-180 standard, which meant they had to prove that the aircraft could fly for 180 minutes and land with only one functioning engine (out of two). For its first year of flight, the airliner’s routes were always kept within 180 minutes of a certified landing strip for just this purpose. But after a year, or 18 months, if the aircraft had performed as expected, it might get approval to extend to ETOPS-240 and later ETOPS-360. These standards were developed with heavy input from both Boeing and Airbus, the airlines, international regulators, and even the Air Crash Victims Families Group. This broad stakeholder engagement on safety led to regulations that serve the dual purpose of protecting passengers and allowing efficient long-range flights for airlines.
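To make the performance-based logic of a rule like ETOPS-180 concrete, here is a minimal sketch of the kind of check a route planner performs: every waypoint on the route must lie within the distance the aircraft could cover on one engine in the allowed diversion time. The airports, speeds, and waypoints below are hypothetical illustrations, not actual certification data or any regulator’s software.

    # Toy ETOPS-style diversion check; all figures below are hypothetical examples.
    from math import radians, sin, cos, asin, sqrt

    EARTH_RADIUS_NM = 3440.1  # mean Earth radius in nautical miles

    def great_circle_nm(lat1, lon1, lat2, lon2):
        """Haversine distance between two points, in nautical miles."""
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
        return 2 * EARTH_RADIUS_NM * asin(sqrt(a))

    def route_meets_etops(waypoints, alternates, etops_minutes=180, one_engine_speed_kts=400):
        """True if every waypoint is within the diversion radius (single-engine
        cruise speed times the allowed diversion time) of at least one alternate airport."""
        max_diversion_nm = one_engine_speed_kts * etops_minutes / 60.0
        return all(
            min(great_circle_nm(*wp, *alt) for alt in alternates) <= max_diversion_nm
            for wp in waypoints
        )

    # Hypothetical North Atlantic waypoints and diversion airports (lat, lon).
    waypoints = [(51.5, -10.0), (53.0, -30.0), (52.0, -50.0)]
    alternates = [(53.4, -6.3), (64.0, -22.6), (53.3, -60.4)]  # e.g., Dublin, Keflavik, Goose Bay
    print(route_meets_etops(waypoints, alternates))  # True under these assumptions

Raising the rating to ETOPS-240 or beyond simply widens that diversion radius, which is why higher ratings open up more direct long-range routes.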
 


2015 was the safest year on record for the airlines, with the lowest number of fatalities from accidents. (Source: The Telegraph, “2015 was the safest year in aviation history,” January 6, 2016, http://www.telegraph.co.uk/travel/news/2015-was-the-safest-year-in-aviation-history/.)


Despite the seemingly lax regulation of airline safety, the number of fatalities from aircraft accidents has been declining for decades.29 Considering the growth of commercial airline travel over this time period, the relative probability of fatalities has decreased dramatically. It is worth noting, though, that the number of annual fatalities is still three orders of magnitude greater than the number due to commercial nuclear power.


Major Innovation Success Stories

Most of the innovation taking place in aviation is incremental, with minor improvements in aircraft weight, fuel efficiency, and performance. While many of these incremental innovations have had significant effects on the cost and operability of commercial aviation, below are more detailed case studies of significant (and successful) major innovations in modern aviation.
 

Jet Turbines

The aircraft industry successfully introduced a major technological change with the introduction of turbojets as an alternative to propeller planes. Turbojets were much preferred for commercial airlines because they were faster and came with much less turbulence, meaning more comfort for passengers. But turbojets were originally the pipe dream of aerodynamics scientists, not aviation engineers, and they remained a purely academic exercise for almost a decade, with most aircraft manufacturers predicting they would never be practical since their fuel consumption was so high. However, turbojets took a big leap forward as a result of World War II and the invention of radar. Before radar was able to detect incoming bombers, large prop planes would patrol the skies for long periods ready to attack—thus fuel efficiency was a matter of life and death. But with the introduction of radar, planes could sit on the tarmac and only needed to take off when a bomber was detected; as a result, take-off speed and flight speed replaced fuel efficiency as critical concerns. The turbojet filled this niche perfectly.30 Significant money was spent by Britain’s Royal Air Force to research and develop turbojets, which led to rapid prototyping, testing, and deployment. Post-WWII, the American government dumped surplus aircraft into the commercial market at drastically reduced prices, which created a boom in commercial air travel.

But turbojets were still not an initial commercial success. Military turbojets flew very little and sat stationary most of the time. Commercial jets would be flown almost continuously to maximize profit, and therefore military jets were not well suited to commercial use, as many components broke quickly and materials wore out within a few months, leading to accidents and expensive repairs. In 1945, the major airlines created an international cartel to keep commercial prices artificially high, which gave airlines a buffer to purchase airplanes at high upfront costs. These bigger planes often flew only half-full of passengers, and airlines were losing money. Thus, they had to introduce new market mechanisms like economy class tickets to attract passengers and turn a profit with these bigger planes (similar to how large nuclear plants need to run 24/7 to be profitable). The high speeds of turbojets required longer concrete runways to replace grass fields. The noise of such aircraft required them to fly at high altitudes, which meant cabins now had to be pressurized.


Fly-By-Wire

The Airbus A320 was the first airliner to fly with an all-digital fly-by-wire control system, a technology originally developed for military aircraft and used on experimental space shuttle flights. Digital fly-by-wire technology essentially enables planes to be flown by computers. The workload for pilots is simplified and reduced, and many adjustments are made automatically. This has reduced the weight and complexity of mechanical control systems and has also improved the safety and performance of airliners, as it reduces human error.
 

Boeing 787 Dreamliner

In the late 1990s, Boeing was investigating new airplanes to offset the sluggish demand for their 767 and 747-400. Initially, they were aiming for much faster airplanes (Mach 0.98), but after the 9/11 attacks most airlines were focused on reducing costs. In 2003, Boeing announced the development of a radically more fuel-efficient wide-body aircraft that would be available in 2007.

The main innovation that Boeing took advantage of for the 787 was replacing aluminum with composite materials in many of the plane’s components, most importantly the wings. These composite materials allowed for wing shapes with superior aerodynamics not possible with conventional materials. These innovations combined to reduce fuel use by 70% and reduce noise footprint by 90%. Noise might not seem like a big deal, but it affects which airports the planes can use and what speeds they can fly when taking off and landing. The improved fuel efficiency would not only reduce costs for airlines, but also greatly extend the range of passenger routes and increase the load that cargo planes could carry.

The total program cost $32 billion,31 but it will take time for Boeing to realize a full return on this investment. The very first orders for the 787 were placed in 2004 by a Japanese airline, but the aircraft suffered serious delays in production, meaning the first planes weren’t delivered until 2011, four years late. The main cause for the delay was Boeing’s global distribution chain, where subcomponents are manufactured around the world and flown to Everett, Washington, where they are assembled. This process was thought to be revolutionary and was expected to dramatically reduce assembly time, but has proven the opposite. The 787 is assembled from subcomponents manufactured in Japan, Italy, South Korea, the United States, France, Sweden, India, and the United Kingdom. This should serve as a warning for the nuclear industry proposing a similar supply chain structure for large modular reactors like the AP1000. A single 787 costs approximately $224 million.

Below is a chart of the orders placed for Boeing 787s and actual deliveries. Boeing has delivered a total of 318 aircraft as of 2015.


Airbus A380

As mentioned above, Airbus began collaborating with Boeing on a very large aircraft in the early 1990s, before Boeing left the partnership. Airbus—a European consortium of aircraft manufacturers—continued with the development of what would become the A380, the world’s largest commercial aircraft. Airbus spent the rest of the ’90s exploring options for their next aircraft and performing hundreds of focus groups with airlines and passengers. In 2000, Airbus officially announced the start of a $10 billion program to develop the A380, with 50 firm orders from six airlines. The A380 can accommodate 853 passengers and has 40% more floor space than the next largest aircraft, Boeing’s 747.

Airbus employs many of the same innovative materials and technologies as the Boeing 787, although developed independently. Unique to the A380 is a central wing box made of composite material and a smoothly contoured wing cross section.32

However, development of the A380 suffered many delays that reduced its economic viability and allowed Boeing to gain market share. Flight tests began for the first A380 in 2005, but trouble with wing failure meant that the design wasn’t certified until 2007. Production delays occurred due to the extremely complex wiring involved (over 500 kilometers of wiring in each aircraft), and the differing wiring standards between German, Spanish, British, and French component facilities. Because of the large size of the A380, an extremely specialized supply chain was developed (shown below) to allow movement of the gigantic subcomponents by barge, whereas Boeing can fly subcomponents by its own planes.
 


 

The first completed A380 was delivered to Singapore Airlines in 2007, but Airbus has so far delivered only 169 A380s. While no longer operating at a loss, they do not think they will ever recoup the full investment cost.33 Each A380 has a retail cost of $450 million.
 

Lessons Learned for Nuclear

Modern wide-body aircraft encompass a diversity of innovations, both technological and in terms of innovative practices in manufacturing, supply sourcing, designs, and pricing. However, the predominant theme is that these innovations were driven by customer demands, whether from airlines or passengers. In an aggressive market with a strong duopoly, the major aircraft manufacturers were constantly looking for a way to gain advantage. Innovation was targeted at reducing costs for the major expense for airlines: fuel. New designs allowed higher profits for airlines along with greater flexibility in routes and longer ranges.

Economies of multiples proved much more important than economies of scale. Airlines had to find the right-sized aircraft to strike a balance between economies of scale and business flexibility. However, large planes came with many challenges. Most importantly, it takes aircraft manufacturers hundreds to thousands of airliner units before they recoup the cost of investment in a new design (and, in Airbus’s case, they may never recoup the cost). But Boeing and Airbus plan for this and structure the retail cost of the aircraft accordingly. Airlines are willing to pay a premium for the first deliveries of a new aircraft to please customers. In addition, developing a robust supply chain takes a large and consistent demand for aircraft. For example, Airbus sold 626 jetliners in 2013, and currently has over 13,000 standing orders across their four major designs.34
 

The lessons that the nuclear industry can learn are simple but not easy.


Both Boeing and Airbus had firm orders from airlines from the very early stages of aircraft development. This allowed them to receive feedback on what customers wanted as well as to work on a strict timeline for delivery (they both failed to deliver on time, but they delivered extremely fast compared with the nuclear industry). Airlines are willing to place orders far in advance for two reasons: they trust the reputations of the manufacturers to deliver, but more importantly, they are competing with all other airlines to offer the newest, most efficient airliners. For the nuclear industry to develop this level of advance orders, several major changes would need to occur. First, reactors, or at least major components, would need to be factory produced and sold at a fixed price. Second, utilities would need a guaranteed delivery date for their orders. Currently, nuclear projects are consistently far behind schedule and over budget, which doesn’t allow utilities to adequately plan for future supply.

Globalizing and diversifying the supply chain proved challenging and led to delays, even for the two giants of the aircraft industry, Boeing and Airbus. Large components and novel materials required entirely new manufacturing facilities that had to be built from the ground up around the world. The nuclear industry should take particular note of this experience. China, for example, is trying to indigenize the entire supply chain for its reactor, the CAP-1000.


Urbanization in the 21st Century

First, some numbers. The productivity of workers was 60% higher in urban Uganda than in rural Uganda, over 150% higher in Bangalore than in India as a whole, and more than three times as high in Shenzhen as in the rest of China. The same pattern holds in the developed world as well, with higher per capita GDP in London and Paris compared to the rest of Britain and France. In the United States, the 100 largest metro regions are about 20% more productive than the country’s other metro regions.

Cities have many inherent advantages. Industries tend to concentrate in cities, particularly dense coastal cities, because dense cities facilitate easy trade of goods. Dense cities also facilitate the exchange of skilled workers and ideas. These benefits are called agglomeration economies. As countries modernize their agricultural systems and require less rural labor, people cluster in cities, and this is a key trend driving world urbanization. According to the United Nations, the world became over half urban around 2007, and is projected to be 66% urban by 2050.

But even as urbanization raises standards of living, it raises questions about quality of life. Messy urbanization can even lead to cities without growth. In rapidly growing cities with excessive regulation on building, such as Mumbai, growth is strangled, prices become unaffordable, and residents suffer from overcrowding. If infrastructure is poor, residents of dense cities will also suffer from disease, crime, and traffic congestion.

There is no simple solution to the challenges of urbanization, but growing cities in the developing world must take them head on. To thrive, cities must attract entrepreneurial talent and develop modern infrastructure. To do these things, they must develop strong and honest government.

While Glaeser and Xiong focus on cities in the developing world, there are lessons for advanced economies as well. Many of the largest and most productive cities in the developed world have developed restrictive land-use regulations that prevent people from moving to them. If more people in the United States could move to San Francisco or New York, for instance, this would boost economic activity because both cities have very productive economies. Chang-Tai Hsieh and Enrico Moretti estimate that if land-use restrictions were lifted and people could move freely between cities in the United States, the country’s GDP would be boosted by 9.5%, generating $1.5 trillion of new wealth each year.

As modern cities create wealth, there is the risk that they also concentrate wealth to a dangerous degree. Relaxing land-use restrictions in the most productive cities can relieve pressure somewhat by allowing more people to move in, but it is not realistic to imagine that the entire United States, for instance, will move into a few large cities. Contemporary urbanization has not just concentrated wealth, but concentrated media and quite possibly contributed to the growing cultural divides that many countries face. Whatever one thinks about Brexit or the election of Donald Trump, these events reflect a dangerous disconnect between urban and rural that can no longer be ignored.

Former Denver Mayor Wellington Webb said, “The 19th century was a century of empires. The 20th century was a century of nation states. The 21st century will be a century of cities.” Ensuring people can thrive in the cities of tomorrow will prove one of the major challenges of this century.

Blue Growth: The Need for Fish Farms

There is no question that any major increase in global fish production will need to come from aquaculture. As Chris Costello and I estimate in a recent paper, global fisheries only have the capacity to increase production by 14-17%, by reducing pressure on overfished stocks and increasing pressure on underexploited ones. Other options for increasing fisheries’ yields—such as harvesting species like krill and mesopelagic fishes that are not now fished—are not economically viable today.

Of course, as Marian Swain mentions in her excellent essay on the issues involved in the expansion of global aquaculture, all forms of food production come with environmental costs, including land transformation, greenhouse gas emissions, freshwater depletion, overuse of antibiotics, soil erosion, nutrient and acid pollution, and biodiversity losses. Both aquaculture and capture fisheries have come under great scrutiny for their performance across some of these categories.

But when looking at the environmental costs of catching and farming fish, it is essential to compare them with alternative modes of producing protein. As discussed in Swain’s article, most capture fisheries and aquaculture systems have considerably lower environmental costs than the livestock sector. Only by using livestock as a reference point—and the extensive habitat disruption it entails—can we fully account for the comparative environmental impact of fish production. Increased animal protein production can come from more pasture and more deforestation or from more aquaculture—that is the comparison that needs to be considered.

From a human health perspective, there is also much to be said for fish and the high concentration of micronutrients they contain, which is difficult to find in other sources of protein. Omega-3 fatty acids are well known, but fish also contain large amounts of calcium, zinc, selenium, iron, and vitamins A, D, and B12. There are differences, naturally, between kinds of fish and modes of production, but in general, there is growing evidence to suggest that fish should form a significant part of our diets. This, in turn, will require producing more fish.

When it comes to how that fish should be produced, there are some elements that Swain’s article does leave unexplored. With regard to energy use, for instance, Swain rightly points out that the greenhouse gas emissions of aquaculture systems depend greatly on how energy is sourced, but systems and species also vary widely in terms of their energy demand. The energy required to produce feed is particularly significant, which is why farmed salmon, increasingly fed with waste products from fisheries, have the potential to perform well on this metric. Even better, farmed mollusks—especially mussels, clams, and oysters—require no feed, freshwater, or antibiotics, and use very little energy compared with recirculating systems. Their yield per hectare is also astonishingly high.

When considering the future of fish production, the environmental benefits of such systems are something to keep in mind. In the words of a favorite poster produced by the US Food Administration during World War I, “Save the products of the land. Eat more fish—they feed themselves.” 

Commercial Spaceflight

Commercial Spaceflight

In this case study: 
 


In the early 2000s, NASA faced a challenge similar to that of today’s nuclear industry.

The space shuttles were set to retire in 2011, ending America’s only method for moving people and cargo into space and to the International Space Station. The shuttle program was also not considered a great success; it had proven far more expensive than originally planned, each shuttle flew less regularly than hoped, and there were two catastrophic failures (Challenger and Columbia), each killing seven astronauts. A successor program was started to replace the shuttles, but funding was limited. In 2004, the White House announced a major shift in NASA’s mission: it was now to support the development of commercial spaceflight, primarily to service the International Space Station. While this announcement was made in parallel with a new NASA program to develop its own successor to the shuttle program, in 2010 President Obama asked Congress to focus funding on the commercial programs.

The new Commercial Orbital Transportation Services program (COTS) dramatically shifted NASA’s policy toward the aerospace industry, changing the way services were contracted, regulated, and funded. Ultimately, the program was a success. Spaceports popped up across the country, and the United States went from zero private orbital launches in 2011 to more launches than Russia and Europe combined in 2014. As anticipated, the competition between private firms reduced the cost of launch services so much that the market has grown significantly, and the possibility of space tourism is becoming a reality, opening up a lucrative new market for US aerospace firms. In 2014, the commercial space launch industry had estimated revenues of $1.1 billion. Both Boeing and SpaceX, recipients of several COTS contracts, are scheduled to conduct human flights with NASA astronauts in 2018.

There are important lessons in the recent history of commercial spaceflight for the growing advanced nuclear industry. Like NASA, the Department of Energy (DOE) will need to shift its mission to explicitly support private-sector innovation. And beyond providing advice and expertise, federal research and demonstration priorities should take explicit guidance from the nascent advanced nuclear industry in exchange for clear requirements for inter-firm collaboration.
 


Read more from the report: 
How to Make Nuclear Innovative

 



History of the Commercial Spaceflight Industry

Since its founding in 1958, NASA has maintained strong, centralized control over research and development while leaving final design and fabrication to a large network of private contractors such as Boeing, Lockheed Martin, and Northrop Grumman. The wide geographic spread of NASA research facilities and contractors has helped sustain public support for federal spending on the agency. NASA programs have long doubled as a means to bring jobs and investment to poorer regions of the country.1 As late as 2014, almost two-thirds of US adults agreed that the International Space Station was a good investment of federal money.2

When Ronald Reagan took office as president in 1981, his goal of reducing the federal budget coincided with efforts inside NASA to explore commercial opportunities for the space agency to invest in.3 In 1984, NASA changed its charter to encourage commercial spaceflight, but large barriers to entry remained, and there wasn’t a serious market demand. It was not until the Commercial Space Act of 1998 that the US government really started investing in the innovation network for commercial spaceflight and the industry began to take off. The act promoted commercialization across the space industry: the International Space Station, the creation of space ports outside of Florida, and more private launch services.4

The space shuttles were scheduled to retire in 2011, and the United States would need to develop a new vehicle for delivering crew and cargo to the International Space Station. The shuttles had not been an overwhelming success, with each mission exceeding a half-billion dollars,5 two major failures, and significantly fewer launches than expected. Without the shuttles, NASA would have to pay Russia approximately $70 million per crew member to fly to the ISS.

In 2004, President George W. Bush laid out a new Vision for Space Exploration policy, which called for increased funding for a comprehensive reinvigoration of the space program, to include completing the ISS, replacing the space shuttle, returning humans to the moon by 2020, and eventually landing humans on Mars. Congress funded the resulting Constellation program with $16 billion,6 but there was opposition to it from the beginning, and that opposition grew when the Obama administration came into office. By 2009, in the midst of the recession, the appetite to fund such a program had evaporated, and eventually the Vision for Space Exploration program was canceled.

The Commercial Orbital Transportation Services program, however, upended the old model of spacecraft development in 2008. The policy innovation was to ditch the model of government procurement, a move that had bipartisan support.7 This shift was implemented through a series of competitive Space Act Agreements to demonstrate commercial launch vehicles and sign contracts to deliver cargo to the International Space Station.8 NASA put the entire development process in the hands of private companies—it thought that smaller firms would be more innovative and able to deliver significantly cheaper launch services than the large, incumbent aerospace firms.9 These funded agreements offered much more flexibility to companies than traditional contracts. One of the most important aspects of these programs was that NASA didn’t down-select technologies; it awarded funding to any company that met a predetermined set of criteria, and it allocated funds based on need, not technology. NASA was also explicitly trying to develop a robust commercial spaceflight industry, not solely to find a single company to deliver cargo. On this basis, NASA also awarded non-monetary agreements, which provided companies advice and consulting from experienced NASA engineers.

Through this effort, NASA encouraged collaboration between companies to identify shared challenges, such as educating the investment and insurance communities and developing a customer base for non-NASA orbital transportation services.10 It also helped match companies with vendors, and stimulated entrepreneurship along the supply chain, by hosting events that brought applicants and suppliers together under one roof. NASA’s status as a disinterested third party played a key role in allowing different companies to come together.
 

The United States went from zero private orbital launches in 2011 to more launches than Russia and Europe combined in 2014.


What was unique about this new mission for NASA was that it wasn’t just giving funding to procure or contract commercial services for cargo transportation; it was tasked with fostering an entire commercial spaceflight industry from the ground up. It knew the barriers to entry were going to be large, and that previous aerospace contractors had been colossal manufacturing firms that may not have been able to develop new designs quickly to meet NASA’s needs (which sounds similar to today’s large nuclear firms). But a further motive was that NASA thought that commercial spacecraft could open up a large private market beyond the International Space Station—launching communications and monitoring satellites, for example. NASA wanted to make sure that American companies could take advantage of its experience in spaceflight to be successful in this new market. More importantly, NASA felt that its mission should be on the cutting edge of space travel and research, and that it should find cheaper (and private) ways to perform routine tasks like delivering people and cargo to the ISS and launching communications satellites.11

Before COTS, there already was a fledgling private spaceflight industry, thanks to the launch of the X Prize Foundation in 1996, which offered a $10 million prize to the first team that could launch two people (or an equivalent weight) to a certain altitude, return to Earth, and repeat the launch within two weeks (i.e., successfully launch a reusable craft). The teams that competed for this prize had over $100 million in private investment. NASA wanted to encourage this industry, while also helping it to mature and meet the quality standards needed for transporting humans into space. As former NASA administrator Michael Griffin explains, “Broadly speaking, the market for space services has never enjoyed either the breadth or the scale of competition which has led, for example, to today’s highly efficient air transportation services. Without a strong, identifiable market, the competitive environment necessary to achieve the advantages we associate with the free market simply cannot arise.” Similarly with nuclear, there are many private companies working on advanced designs, but it’s not clear there will be a strong market for them.


Figure Source: GAO. Commercial Space Launch Industry Developments Present Multiple Challenges (2015).
 

Nuclear and Spaceflight: An Industry Comparison

As with the commercial aviation industry in the United States, the government had to step in to support the commercial spaceflight industry and return the United States to its former preeminence. Both in aviation and rocketry, the United States was an early leader, but lost commercial dominance to European and then Asian competitors.12 But carefully crafted federal support in the form of defense contracts, procurement, and R&D investment has returned the United States to dominance in commercial launch services. Perhaps the same success could be repeated in nuclear power. Governments around the world are still some of the largest customers of payload launch services (mainly for communication and other monitoring satellites). However, NASA hoped that by spurring a commercial launch industry, costs would decline enough to disrupt the market, opening up launch services to much smaller payloads and customers and eventually creating a space tourism industry.
 

Major Setbacks in Innovation

The failures of NASA’s state-led space shuttle program fostered a bigger push into private launch services. After the loss of space shuttle Columbia during its return to Earth, the public’s impression of the shuttle program was severely affected, and the decision to extend the lives of the shuttles multiple times came under intense scrutiny.13 While commercial spaceflight is a relatively young industry in the United States, it has seen its share of failures. But unlike large, public space programs, commercial spaceflight sees failure as a sign of healthy risk-taking and competition.

NASA originally awarded COTS Phase 1 funding to two companies in 2006: SpaceX and Rocketplane Kistler (RpK).14 However, RpK failed to meet private fundraising milestones, and NASA terminated their contract in 2007. In 2010 RpK filed for bankruptcy. But the failure of RpK freed up NASA to sign a COTS Phase 1 agreement with Orbital Sciences instead in 2008.  

The very first launch attempt for SpaceX’s Falcon 1 in 2006 suffered a fire from a fuel leak that destroyed the rocket and payload. The second launch attempt of the Falcon 1 more than a year later was successful, but the payload failed to reach the required orbit. The third attempt of the Falcon 1 in 2008 also failed after the spent first stage collided with the second stage during staging.15 In June 2015, a SpaceX Falcon 9 rocket exploded two minutes into the flight, destroying the entire capsule full of supplies destined for the ISS. In September 2016, a SpaceX Falcon 9 rocket exploded during a prelaunch test, destroying the $200 million communications satellite payload.

Other companies also suffered failures. In 2007, a test on some of the engine systems for Scaled Composites led to an explosion that killed three employees.16 A 2014 accident during a test flight of a Scaled Composites launch vehicle resulted in the death of the copilot.17 Also in 2014, an Orbital Sciences rocket exploded on launch, destroying 5,000 pounds of cargo headed to the International Space Station. This launch would have been the third of eight contracted deliveries by Orbital Sciences to the ISS.18

Despite hyped promises of radically cheaper payload delivery, the first mission to the ISS for SpaceX ended up costing about the same per pound as NASA’s own space shuttle, between $9,000 and $27,000 per pound of cargo,19 although that figure is already out of date as the price is declining sharply. Elon Musk claims that their newer rocket, the Falcon Heavy, can deliver cargo to orbit at a cost of $2,000-$3,000 per pound.20 Even SpaceX’s main competitor, United Launch Alliance (ULA), has come to admit that they cannot compete with SpaceX on price. Launch services from SpaceX started at $60 million in 2015, with ULA charging the military $125-$200 million per launch.21
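Those per-pound figures are extremely sensitive to how much cargo is actually manifested on a given flight, which is one reason the quoted numbers span such a wide range. The toy calculation below illustrates the arithmetic; the cargo masses are hypothetical placeholders, and only the $60 million launch price comes from the figures quoted above.

    # Toy calculation: delivered cost per pound of cargo.
    # Cargo masses are hypothetical placeholders, not actual mission figures.
    def cost_per_pound(launch_price_usd, cargo_lb):
        return launch_price_usd / cargo_lb

    # A notional $60 million launch delivering 10,000 lb of cargo:
    print(cost_per_pound(60e6, 10_000))   # 6000.0  -> $6,000 per pound
    # The same launch delivering only 3,000 lb to the station:
    print(cost_per_pound(60e6, 3_000))    # 20000.0 -> $20,000 per pound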

Despite the success of SpaceX in returning the United States to dominance in commercial launch services, the Federal Aviation Administration (FAA) continues to overestimate the number of commercial launches it expects to review each year, suggesting the industry is growing slower than expected.22


Role of the State

In the early days of spaceflight, the high cost, complexity, and military importance of rocketry kept the industry largely in the hands of government. And the Cold War drove the United States to invest heavily in spaceflight as a means to compete with the Soviet Union. When NASA’s funding peaked in 1966, it was 3.8% of the total federal budget.23 But even as a consumer, the role of the US federal government was significant. Historically, the United States has manufactured 70% of the world’s satellites annually (by revenue), and 75% of those satellites are built for the US government. The United States is therefore by far the largest customer for payload launch services.24 And for most of America’s history in spaceflight, the government was the sole entity launching both public and private cargo into orbit. In 1986, in response to the Challenger accident, NASA decided to no longer fly commercial payloads on the space shuttle. American companies were already launching commercial payloads, but they were starting to lose out to European and Asian competitors.25

The Commercial Space Act of 1998 set broad goals for the development of commercial space activities, including the promotion of more spaceports. But it wasn’t until 2004 with President Bush’s US Space Exploration Policy, which set targets for space exploration and commercial development, that commercial spaceflight truly boomed. And in 2005, NASA Administrator Michael Griffin allocated $500 million to launch the Commercial Orbital Transportation Services (COTS) initiative,26 which sought to create an innovation support network for commercial spacecraft, as well as fund demonstrations and then procure transportation services for the ISS.

This program had two phases. In Phase 1, NASA offered prizes to any team that could demonstrate specific goals, like delivering cargo to low Earth orbit. The purpose of this phase was to stimulate interest and invest in teams that could then compete in the next phase. Over 20 companies submitted proposals, and NASA awarded several Space Act Agreements (which differ from contracts in that they allow flexibility for failure). Only two of these agreements were funded, with SpaceX and another company sharing $485 million. The other agreements were unfunded, but NASA offered technical assistance and facilitated the teams’ demonstrations.27 Phase 2 consisted of a competitive contract, in which teams submitted designs for rockets that could deliver cargo to the International Space Station.
 

NASA was tasked with fostering an entire commercial spaceflight industry from the ground up.


Building on this program, in 2010 NASA announced the Commercial Crew & Cargo Program Office (C3PO), which offered a series of competitive grants for companies to develop vehicles to transport crew to the ISS. The first round granted $50 million to five American companies for research and development of potential spacecraft designs (although NASA received proposals from almost 40 companies). The second round awarded $270 million to four companies to further develop and demonstrate their designs, and the companies had to meet a strict timeline of milestones to receive their funding. Three additional proposals were selected without funding, which meant that NASA would consider them for services in the future, but the companies would have to develop their technology with private capital. Even so, selection by NASA bestowed a level of approval that helped these companies raise funding.

For the third round, NASA took proposals and awarded three agreements for complete designs, which had to include spacecraft, launch vehicles, launch services, ground and mission operations, and recovery. These agreements totaled over $1 billion, and all three companies are more than halfway through their 18 designated milestones. In parallel, NASA began a product certification program to develop engineering standards, tests, and analyses of these three companies’ system designs.

Now NASA is working with just two companies, Boeing and SpaceX, to demonstrate all the technologies needed to deliver their first human payload to the ISS, originally scheduled for 2017 but pushed back to 2018 for both companies.28 While the funding for this entire program was quite high compared to nuclear industry budgets, it offers a useful model of slowly increasing the difficulty and complexity of proposals, winnowing an original 40 applicants down to two ultimate contract awards. NASA’s effort was also aided by ancillary private prizes like the Ansari X Prize, the Google Lunar X Prize, and the Northrop Grumman Lunar Lander Challenge. These prizes generated a great deal of excitement and investment from small companies, but it was NASA that ultimately provided the market for these space services. What these companies were able to develop and demonstrate in just a decade is remarkable.

Along with the awarded agreements, NASA’s C3PO developed systems for knowledge sharing, setting up conferences for proposal applicants to meet with equipment contractors to develop industry standards, share best practices, and build out a supply chain.29 They also helped develop the legal framework for these private companies. NASA had experts in intellectual property, procurement, and commercial law create the structure of the COTS program,30 and NASA officials provided guidance to COTS applicants on the regulatory process.
 

Importance of Intellectual Property

As the spaceflight industry has shifted from public to private in the United States, there has been a noticeable rise in patenting.31 One of the major appeals of the newly created Space Act Agreements was that partners would broadly retain intellectual property rights.32


Regulator Comparison

The birth of the commercial spaceflight industry coincided with President Ronald Reagan's executive order that, among many other things, designated the Department of Transportation (DOT) as the lead agency for enabling commercial launch capabilities.33 The DOT delegated responsibility for regulating commercial spaceflight to the FAA.

The FAA’s Office of Commercial Space Transportation oversees and regulates many different aspects of commercial spaceflight, but primarily licenses launch activities. The FAA is responsible for (1) reviewing applications for experimental permits, (2) licensing launch and re-entry operations and sites (spaceports), and (3) completing safety inspections of all licensed and permitted launch activities. Finally, the FAA is also charged with promoting commercial spaceflight, although some contend that this role is better played by the Department of Commerce. For launches that fulfill Space Act Agreement milestones, NASA certifies the vehicles, while the FAA licenses the launch.34

Most notably, the Commercial Space Launch Amendments Act of 2004 set time limits on the FAA review period for applications, requiring the FAA to make determinations on launch license applications within 180 days and decisions on experimental permit applications within 120 days.

After the passage of the Commercial Space Launch Amendments Act in 2004, the DOT also delegated regulation of space tourism to the FAA. However, the act prohibited regulation of the safety of crew and other space tourism participants during a learning period that originally extended to 2012 but has since been extended to 2023.35

The federal government also indemnifies FAA-licensed launches against accidental and catastrophic damage. Federal indemnification was initially seen as necessary to keep costs low and keep the United States competitive, as China, France, and Russia also offer indemnification.36 However, the federal indemnification program could be replaced with an industry-wide pooled insurance program in the future. The need for such protection is already evident: an explosion during an Orbital Sciences rocket launch in 2014 caused $13-$15 million in damage to the launch pad.
 

Major Innovation Success Stories

While SpaceX stands out as the rock star of the commercial launch sector, federal policies have proved successful in stimulating the commercial spaceflight industry across the board. The United States went from zero commercial launches in 2011 to more launches in 2014 than all other countries combined.37 This growth is largely dominated by SpaceX, which now provides launch services more cheaply than foreign competitors. NASA has signed Space Act Agreements with a half-dozen companies and has provided smaller research grants to develop specific subsystems. As of 2016, there were ten commercial spaceports licensed across the country.

SpaceX began developing rockets in 2001 using its own money. In 2005, the company announced that it would develop a new rocket design, the Falcon 9, which went on to win an award from NASA in 2006 to complete development and demonstration. The original agreement with NASA set a deadline of 2008 for the first demonstration flight and 2009 for completion of all three demonstration missions. There were various delays, but SpaceX completed all required demonstration launches by the end of 2010. NASA then signed an additional agreement with SpaceX that was fulfilled in 2012 with the successful berthing of a Falcon 9-launched Dragon capsule with the International Space Station. In short, SpaceX went from announcing the development of a new rocket to berthing with the ISS (the first private spacecraft to do so) in seven years, and it took only four years from announcement to first launch.

The successful delivery of cargo to the ISS by SpaceX was not only important because it was the first private spacecraft to do so, but also because it signified an entirely new paradigm in spacecraft. The initial cost of payload delivery, roughly $9,000 per pound, was comparable with what NASA’s space shuttle had achieved. But the real disruption came from smaller spacecraft and modular engines on the rockets, allowing for faster learning and development.
 

The successful delivery of cargo to the ISS by SpaceX signified an entirely new paradigm in spacecraft.


For the past few decades, the launch vehicles available to commercial customers have been very large and offered infrequent launches. Therefore, satellites evolved to be very large to take advantage of the full launch payload, and to ensure long-lived satellites that would not need repairs.38 This kept many smaller satellite developers out of the industry as they could never afford the large launches.

One of the most potentially disruptive innovations in commercial spaceflight is SpaceX’s development of reusable first-stage vehicles. Reusability could influence all aspects of commercial spaceflight, and other companies, and even other countries, are beginning to duplicate SpaceX’s innovation.39 Another important technological innovation is SpaceX’s reliance on modular, off-the-shelf engines for its launch vehicles. In the past, many rockets had their own unique engine or set of engines, which consumed much of the development funding and time.40 SpaceX chose to design its Merlin engine to power both the first and second stages (with a slight modification for the second stage). The company also made a conscious decision from the start to design an engine that could fly aboard different rockets in different configurations, and opted to use the same general design for multiple stages. This has allowed SpaceX not only to reduce development and production costs, but also to build up manufacturing and operations experience much faster. Each Falcon 9 rocket uses ten engines: nine on the first stage and one on the second.41

The fly-by-wire technology that revolutionized commercial aircraft was originally developed for spaceflight. Now, more automated systems and controls developed in aviation are coming back to launch systems. Notably, in most of the launches licensed to date, flight abort decisions were made and implemented by humans.42 Many companies are working to develop instruments and systems that can detect and correct problems without human intervention, reducing the risk of catastrophic failure. The FAA will need to figure out how to license these novel systems.

Lastly, while the demand for launch services is still relatively small today, the drastically lower price to launch may open up new markets for these services. In particular, the cost to launch a person into low Earth orbit is reaching a point where space tourism might finally become a reality.


Lessons Learned for Nuclear

NASA has been able to stimulate intense private-sector activity through a combination of prize funding and network building (both between public and private institutions and between private and private). At least a priori, there is no reason why today’s nuclear-focused public institutions shouldn’t do the same.

The commercial space industry also benefited from favorable regulation, particularly a moratorium on regulation for its first ten years (later extended to twenty years).43 The federal government also provides indemnification against catastrophic accidents exceeding private insurance, up to $500 million per launch. Both of these policies were justified because commercial spaceflight is a young industry and was thought to be uncompetitive with Russian, European, and Chinese firms that are heavily state-supported.
 

The nuclear industry should look to NASA, which was able to stimulate intense private-sector activity through prize funding and network building.


Like NASA, the Department of Energy (DOE) will need to shift its mission to explicitly support private-sector innovation. Beyond providing advice and expertise, federal research and demonstration priorities should take explicit guidance from the nascent advanced nuclear industry in exchange for clear requirements for inter-firm collaboration and publication of results and technical data from publicly funded research and demonstration projects.

DOE should also, as NASA has done, identify explicit innovation targets and provide staged public support to companies that can demonstrate viable technologies to meet them. This approach should apply both to the supply chain, whether forging advanced steels or developing novel fuel manufacturing capabilities, and to funding prototypes and demonstrations. Establishing key performance criteria and making public funding contingent on meeting those criteria can incentivize multiple firms to take multiple approaches to solving technical challenges and experimenting with novel combinations of fuels, coolants, and materials.

The authors would like to thank Prof. Per Peterson for first getting us thinking about commercial spaceflight as a useful case study for nuclear, and Joe Mascaro for his very helpful feedback and insights.

Responses: Plenty of Fish on the Farm

As part of Breakthrough’s Future of Food series, we have invited experts on food, farming, livestock, and resource use to respond to and critique our research essays. We hope this will be the starting point for an inclusive, productive, and exciting new conversation about 21st-century food systems. You can read the responses to Marian Swain’s essay on next-generation aquaculture below.

 

The Value of Sector-Wide Diversity
By Dane Klinger

Aquaculture is a diverse industry, and our capacity to maximize its potential will likely increase with an equally diverse approach to development. Concurrent research and development of a wide range of technologies will allow aquaculture to thrive across a range of environments, adding to the resiliency of the sector as a whole.  

 

The Ghosts of Aquaculture’s Past Haunt Its Future
By Kim Thompson

Public discourse around aquaculture tends to focus on poor practices and mistakes of the past rather than on improvements and potential for the present and future. It is imperative that we focus on the innovation available to move this industry forward, rather than dwelling on the past and missing out on opportunities to ensure a healthy and sustainable food future.

 

Blue Growth: The Need for Fish Farms
By Ray Hilborn

There is no question that any major increase in global fish production will need to come from aquaculture. It is true that all forms of food production come with environmental costs, but fish outperform livestock on many metrics of environmental impact. They also come with significant health benefits. Looking forward, farmed mollusks, which require no feed, freshwater, or antibiotics, may offer unique opportunities for capitalizing on these gains.

 



 

The Ghosts of Aquaculture’s Past Haunt Its Future

There are a number of reasons to support aquaculture’s future, many of which are touched on in Marian Swain’s essay. Strong scientific evidence suggests that seafood can be produced with fewer environmental impacts than land-based animal protein. The same holds true regarding seafood’s role in improving heart and brain health, as the only source of long-chain omega-3 fatty acids. Because wild-capture fisheries have not been able to keep pace with demand for the last 30 years, aquaculture has stepped in to fill this gap. Looking forward, marine aquaculture offers a particularly promising tool for feeding an expected 2.5 billion more people in coming years, while reducing our reliance on limited resources like freshwater and land, combating climate change, and alleviating hunger and poverty.

And yet, public discourse around aquaculture tends to focus on poor practices and mistakes of the past rather than on improvements and potential for the present and future. Like land-based agriculture, aquaculture comes in various forms, with different associated benefits and impacts. From the smallest subsistence farm to the largest commercial farm, from farms producing filter-feeding shellfish to carnivorous finfish, the science and technologies exist to produce healthy, environmentally responsible seafood. No matter what system is used or species produced, there are proven, science-based strategies that can be adapted for specific farm types to support environmentally responsible aquaculture production. These include:

  • Appropriate siting to ensure maximum production efficiency with minimal environmental impact and conflict with other uses
  • Science-based best management practices designed to keep fish, surrounding ecosystems, and people healthy
  • Collaborative, zone-based management that can reduce cumulative impacts resulting from the shared use of watersheds with neighboring farms
  • Science-based monitoring and adaptive management

Successful implementation of these approaches requires access to appropriate technologies and expertise as well as effective management and enforcement. These may be implemented by governments, communities, industry, or some combination of stakeholders, depending on the infrastructure and resources available. Many negative examples of aquaculture today, as with any form of agriculture or wild-capture fishing, are the result of poor execution or lack of access to any or all of these elements.

Offshore marine aquaculture should be an area of particular interest going forward, as it will form an increasingly important part of the protein portfolio for a growing human population. We know it can be done responsibly and that it can supplement well-managed wild-capture fisheries to provide a year-round, affordable source of healthy protein. What is lacking, on the other hand, is political will and public support. Public perceptions toward marine aquaculture, particularly offshore aquaculture, tend to be more negative in developed countries like the United States. Reluctance to embrace and support the growth and expansion of marine aquaculture has resulted in a great loss of opportunity for domestic food security, conservation, and economic support for communities and working waterfronts.

Feeding the future will require a diverse portfolio of innovative solutions from land and marine sources. Marine aquaculture is an important conservation tool that we have the science, expertise, and technologies to implement responsibly. It is important to acknowledge and take lessons from mistakes of the past for aquaculture if we are going to be successful in leveraging the technology to produce food without unacceptable impacts, but we cannot move forward if that is the only foundation from which we continue the discussion. It is imperative that we discuss marine aquaculture in the context of the global food supply and focus on the innovation and advancements available to move this industry forward, rather than dwelling on the past and missing out on this opportunity to ensure a healthy and sustainable food future. 

The Value of Sector-Wide Diversity

As Marian Swain argues in her recent essay on the future of fish farming, open-ocean aquaculture and recirculating aquaculture systems (RAS) are certainly two of the most exciting and powerful emerging systems with the potential to expand the sector into new frontiers. But aquaculture is also a diverse industry, one that grows a wide range of species (over 580 as of 2014, from algae to shellfish to finfish species), occurs across a range of ecosystems (from alpine ponds to the open ocean), and can utilize a multitude of inputs (from naturally provided ecosystem services to artificially produced compounds).

The capacity of society to utilize the full potential of aquaculture—to leverage its ability to provide large amounts of healthy food with minimal environmental impact—will likely increase with an equally diverse approach to development. Because aquaculture takes place across such a broad spectrum of locations with different levels of ecosystem services and resources, a one-size-fits-all solution will likely fall flat.

As a result, research and development should continue across a multitude of different technologies and production systems. Specifically, technology development should focus on harnessing the comparative advantages of distinct locations and methodologies to produce the greatest amount of seafood given the resources available and the state of surrounding ecosystems.

The list of technologies and approaches that can be further developed to help optimize production in site-specific ways is long and varied. Formulated feeds, selective breeding, genetic modification, and integrated multi-trophic systems will all play an important role in improving the performance of aquaculture systems, as will disease management tools and technologies (like biosecurity protocols, vaccines, and therapeutants), certification schemes, and the adoption of best management practices. Larger-scale interventions and innovations, like marine spatial planning and zonal management, extension programs and technology transfer, and supply-chain enhancements, should also be priorities for the sector as a whole.

Concurrent development of these wide-ranging approaches will allow aquaculture to adapt and thrive across a range of locations and environments. Further, adoption of a broad portfolio of approaches will add to the overall resiliency of the sector, much as biodiversity increases the ability of ecosystems to survive in the face of multiple, stochastic stressors. While an individual aquaculture enterprise may have an incentive to maximize short-term profit by consolidating production and adopting a one-size-fits-all technology, a decrease in sector-wide diversity can bear long-term social costs in the form of decreased resiliency to disease epidemics and other shocks to production. We would be wise to consider this trade-off when developing national and regional policies and regulations that will shape the future of the sector, and its ability to sustainably feed the world of tomorrow.


Balancing Clean Energy Costs and Green Jobs

Imagine that we counted jobs created by manufacturing ATMs and coding their software but didn’t account for all the bank teller jobs that have been lost. That’s essentially what we do when we talk about green jobs. We tout the creation of new jobs in the wind and solar sectors but don’t account for the jobs lost in the coal sector, for instance. If you claim, as many have, that the inexorable rise of wind and solar is a major contributing factor in the decline of coal, then you can’t ignore the loss of coal jobs when tallying up the job benefits of the growth of renewables.

But a further dimension of the costs and benefits of clean energy has been almost entirely absent from the green jobs discussion: the role that energy plays more broadly in the economy. In many cases, the job impacts of energy costs on the macro-economy, positive or negative, dwarf job creation within the energy sector.

The International Monetary Fund estimates, for instance, that a 10% change in the price of oil has a 0.2% impact on GDP. Energy price increases often bring slower economic growth and even, as has happened on numerous occasions over the last several decades, recession, with attendant slower job creation or outright job losses. Falling energy prices bring faster economic growth and job creation. Low natural gas prices over the last decade have proven a boon to consumers, reducing their heating and electricity bills substantially, and to manufacturers, who in recent years have begun bringing production back to the US to take advantage of cheaper energy. Lower oil prices have more recently brought similar benefits to consumers and industry. Both have increased employment broadly throughout the economy.
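As a rough illustration of that IMF rule of thumb, the relationship can be written as a one-line calculation. The linear scaling and the sign convention (price increases slow growth, price declines boost it) are simplifying assumptions for illustration only.

```python
# Illustrative application of the IMF rule of thumb cited above: a 10% change
# in oil prices moves GDP by about 0.2%, assumed here to act in the opposite
# direction and to scale linearly.

def gdp_impact_pct(oil_price_change_pct: float) -> float:
    return -(0.2 / 10.0) * oil_price_change_pct

print(gdp_impact_pct(-30))  # a 30% oil price drop -> roughly +0.6% GDP
print(gdp_impact_pct(+50))  # a 50% oil price spike -> roughly -1.0% GDP
```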

Whether it comes from natural gas, wind and solar, or next-generation nuclear, cheap, clean energy will likely bring economy-wide job creation that dwarfs whatever job creation occurs within the energy sector. Costly clean energy, by contrast, may create jobs within some segments of the energy sector, but its broader impact on job creation is likely to be negative.

Energy is the master resource and the master substitute, a key input to virtually every sector of the global economy and the key catalyst that enables and moderates complex interactions between labor, capital, resources, and technology. To evaluate the employment benefits of growing clean energy deployment, therefore, it is necessary not only to look at job growth and losses across the entire energy sector but across the entire economy.

Video: Plenty of Fish on the Farm

Read our full essay here.


Demand for seafood is growing, but many wild fish stocks are already under strain from overfishing. Instead of harvesting more wild fish, aquaculture—or fish farming—is poised to dominate the future of seafood production. Find out why clean energy is key for next-generation aquaculture:

Plenty of Fish on the Farm

In this essay:
 


Oceans cover two-thirds of the blue planet, and yet remain largely a mystery to humans. The seas are filled with a dizzying variety of life, but the only marine species most of us see regularly are those that land on our dinner plates. Each year 80 million metric tons of seafood are harvested from the oceans.1 Fish remain one of the last foods that humans hunt from the wild at a commercial scale.

There is clearly a role for well-managed wild fisheries in the global food supply, since they offer a relatively low-input source of high-quality animal protein.2 Ray Hilborn, a prominent fisheries scientist, says, “We should be aiming for 100 percent fully fished wild fisheries as our goal.” That does not mean overfishing, however; “fully fished” means the maximum sustainable yield, where the wild population can regenerate and make up for the harvested fish.

Unfortunately, experts at the UN estimate that almost a third of global fish stocks are indeed overfished—that is, fished at a biologically unsustainable level.3 Wild fisheries, even well-managed ones, simply do not have the potential to meet continued increases in demand for seafood from a larger, wealthier global population. Instead, future demand for the fruits of the sea will be met the same way we satisfy demand for beef and chicken: by farming it.


Read more from our series on The Future of Food.


Aquaculture, or fish farming, is both an ancient tradition and, today, a global commercial industry. From Chinese farmers raising carp in flooded rice fields to intensive salmon farms in Norway’s fjords, fish farming is a diverse and global activity. And today, aquaculture stands poised to dominate the seafood sector. The UN’s Food and Agriculture Organization projects that in the coming decades wild-capture production will remain fairly flat, while aquaculture production will surge, increasing almost 40% in the next ten years.4 Between 1990 and 2009, aquaculture was the fastest growing livestock sector,5 and in 2014, aquaculture surpassed wild capture as our main source of seafood for the first time ever.6

A future where farmed fish play a greater role in global diets is likely one that is better for both land and ocean ecosystems than one without it. Aquaculture is a crucial supplement to wild-capture harvests, and fish are more efficient at converting feed into protein than pigs or cows, so farmed fish generally has lower environmental impacts than meat.7 Of course, all animal protein sources come with environmental impacts, and today’s aquaculture sector generates significant ones. Commercial aquaculture can destroy coastal habitats, generate nitrogen pollution, and put pressure on forage fish stocks—low-value fish that are harvested for aquaculture feeds.8
 

Almost a third of global fish stocks are fished at a biologically unsustainable level.


The pathway that the aquaculture sector follows in the coming decades will have tremendous impacts on the environment. Changes to aquaculture inputs, especially fish feeds, will improve the environmental performance of fish farming, and a drop-off in demand for carnivorous species like salmon and shrimp would also reduce impacts. However, a sustainable future for aquaculture may also mean radical changes to production methods. Instead of smallholder systems and coastal aquaculture, which dominate the sector today, aquaculture will have to move on land and offshore in order to dramatically reduce its impacts.

Farming fish in recirculating tanks on land or in deep offshore waters can greatly reduce or even eliminate many of the environmental problems plaguing the sector today, including pollution, damage to habitats, and freshwater consumption. However, both also require greater energy consumption than today’s prevailing systems. Indeed, reducing environmental impacts often requires more energy. A sustainable future for aquaculture in the 21st century is thus possible, but it will likely depend on an abundant supply of clean energy, highlighting the centrality of energy in environmental challenges.


 


From Smallholder Farms to Commercial Exports

Today, the majority of people engaged in fish farming worldwide are smallholder producers,9 mostly in Asia.10 Many of them practice the kind of traditional fish farming that has existed for centuries; in China, for example, farmers have raised carp in flooded rice paddies for some 8,000 years, and the practice continues today.11

This type of traditional, extensive aquaculture involves stocking fish in small ponds or nets in ponds, rivers, and reservoirs. Fish feed on natural food—plankton in the water or worms and snails from sediment.12 Very few external inputs are used, although farmers may fertilize the water with animal or human waste to enhance natural food production in the water.13 Productivity levels are usually quite low—less than one metric ton of fish per hectare per year14—and the fish are used for household subsistence or local consumption, rather than commercial sale or export.15

These low-input systems can be relatively environmentally harmless.16 The fish farms are typically integrated with natural bodies of water like lakes or reservoirs, so there is no pumping of freshwater to create an artificial aquaculture environment. Since they rely mainly on human labor, there is also very little or no industrial energy consumption. By not using external feeds, extensive fish farms also save on costs and environmental impacts.

Extensive fish farming can still generate pollution and habitat impacts, however.17 Creating artificial ponds can alter the natural landscape, and waste from farmed fish can change the nutrient profile in natural bodies of water, impacting other aquatic life. However, since extensive fish farming uses only minimal fertilization and no external feeds, nutrient pollution is not as much of a concern as with larger-scale, more commercial operations.18

Despite being relatively benign environmentally, small-scale aquaculture is not equipped to meet rising commercial demand for seafood. Without using external feeds, natural food availability limits the scale of production on a given fish farm.19 To increase output, more bodies of water could perhaps be brought into use, but this would expand impacts on the landscape and would not resolve the problem of low area productivity. Rural aquaculture has also not proven a reliable way for families to emerge out of poverty,20 and increasing urbanization has led many small-scale farming households to abandon aquaculture and seek off-farm employment in cities.21


Milkfish farming is a centuries-old industry in Indonesia, the Philippines, and Taiwan. Slow to modernize, it now faces challenges from competing aquaculture species and as a result of present economic realities.

Given the low productivity of extensive fish farming, supplemental feeds have been necessary to farm enough fish to meet growing demand. Today, aquaculture that uses external feeds already represents the majority of farmed fish production, and its share is growing.22 Many areas formerly dominated by rural aquaculture have transformed into centers for export-oriented commercial production.23 The Mekong Delta in Vietnam, for example, has undergone rapid intensification, transforming a sector dominated by smallholders into a global aquaculture producer and exporter.24

Most commercial aquaculture today can be characterized as semi-intensive, meaning farmers use external inputs but the fish farm is still an open system. Many commercial operations farm fish in net pens in the ocean or in a pond or lake, while in other types of inland operations, water is cycled into artificial ponds and raceways from a nearby water source. Farmers use commercial feeds and fertilizers to boost fish growth, they culture higher-value, selectively bred species of fish, and they sell their final product commercially, often for export.25

The management practices in semi-intensive systems increase productivity, but these fish farms are still interconnected with the surrounding environment. This can create a dangerous combination of intensive production in the middle of natural ecosystems. The open aquaculture systems that dominate commercial production today can produce direct environmental impacts in the form of habitat loss, pollution, and freshwater consumption.26 Open aquaculture systems often depend on being located in sensitive coastal ecosystems where the conditions are right for aquaculture, linking fish farming to immediate environmental impacts.

One of the most well-known examples of habitat loss from commercial aquaculture is the destruction of mangrove forests in Southeast Asia. Beginning in the 1960s and ’70s, governments in Southeast Asia encouraged smallholder farmers to convert mangrove forests to shrimp farms to promote economic development and food production.27 A review study found that between 2000 and 2012, aquaculture was responsible for about 30 percent of total mangrove loss, or 30,000 hectares.28


Khao Daeng village, Pranburi, Thailand, where shrimp farming and fishery have contributed to the destruction of mangrove forests.

Inland aquaculture can also degrade habitat areas by clearing land to create artificial ponds and raceways and by diverting water from rivers, ponds, and reservoirs, disrupting the aquatic ecosystem.29 In flow-through systems on land, water is sometimes cycled through the aquaculture system and returned to the water source with effluents and waste still present, which can create a pollution problem and trigger eutrophication.30

While coastal aquaculture does not create direct demand for freshwater, it can still cause pollution problems. In marine net pens and cages, fish waste and excess feed can change the nutrient balance in the surrounding waters and disrupt the marine ecosystem. Because commercial fish farms stock fish at higher densities, they also sometimes use hormones and antibiotics to reduce disease and promote fish growth.31 When these compounds exchange freely with the surrounding water, they can harm other flora and fauna.32 Commercial salmon farms in Chile33 and Scotland,34 for example, have caused controversy by polluting waterways with pesticides and triggering toxic algae blooms.

There are ways to reduce nutrient pollution in open systems, both with modern technology and more traditional methods. Some modern marine net pen systems use video cameras that detect when uneaten feed begins falling to the bottom of the pen, triggering the feed machines to turn off.35 A more low-tech way to reduce nutrient build-up is using other species to do the job: multi-trophic aquaculture involves farming filter-feeding species like shellfish or seaweed alongside fed species like salmon or shrimp. The byproducts from the fed species become the inputs for the filter feeders, reducing effluent build-up, improving water quality, and generating an additional economic good for producers.36 A study of salmon farms in Chile found that culturing red algae alongside the fish farm can successfully absorb the nitrogen effluents produced by the salmon, although a large area of algae would be needed to eliminate nitrogen pollution entirely.37
 

The pathway that the aquaculture sector follows in coming decades will have tremendous impacts on the environment.


A source of environmental impacts in any commercial system is feed production, which can generate impacts that dwarf everything that happens on the farm level.38 The feed issue is especially important for carnivorous species like shrimp and salmon, which are given feeds with fish meal and fish oil made from processed wild forage fish.39 This dependence on forage fish ties aquaculture production to wild fisheries.

Feed production is not only an environmental challenge but a high cost for producers as well, so fish farmers have a strong incentive to reduce their demand for expensive fish meal and fish oil.40 One way to do this is improving the feed conversion ratio, or the amount of feed the fish need to reach mature size. More intensive production methods can improve feed conversion ratios because they stock fish at high densities, use fish selectively bred for growth, and carefully control water quality to provide the ideal growing environment.41 These kinds of management interventions have brought down feed conversion ratios for major farmed species like shrimp, tilapia, and salmon by about 25% since the 1990s.42
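A minimal sketch of the feed conversion ratio (FCR) arithmetic, using illustrative numbers rather than data for any particular species, shows why an improvement on the order of 25% matters for feed demand:

```python
# Feed conversion ratio (FCR) = feed given / fish weight gained.
# A lower FCR means less feed (and less fish meal and fish oil) per unit of fish.
# The FCR values below are illustrative, not measured values for any species.

def fcr(feed_kg: float, weight_gain_kg: float) -> float:
    return feed_kg / weight_gain_kg

def feed_required(target_harvest_kg: float, fcr_value: float) -> float:
    return target_harvest_kg * fcr_value

old_fcr = 2.0
improved_fcr = old_fcr * 0.75  # the roughly 25% improvement cited above

print(feed_required(1_000, old_fcr))       # 2000.0 kg of feed per tonne of fish
print(feed_required(1_000, improved_fcr))  # 1500.0 kg of feed per tonne of fish
```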

Convincing consumers to prefer herbivorous fish like tilapia to carnivorous ones like salmon would also help reduce demand for fishmeal and fish oil. In the meantime, though, producers are tricking carnivorous fish into being omnivores by substituting plant proteins for fishmeal in feeds.43 Feeds with a higher share of plant ingredients have their own agricultural impacts, of course, including fertilizer and water consumption.44 Improvements to crop agriculture will thus also redound to the benefit of the aquaculture industry.

Ultimately, commercial open aquaculture systems will always face environmental challenges since they rely on external inputs to intensify production, but remain integrated with the surrounding environment. Pollution can’t be eliminated entirely when a commercial fish farm is placed in a natural body of water, and habitat loss and degradation will continue to be a problem, especially for coastal aquaculture. Relying on the natural environment to provide the setting for aquaculture can provide advantages for producers, but ultimately ties fish farming to environmental risks as well.

The current state of global aquaculture raises concerns about its sustainability, especially with production expected to increase in the coming decades. If low-input systems are unable to scale up production, and commercial production in open systems is causing a variety of environmental problems, what is the way forward?
 

A Deep Dive into Next-Generation Aquaculture

“When we compare aquaculture technologies, two stand out in terms of sustainability potential: land-based recirculating systems and offshore aquaculture,” says Dane Klinger, visiting researcher at Princeton University and director of biology at the aquaculture company Forever Oceans. Moving aquaculture on land or offshore can reduce many of the key environmental problems with today’s commercial aquaculture systems, including habitat loss, freshwater consumption, and pollution.

Offshore aquaculture systems are marine net pens placed far from shore in the open ocean, where deeper water and stronger currents can better dilute the waste from the fish farm than in coastal systems.45 Farther offshore there are fewer nutrients and less biodiversity than in sensitive coastal ecosystems, so fish waste is quickly dispersed and absorbed into the marine food web.46

“We have this vast ocean that contributes less than 2% of the world’s food,” explains Jerry Schubel, president and CEO of the Aquarium of the Pacific. Today, marine aquaculture only takes place in the narrow strips along our coasts, where farming fish not only presents a risk to the surrounding ecosystem but also has to compete with other coastal economic activities.47 Moving into the open ocean allows fish farmers to avoid these problems.

Offshore aquaculture thus harnesses the benefits of marine aquaculture while minimizing the negative environmental impacts. Unlike land-based aquaculture, there is no need for pumping or heating water, and the pollution and habitat risks that are problematic in shallower, coastal waters are much reduced in deep offshore waters.

“Advantages of offshore aquaculture greatly outweigh any benefits of closed systems on land,” Schubel argues. There are already a few fish farms operating in the open ocean, including one growing sashimi-grade yellowtail in Hawaii and another growing shellfish on longlines off the California coast. Last year, Norway approved an offshore salmon farm built with advanced technology to enable remote operation. Someday, motorized sea cages may roam the oceans autonomously, fattening up fish with automated feeders until a boat comes to harvest the mature fish.  

Of course, not all fish species can be cultured in ocean waters. For freshwater species, and to farm fish in locations far from the ocean, moving aquaculture production into land-based systems offers the most potential to reduce environmental impacts. In recirculating aquaculture systems (RAS), fish are farmed in indoor tanks, creating the right aquaculture environment using pumps, heaters, aerators, and filters.


 

These closed environment systems can provide tremendous environmental advantages. As implied by the name, recirculating systems treat and recycle water to reduce the amount actually consumed.48 RAS are designed to achieve nearly 100% water recycling, but a small amount of water is always lost to evaporation and incorporated into the fish biomass.49 Water recirculation has a huge advantage over flow-through systems in terms of freshwater demand, since flow-through systems require continuous new water withdrawals.

Another key advantage of closed, land-based aquaculture is that it can virtually eliminate pollution risks. RAS use settling basins and biofilters to remove fecal matter, excess feeds, and toxins from the water.50 Since waste products in RAS are collected instead of released to the environment, they can eliminate nitrogen and phosphorus emissions that can pollute waters in marine net pen and flow-through systems.51 Processing these effluents requires energy, and the waste does have to go somewhere: either a sewage treatment plant or landfill. However, the ability to fully control the flow of effluent provides a major advantage over open aquaculture systems, where there is free exchange of polluting nutrients between the fish farm and the environment.

Finally, land-based RAS open the door for aquaculture to occur virtually anywhere, including urban environments.52 Rather than being tied to coastal environments, land-based RAS can be sited on degraded or already-developed land, resulting in minimal habitat loss. Responsible siting is not a guarantee, however, and would rely on appropriate land-use planning.
 

Land-based systems open the door for aquaculture to occur virtually anywhere.


There are some commercial RAS facilities operating in developed countries today. In landlocked Iowa, for example, a family of former hog farmers has started farming barramundi using RAS technology. A Massachusetts start-up is proposing to use RAS to raise a genetically modified “AquAdvantage” salmon that grows faster and needs less feed than traditional farmed salmon.

Seafood Watch, the organization that provides sustainability ratings for seafood products, gives fish farmed in RAS high ratings. “Right now, Seafood Watch recommends all farmed fish from RAS as a ‘Best Choice,’ since these systems generally score well on many of our metrics like effluent and disease,” explains Tyler Isaac, an aquaculture scientist with Seafood Watch. Some farmed fish from open systems are rated “Avoid” due to their environmental impacts, although the ratings vary by location.53


The Role of Energy as a Substitute

While moving offshore or on-land can solve many of aquaculture’s environmental challenges, there is one resource use that goes up in these systems: energy use. In many cases, energy use is still coupled to greenhouse gas (GHG) emissions, which creates a sustainability trade-off for these next-generation aquaculture technologies.

The separation from the natural environment that characterizes closed, land-based systems drives higher energy demand. RAS rely on mechanization and industrial energy to provide functions like water exchange and filtration that, in an open system, are provided by ecosystem services.54 Pumps, filters, heaters, and aerators running on electricity or liquid fuels allow RAS to contain and collect fish waste, recycle water, and farm fish indoors. The totally self-contained environment that gives RAS its environmental advantages is entirely dependent on energy.  

Most energy use for RAS is in the form of electricity,55 and since the electricity mix varies by region, so too will the greenhouse gas (GHG) emission impacts of high-energy RAS.56 A study of RAS in Canada found that global warming potential was 63 percent lower when modeled with the Canadian average electricity mix, which is mostly low-carbon hydropower, compared to the coal-fired electricity grid in Nova Scotia.57 Another study conducted in France found very little difference in GHG emissions between a raceway and RAS, even though the RAS had significantly higher energy use, because the electricity mix in France is very low-carbon.58
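The studies above point to a simple relationship: an RAS facility’s climate footprint is roughly its electricity use per kilogram of fish multiplied by the carbon intensity of the local grid. The sketch below uses illustrative numbers (the energy intensity and grid factors are assumptions, not figures from the cited studies) to show how strongly the grid mix dominates the result.

```python
# Illustrative RAS emissions estimate: electricity use per kg of fish times the
# carbon intensity of the local grid. All numbers below are assumptions chosen
# for illustration, not values from the studies cited in the text.

def farm_emissions_kg_co2(kwh_per_kg_fish: float,
                          grid_kg_co2_per_kwh: float,
                          harvest_kg: float) -> float:
    return kwh_per_kg_fish * grid_kg_co2_per_kwh * harvest_kg

KWH_PER_KG = 8.0          # assumed RAS electricity intensity
HYDRO_HEAVY_GRID = 0.03   # kg CO2 per kWh, illustrative low-carbon mix
COAL_HEAVY_GRID = 0.80    # kg CO2 per kWh, illustrative coal-fired mix

for label, grid in [("hydro-heavy", HYDRO_HEAVY_GRID), ("coal-heavy", COAL_HEAVY_GRID)]:
    print(label, farm_emissions_kg_co2(KWH_PER_KG, grid, harvest_kg=1_000))
# The same farm emits more than 20 times as much CO2 on the coal-heavy grid.
```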

Regions that already have low-carbon electricity are well placed to harness the benefits of RAS without major emissions consequences. Combining low-carbon energy with the low environmental impacts of RAS can offer a best-case scenario for aquaculture production. For producers, however, electricity cost is often a more pressing concern than emissions, so economically and environmentally successful RAS deployment will depend on an energy supply that is both cheap and low-carbon.

Next-generation aquaculture underscores the importance and urgency of a future with cheap, low-carbon energy.

Marine aquaculture has much lower energy demand than RAS.59 However, moving fish farms into the open ocean will increase energy use compared to today’s typical marine aquaculture, which takes place near the coasts. Boats have to travel farther to manage the operation and harvest fish, burning diesel fuel all the while.60 Alternatively, farms can use automated equipment to run daily operations remotely, but this also requires energy.61 By removing fish farming from the crowded and ecologically sensitive coastline, open-ocean aquaculture also uses energy to reduce other environmental impacts.  

The energy needs of open-ocean aquaculture are in some ways more comparable to wild-capture fishing, where energy use is measured in liters of diesel fuel per ton of fish harvested. However, offshore aquaculture promises a guaranteed catch at a known location; wild capture involves an uncertain harvest and often much longer boat trips. Nonetheless, maintenance of offshore fish farms will depend on energy from liquid fuels, tying it to GHG emissions, at least until today’s diesel-powered open-ocean vessels are electrified or a viable low-carbon liquid fuel is developed.  

Automation may be able to reduce the number of management visits necessary with open-ocean aquaculture, but periodic site visits for harvesting and upkeep will still be needed. Mechanical feeders and monitoring technology also need an energy source, and distant offshore aquaculture sites can’t be easily connected to the electricity grid. In the near term, offshore fish farms will likely rely on stored liquid fuel,62 but there are conceptual proposals to develop systems that run on renewable energy sources like solar, wind, or wave power.63

In sum, the two aquaculture technologies with the greatest potential to minimize fish farming’s environmental impacts—offshore aquaculture and land-based RAS—both rely on greater energy inputs compared to today’s dominant systems.

The importance of energy in substituting for aquaculture’s impacts mirrors energy’s role as the “master resource” in other contexts. Indoor farming, for example, offers the ability to grow food without any arable land thanks to artificial lights, using electricity to create a year-round growing season. Desalination uses energy to open the door to plentiful freshwater from the oceans, relieving pressure on surface and groundwater sources that are habitat areas. Energy allows humans to substitute for ecosystem services, which can redound to environmental savings if it means meeting our material needs without clearing land for agriculture or damming rivers into reservoirs.

However, energy use remains largely coupled to greenhouse gas emissions in our fossil fuel-based energy system. Next-generation aquaculture systems offer the potential to greatly reduce the pollution, water use, and habitat impacts that characterize today’s commercial fish farms, but at the expense of greater climate impacts, at least for now. The role of energy use in next-generation aquaculture only underscores the importance of innovation into cheap, low-carbon, abundant energy sources.
 

Challenges for Moving Aquaculture On Land or Offshore

Today, global aquaculture production is still dominated by inland and coastal open systems. Next-generation fish farming practices like RAS and offshore aquaculture remain niche, since the higher costs and new risks associated with them have slowed deployment.64  

While RAS producers have greater control over their operations in many ways, they are also vulnerable to different risks. In 2014, a power outage at a land-based salmon farm in Nova Scotia killed the entire stock of 12,000 fish.65 Capital costs are usually higher with RAS, and the rate of return is typically lower than conventional net pen systems.66 “Not a lot of RAS systems have reliably made money,” says Dane Klinger, “but it’s a relatively new technology and a new space.” RAS thus offer environmental benefits but come at a greater cost; government support and private R&D are being leveraged to bring down these costs and encourage deployment.67  

Scale presents another challenge for intensive land-based systems. Most RAS today are operating at a small scale due to high infrastructure and operating costs.68 While a typical net pen salmon system would produce thousands of metric tons of salmon annually, a typical RAS operation today produces at best a few hundred metric tons.69 “Larger RAS systems are technically feasible, but still impractical in terms of cost and energy demand today,” says Nathan Ayer of Dalhousie University. Cheap, low-carbon energy would go a long way toward making RAS more attractive to fish farmers and more climate friendly.


Red tilapia farmed on a river in Thailand.

Advocates of offshore aquaculture also face some barriers to widespread adoption. Operating in federal or international waters raises new legal and regulatory challenges.70 Fish farms in the open ocean can also be more dangerous to manage since weather and ocean conditions can be more extreme than near shore. Stronger currents and higher surf also present a greater risk of damage to the aquaculture equipment.71 In bad weather, ships may not be able to reach the site at all.

Finally, there are some species that will be very difficult to raise in aquaculture of any sort. Bluefin tuna, for example, are a popular and highly valued delicacy in Japan and other sushi-loving countries, but their large size, temperamental disposition, and voracious appetite make them technically difficult to farm.72 “For high-trophic-level species like bluefin tuna, I think the natural world does a much better job of producing those species than we’ll ever do,” says Keegan McGrath, a fisheries biologist. Currently some bluefin tuna are “ranched,” where juveniles are caught in the wild and fattened up in net pens to reach marketable size.73 However, a research team in Japan has successfully grown bluefin tuna from eggs to maturity in an aquaculture environment, and while production is still at a small scale, they expect to produce 6,000 tuna a year by 2020.74


Solutions on the Horizon

“We need to look at aquaculture in terms of the global food supply,” argues Kim Thompson of the Aquarium of the Pacific. No food production is without environmental impacts, especially when it comes to animal products. Fish farming produces high-quality animal protein, generally with fewer environmental impacts than meat. People will eat plenty of meat in the coming decades as well, but fish can be one of the lowest-impact animal foods when it comes to the environment.

The aquaculture sector is poised to undergo a major transformation this century. Fish farming has already surpassed wild capture as our main source of seafood, and experts expect it will grow to meet nearly all new demand in the coming decades. To ensure a more sustainable future for fish farming, commercial aquaculture cannot continue to integrate intensive production with sensitive ecosystems along coasts, rivers, and lakes.

The future of sustainable aquaculture lies on land and offshore.

A return to low-input extensive systems is not a feasible option, however, given the volume of seafood demand and the need for export production. Instead, the future of sustainable aquaculture lies on land and offshore. Recirculating aquaculture in land-based tanks allows for total control of waste and minimal freshwater consumption. Offshore aquaculture takes advantage of open ocean waters to dilute pollution and removes fish farming from the sensitive and crowded coastal environment.

Both these next-generation aquaculture solutions rely on greater energy inputs to substitute for direct environmental impacts. This underscores the importance and urgency of a future with cheap, low-carbon energy, since otherwise the benefits of next-generation aquaculture will be countered with greater climate impacts. This trade-off of energy use, emissions, and environmental impacts must be carefully navigated at a regional and local level today. Nonetheless, the 21st century will see aquaculture rise to dominate the seafood sector, and energy will be the key to ensuring its sustainability.

Energy Access or Energy for Development?

Providing households and communities basic access to energy technologies—a solar panel, a cookstove, a battery—might marginally improve their quality of life without providing a ladder to further opportunity. As Roger Pielke, Jr. wrote in 2012, “We do not label those who live on $1 per day as having ‘economic access’—rather they are desperately poor, living just above the poverty line.”

Yes, with increased energy use comes rising living standards, liberalized social values, and greater opportunity—for women, especially. So a plan to achieve universal access by 2030 through the deployment of distributed renewables, such as this one from the advocacy group Power for All, sounds sensible.

But we need to think holistically. It turns out that it’s not so easy to separate energy poverty from, well, poverty. And it’s not at all clear that solving the former solves the latter.

So when we talk about energy poverty and energy access, we should talk about energy for human development. Rather than an end in itself, abundant energy serves an integral role in the movement toward higher and more equitable standards of living. Energy consumption, in this sense, is also contingent on development itself.  

As a result, any proposal to do away with energy poverty simply by accelerating investment in distributed renewables should probably raise some eyebrows. Mini-grids and rooftop solar are certainly capturing an increasing share of investment and filling in gaps in access and capital. But it is not obvious that these solutions alone will provide much access beyond household illumination and a charge for a cell phone—unless they are accompanied by substantive efforts to drive economic opportunity out of the house and off the farm.

Fortunately, the energy for development framework may slowly be catching on. “An energy ladder without increasing incomes is a ladder to nowhere,” as Greg Neichin, Diane Isenberg, and Mary Roach, of impact investment firm Ceniarth, wrote in a recent piece detailing their disillusionment with the energy access sector—and with venture-funded home solar in particular—as a result of over-hype and under-impact. “This does not mean abandoning energy solutions entirely,” they conclude; “rather, it means that we will evaluate them through the lens of the long-term livelihood improvements and income-generating opportunities that the solutions may bring to a community.”

It would be nice if solving poverty in the 21st century were as “simple” as kickstarting a virtuous cycle of socioeconomic growth with a solar panel and a battery. But there remains nothing simple about poverty or growth. Investors, researchers, and practitioners should be honest about that.

Solar in California

With the rapid growth of the solar industry comes the challenge of ensuring that this power can be economically used. At all times, the supply and demand for electricity on the grid must be almost exactly equal; otherwise, the grid will fail. However, the output from solar panels and wind farms is highly dependent on the weather and can be unpredictable, and these plants might not be most productive at the times when power is most needed.

As Jesse Jenkins and Alex Trembath observed in 2015, the costs of integrating variable renewables and managing their short-term fluctuations are real but manageable. The biggest challenge is economic. Solar panels are typically most productive at midday, when the sun is highest in the sky, while demand is highest in the evening. A grid with a large fraction of solar energy should therefore see prices depressed at midday, which cuts into the profitability of solar. Jenkins and Trembath suggest, as a rule of thumb, that once solar penetration approaches solar's nominal capacity factor (the ratio of average electricity production to peak production), it becomes increasingly difficult to deploy more capacity. Solar has a capacity factor of 10-20%, meaning that California may be approaching this "capacity factor threshold."
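A minimal sketch of that rule of thumb helps make it concrete. The numbers below are illustrative assumptions, not CAISO data: an assumed average grid demand and an assumed solar capacity factor are used to show how solar's share of annual energy rises with installed capacity and eventually crosses the threshold.

```python
# Illustrative sketch of the "capacity factor threshold" rule of thumb.
# All numbers are assumptions for the example, not CAISO figures.

AVG_DEMAND_GW = 28.0          # assumed average grid demand
SOLAR_CAPACITY_FACTOR = 0.20  # assumed ratio of average to peak solar output

def solar_energy_share(installed_gw):
    """Solar's share of annual energy for a given nameplate capacity."""
    return installed_gw * SOLAR_CAPACITY_FACTOR / AVG_DEMAND_GW

for gw in (5, 10, 20, 30):
    share = solar_energy_share(gw)
    status = "above" if share >= SOLAR_CAPACITY_FACTOR else "below"
    print(f"{gw:>2} GW of solar -> {share:.0%} of annual energy ({status} the threshold)")
```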

The California Independent System Operator, or CAISO, has several tools to keep the grid balanced, and in particular to ensure that solar plays well with the system as a whole. One tool is curtailment: excess solar power can simply be cut off. Jeff St. John at Greentech Media reports that this spring, with the growth of solar and more hydroelectric power than usual, CAISO may need to curtail 6-8 gigawatts of power, about a quarter of solar capacity. With the price of solar panels falling and expected to continue falling, solar can still be profitable with some curtailment, but additional solutions are needed if solar continues to grow.

Curtailment is a blunt tool. St. John points out additional solutions that CAISO is developing to handle the solar load. Advanced inverter controls can give solar farms the short-term flexibility needed to replicate the frequency response function performed by natural gas peaker plants. CAISO is expanding time-of-use pricing, which raises or lowers electricity prices to consumers depending on market conditions, allowing customers with flexible needs to shift their consumption to match generation. California is also able to export excess power to neighboring states and import power during times of shortfall. The first ultra-high-voltage direct-current transmission line in the United States is scheduled to begin construction later in 2017, and this technology could eventually facilitate the development of a continent-wide “supergrid” that would calm the variability of solar by spreading it over a much larger geographic area.

With all these tools, however, it is unlikely that solar power could economically power a majority of an electric grid without low-cost energy storage on a large scale. The National Renewable Energy Laboratory recently modeled the storage requirements for California to get 50% of its power from solar and 71% from all renewables. In the optimistic scenario, 19 GW of storage capacity is needed. That scenario assumes solar PV has a levelized cost of 3 cents per kilowatt-hour, and is therefore economical even with a significant amount of curtailment, and that the grid is much more flexible than today’s, with new demand-shifting capability, 25% electric vehicle penetration with mostly flexible charging schedules, and greatly expanded electricity export capacity. Under less optimistic assumptions, the storage capacity needed might be 35 GW. In 2020, California is expected to have 4.4 GW of storage capacity, of which over two-thirds will be pumped hydro.

Most of the new energy storage in California will be in the form of batteries. Only 200 MW of battery capacity was available in the United States in 2016. California’s ambitious plan calls for 1.8 GW of new capacity, mostly from lithium-ion batteries, by 2021, but even these plans must be greatly scaled up to meet the challenge of a half-solar grid. As Will Boisvert demonstrated in 2015, such growth in grid-level batteries will not be economically feasible unless the technology comes down in cost dramatically.
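To put those plans in perspective, the figures above imply a large gap between planned and modeled storage needs. The sketch below simply restates that arithmetic; all values come from the preceding paragraphs, and the comparison is a rough sense of scale, not a forecast.

```python
# Rough comparison of California's storage plans with the NREL scenarios
# cited above. All figures (GW) come from the text; this is only a sense
# of scale, not a projection.

storage_2020_gw = 4.4          # expected storage in 2020, mostly pumped hydro
planned_batteries_gw = 1.8     # new battery capacity planned by 2021
needed_gw = {"optimistic": 19.0, "less optimistic": 35.0}

for scenario, need in needed_gw.items():
    gap = need - storage_2020_gw
    multiple = gap / planned_batteries_gw
    print(f"{scenario}: {gap:.1f} GW still needed, roughly {multiple:.0f}x the planned battery build-out")
```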

The growth of solar is surely one of the great success stories in the energy industry in recent years. Continued growth of the industry, however, poses an additional set of technical and logistical challenges. Jesse Jenkins and Sam Thernstrom recently reiterated an important lesson about deep decarbonization, which they write “may require a significantly different mix of resources than more modest goals; long-term planning is important to avoid lock-in of suboptimal resources.” Especially as low-carbon energy sources like solar approach capacity factor thresholds, or other obstacles, grid planners need to think systematically. Not doing so might result in deploying renewable energy at much lower levels than deep decarbonization would require, all the while making it impossible to keep existing nuclear plants online and, ultimately, locking in a great deal of coal and natural gas for the foreseeable future.

With over half of the state’s power generated by natural gas, the impending closure of the Diablo Canyon nuclear plant, a half-baked storage requirement, and growing curtailment of expanding solar and wind resources, that is exactly the path California finds itself on. Something’s gotta give.

More than Share-Spare Philosophies Needed

There should be little disagreement regarding the benefits of diversity in ecosystems. Ecological theory and science have long since established that diverse ecosystems are robust and resilient to outside influences and shocks. In agricultural systems, lack of diversity has, at times, been implicated in susceptibility to climatic changes and pest or disease infestations ending in crop losses or even widespread crop failures. In this light, the article by Linus Blomqvist exploring the relationships between biodiversity and the balance of agricultural land sharing and land sparing is well taken. In broad strokes, these concepts are appealing, each in its own way, allowing for biodiverse ecosystems to coexist or commingle with agricultural production. With closer inspection, however, we believe there are limitations to these concepts.
 

Limitations

Defining and measuring biodiversity is difficult. On the empirical side, the issue has primarily been studied in agricultural systems by using a few macroinvertebrate groups such as lepidoptera (i.e., butterflies, skippers, and moths), several bee species, and various types of beetles. More recently, great interest has emerged in the plant and soil microbiome (the microbial diversity associated with plants and agricultural soils), but overall, knowledge of diversity in many agricultural ecosystems is, thus far, relatively limited.

This limited scope restricts our ability to infer the many potential impacts biodiversity may have, both within the agricultural system and outside of it. Indeed, it is possible that these two sub-ecosystems, one “natural,” the other human influenced, may not only be different, but may be at odds with each other. Unfortunately, the “ideal” state simply isn’t known for the vast majority of species for which conservation is important. For example, some migratory species may prefer small patches of habitat spread throughout a large range, similar to what might be accomplished in a land-sharing approach. Other species may require larger blocks of land set aside in a more natural state and might therefore be favored by a more intensive land-sparing approach. And what works for some species may actually be detrimental to others.

Some organisms, like the European honey bee and some earthworm species, have become somewhat iconic species and are used—in the media if not in the scientific literature—as symbols of diversity in agriculture. But these species are, in fact, introduced to North America and can displace native species or cause environmental disruption outside the agricultural realm. Other species, like the milkweed plants relied upon by the beloved monarch butterfly, may actually be favored by agricultural practices. Some have posited that milkweed populations might even be much greater as a result of agricultural practices than could have been supported by the pre-agricultural native ecosystem. Similarly, there are places in Europe where land has been cultivated for so long that the agroecosystem must be protected to maintain the wildlife that exists in those habitats. Until we have a better understanding of the composition and interaction of agricultural bio-communities and those of the surrounding environment, it will be difficult to define the ideal balance between concepts such as land sharing and land sparing.

Another issue that needs to be addressed in these conversations is the concept of productivity and how it is measured. Typically, as here, productivity is measured as total output or yield per unit of land. This is particularly prevalent in the over-argued debate between organic and non-organic production systems. In that case, as well as in this comparison of land-sharing and land-sparing philosophies, however, it is not the best metric for measuring productivity. We would argue that in addition to yield, product quality is equally valuable in assessing production systems. For producers, this is second nature. The wheat farmer, for example, regularly considers the protein content of the grain; the sugar beet farmer worries not only about yield but also about the sugar content of the beets; and so on. For the theorist, any system that optimizes yield, even in an environment with great biodiversity, will be of little use in land sparing or sharing if low product quality induces producers to maintain profitability through increased cultivation. For this reason, we would encourage any considerations of optimizing agricultural production to incorporate the dual characteristics of yield and quality.
 

The Bigger Problem

Agricultural activity is, by definition, a disruptive process. Even in a minimalist scenario, the simple husbandry of a plant or animal, whether native or introduced, is an alteration of the surrounding environment. The act of agriculture alters the world around us, sometimes in profound ways. Yet this process is necessary for our survival. It should be obvious to all parties, on all sides of the various debates, that we should then strive to carry out these activities in a manner that is not counterproductive to our current and future existence. To this end, we believe an agricultural system should aim to optimize the following characteristics, whether through land sharing, land sparing, or other concepts:
 

  1. Minimize input. As a matter of efficiency, agricultural production should use as few resources as possible. While this encompasses obvious inputs such as fertilizers, pesticides, and fuel, it should also be applied to other resources such as land use, labor, and money. When often-scrutinized inputs like fertilizer and pesticides are reduced, an increase in other inputs (like labor and fuel) nearly always results. The impacts of these trade-offs must also be considered.
     
  2. Minimize impact. In conjunction with inputs, agricultural systems should limit the degree of alteration or effect of alteration on the environment where possible. Disruption of environments can lead to destabilization of a given system, increasing the risk of system failure. Limiting impact also helps ensure the long-term viability of the system into the future.
     
  3. Maximize output. The goal of any agricultural system is to produce a useful product. Given the investment being made in the inputs and impacts above, it is desirable to get the best product possible from those resources. As mentioned above, however, this output should consider both quantity and quality.
     
  4. Maximize benefit. Consumers are obvious beneficiaries of any agricultural system, and as such, the system should produce products they require in an affordable manner. It would seem the majority of discussions on agricultural benefits often concentrate on the consumer side. The producers, however, occupy an equally important role in production benefits. Any agricultural system must provide benefits to the producer. Most notably, these benefits should take the form of economic returns, but also more nebulous considerations such as their quality of life and living conditions.
     

The first three of these criteria are addressable through further refinement of theoretical or scientific knowledge. Indeed, many of these topics have been, and are being, pursued. The arguments of Blomqvist regarding land sharing and sparing are certainly related to these items. Yet, even with these considerations, they seem inadequate. They may indeed be necessary, but they are not sufficient. Technological advances intensified agricultural yields (and quality) to a truly astonishing degree over the 20th century. In spite of this, however, we have also seen marked increases in agricultural land use in the North American Midwest. European production has seen valiant efforts toward land sharing, yet this has been accompanied by increased deforestation in the developing world, much of it aimed at supplying demand in developed countries. These concepts of land sharing or sparing have not, in themselves, resulted in the desired outcomes. This is primarily due to more powerful forces outside the limited academic arguments of inputs and outputs, yields and efficiencies, and other tangible factors. In order to address the balance between agricultural land use and that of the natural world, we believe the final point above, particularly in terms of the producer, is vital to any discussion. Without inclusion of farmers, ranchers, and other producers in these discussions, no methodology, no matter how effective or desirable, will move forward in a lasting way.

While it is widely acknowledged that biodiversity is beneficial, we need a better understanding of what it means specifically for an agricultural system and the environments that surround it. We must also cope with the realization that ideal biodiversity on and off the farm may not be the same or even compatible. Furthermore, solutions will differ from farm to farm, year to year, and crop to crop. Most importantly, all the debates regarding conservation methods and philosophies must move beyond the academic sphere and incorporate the social and economic components that will inevitably be the determiners of their adoption or rejection. Only when all participants in the food system are on board can we have long-term, productive, conservation-minded policies in place that can weather social, economic, and political challenges.

Democracy in the Anthropocene: Breakthrough Dialogue 2017 Registration

Responses: Food Production and Wildlife on Farmland

As part of Breakthrough's Future of Food series, we have invited experts on food, farming, livestock, and resource use to respond to and critique our research essays. We hope this will be the starting point for an inclusive, productive, and exciting new conversation about twenty-first century food systems. You can read the responses to our Wildlife and Farmland essay below.

 

 

Demand-Side Interventions

Claire Kremen Responds to Breakthrough's Essay on Wildlife and Farmland

While the land-sparing/land-sharing debate drives much of the discussion around food production and conservation, Kremen argues that its emphasis on the supply side oversimplifies the issue and distracts from the solutions at hand. A demand-side approach, in contrast, would turn the focus away from yields and toward attempts to minimize consumption, waste, and inequity. Such reductions, in turn, could yield huge dividends for both humans and wildlife.

 

More than Share-Spare Philosophies Needed

William Price and Andrew Kniss Respond to Breakthrough's Essay on Wildlife and Farmland

Land sparing and land sharing may appeal as concepts, but they have yet to result in sufficient outcomes from a conservation standpoint. Price and Kniss argue that we'll need to expand our understanding of agricultural biodiversity and productivity, and include farmers and producers in our discussions, if we want to secure resilient, conservation-minded policies for our farms.

 

What We Consume Matters. So Does How We Produce It.

Ben Phalan Responds to Breakthrough's Essay on Wildlife and Farmland

Demand-side interventions will be needed to make more room for wildlife, writes Ben Phalan, but that should not undercut the importance of land sparing for meaningful conservation. The majority of the world's species will require the return of large, connected habitats not available on farmland, no matter how wildlife-friendly. As a result, proponents of alternative forms of agriculture “must think beyond the farm, to how their activities can support the objectives of halting and reversing habitat loss and degradation in the wider landscape.”

Demand-Side Interventions

In Linus Blomqvist’s recent essay on wildlife and farmland, he uses the classic land-sparing versus land-sharing framework to discuss how to best protect biodiversity at the local and the global level. Blomqvist concludes that there are opportunities for “win-wins” between farmland productivity and biodiversity, but that ultimately, high-yield agriculture, whether conventional or organic, will mostly mean less on-farm wildlife. Thus, conserving biodiversity primarily requires a “tweaked” land-sparing strategy—i.e., protecting biodiversity in large reserves and mildly modifying intensive agriculture through crop rotations.

I agree with Blomqvist on protecting large reserves whenever possible, but disagree that the land-sparing/land-sharing perspective will help us to identify pathways for biodiversity conservation. To me, the land-sparing/land-sharing debate is dangerous because it oversimplifies key aspects influencing land use, and focuses attention away from more important issues that will determine future land use, biodiversity, and human well-being. Instead, tremendous potential exists to protect biodiversity and meet human nutritional needs by focusing on the demand side, rather than the supply-side emphasis of land-sparing/land-sharing. By shifting diets away from meat, reducing food waste, and bending the population growth curve down through voluntary family planning, we can significantly reduce humanity’s impact on landscapes, ecosystems, and wildlife. Finally, while Blomqvist and I agree that how we farm matters for both biodiversity and yields, I disagree that monocultures are always more high yielding; further, they are demonstrably far less resilient to pests, disease, and disasters than more diversified agriculture.

Land sparing versus land sharing: An oversimplified debate

The fallacy of the land-sparing/land-sharing debate is its deceptive simplicity.1 The debate centers on the assumption that there is a fixed target of production that must be met, either by high-yielding agriculture on a smaller land area, or low-yielding agriculture on a larger land area.2 Unfortunately, the world does not usually work that way. Norman Borlaug, the architect of the Green Revolution and the land-sparing concept, thought that as agriculture became more efficient and higher yielding, prices would fall, forcing less productive farmers out and reducing the overall land area devoted to agriculture. But empirical data does not support this claim. Forest economist Thomas Rudel, in his 2005 analysis of 10 major commodity crops in 161 countries, found that “the most common pattern involved simultaneous increases in agricultural yields and cultivated areas” (emphasis added). This simultaneous increase is an example of a “Jevons paradox”—as prices fall due to increased or more efficient production, demand also expands.3 Global supply chains can readily soak up the surplus supply, profiting from cheaply produced commodities like palm oil or corn, for example, to fabricate hundreds of profitable products. Thus, whenever increased agricultural productivity spikes new demand, yield increases will not lead to land sparing,4 unless accompanied by strong market-based and/or regulatory environmental safeguards that stringently restrict agricultural expansion into primary habitats.5–7

How to stabilize the agricultural land footprint

The land-sparing/land-sharing debate, by focusing on the yields and areas needed to meet a target level of food production, is missing several real opportunities to stabilize the agricultural footprint. The only real way to prevent further agricultural expansion is to reduce consumption, and its companions, waste and inequity. Such solutions are not politically popular because they push back against the growth economy, yet there are three obvious places to start that could yield huge dividends for biodiversity and for current and future quality of life.

First, we currently devote an incredibly large area to livestock production (one-third of terrestrial ice-free land surface), and three-quarters of croplands to growing grains that feed livestock rather than humans.8 Feed crops constitute 36% and 53%, respectively, of the calories and proteins in global crop production, yet only ~4% and ~26% of these crop-based calories and protein end up in human diets, due to vast conversion inefficiencies as nutrients move up the food chain.9 Simply reducing meat consumption by half could feed 2 billion people, reducing the burden of hunger on the planet without increasing the area under cultivation at all. As economies develop, however, they are clamoring for more meat, not less.8 In fact, this rising demand constitutes a large component of the oft-repeated claim that food demands will double by 2050.10 Food demands are not the same as food needs,11 however, and eating too much meat is a well-recognized contributor to heart disease, stroke, type 2 diabetes, obesity, and certain cancers.12,13 Reducing meat consumption by those who eat too much, stabilizing it at current levels for those who are eating the right amount (about the size of a pack of cards per person per day),8 and increasing access to meat for the 2 billion people who suffer from iron deficiency anemia14 could help to solve several global health burdens at once. Finally, reintegrating livestock into smallholder farms could help to reduce nutrient overloads produced at confined animal feeding operations, reduce the overuse of antibiotics in livestock, and return critical nutrients to the soil and to the diets of smallholder or subsistence farmers.8

Second, we currently waste 30-50% of the food that is produced annually.15 Instead of worrying about whether organic agriculture produces only 80 to 85%16 of the food produced by conventional agriculture, why not simply cut the wastage by half? That would take care of the organic-to-conventional yield gap without requiring more land, while improving conditions for biodiversity on farms17 and growing food in a more sustainable manner.18 Further, preventing food waste would also prevent wastage of all the energy, water, inputs, and labor that went into producing it. Some of these inputs, such as fertilizers and pesticides, are exerting damaging effects on our oceans and streams, our wildlife, and human health.19
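A quick sketch of that comparison, using the ranges quoted above: take organic yields at roughly 80% of conventional, and a 40% waste rate as a midpoint of the 30-50% range. Both midpoints are illustrative choices on my part, not figures from the cited studies.

```python
# Rough sketch of the waste-versus-yield-gap comparison above. The yield and
# waste figures are the ranges quoted in the text; the midpoints are
# illustrative.

conventional = 100.0              # index conventional production to 100 units
organic = 0.80 * conventional     # organic yields ~80-85% of conventional

waste = 0.40                      # 30-50% of food is wasted; use a midpoint
delivered_today = conventional * (1 - waste)              # 60 units eaten
delivered_organic_half_waste = organic * (1 - waste / 2)  # 64 units eaten

print(delivered_today, delivered_organic_half_waste)
# Halving waste more than offsets the organic yield gap in this illustration.
```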

Third, the growing human population also increases consumption—yet there is a large unmet need for family planning.20,21 Many families wish to reduce the number of births, but do not have the means to do so. If these unmet needs for limiting reproduction could be met even partially, human population could stabilize well below the 9-13 billion people that are currently projected for 2100.22 A number of countries, including impoverished countries like Bangladesh, have made huge strides in providing contraception to families that desire it, halving total fertility rates in two decades or less.21 Meeting these needs through voluntary family planning can simultaneously improve child and maternal health and reduce poverty.21 A population projection that compared meeting the unmet need for family planning with business as usual estimated population sizes of 7.3 billion people by 2100, rather than 10.4 billion.23 We produce enough food annually to feed today’s population of 7.5 billion with our current agricultural footprint, although many people remain hungry or malnourished due to poverty and lack of access to sufficient, nutritious food.11 Thus, policies that support the universal access to reproductive health and family planning services (e.g., Millennium Development Goal 5.B) should be a top priority not only for human development21 but also for biodiversity and environmental conservation.

The land-sparing/land-sharing debate ignores these important issues and potential solutions by focusing only on yield and area. Unfortunately, this focus plays right into the agenda of the small number of highly consolidated agribusinesses that produce agricultural inputs such as seeds, pesticides, and fertilizers. Under a biodiversity banner, agribusiness companies can further promote their products as essential for increasing yield to meet world food demands, even as evidence unfolds to the contrary.24–26 Yet due to the Jevons paradox, any yield gains are unlikely to protect biodiversity, and are more likely to fuel the overconsumption that drives habitat conversion.  

Agricultural practices, yields, and biodiversity conservation

In agreement with Blomqvist, how we farm matters, both for biodiversity and for production, and win-win opportunities do exist. Further, any agricultural use will markedly change biodiversity in a region, and often impoverish it;27,28 thus, stabilizing the agricultural footprint is the critical first step for biodiversity conservation (i.e., no more expansion, particularly into primary habitats that are rich in biodiversity).8,19

Assuming that we could stabilize the existing agricultural land footprint primarily by reducing consumption (as described above), then what practices will promote biodiversity while maintaining sufficient agricultural productivity? I believe there are three main components to focus on:

  1. Create a favorable agricultural matrix that promotes wildlife dispersal between protected areas, which is very important to reducing long-term negative effects of isolation, such as inbreeding. Developing a favorable agricultural matrix could be accomplished by strategically creating corridors of native vegetation surrounded by or interdigitated with the most hospitable types of agricultural habitats, such as agroforestry,29 silvopastoral,30 or other diversified agroecological systems.31
     
  2. Use agroecological methods in the agricultural matrix that rely on the underlying biodiversity and ecosystem services, and that reduce negative impacts of agriculture on adjacent habitats and downstream regions.18,31–33 Primarily, this involves reducing pollutants like pesticides and excess fertilizer that can kill wildlife or cause dead zones in lakes and oceans. For example, intercropping and crop rotation reduce the concentration of a given crop in space and over time and thus the crop’s attractiveness to pests. In turn, insectary strips, polycultures, and hedgerows provide habitat and resources for natural enemies of crop pests. These techniques can greatly reduce the need for pesticides,34–36 which are sometimes overused and thus add costs to farm budgets without improving yields.24,26,37
     
  3. Maintain productive, sustainable agriculture that supports livelihoods of local people living in the vicinities of protected areas. Agroecological methods, such as agroforestry, integrated pest management, and livestock integration, can substantially increase yields for people formerly practicing either conventional or subsistence agriculture across the board for a wide variety of crops.38,39 
     

Finally, it is necessary to push back against Blomqvist’s notion that only intensive monocultures can be highly productive, and against the claim that they are resilient. There is strong evidence that increasing plant diversity in crop, forage, and forestry systems increases yields,40 and that when yield gaps occur, they can be minimized or eliminated by agroecological methods that diversify the system, such as intercropping, cover cropping, and crop rotations.35,41–44 In some systems, even removing land from production to create wildlife habitat has been shown to benefit production by increasing ecosystem services like pest control or pollination.30,45,46 Finally, compared to conventional systems, these agroecological methods increase resilience to drought, pests, diseases, floods, hurricanes, and climate change,31,47–49 and help to preserve the sustainability of the system by maintaining soil organic matter, water infiltration and holding capacity, pest and disease control, pollination services, etc.18 

Conclusion

Blomqvist is correct that there are always trade-offs. However, the land-sparing/land-sharing debate focuses on the wrong set of trade-offs. We should instead be asking whether to allow grazing lands and grain production for livestock to eat up the remaining biodiversity-rich habitats, or to work on promoting diets light on meat. Whether to continue wasting 30-50% of the food that we laboriously produce, or take the steps needed to alleviate food waste. Whether we really want to live on a (hot and crowded) planet containing between 9 and 15 billion people, or instead, increase efforts to provide voluntary family planning for all. If we worked on these fronts, then when we sometimes must take a small yield hit41 to practice agriculture in a more sustainable and wildlife-friendly manner, it will not be an issue. And my dream of a vision that “most, if not all, conservationists could get behind,” of “large protected areas surrounded by a relatively wildlife-friendly matrix,”1 could indeed become a reality.

The End of the Nuclear Industry as We Know It

As in the US, new plants in France and Finland are behind schedule and over budget. Fears of nuclear accidents led Japan and Germany to shutter their nuclear fleets after Fukushima. Meanwhile, stagnant demand, liberalized electricity markets, cheap natural gas, and competition from heavily subsidized renewable energy have led Vermont, California, and a half-dozen other states to take existing plants offline rather than operate them at a loss.

Policy hasn’t helped. Neither states nor the federal government have been willing to properly value the zero-carbon electricity that nuclear plants produce. Nuclear is excluded from state renewable energy standards and has limited access to the federal tax credits that have underwritten the expansion of renewable energy over the last several decades. Regulators, meanwhile, hold nuclear power to environmental and health standards that no other energy source must meet.

But it is also true that the nuclear industry today is an artifact of the Cold War and the geopolitics of the oil crises of the 1970s. Reactors developed to power submarines and aircraft carriers were modified to serve civilian needs. Faced with energy insecurity and high prices, publicly controlled utilities and state-owned enterprises in nations like France and Sweden built out a nuclear reactor fleet in the same way that they built publicly funded highways and rail systems. In the United States, publicly regulated utilities teamed up with large incumbent firms like General Electric and Westinghouse to build public works projects—a typical reactor could power 1 million homes—at a time when fossil fuels were believed to be scarce and electricity demand was expected to grow exponentially.

The old model had its benefits. France and Sweden succeeded in weaning their power sectors almost entirely off of fossil fuels while the United States built out the largest nuclear fleet in the world, over 100 reactors providing roughly 20% of the nation’s electricity.

But in a world in which fossil fuels are abundant and industrial planning long ago fell out of fashion, the old top-down and state-led model of nuclear development has not fared well. The cost of large public works projects of all sorts has risen precipitously in recent decades. Few utilities or investors are willing to make $10 billion, 60-year bets on large light-water reactors when electricity demand is growing slowly and energy technology—from solar panels to natural gas extraction technologies—is evolving rapidly.

If there is going to be a future for nuclear energy outside of a few state-led Asian and Middle Eastern economies, both nuclear technologies and the nuclear industry will need to evolve. In a new Breakthrough Institute report, How to Make Nuclear Innovative, we consider the ways that the nuclear industry and nuclear policy will need to be transformed in order to develop and deploy commercial nuclear reactors that are smaller, cheaper, and easier to build and operate. We also consider case studies from other complex, highly regulated technology sectors that might provide models for how the nuclear sector will need to change.

What will be necessary is not new physics. The basic physics of virtually all nuclear fission technologies has been well understood and demonstrated since the late 1950s. Rather, radical nuclear innovation must be informed by markets, end users, and modern fabrication and manufacturing methods. This is centrally a job for entrepreneurial engineers, not scientists at national laboratories, technocrats at the Department of Energy, or division heads at Westinghouse or General Electric.

Bottom-up innovation, led by start-ups, not large incumbents, will need to define a revived nuclear sector. Such a shift would not be unprecedented. The era of cheap genetic decoding became a reality when decades of federal research and development were handed off to Craig Venter, an entrepreneur who used technologies and basic science pioneered by federal scientists to develop a better and cheaper way to decode the human genome. Space flight, once the sole province of NASA and its large contractors, has undergone a similar transformation.

Radically reorganizing the nuclear sector in this way will require important changes in federal policies. The modern biotech sector was made possible by changes in the way that the FDA licensed and tested new drugs, and by changes to the federal tax code and to patent law that incentivized public institutions to get research out of their laboratories and into the hands of entrepreneurs and venture capitalists. NASA gave birth to a diverse and growing commercial space industry by allowing private firms access to the agency’s decades of technical expertise and by creating incentives and competition among firms for contracts to carry federal payloads into orbit.

An innovative 21st-century advanced nuclear sector will require similar changes. A regulator committed by statute to the development of new reactors that are both safe and cheap. A staged licensing process, similar to the FDA’s, that is better suited to the development and finance needs of small firms and venture capital as opposed to the large incumbent firms that the NRC’s current licensing rules are designed to accommodate. National laboratories and Department of Energy programs designed to support engineers and start-ups, not direct them. And competitive grants and advanced market commitments designed to eliminate the need for technocrats at the Department of Energy or elsewhere to pick technological winners too early.

Technological advance, of course, is never certain and none of these measures guarantee that a globally competitive advanced nuclear sector will rise from the ashes of today’s dying industry. But significant progress toward a clean energy future will be difficult without a new generation of advanced nuclear reactors in the mix. That, in turn, will require a 21st-century nuclear industry.

The End of the Nuclear Industry as We Know It

How to Make Nuclear Innovative

What will it take to bring 21st-century innovation to the nuclear industry?

How to Make Nuclear Innovative, a new Breakthrough report, makes the case for an entirely new model of nuclear innovation. Instead of conventional light-water reactors financed and constructed by large incumbent firms, the advanced nuclear industry will be characterized by innovative reactor and plant designs, new business models, and smaller entrepreneurial start-ups.

Download the full report here. 

The report draws on lessons from some of the most innovative industries in today’s economy. The recent successes of these sectors—wide-body aircraft, pharmaceuticals, commercial spaceflight, and unconventional gas exploration—indicate that a networked, bottom-up, privately led model would propel innovation and commercialization in the nuclear industry, allowing it to tap into the vast potential of the start-ups, national laboratories, universities, and venture capital that underpin innovation in the United States.

Prior to the congressional release of How to Make Nuclear Innovative, we sat down with leaders and supporters of advanced nuclear in attendance at Third Way’s 2017 Advanced Nuclear Summit and Showcase to discuss the importance of advanced nuclear, the challenges and opportunities that lie ahead, and the lessons that can be learned from other advanced industries. Watch our conversations here, and read the executive summary of How to Make Nuclear Innovative below.

 

 

Executive Summary 

 

If the nuclear industry once stood at the forefront of energy innovation, it is today rapidly losing ground. Global nuclear generation has fallen 9% since its peak in 2006, with closures outpacing new builds in many developed economies, and fossil fuels still cheaper and faster to deploy across the developing world. Even as climate change emerges as one of this century’s defining challenges, nuclear’s share of global electricity is declining.

 

 

The next generation of nuclear plants holds promise to be cheaper, safer, more flexible, and faster to build than the current fleet. But the old model of innovation—state-led and top-down—stands in the way of commercializing such technologies. Dozens of advanced reactor projects in the United States and Europe have languished over the last 40 years; experimental projects across Asia, meanwhile, continue to depend on an outdated model of innovation. In order for new designs to meet the needs of deregulated markets in developed countries, and to outcompete cheap fossil fuels in the developing world, the nuclear industry will need to innovate in step with its technologies.

Doing so will require far-reaching changes to the nuclear industry itself, and to the public institutions, policies, and regulations designed to support it. To date, the nuclear industry has succeeded in making incremental improvements to large light-water reactor designs, resulting in reactors that operate more safely and at close to full capacity. While this model was once sufficient to support the development and growth of the industry, it has proven unable to support the kind of far-reaching technological innovation the sector now needs to compete with adjacent energy technologies.

A highly innovative nuclear sector will require tilting the playing field away from large, incumbent nuclear firms and toward smaller, more entrepreneurial start-ups. With its growing ecosystem of advanced nuclear companies, world-leading nuclear engineering programs and national laboratories, and extensive venture capital networks, the United States is well positioned to lead such a shift. Nevertheless, considerable barriers stand in the way of the development of an innovative advanced nuclear industry. Transformation of the industry will require significant policy reform.

The prospect of disruptive innovation within a highly complex technology industry, fortunately, is not without precedent. In this report, we examine four advanced industries with strong records of innovation: wide-body aircraft, pharmaceuticals, commercial spaceflight, and unconventional gas extraction. Each sector offers the nuclear industry its own lessons for developing and commercializing radically new technologies—the aviation industry, for instance, demonstrates the importance of smaller designs that allow for economies of multiples as well as stable demand. The recent transformation of biotech attests to the need for a diverse mix of firms and a staged licensing process. Commercial spaceflight, meanwhile, shows the benefits of an explicit shift to private-sector-led innovation, while the success of hydraulic fracturing owes much to the close collaboration of public and private organizations.

Based on these case studies, we draw a number of recommendations for modernizing nuclear innovation in the United States, including:

1. Licensing reform. Licensing of new nuclear technologies will need to be reformed in order to support smaller, entrepreneurial firms and to build investor confidence as key design and testing benchmarks are achieved.

2. Public-private partnerships. National laboratories will need to provide private companies with access to equipment, technical resources, and expertise in order to lower costs and promote greater knowledge spillover in the testing and licensing process.

3. Targeted public funding for R&D. Significant and sustained research funding should be directed toward solving shared technical challenges.

4. Inter-firm collaboration. Policy and funding should be designed to encourage knowledge spillover and collaboration between companies.

5. Private-sector leadership. Public investment in demonstration and commercialization should follow private investment and avoid early down-selection of technologies.

Together, these changes would entail a reorganization of the nuclear sector that would be both far-reaching and long overdue, one that would drive the development and deployment of advanced nuclear reactors through bottom-up innovation, private-sector entrepreneurship, and targeted public investment. Without remaking the nuclear sector in this way, prospects for the sort of disruptive innovation that would assure nuclear energy an important role in the global energy future appear unlikely. There is unlikely to be a 21st-century nuclear renaissance without first creating a 21st-century nuclear industry.

How to Make Nuclear Innovative

Dialogue Registration 2017

Change’s Challengers

“How might we responsibly guide and accelerate innovation against its enemies?” Breakthrough’s Alex Trembath tackles this question in a review of Calestous Juma’s Innovation and Its Enemies: Why People Resist New Technologies for the latest Issues in Science and Technology. Juma, director of the Science, Technology, and Globalization Project at Harvard University’s Belfer Center, and recipient of the 2017 Breakthrough Paradigm Award, traces social resistance to innovation through a series of case studies, from coffee and the printing press to transgenic crops and genetically modified salmon. These obstacles, it turns out, have as much to do with new technologies as with the social and political factors that accompany their emergence. They also appear to go hand in hand with modernity itself. As a result, including diverse perspectives in our decision-making and creating inspiring visions for change will be essential for fostering innovation. We should also keep in mind, Trembath concludes, “that we have never been better equipped to pursue a bright technological future.”

Read the full review in Issues in Science and Technology here.

Video: Wildlife and Food Production on Farmland

 

For more food and farming news from Breakthrough, subscribe to our mailing list.

Cities: A Climate Solution?

California is committed to reducing greenhouse gas emissions 40% below 1990 levels by 2030. Based on the scoping plan from the California Air Resources Board (CARB), the LA Times estimates that in order to accomplish this goal, Southern Californians will need to cut their driving from 22.8 to 20.2 miles per day.

Unfortunately, California’s urban planning policies and restrictive zoning regulations present a major barrier to achieving these goals. As demonstrated by Issi Romem of BuildZoom, cities in California and throughout the United States concentrate their new housing development in low-density areas or expand into undeveloped regions. This pattern of low-density development increases the amount of driving required to get to work or conduct simple errands.

Higher density in California is possible. San Francisco has a population of 865,000, while Paris, a city with a smaller area (41 square miles compared to SF’s 47 square miles) houses 2.22 million souls. If San Francisco had the same population density as Paris, then SF could support a population of 2.57 million. Los Angeles has a population just under four million, but if it had the density of London, LA could support 6.7 million people. The Los Angeles Metropolitan Area houses nearly 13 million people, but if the region had the same density as the slightly larger Greater Tokyo Area, it would hold about 35 million people.
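These comparisons are simple density arithmetic. As a check, here is a minimal sketch of the San Francisco and Paris example using the figures quoted above; the small difference from the 2.57 million cited reflects rounding in the inputs.

```python
# Reproducing the back-of-the-envelope San Francisco / Paris comparison above.
# Populations and areas are the figures quoted in the text.

paris_pop, paris_area_sqmi = 2_220_000, 41
sf_area_sqmi = 47

paris_density = paris_pop / paris_area_sqmi          # ~54,000 people per sq mi
sf_at_paris_density = paris_density * sf_area_sqmi   # ~2.5 million people

print(f"Paris density: {paris_density:,.0f} per sq mi")
print(f"SF at Paris density: {sf_at_paris_density:,.0f} people")
```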

(Population density figures taken from Wikipedia)

The benefits of relaxing zoning rules to allow greater density would go well beyond reducing greenhouse gases. A lack of supply is a major reason that many housing markets, especially in the San Francisco Bay Area, have gotten so expensive. Ed Glaeser and Joe Gyourko have found that the median house price in San Francisco is $800,000 but that, absent restrictions, it could be about $280,000. In Los Angeles, median house prices are $400,000 but could be about $200,000.


As noted in An Ecomodernist Manifesto and Nature Unbound, dense cities represent an important pathway towards intensifying human economic activity, using fewer resources, and reducing pollution. And as agricultural modernization proceeds around the world, labor forces tend to shift from the farm to industry and services, which concentrate in cities. That’s why over 70% of humanity is projected to live in urban areas by 2050. Obviously, large portions of humanity will remain outside of cities. But if these trends are any indication, it is clear that bad policies prevent millions of people from living where they want to live. The aggregate effect increases the human footprint, both geographic and atmospheric.

Food Production and Wildlife on Farmland

What kind of agriculture most benefits biodiversity? In recent years, few questions have animated conservationists and land-use scientists more than this one. Rightly so: agricultural expansion and intensification are leading causes of wildlife declines and habitat loss,1 and with rising demand for agricultural products, pressures are set to mount even further.2

In a 2005 paper titled “Farming and the Fate of Wild Nature,” Rhys Green and his colleagues framed the challenge in terms of two alternative strategies: land sparing and land sharing.3 In the former, agricultural intensification reduces wildlife on farmland but spares natural habitats by shrinking the overall land footprint from producing any given amount of food. In the latter, lower productivity and wildlife-friendly methods provide more suitable conditions for birds, insects, and mammals on the farmland itself, but result in more land being utilized for any given level of production, as less food is produced per hectare. Sparing and sharing occupy two ends of a spectrum of land uses, and in theory allow conservationists, farmers, and planners to find the best possible combination of food production and wildlife conservation in a given area.

Of course, in reality, things are never as black and white, and over the decade since the seminal paper by Green and others, layers of complexity have been added to the question.4–8 But at bottom, the choice between land sparing and sharing implies a stark trade-off: we can’t have it all.

But this notion of a strong trade-off between biodiversity and agricultural productivity has been challenged, with some arguing that it is exaggerated, or does not exist at all. “A scenario that most if not all conservationists could get behind,” says Claire Kremen at UC Berkeley, is one with “large protected areas surrounded by a relatively wildlife-friendly matrix” – matrix here referring to a farmland mosaic where species can easily move around and do their business.9 “A biodiversity-productivity trade-off is not a sine qua non,” argue Ivette Perfecto and John Vandermeer.10

According to this line of thinking, changing agricultural practices could circumvent the uncomfortable trade-off between food production and wildlife. “It is more likely the specific agricultural practices and suites of practices utilized, rather than the yields they produce, that determine how hospitable the shared agricultural landscape would be for elements of biodiversity,” says Kremen.9 In particular, agroecological or organic forms of agriculture, which, among other things, replace external chemical inputs with ecosystem services, use rotations, and eschew genetically modified seeds, are often held up as the most promising way to reach this goal.9,11–14

This is not, strictly speaking, a question of whether there can be farmland with pretty good yields and fairly high levels of biodiversity. It is also not a question of picking either land sparing or land sharing. Rather, what is at stake is whether different forms of agricultural technology and management can provide for more wildlife at any given yield level. In other words, can the trade-off itself be mitigated – can we find solutions that are win-win?

 

No Food, No Space: Why High-Yield Farming Has So Little Wildlife

The answer depends on a host of factors, including which region, crop, and taxa you look at. We will start with the crops that take up the most land globally – row crops like cereals and soybean – as well as sugarcane, cotton, potatoes, and beets, all of which share relevant characteristics for this discussion.

Let us start with some first principles for how high yields are achieved in these types of crops. Several factors account for the remarkable yield improvements that have occurred in almost every world region in the last 50 years. First of all, farmers have made sure that any limiting resources like water and nutrients are in ample supply, allowing the plants to grow faster and produce bigger harvests. Second, they have tried to channel as much of these resources, as well as sunlight, into the target crop itself by planting the plants closer and closer together, making them more resilient to environmental stresses like cold, drought, and floods, and eliminating any weeds or pests that harm crop growth. Furthermore, they have maximized the amount of time during the year that plants grow, by double- and even triple-cropping where the climate so permits, and, in temperate regions, planting earlier – sometimes even during the winter.

What you end up with is a field where a single plant, the target crop itself, is extremely dominant, especially during the peak growing season. With enough water and nutrients, sunlight becomes the limiting factor, and getting the most growth out of a crop means using up every last little drop of sunlight hitting the field. This leaves little sunlight for any weeds to grow in the understory.15,16 And if any weed were to reach up and capture some sunlight, it would be targeted with herbicides, since it would hamper crop growth. There is a catch-22 for non-crop plants: they cannot shade the crop, nor can they do well in the full shade of the crop. Given scarce weeds and an extremely simple, homogeneous crop structure, there is not a lot of food or space for organisms on the following levels in the food chain; this is further exacerbated by the required elimination of any critters that harm plants.15,17,18 With few invertebrates to feed on, and little space for nesting, birds and mammals have a hard time making a living on the fields.17 Think of a corn field – how could much wildlife possibly fit in here?

While in reality the situation might not be as extreme as described above, there are good reasons to think that the yields themselves, and the exclusion of non-crop life they entail, are an important – if not the most important – determinant of farmland biodiversity. As John Krebs has noted, “Intensification is about making as great a proportion of primary production as possible available for human consumption. To the extent that this is achieved, the rest of nature is bound to suffer.”19 For example, the loss of European and American farmland birds in the last few decades has not been driven so much by direct mortality from pesticide applications as by the loss of feed sources and nesting sites in high-yield agricultural systems.18,19 In fact, in many developed countries, pesticide applications have stabilized or declined, and the overall toxicity has gone down rapidly.20 Similarly, it is not the fertilizers themselves that kill the critters, but – again – the loss of living room that results from the increasing biomass dominance of the crop.17

The challenges in raising yields are not that different among farming systems, be they “conventional,” organic, or something else. Fertilizers and water have to be supplied in adequate amounts. Pests and weeds have to be eliminated. The plants need to capture as much of the sunlight that falls on a field as possible during the growing season, and crops need to be growing during a larger share of the year, even virtually all the time, as in double- and triple-cropping systems. These are simple biophysical components of yield growth that there is not much of a way around.

 

An Unavoidable Trade-off?

In so far as yields and not individual practices determine farmland biodiversity, alternative farming systems like organic are not at an inherent advantage: raising yields in organic systems will most likely reduce biodiversity on the fields, just as lowering yields in conventional systems can increase it. European farming was about as “conventional” in the 1970s as it is today, especially in terms of external inputs like chemical fertilizers and pesticides. Yet populations of many farmland bird species, for instance, were as much as 50% higher at that time.21 While this has been driven in part by the loss of field margins and hedgerows, the changes in cropping itself have most likely contributed substantially.19,22,23

Organic farming – the best-defined of alternative or agroecological farming methods – is consistently associated with more farmland wildlife. In a comprehensive review of existing evidence, Janne Bengtsson and colleagues found that organic farmland has on average 30% higher species richness and 50% higher abundance as compared with typical non-organic practices.24 But organic farming also has consistently lower yields compared to more conventional systems. Several studies have found the yield gap between organic and conventional farming to be around 20 to 25%.25–28 When looking at only the most comparable organic and conventional systems, Verena Seufert and her colleagues found the yield difference to be as high as 34%.26 That implies that producing one unit of food with organic methods takes about 50% more land. The exact yield gap depends on the crop type, with cereals having a larger gap and legumes like soybean a lower one.28
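The land arithmetic behind that last statement is straightforward. The short sketch below converts a yield gap into the extra land required for the same output; the 20%, 25%, and 34% gaps are the figures cited above.

```python
# Converting a yield gap into the extra land needed for the same output, as a
# check on the "about 50% more land" statement above.

def extra_land(yield_gap):
    """Fractional increase in land needed when yields are lower by yield_gap."""
    return 1 / (1 - yield_gap) - 1

for gap in (0.20, 0.25, 0.34):
    print(f"{gap:.0%} lower yield -> {extra_land(gap):.0%} more land")
# 20% -> 25% more land; 25% -> 33%; 34% -> 52%, i.e., roughly 50% more land.
```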

This implies that the difference in biodiversity between organic and conventional, for any given yield level, might not be that big. Few studies have looked simultaneously at yields and biodiversity, but one by Doreen Gabriel and colleagues, focusing on cereals in England, is an exception. They concluded that “the higher biodiversity levels in organic compared to conventional farming observed in many studies may simply reflect the lower production levels rather than the more wildlife-friendly farming methods per se.”29 Another study by Paul Donald and others found that cereal yields alone explained a large share of the decline in farmland bird populations in Europe.23

What this means is that, should the yield gap between organic and conventional farming narrow, the difference in biodiversity would probably also shrink. It is plausible, in fact, that if organic and conventional farming had the same yields, they would also have the same levels of biodiversity. It also means that efforts to increase farmland biodiversity – in any system – are very likely to require more cropland for any given level of food demand. The resulting cropland expansion usually takes place at the expense of natural habitats like forests, which are often host to more threatened species.8,30 What wildlife is gained on farms might thus be outweighed by losses elsewhere.


Biodiversity on European farms has decreased due to the loss of field margins and hedgerows like those pictured above, along with changes in cropping to increase yields.

Wiggle Room: The Potential For Win-Win Practices

What we have seen so far points to a strong trade-off between yields and biodiversity. However, as Ben Phalan, a zoologist with the University of Cambridge, remarked, “There is probably quite a lot of wiggle room available from where we are at the moment.” This wiggle room not only points to some promising opportunities to mitigate the trade-off, but also to the limitations of viewing organic and conventional farming as two static, dichotomous farming systems.

Limiting or avoiding tillage, for example, can be a boon for biodiversity, since it reduces disturbance and leaves plant residues on the ground for invertebrates and other organisms to feed on, although this effect may be canceled out if no-till requires greater herbicide applications.31,32 Cover crops can have a similar beneficial effect outside the growing season, but can cause a yield penalty if they deplete water stored in the soil, says Andrew Kniss, an agronomist at the University of Wyoming. Furthermore, according to Phalan, cover crops are often dominated by a single species, with limited benefits for plant biodiversity.

Kniss describes how some agronomists are experimenting with planting forage crops as understory in corn fields early in the growing season. The forage crops get a little peek at the sun before the corn canopy closes, then just barely make it through the peak growing season, but finally get some more sun once the corn plants start drying out again. Even this little bit of extra plant diversity can make a difference, as long as they don’t compete with the crop itself for sunlight.

Intercropping – where more than one crop is planted in each field – may also create a little more diversity, giving a small boost to the entire food chain.33 Some techniques for intercropping can be compatible with large-scale mechanical harvesters, so they do not require a complete overhaul of the farming system, according to Kniss.

The most promising avenue for win-wins, however, lies in pest control. Many insects and other invertebrates are not pests, and can coexist with crops if they can find the right resources to survive in the fields. Yet many of them currently fall prey to the use of pesticides, which often kill species other than the target pests.18 Some insects even help with pest control, being natural predators to the pests.34

The challenge when it comes to pest control is precision: being as effective as possible in killing the pests while harming as few other species as possible. Fortunately, there are many options. New high-tech tools allow farmers to better monitor pest outbreaks and then only spray pesticides exactly when and where they are needed.35 Innovations in synthetic pesticides have allowed them to become more targeted and less toxic overall.20 And new GM traits like Bt, where the plant – usually corn, soy, and cotton – produces its own insecticide, makes it unnecessary to do any spraying at all. Studies have shown non-target invertebrates to be more abundant in Bt cotton and corn fields than in fields managed with conventional insecticides.36

Another set of options is typically associated with agroecological or organic farming, but could just as well be adopted in any other system. Most importantly, crop rotations – where fields alternate between two or more crops over time – make it far harder for most pest species to persist from year to year, while allowing for a broader diversity of non-pest species.33,37 In this sense, rotations are a preemptive strategy that can reduce the need for reactive applications of pesticides once there is an outbreak. When outbreaks do occur, organic systems no longer have an advantage, since they rely on a small number of often highly toxic pesticides.38 This often – but far from always – creates a situation, according to Kniss, where organic farmers as defined today use good preventative tools but bad reactive tools, and conventional farmers use good reactive tools but often fail to use preventative ones. How those two balance out is not clear, says Kniss, but the way forward is clear: combining organic’s preemptive rotations with the diversity and flexibility offered by synthetic pesticides and GM traits like Bt.


Crop rotations – where fields alternate between two or more crops over time – make it far harder for most pest species to persist from year to year, while allowing for a broader diversity of non-pest species.

What About Other Crops?

The challenges and opportunities in boosting biodiversity described above apply, as mentioned at the outset, to most row crops like corn, soybean, or cotton. The situation might, however, be different for other crops, especially perennial ones like coffee, cocoa, oil palm, or rubber trees. By virtue of being taller and sometimes allowing both some understory and overstory, at least some of these trees can foster quite a bit more diversity in flora and fauna than, say, a cornfield. Cocoa and coffee, in particular, have been touted as examples of crops where high yields and biodiversity can coexist.9,12,39 This is true – but there is still a fairly strong trade-off. Cocoa and coffee can both be grown under a selectively thinned canopy in native forests. But the shade does come at a cost for the growth of the crops, and, in general terms, the less shade they have, the higher the yields.40,41 The very highest yields are typically recorded where there is no shade at all, at which point these plantations start resembling the more species-poor plantations of, for example, oil palm.40

Broadly speaking, more shade means more structural diversity, and thus more niches for other life. So when a native forest is thinned to make way for a cocoa or coffee plantation, forest-dwelling species usually suffer; one study found the number of forest species to decline by 60% upon this initial conversion to cocoa agroforestry.42 Going from partly shaded to full-sun systems likely involves a similar drop. But in between, along some part of the shade gradient, there might be opportunities for raising yields marginally without losing a lot of biodiversity, although studies that show this for cocoa haven’t accounted for the composition, only the number, of species.43 The biodiversity friendliness might also decline with distance from intact forest fragments and over time, as native trees are replaced by planted, often non-native trees.40,43,44


The orange-billed nightingale-thrush is an insect-eating bird that lives on Costa Rican “shade” coffee plantations.

Can’t Have the Cake and Eat It

All in all, there clearly exist some opportunities to reduce the trade-off between yields and biodiversity on farmland – although there is no guarantee that the species that will benefit are those of conservation concern, as opposed to widespread generalists. The examples mentioned above are not exhaustive; there are more out there. A lot of effort among conservationists has gone into comparing the trade-off between land sparing and sharing in different locations, but less into studying how it can be mitigated in the first place. Claire Kremen is right when she says that “focusing on how specific agricultural practices or suites of practices relate to both yields/profits and biodiversity” is what can provide scope for management interventions.9 This is an agenda that not only unites land sparers and land sharers, but that should also make it more obvious that conservation scientists and agronomists need to work more closely together.

Nevertheless, it is also clear that we cannot proceed on the assumption that these trade-offs will be more than marginally reduced, let alone eliminated. What sort of strategy each country or locale adopts – be it sparing or sharing or something in between – is ultimately up to the democratic will of these constituencies. Organic farming can be a perfectly legitimate choice, even though, due to its inherent limitations, it will likely remain lower yielding than non-organic farming for the foreseeable future.

But there will be consequences that extend beyond that single place. When Europeans choose to convert increasing areas to organic farming or implement certain forms of agri-environmental measures in the context of a shrinking overall area for agriculture, the lost production will be picked up somewhere else, more likely than not in a biodiverse tropical region. We can make the best of the situation by locating agriculture in the places where the biodiversity losses are smallest and the yield gains are the greatest, and steer any expansion that does happen into the least sensitive areas. But by and large, when it comes to biodiversity and farming, we cannot have the cake and eat it.

Where the Wildlife Are

What kind of agriculture most benefits biodiversity? In recent years, few questions have animated conservationists and land-use scientists more than this one. Rightly so: agricultural expansion and intensification are leading causes of wildlife declines and habitat loss,1 and with rising demand for agricultural products, pressures are set to mount even further.2

In a 2005 paper titled “Farming and the Fate of Wild Nature,” Rhys Green and his colleagues framed the challenge in terms of two alternative strategies: land sparing and land sharing.3 In the former, agricultural intensification reduces wildlife on farmland but spares natural habitats by shrinking the overall land footprint from producing any given amount of food. In the latter, lower productivity and wildlife-friendly methods provide more suitable conditions for birds, insects, and mammals on the farmland itself, but result in more land being utilized for any given level of production, as less food is produced per hectare. Sparing and sharing occupy two ends of a spectrum of land uses, and in theory allow conservationists, farmers, and planners to find the best possible combination of food production and wildlife conservation in a given area.
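
A toy example (with invented numbers) makes the trade-off concrete: to supply 100 tons of grain, land producing 10 tons per hectare occupies 10 hectares and hosts little wildlife, while land producing 5 tons per hectare hosts more wildlife on the farm but occupies 20 hectares, leaving 10 fewer hectares of natural habitat untouched. The question is where along that spectrum total wildlife comes out best.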

Of course, in reality, things are never as black and white, and over the decade since the seminal paper by Green and others, layers of complexity have been added to the question.4–8 But at bottom, the choice between land sparing and sharing implies a stark trade-off: we can’t have it all.

But this notion of a strong trade-off between biodiversity and agricultural productivity has been challenged, with some arguing that it is exaggerated, or does not exist at all. “A scenario that most if not all conservationists could get behind,” says Claire Kremen at UC Berkeley, is one with “large protected areas surrounded by a relatively wildlife-friendly matrix” – matrix here referring to a farmland mosaic where species can easily move around and do their business.9 “A biodiversity-productivity trade-off is not a sine qua non,” argue Ivette Perfecto and John Vandermeer.10

According to this line of thinking, changing agricultural practices could circumvent the uncomfortable trade-off between food production and wildlife. “It is more likely the specific agricultural practices and suites of practices utilized, rather than the yields they produce, that determine how hospitable the shared agricultural landscape would be for elements of biodiversity,” says Kremen.9 In particular, agroecological or organic forms of agriculture, which, among other things, replace external chemical inputs with ecosystem services, use rotations, and eschew genetically modified seeds, are often held up as the most promising way to reach this goal.9,11–14

This is not, strictly speaking, a question of whether there can be farmland with pretty good yields and fairly high levels of biodiversity. It is also not a question of picking either land sparing or land sharing. Rather, what is at stake is whether different forms of agricultural technology and management can provide for more wildlife at any given yield level. In other words, can the trade-off itself be mitigated – can we find solutions that are win-win?

 

No Food, No Space: Why High-Yield Farming Has So Little Wildlife

The answer depends on a host of factors, including which region, crop, and taxa you look at. We will start with the crops that take up the most land globally – row crops like cereals and soybean – as well as sugarcane, cotton, potatoes, and beets, all of which share relevant characteristics for this discussion.

Let us start with some first principles for how high yields are achieved in these types of crops. Several factors account for the remarkable yield improvements that have occurred in almost every world region in the last 50 years. First of all, farmers have made sure that any limiting resources like water and nutrients are in ample supply, allowing the plants to grow faster and produce bigger harvests. Second, they have tried to channel as much of these resources, as well as sunlight, into the target crop itself by planting the plants closer and closer together, making them more resilient to environmental stresses like cold, drought, and floods, and eliminating any weeds or pests that harm crop growth. Furthermore, they have maximized the amount of time during the year that plants grow, by double- and even triple-cropping where the climate so permits, and, in temperate regions, planting earlier – sometimes even during the winter.

What you end up with is a field where a single plant, the target crop itself, is extremely dominant, especially during the peak growing season. With enough water and nutrients, sunlight becomes the limiting factor, and getting the most growth out of a crop means using up every last little drop of sunlight hitting the field. This leaves little sunlight for any weeds to grow in the understory.15,16 And if any weed were to reach up and capture some sunlight, it would be targeted with herbicides, since it would hamper crop growth. There is a catch-22 for non-crop plants: they cannot shade the crop, nor can they do well in the full shade of the crop. Given scarce weeds and an extremely simple, homogeneous crop structure, there is not a lot of food or space for organisms on the following levels in the food chain; this is further exacerbated by the required elimination of any critters that harm plants.15,17,18 With few invertebrates to feed on, and little space for nesting, birds and mammals have a hard time making a living on the fields.17 Think of a corn field – how could much wildlife possibly fit in here?

While in reality the situation might not be as extreme as described above, there are good reasons to think that the yields themselves, and the exclusion of non-crop life they entail, are an important – if not the most important – determinant of farmland biodiversity. As John Krebs has noted, “Intensification is about making as great a proportion of primary production as possible available for human consumption. To the extent that this is achieved, the rest of nature is bound to suffer.”19 For example, the loss of European and American farmland birds in the last few decades has not been driven so much by direct mortality from pesticide applications as by the loss of feed sources and nesting sites in high-yield agricultural systems.18,19 In fact, in many developed countries, pesticide applications have stabilized or declined, and the overall toxicity has gone down rapidly.20 Similarly, it is not the fertilizers themselves that kill the critters, but – again – the loss of living room that results from the increasing biomass dominance of the crop.17

The challenges in raising yields are not that different among farming systems, be they “conventional,” organic, or something else. Fertilizers and water have to be supplied in adequate amounts. Pests and weeds have to be eliminated. The plants need to capture as much of the sunlight that falls on a field as possible during the growing season, and crops need to be growing during a larger share of the year, even virtually all the time, as in double- and triple-cropping systems. These are simple biophysical components of yield growth that there is no real way around.

 

An Unavoidable Trade-off?

Insofar as yields and not individual practices determine farmland biodiversity, alternative farming systems like organic do not hold an inherent advantage: raising yields in organic systems will most likely reduce biodiversity on the fields, just as lowering yields in conventional systems can increase it. European farming was about as “conventional” in the 1970s as it is today, especially in terms of external inputs like chemical fertilizers and pesticides. Yet populations of many farmland bird species, for instance, were as much as 50% higher at that time.21 While this has been driven in part by the loss of field margins and hedgerows, changes in cropping itself have most likely contributed substantially.19,22,23

Organic farming – the best-defined of alternative or agroecological farming methods – is consistently associated with more farmland wildlife. In a comprehensive review of existing evidence, Janne Bengtsson and colleagues found that organic farmland has on average 30% higher species richness and 50% higher abundance as compared with typical non-organic practices.24 But organic farming also has consistently lower yields compared to more conventional systems. Several studies have found the yield gap between organic and conventional farming to be around 20 to 25%.25–28 When looking at only the most comparable organic and conventional systems, Verena Seufert and her colleagues found the yield difference to be as high as 34%.26 That implies that producing one unit of food with organic methods takes about 50% more land. The exact yield gap depends on the crop type, with cereals having a larger gap and legumes like soybean a lower one.28
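
To spell out the arithmetic behind that figure: if organic yields are a fraction (1 − g) of conventional yields, then matching conventional output requires 1/(1 − g) times as much land. With g = 0.34, that is 1/(1 − 0.34) ≈ 1.52, or roughly 50% more land per unit of food; with the more typical 20 to 25% gap, the land penalty is closer to 25 to 33%.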

This implies that the difference in biodiversity between organic and conventional, for any given yield level, might not be that big. Few studies have looked simultaneously at yields and biodiversity, but one by Doreen Gabriel and colleagues, focusing on cereals in England, is an exception. They concluded that “the higher biodiversity levels in organic compared to conventional farming observed in many studies may simply reflect the lower production levels rather than the more wildlife-friendly farming methods per se.”29 Another study by Paul Donald and others found that cereal yields alone explained a large share of the decline in farmland bird populations in Europe.23

What this means is that, should the yield gap between organic and conventional farming narrow, the difference in biodiversity would probably also shrink. It is plausible, in fact, that if organic and conventional farming had the same yields, they would also have the same levels of biodiversity. It also means that efforts to increase farmland biodiversity – in any system – are very likely to require more cropland for any given level of food demand. The resulting cropland expansion usually takes place at the expense of natural habitats like forests, which are often host to more threatened species.8,30 What wildlife is gained on farms might thus be outweighed by losses elsewhere.


Biodiversity on European farms has decreased due to the loss of field margins and hedgerows like those pictured above, along with changes in cropping to increase yields.

Wiggle Room: The Potential For Win-Win Practices

What we have seen so far points to a strong trade-off between yields and biodiversity. However, as Ben Phalan, a zoologist with the University of Cambridge, remarked, “There is probably quite a lot of wiggle room available from where we are at the moment.” This wiggle room not only points to some promising opportunities to mitigate the trade-off, but also to the limitations of viewing organic and conventional farming as two static, dichotomous farming systems.

Limiting or avoiding tillage, for example, can be a boon for biodiversity, since it reduces disturbance and leaves plant residues on the ground for invertebrates and other organisms to feed on, although this effect may be canceled out if no-till requires greater herbicide applications.31,32 Cover crops can have a similar beneficial effect outside the growing season, but can cause a yield penalty if they deplete water stored in the soil, says Andrew Kniss, an agronomist at the University of Wyoming. Furthermore, according to Phalan, cover crops are often dominated by a single species, with limited benefits for plant biodiversity.

Kniss describes how some agronomists are experimenting with planting forage crops as understory in corn fields early in the growing season. The forage crops get a little peek at the sun before the corn canopy closes, then just barely make it through the peak growing season, but finally get some more sun once the corn plants start drying out again. Even this little bit of extra plant diversity can make a difference, as long as they don’t compete with the crop itself for sunlight.

Intercropping – where more than one crop is planted in each field – may also create a little more diversity, giving a small boost to the entire food chain.33 Some techniques for intercropping can be compatible with large-scale mechanical harvesters, so they do not require a complete overhaul of the farming system, according to Kniss.

The most promising avenue for win-wins, however, lies in pest control. Many insects and other invertebrates are not pests, and can coexist with crops if they can find the right resources to survive in the fields. Yet many of them currently fall prey to the use of pesticides, which often kill species other than the target pests.18 Some insects even help with pest control, being natural predators to the pests.34

The challenge when it comes to pest control is precision: being as effective as possible in killing the pests while harming as few other species as possible. Fortunately, there are many options. New high-tech tools allow farmers to better monitor pest outbreaks and then only spray pesticides exactly when and where they are needed.35 Innovations in synthetic pesticides have allowed them to become more targeted and less toxic overall.20 And new GM traits like Bt, where the plant – usually corn, soy, or cotton – produces its own insecticide, can make it unnecessary to do any spraying at all. Studies have shown non-target invertebrates to be more abundant in Bt cotton and corn fields than in fields managed with conventional insecticides.36

Another set of options is typically associated with agroecological or organic farming, but could just as well be adopted in any other system. Most importantly, crop rotations – where fields alternate between two or more crops over time – make it far harder for most pest species to persist from year to year, while allowing for a broader diversity of non-pest species.33,37 In this sense, rotations are a preemptive strategy that can reduce the need for reactive applications of pesticides once there is an outbreak. When outbreaks do occur, organic systems no longer have an advantage, since they rely on a small number of often highly toxic pesticides.38 This often – but far from always – creates a situation, according to Kniss, where organic farmers as defined today use good preventative tools but bad reactive tools, and conventional farmers use good reactive tools but often fail to use preventative ones. How those two balance out is not clear, says Kniss, but the way forward is clear: combining organic’s preemptive rotations with the diversity and flexibility offered by synthetic pesticides and GM traits like Bt.


Crop rotations – where fields alternate between two or more crops over time – make it far harder for most pest species to persist from year to year, while allowing for a broader diversity of non-pest species.

What About Other Crops?

The challenges and opportunities in boosting biodiversity described above apply, as mentioned at the outset, to most row crops like corn, soybean, or cotton. The situation might, however, be different for other crops, especially perennial ones like coffee, cocoa, oil palm, or rubber trees. By virtue of being taller and sometimes allowing both some understory and overstory, at least some of these trees can foster quite a bit more diversity in flora and fauna than, say, a cornfield. Cocoa and coffee, in particular, have been touted as examples of crops where high yields and biodiversity can coexist.9,12,39 This is true – but there is still a fairly strong trade-off. Cocoa and coffee can both be grown under a selectively thinned canopy in native forests. But the shade does come at a cost for the growth of the crops, and, in general terms, the less shade they have, the higher the yields.40,41 The very highest yields are typically recorded where there is no shade at all, at which point these plantations start resembling the more species-poor plantations of, for example, oil palm.40

Broadly speaking, more shade means more structural diversity, and thus more niches for other life. So when a native forest is thinned to make way for a cocoa or coffee plantation, forest-dwelling species usually suffer; one study found the number of forest species to decline by 60% upon this initial conversion to cocoa agroforestry.42 Going from partly shaded to full-sun systems likely involves a similar drop. But in between, along some part of the shade gradient, there might be opportunities for raising yields marginally without losing a lot of biodiversity, although studies that show this for cocoa haven’t accounted for the composition, only the number, of species.43 The biodiversity friendliness might also decline with distance from intact forest fragments and over time, as native trees are replaced by planted, often non-native trees.40,43,44


The orange-billed nightingale-thrush is an insect-eating bird that lives on Costa Rican “shade” coffee plantations.

Can’t Have the Cake and Eat It

All in all, there clearly exist some opportunities to reduce the trade-off between yields and biodiversity on farmland – although there is no guarantee that the species that will benefit are those of conservation concern, as opposed to widespread generalists. The examples mentioned above are not exhaustive; there are more out there. A lot of effort among conservationists has gone into comparing the trade-off between land sparing and sharing in different locations, but less into studying how it can be mitigated in the first place. Claire Kremen is right when she says that “focusing on how specific agricultural practices or suites of practices relate to both yields/profits and biodiversity” is what can provide scope for management interventions.9 This is an agenda that not only unites land sparers and land sharers, but that should also make it more obvious that conservation scientists and agronomists need to work more closely together.

Nevertheless, it is also clear that we cannot proceed on the assumption that these trade-offs will be more than marginally reduced, let alone eliminated. What sort of strategy each country or locale adopts – be it sparing or sharing or something in between – is ultimately up to the democratic will of these constituencies. Organic farming can be a perfectly legitimate choice, even though, due to its inherent limitations, it will likely remain lower yielding than non-organic farming for the foreseeable future.

But there will be consequences that extend beyond that single place. When Europeans choose to convert increasing areas to organic farming or implement certain forms of agri-environmental measures in the context of a shrinking overall area for agriculture, the lost production will be picked up somewhere else, more likely than not in a biodiverse tropical region. We can make the best of the situation by locating agriculture in the places where the biodiversity losses are smallest and the yield gains are the greatest, and steer any expansion that does happen into the least sensitive areas. But by and large, when it comes to biodiversity and farming, we cannot have the cake and eat it.

The Shrinking Footprint of American Meat

In “The Future of Meat” Marian Swain draws attention to multiple factors that combine to determine the impacts of meat consumption on natural resources. Swain identifies area of cropland for animal feed as a key impact. Using the “ImPACT Identity” we can dissect causes of changes in the number of acres of cropland employed to satisfy the American appetite for meat.1 Our formalism requires some simplifications and assumptions about the path from field to plate, but the result robustly indicates the power of five trends.

To answer “How much cropland is needed to raise meat for the US population?” the ImPACT identity posits that the cropland used to provide meat equals the product of five factors: the population; GDP per capita; meat consumed per dollar of GDP; feed required per unit of meat; and cropland required per unit of feed.

Annual changes in any variable in this sequence (assuming the others stay fixed) alter the number of acres of cropland used to satisfy American meat consumption. In casual terms, changes in the number of mouths, affluence, taste for meats, efficient growth of animals, and crop yields all influence the land area impacted or spared.

Spanning a long interval with reliable data over which American habits evolved, we calculated how each variable in the ImPACT identity changed in the United States over the period from 1969 to 2010. Population and GDP per capita grew at annual rates averaging about positive 1% and 1.7% respectively over this period. In contrast, the same interval saw negative annual changes in the amount of meat Americans ate per dollar, the amount of grain needed to produce a unit of meat, and the amount of land needed to grow that grain. On average, between 1969 and 2010, the amount of US cropland used to grow meat fell almost 0.8% per year.
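
To make the decomposition concrete, here is a minimal sketch (in Python, purely illustrative): the +1% and +1.7% rates for population and GDP per capita and the roughly −0.8% net change come from the figures above, but the split of the remaining decline across meat per GDP, feed per meat, and land per feed is an assumed one, chosen only so that the five rates sum to the observed net.

```python
# Illustrative ImPACT decomposition of US cropland used for meat, 1969-2010.
# The positive rates (population, GDP per capita) and the ~-0.8%/yr net change
# come from the text; the split of the decline across the last three factors
# is an assumption for illustration only.
annual_rates = {
    "population":     +1.0,  # % per year
    "gdp_per_capita": +1.7,
    "meat_per_gdp":   -1.5,  # assumed
    "feed_per_meat":  -0.8,  # assumed
    "land_per_feed":  -1.2,  # assumed (rising feed-crop yields)
}

# Cropland is the product of the five factors, so for small rates the net
# annual change is approximately the sum of the individual rates.
net = sum(annual_rates.values())
print(f"Net change in cropland for meat: {net:+.1f}% per year")

# Compounded over the 41 years from 1969 to 2010 this gives ~0.72x,
# consistent with cropland falling by nearly a third.
relative_cropland = (1 + net / 100) ** 41
print(f"2010 cropland relative to 1969: {relative_cropland:.2f}x")
```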

In essence, even as wealth and overall meat consumption rose, shifting taste and more efficient growers lowered the absolute impact—in this case, land area required for crops—of producing meat. The graph below shows this decoupling of meat and land. 

Figure 1. Sources: World Bank; US Bureau of Economic Analysis; USDA; Congressional Research Service 

Our analysis covers three types of meat—beef, swine, and poultry—which account for over 95% of American consumption of meat from land animals. Combined, such American meat consumption grew from 15.3 million tons in 1969 to 23.7 million in 2014.2 The composition of that total shifted strikingly. While the average 1969 American ate three times as much beef as chicken, a recent average American ate about 10% more chicken than beef. Pork consumption fell from being twice that of chicken to less than three-quarters of it.

We assumed that all US consumption of the three meats comes from animals raised in feedlots. In reality, the USDA reports that “beef from alternative production systems … accounts for about 3 percent of the U.S. beef market.”3 While animals feeding on grass and other extensive sources of nutrition might assume greater importance in the future, the tiny fraction in our interval means adjusting for it does not change the picture. We also assumed corn, which typically forms 80% or more of the diet of cattle and swine, as the sole feed for each animal type, with feed conversion ratios of 6:1 for beef, 3:1 for swine, and 2:1 for poultry.4 Our values for corn yields (bushels per acre) come from the USDA.5 Rises in yields of soybeans, whose products are fed to chickens and pigs, resemble those of corn, so adding detail on different feeds does not change the picture either.
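
As a rough sketch of how these assumptions turn tons of meat into acres of cropland (not the authors’ exact calculation), the snippet below applies the feed conversion ratios to the 1969 consumption implied by the text and divides by corn yield. The 3:2:1 beef:pork:chicken split follows the ratios quoted above; the corn yield is an assumed, era-typical value rather than a figure cited in the text.

```python
# Back-of-the-envelope version of the field-to-plate calculation.
# Meat totals follow the text: 15.3 million (short) tons in 1969, split
# roughly 3:2:1 among beef, pork, and chicken. Feed conversion ratios are
# the 6:1 / 3:1 / 2:1 figures cited above. The corn yield is an assumed,
# illustrative value for the era.
LB_PER_TON = 2000          # short ton
LB_PER_BUSHEL_CORN = 56    # standard bushel weight for shelled corn

meat_tons = {"beef": 7.65e6, "pork": 5.10e6, "chicken": 2.55e6}   # sums to 15.3M
feed_conversion = {"beef": 6, "pork": 3, "chicken": 2}            # lb feed per lb meat
corn_yield_bu_per_acre = 85                                        # assumed ~1969 yield

feed_lb = sum(meat_tons[m] * LB_PER_TON * feed_conversion[m] for m in meat_tons)
acres = feed_lb / LB_PER_BUSHEL_CORN / corn_yield_bu_per_acre
print(f"Cropland for feed, 1969: roughly {acres / 1e6:.0f} million acres")
```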

In sum, our calculation shows that the amount of cropland used for raising land meat for Americans fell by nearly a third: around 9 million acres, about the size of Maryland. This occurred over four decades famous for burgers, sausages, and fried chicken. The rise in the yield of feed per acre, along with the shift in the American diet away from beef and pork to chicken, contributed the most to the reduction of cropland used for meat. Because our data begin in 1969 when American meat consumption was already high, rising affluence resulted in little more American appetite for meat per capita. The land sparing was surely compounded by improvements in animal breeding and nutrition—for example, by the addition of trace elements such as zinc to animal diets.

Collectively, the lowering of meat/GDP, feed/meat, and land/feed allowed 319 million Americans in 2014 to eat nearly 24 million tons of meat from less land than the 202 million Americans in 1969 ate their 15 million tons of meat. The recent American meat experience shows that sizeable reductions can occur even without a shift to a mainly vegetarian diet and suggests caution in predicting rising use of land for meat around the world. 

Losing Ground in the Amazon

We can’t talk about deforestation without talking about agriculture, which is why so much of the literature on conservation is concerned with topics like soybean demand, beef production, and demand for vegetable oils. Agricultural expansion is a key driver of forest loss, not only in the Amazon but in other important conservation regions like central Africa and Indonesia. This link between agriculture and conservation helped motivate our desire to do a deep dive on food and farming topics with our ongoing series The Future of Food.

A recent New York Times piece presents some disheartening news from the Amazon basin: after years of zero-deforestation pledges from governments and agricultural companies, deforestation seems to be on the rise again. In Brazil, deforestation increased last year after a decade of decline, while in neighboring Bolivia forest loss has been steadily accelerating since the 1990s. Why? Rising demand for soy, mostly for use as livestock feed.

Policymakers in developing and middle-income countries face a trade-off between capitalizing on the tremendous agricultural potential of tropical regions and preserving their forests. One of the experts quoted in the Times piece puts it quite starkly: “The forest is seen as useless land that needs to be made useful.” The socialist government in Bolivia views soy production as a way to promote food sovereignty in the developing nation and has explicit plans to continue clearing forest for farmland in the coming decades.   

Broadly speaking, the way to reconcile this trade-off between crop production and conservation is through sustainable intensification. That means growing more crops on existing farmland by increasing yields, or growing crops on degraded or other previously developed land, rather than clearing forests. When agricultural giants like Cargill sign zero-deforestation pledges, they still want to increase production, so their only hope is sustainable intensification.

But even as intensification drives cropland productivity up and deforestation rates down globally, we are still losing the battle to protect some of Earth’s most treasured and biodiverse ecosystems. It is tragic, but no coincidence, that some of the world’s most productive farming regions lie in the tropical climates that are also home to some of the world’s oldest and largest rainforests. Indonesia may be the perfect place to grow oil palm, which is a higher-yielding oil crop than competitors like soybean, but that is meager consolation when we look at the vast areas of forest habitat that have been lost there due to oil palm expansion.

The recent reported uptick in deforestation in the Amazon is a worrying sign that efforts at sustainable intensification may be stalling. We should not lose sight of the real progress that has occurred, however: even with the increase in deforestation last year, Brazil’s rate of forest loss is still much lower than it was in the early 2000s.

An ecosystem as vast as the Amazon requires conservation planning at a regional level, and a patchwork of policies across Brazil, Bolivia, and elsewhere will leave too many gaps for exploitation. As satellite imaging gets better and more frequent, it will provide important data for governments and NGOs to track forest loss and identify priorities. What areas have already been cleared, where could production be intensified on existing croplands, and where are the most sensitive ecological areas to focus protection efforts? These questions should guide new eco-regional planning approaches, which are our best hope to preserve rainforests amid the unrelenting pressures of agricultural expansion.

Another Record Year for Nuclear Power

These new reactors also came from a greater diversity of countries: five in China, and one each in Pakistan, India, Russia, and the United States. The completion of Watts Bar-2 in Tennessee was perhaps the biggest story, as it was the first US reactor to come online since 1996. The other big story comes from South Korea, which brought online its first APR1400, the country’s new GenIII design, four more of which are currently being built in the United Arab Emirates.

Looking forward to 2017, there could be many exciting new additions—see the table below (from the World Nuclear Association). Most significantly, the Emirates may start up their very first commercial reactor at Barakah-1, making the UAE the first country to join the nuclear-powered club since China in 1991. There’s also potential for between seven and fourteen restarts of reactors in Japan, which would reduce the country's imports of coal and LNG.

Most of the reactors expected to come online in 2017 will be typical Pressurized Water Reactors (PWRs). But China may connect its first modular high-temperature gas-cooled reactor to the grid this year at Shidaowan. India is also slated to bring online another two heavy water reactors.

Doomsday-Sayers

Perhaps worse is the way in which technological change seems to be intertwined with political instability, leading us down what might look like a spiralling path toward destruction. Our technologies have aggravated our tribal impulses, Osnos tells us, manifesting in both polarization and dystopianism. The rise of statistics, as William Davies points out in The Guardian, seems to have stoked political dysfunction rather than quelled it. And according to Jason Tanz of Wired, the ethos of personal liberation that has accompanied Silicon Valley-esque technological change since the 1960s has recently been co-opted by authoritarian forces, leading us to the political predicament we face today.

The notion of doomsday itself, however, magnifies these challenges far more than it resolves them. The looming fear that the world is ending may make sense in the face of an actual nuclear winter, but it tells us very little about how to deal with our problems, as Will Boisvert has argued—and it does far worse by scaring us away from technologies that might serve as solutions, not to mention further into our ideological silos.

Technology and political institutions, Osnos and Davies remind us separately, are both in the end tools—tools that may be employed toward destructive or constructive ends. Reinvesting in both, as well as in a positive vision to guide them, will take us a long way farther than the short road to apocalypse possibly could.

As science faces its own existential crisis, a number of smart commentaries have emerged to trace the lack of faith in the institution, as well as possible ways forward. Alan Levinovitz details some contributing factors to Trump’s ascension—namely, “epistemic uncertainty, existential panic, and anti-elitism”—to emphasize that reason alone will fail to convince those whose worldview depends on the rejection of scientific consensus. Peter Broks drives the point home further, arguing that “science communication has failed.” We are in need not of better messaging tools, he thinks, but rather of “a vision of society—free, open, equal and inclusive—that science can help us create.” Matthew Nisbet hashes out a fuller version of this argument in American Scientist:

What is needed is broader strategic thinking about the handful of policy goals and investments that scientists can join with others in pursuing that would have an enduring impact on problems such as income inequality and political polarization, and on the threat they pose to the scientific enterprise.

Part of this requires that scientists recognize their own role in exacerbating inequality; especially revealing is the finding that those individuals considered “scientific optimists” tend to have benefitted from the changes that accompany innovation. If scientists do seek to mobilize, Nisbet says, it should be in the service of broader goals and institutions—those that might serve to dispel the inequality and polarization currently undermining science and society alike.

Breakthrough’s Ted Nordhaus, Alex Trembath and Jessica Lovering make the case that the Trump era, if nothing else, challenges “climate advocates to grapple seriously with why their politics and policies have failed so consistently for the last several decades” and to consider new options—including the advanced nuclear option. Paul McDivitt, writing for Ensia, similarly urges environmentalists to more accurately portray the success of renewables, a tactic that adds up not to “renewables denial,” as Bill McKibben would have it, but rather to a responsible assessment of our clean energy needs in the face of climate change.

Indeed, a new study led by Glen Peters demonstrates that any clean energy gains made to date by renewables have been offset by reductions in nuclear, leading Christopher Green to believe that we're in need of some serious technological breakthroughs. David Biello, writing for Yale’s E360, concurs; our “lack of progress underscores the urgent need for technological innovations”—along with the political will and capital to do so.

Nate Johnson attempts to tackle the dual problem of poverty and environmental degradation in a recent series for Grist; modernization and urbanization, it turns out, tend to precede not only rising incomes but also reforestation and conservation in turn. “The faster poverty diminishes, the more natural systems will rebound to nurture future generations,” he concludes. “This suggests that ending poverty must be a central focus, if not the central focus, of environmental efforts.” Such a reorientation would require a seismic shift in the environmental narrative, one away from “stopping change” and toward constructive and equitable action, Johnson says.

Jeremy Cherfas supplies a bevy of arguments “in praise of meat, milk, and eggs” for his latest “Eat This Podcast,” delving into questions of equity, nutrition, health, and environmental impact. Noting the projected rise in demand for animal-sourced foods over the next few decades—50 to 70 percent, he learns—he emphasizes the importance, for both humans and the environment, of making meat production in developing countries more efficient.

Olga Khazan highlights a new study led by Dan Kahan, which suggests that “science curiosity” may serve to combat political polarization. Further work might be devoted to establishing curiosity “as a disposition essential to good civic character,” Kahan’s group thinks. Hiroko Tabuchi, writing for The New York Times, presents additional strategies for reducing polarization in the face of climate change: focusing on common ground, practical matters, and modes of adaptation, rather than on the politically fraught issue itself.

2017 Breakthrough Paradigm Award-winner Calestous Juma outlines some of Africa’s agricultural “wins” from the past year, including the increased uptake of precision agriculture technologies like sensors, satellite imagery, and drones, as well as the growth in the sector that has resulted from long-term policy commitments, infrastructural development, and funding.

Mother Jones’s Maddie Oatman spotlights “the bizarre and inspiring story” of an aquaculture operation on an Iowan farm. Fish farms, she proposes, have the potential to reduce pressure on wild fish stocks, while driving food security and even economic growth.

Renegade environmentalist (and “techno guru”) Stewart Brand tells Ben Austen for Men’s Journal, finally, that the long view gives us “a sense of—a belief in—a better future.” Words to go by in 2017.

2017 Breakthrough Senior Fellows Announced

A leading agronomist addressing food demand and ecological protection. A renowned agricultural economist grounding the food and farming debate. A thought leader on global governance and development. A nuclear engineer at the forefront of nuclear innovation. And a historian who has helped us understand why modern food systems have evolved the way they have. The Breakthrough Institute is proud to announce Kenneth Cassman, Jayson Lusk, Samir Saran, Rachel Slaybaugh, and Maureen Ogle as our 2017 Senior Fellows.

This is the ninth year Breakthrough has conferred Senior Fellowships on top scholars who have influenced our thinking and our efforts. Our Senior Fellows form an essential part of the Breakthrough community, providing generous counsel and lifetimes of research centered on protecting nature and uplifting humanity. We are especially excited to welcome these five thought leaders, who join us as we seek to extend our analysis to food and farming, broaden our international partnerships, and continue to aid the development of cheap, clean, and scalable energy technologies.

If 2016 was a year of great transformations for the Breakthrough Institute, our continued collaboration with these distinguished colleagues ensures that our work will break new ground, and drive key conversations, in the year to come.

 

 

Kenneth Cassman

An agronomist and a leading voice in debates surrounding agriculture and environmental impact, Kenneth Cassman has had an outsized influence on the Breakthrough Institute, helping to shape our work as we delve deeper into food and farming. As a presenter at the Breakthrough Dialogue in 2016, Cassman provided our audience with an edifying take on “peak farmland” and the complex challenges that lie ahead; as a collaborator, he has better acquainted us with the work of farmers and other agronomists, a perspective too often lost in agriculture debates.

Cassman is Emeritus Robert B. Daugherty Professor of Agronomy at the University of Nebraska, where he served as Chair of the Department of Agronomy and Horticulture from 1996 to 2004. Co-author of the textbook Crop Ecology, Cassman is a recipient of the 2012 President’s Award from the Crop Science Society of America. He is also a Fellow of the American Association for the Advancement of Science, the American Society of Agronomy, the Crop Science Society of America, and the Soil Science Society of America.

Throughout his career, Cassman’s research and teaching have focused on ensuring local and global food security while also protecting the environment for future generations. During over 30 years as a systems agronomist, he has worked on nearly all of the world’s major cropping systems. Currently, Cassman is co-Principal Investigator for the Global Yield Gap Atlas, which estimates yield gaps in order to inform development policies, research, and technology uptake around the world.

Jayson Lusk

In the generation since the food movement emerged and food culture cohered, conversations around nutrition and agriculture have largely been dominated by symbolism, romanticism, and a distinct lack of empiricism. In response, Jayson Lusk’s pragmatic and reasoned commentary comes as a welcome perspective, providing the public and policymakers with facts and frameworks beyond mere symbols. His exceptionally grounded work on food prices, consumer behavior, agriculture policy, and animal welfare has especially influenced Breakthrough as we take our own research into the food and farming fray.

A food and agricultural economist who studies what we eat and why we eat it, Lusk is one of those rare combinations of prolific scholar and impactful science communicator. Not only has he been listed as one of the most cited food and agricultural economists of the decade, but he also regularly features in a wide variety of news outlets, including The Wall Street Journal, The Washington Post, and Forbes. In one particularly notable op-ed for The New York Times, Lusk makes the case for industrial farms; as he concludes, “There are no easy answers, but innovation, entrepreneurship and technology have important roles to play. So, too, do the real-life large farmers who grow the bulk of our food.” As a respondent to Breakthrough’s Future of Food series launched at the end of 2016, Lusk takes a similar tack, urging an appreciation of science and technology as a means of improving environmental outcomes.

Lusk currently serves as Regents Professor and Willard Sparks Endowed Chair in the Department of Agricultural Economics at Oklahoma State University and as the Samuel Roberts Noble Distinguished Fellow at the Oklahoma Council of Public Affairs. His books include Unnaturally Delicious, The Food Police, and Compassion, by the Pound.

Samir Saran

Samir Saran is Senior Fellow and Vice President at the Observer Research Foundation, where he has emerged as a leading commentator on issues of global governance, including energy policy, global development, and inequality. As a headliner at the Breakthrough Dialogue in 2016, Saran provoked much discussion in suggesting that poverty, and energy poverty, had surfaced as an untenable climate mitigation strategy of the developed world. His challenge to those countries to decarbonize their own economies, rather than to hamstring the energy development of the rest of the world, was received as a crucial, and provocative, insight.

An electrical engineer by training, Saran has a Master’s in Media Studies from the London School of Economics and Political Science and has served as a Fellow at the University of Cambridge Program for Sustainability Leadership. He is currently a Visiting Fellow at the Australia India Institute and holds faculty appointments at a number of institutions.

As globalization, automation, demagoguery, and other forces threaten long-standing progress toward liberal democracy, Saran’s smart and forceful voice will prove all the more essential. As he wrote in an op-ed for The Times of India at the beginning of 2017, democracy, diversity, and development will each require fresh examination in the face of such challenges if we are to succeed in building a modern, multicultural, and more equitable world.

Rachel Slaybaugh

“Now is a very exciting time to be in nuclear engineering,” says Rachel Slaybaugh, an Assistant Professor of Nuclear Engineering at the University of California Berkeley, who believes students in her field of study “chose nuclear because they want to help save the world.”

If it is rare for a scholar to rally a new community of experts and entrepreneurs to collectively tackle complex, modern challenges, Slaybaugh takes the task on in stride. While her research applies numerical methods and supercomputing to reactor design, nuclear security, and nonproliferation, her broader work serves to consolidate and extend the reach of the advanced nuclear community. As a panelist at the SXSW Eco Conference in 2016, Slaybaugh made a splash advocating for nuclear power as a tool to combat climate change. As an organizer of the first annual Nuclear Innovation Bootcamp at UC Berkeley in 2016, she demonstrated a commitment to innovation and global engagement that we at Breakthrough value most deeply.

Through her varied service to UC Berkeley, ARPA-E, DOE’s Nuclear Energy Advisory Committee, and organizations such as Third Way and the Clean Air Task Force, Slaybaugh has shown herself to be a seemingly superhuman contributor to advanced nuclear. As we well know, the effort to develop and deploy cheap, clean, scalable energy technologies is far from over; as such, we are fortunate to have a nuclear engineer and advocate like Slaybaugh in the field and in our corner.

Maureen Ogle

Understanding where our food comes from would be impossible without understanding where our food used to come from. As in all debates and discourses, the contribution of the historian is essential. Maureen Ogle, historian and author of In Meat We Trust, provides just such a contribution, using the history of food production to contextualize debates surrounding nutrition and farming—a perspective all the more important given the frequent lack of reflexive dialogue in this space.

As Ogle argues in her response to Breakthrough’s Future of Food series, “A more useful narrative is one anchored in a global perspective, history, and demography.” Over the last half century, she says, the conventional narrative “has lured generations of consumer, environmental, and rural activists who’ve railed against ‘industrial’ livestock production. They’re convinced that large-scale production is the problem, and ‘big food’ the perpetrator. As a result, they’re blinded to the long view of the big picture. They don’t see that scale is a consequence, not a cause.”

“Imagine, instead,” she concludes, “a perspective that calculates demand as a given, and large-scale livestock production as a necessity. Perhaps that would inspire critics to channel their energy into projects that transform problematic necessities into environmental benefits.” It is this view, at once grounded, critical, and constructive, that we at Breakthrough find so inspiring.

Ogle holds a Ph.D. in American History from Iowa State University. She is the author of Ambitious Brew: The Story of American Beer, Key West: History of an Island of Dreams, and All the Modern Conveniences: American Household Plumbing, 1840-1890.

Trump, Twitter, and Tomatoes

One: anonymous sources leaked a Trump Administration memo ordering federal agencies to cease all public communications. The official Twitter account of Badlands National Park appeared to thwart this edict, tweeting basic climate science facts that were later, ominously, deleted. 

The incident caused a firestorm of social media outrage. Environmentalists accused the Trump Administration of attacking National Parks, scrubbing government websites of climate data, and, indeed, waging a War on Science.

Two: White House Press Secretary Sean Spicer announced that Mexico would be forced to pay for the proposed border wall via a 20% tariff on imports from Mexico. It was later clarified that Spicer was talking about a border adjustment tax, not a tariff. Probably. At any rate, since Mexico accounts for the majority of America’s produce imports, a tariff would make food more expensive for consumers. A border adjustment tax is more complicated but, as Neil Irwin explains, also might significantly raise the price of food (among other goods) for American consumers.

By my (anecdotal) account, environmentalists’ reaction to this incident was…much less pronounced. No viral social media storms. No marches planned to advocate affordable food for Americans. No dispatches from environmental organizations telling their members Trump wants to Make Avocados Expensive Again.

So why the dustup over National Parks and climate science but not affordable food? I see at least two possible explanations for the discrepancy. 

The first is fairly innocent. Tariffs, border adjustments, WTO rules, currency valuation, and international trade in general are complex subjects. It is much easier to understand and lambast an apparent attack on National Parks than to parse Spicer’s tenuous and complicated language about trade policy. But with several days’ retrospect, the plans to muffle federal science agencies and to remove webpages from the EPA’s website are just as tenuous and complicated. That hasn’t stopped opponents from using them to their advantage.

The second explanation is that environmentalists are simply more outraged about perceived attacks on climate science, which taps into decades of pitched and partisan debate on the subject, than they are about the price of tomatoes. Concern about climate change and climate science has become central to what it means to be an environmentalist. The cost of food, by contrast, has not. In fact, in the eyes of many environmentalists, the problem with our food system is that food is too cheap.

Trump has positioned himself in opposition to globalization and trade agreements. So, in recent decades, has the environmental movement. Environmental organizations long campaigned against the North American Free Trade Agreement, and spent the last several years excoriating the Trans-Pacific Partnership, from which President Trump just withdrew. Even though industrial farming tends to use land and resources more efficiently, environmentalists have long rejected it in favor of small-scale, locally controlled, decentralized systems and technologies. This despite a growing literature suggesting that the former is more sustainable than the latter by most measures, especially on a planet of 7 going on 9 billion people.

This is the danger when Downwingers control both sides of the debate. An opposition, or resistance if you will, that was not hobbled by its own version of the same romantic ideas about America, the economy, and the environment that Trump holds would immediately attack him for proposing to raise Americans’ food costs. I don’t want to equate environmentalism and Trumpism; the former, at least, does not enthusiastically threaten the rights and lives of the underprivileged, and, indeed, the stability of our democracy. But, like Trump, his environmental opponents focus almost exclusively on the costs of free trade without attending to the benefits—a classic Downwinging, precautionary approach. As such, they are unable to marshal a potentially powerful argument against Downwinging Trumpism, one that brings home the costs of Trump’s illiberal demagoguery in a way that abstract complaints about respect for science, the free press, and democratic norms do not.

 

Main image: public domain / CC BY-SA 4.0, via http://www.photosforclass.com/search/donald%20trump/3 and https://commons.wikimedia.org/w/index.php?curid=55600211

Breakthrough Institute

Climate Policy in the Age of Trump

Breakthrough’s Ted Nordhaus, Alex Trembath, and Jessica Lovering assess the fate of climate policy in the Trump era for Foreign Affairs, and conclude that from a performance standpoint, not much is likely to change. Over the past three decades, international agreements and national commitments have done little to cap and reduce emissions, leaving significant decarbonization challenges ahead. In order to redirect our emissions trajectory, they argue, we’ll need a climate policy reset, one that will build political consensus to properly value our current and future low-carbon assets, including advanced nuclear. While the new administration is not likely to propose a coherent climate policy in this regard, one thing it should do is force climate advocates to rethink their policies and their politics—a reckoning that “might offer a more hopeful and optimistic path for climate advocacy in the years to come.”

 

Read the full Foreign Affairs article here

How Much Does Material Consumption Matter for the Environment?

A recent MIT-led study on global dematerialization sheds some disheartening light on the hope that technological progress will reduce our material consumption in absolute terms. Examining consumption trends for dozens of chemicals and commodities over time, the authors find no evidence, for most products, of absolute dematerialization. That is, even though technological change has increased the efficiency with which many of the materials examined are produced and used, this efficiency has not resulted in any absolute decrease in their consumption. As the study’s lead author Christopher Magee said in a press release, “There is a techno-optimist’s position that says technological change will fix the environment. This says, probably not.”

“Fixing the environment,” however, is a different issue than reducing consumption for a set of materials. Decoupling environmental impact from economic growth does remain crucial to saving more nature, and we’re still far from reaching the point of “peak stuff.” Looking at consumption trends is thus an important part of the equation, but we also need to determine how consumption relates to specific environmental impacts like land use, water use, greenhouse gas emissions, and pollution. Although consumption is often linked with environmental impact, the relationship is not one-to-one: different materials entail very different production processes with differing impacts on the environment.

Let’s take a closer look at what the study tells us. The authors’ central question revolves around whether technological progress truly allows us to make more using less, or whether increased efficiency simply leads to increased consumption. This “rebound” effect is also called the Jevons Paradox, named for the 19th-century English economist William Jevons, who observed that as new technologies made the use of coal more efficient, effectively lowering its cost, coal consumption increased rather than decreased.

For their part, the study’s authors examined technological change and consumption trends for a group of chemicals, materials (like aluminum and polyester), and hardware products (like silicon transistors), and similarly found that while technological improvements have reduced the price and increased the efficiency of many of these products, those improvements have “rebounded” in the form of increased consumption. Silicon transistors, which are used in all our modern electronics, provide a prime example of this effect: as improved technology has radically decreased the amount of raw silicon needed to produce a transistor over time, we have also found all sorts of novel ways to use transistors as we invent new technologies and hardware. This increase in demand has outpaced the gains in production efficiency, such that today we’re consuming more silicon than ever.
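To make the rebound dynamic concrete, here is a minimal arithmetic sketch in Python with purely hypothetical numbers (none of the figures below come from the study itself): even if the silicon needed per transistor falls a thousandfold, total silicon consumption still rises whenever the number of transistors produced grows by more than a thousandfold.

    # Illustrative rebound arithmetic; all numbers are hypothetical, not from the study.
    silicon_per_transistor_then = 1.0        # arbitrary units of silicon per transistor, earlier era
    silicon_per_transistor_now = 0.001       # assume a 1,000x efficiency improvement
    transistors_then = 1_000_000             # hypothetical annual production, earlier era
    transistors_now = 5_000_000_000_000      # hypothetical annual production today (5,000,000x growth)

    total_then = silicon_per_transistor_then * transistors_then   # 1,000,000 units
    total_now = silicon_per_transistor_now * transistors_now      # 5,000,000,000 units

    # Efficiency improved 1,000-fold, but demand grew 5,000,000-fold,
    # so total consumption still rose by a factor of 5,000.
    print(f"then: {total_then:,.0f} units, now: {total_now:,.0f} units")

The general point is that total consumption declines only when efficiency gains outpace demand growth, which, for most of the materials examined, the study finds has not yet happened.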

So what this study tells us is that we’re consuming more aluminum, polyester, and silicon, even as technology allows us to use these materials more efficiently. But what if greater consumption of some materials substitutes for other activities that have greater environmental impact? Indeed, the authors raise this as an important area for further study, highlighting unanswered questions about the role of silicon semiconductors, for instance, in substitution. As our increased use of this technology enables us to communicate virtually, they ask, does it also reduce the number of emissions-intensive car rides and flights? Do more silicon-based solar panels allow us to use fewer fossil fuels for electricity production? Increased use of some materials, after all, can lead to decreased use of others, making it more complicated to account for the full environmental trade-offs associated with material use.

Furthermore, not all commodities involve equal environmental impact. The materials reviewed in the study all come from somewhere, whether they are mined, refined, or produced through other industrial processes. What impact do these processes have on the environment, though? When we look at global environmental threats, the biggest driver of land-use change and biodiversity loss is agricultural expansion, not mining or fossil fuel extraction. As Breakthrough’s Linus Blomqvist pointed out in an article last year, the volume of resource consumption is not necessarily a direct predictor of environmental impact. To assess the study’s relevance for the global environment, it would be important to know how the commodities included rank as global sources of greenhouse gases, water consumption, and pollution, and whether their production processes have become more environmentally friendly over time.

It would also be interesting for future analyses to examine these consumption trends at the regional level. While global consumption of many of these products continues to increase, there may be significant regional variation that would offer some important insight. It is possible, for instance, that consumption of some materials in wealthy countries has actually peaked and started to decline as a result of stable population sizes and the demand saturation that can come with affluence. This pattern could also spread to other regions in the future, as developing countries grow wealthier and population growth slows or even reverses.

Ultimately, dematerialization provides an important metric for understanding our environmental footprint, but it doesn’t tell the whole story. Greater efficiency can certainly lead to increased consumption when we make really useful things, like silicon transistors, cheaper and more widely available. But it is also technological progress that allows us to substitute toward less environmentally damaging sources of material and energy, and to produce the same goods using less land, water, and fossil fuels. We may not see evidence of absolute dematerialization yet, but that doesn’t mean we should lose sight of the major ways in which technology helps us to reduce our environmental impact.

Exit, Voice, and Loyalty

But is the framing of two delineated sides in eternal struggle really a fair—or useful—representation of the world? And does it actually advance any progressive causes? Nate Johnson broaches this subject in his exit interview with Secretary of Agriculture Tom Vilsack. “There’s a growing divide in the U.S. between farmers and eaters,” Johnson says. “It’s a tough conversation to mediate, because it’s made up of a lot of shouting and very little listening.” As Vilsack tells Johnson, progress will depend on dialogues that are productive as well as inclusive, ones that take up not just questions of ideals but also “what’s physically and economically possible.”

There’s no real exiting the world we’ve entered in 2017. If we’re truly looking for pragmatic options, we’re going to need to make our voice not a steady shout, but something more like a constructive conversation.

David Rothkopf reiterates the case for optimism as articulated by Steven Pinker and Thomas Friedman (as well as “the technophiliacs of Silicon Valley”): that history, and data, demonstrate that human progress is sure, swift, and accelerating. “Realism equals optimism,” he argues, a point that might be reiterated by Douglas Carswell, who assures us “that the world is looking up, that the human condition—however imperfect—is getting better.” For Carswell, it is the divide between “up and down,” rather than left and right, that matters for the future of liberal society. Brad Plumer and David Roberts of Vox, on the other hand, both hedge their bets when it comes to optimism. “Progress is never unstoppable,” as Roberts concludes in response to Obama’s recent piece in Science on clean-energy momentum, and “progress on climate change is still only just beginning.” Holding both views in mind—a belief in long-term, macro-level progress with an insistence on institutional intervention—may prove one of the more difficult balancing acts of our time.

Eduardo Porter challenges the notion that states will be leading the decarbonization charge in coming years; drawing on recent reports by the Brookings Institution and PricewaterhouseCoopers, he emphasizes that only two states in the US are currently decarbonizing at a rate rapid enough to meet agreed-upon timetables. More importantly, gains thus far have largely been driven by the transition from coal to gas, much of which has “played itself out.” As such, keeping nuclear online remains crucial—“climate change will be hard to stop without it”—and yet, continued nuclear plant closures in states like California and New York make for a dreary picture, Porter says.

Even as older plants face political and economic obstacles, however, advanced nuclear reaches a “significant milestone,” according to John Fialka: NuScale Power, a company based in Portland, Oregon, has recently submitted designs for its small modular reactor to the NRC. “Expect the first SMR to be built in America and become operational in the early 2020s,” writes James Conca for Forbes in response. This technology has much to offer if licensed, he says, including economic viability, integration with intermittent renewables, and enhanced safety features. And perhaps most important to keep in mind, NuScale owes much of its success so far to public-private partnerships, and particularly to support from the Department of Energy. “Without the leadership, vision, and support of the U.S. DOE,” as CTO Jose Reyes has said, “our technology, design, development, testing, and license application could not have proceeded to this point.”

“Yes, science is political,” Elizabeth Lopatto informs her readers, outlining some reasons why: scientists are people, science is funded—as well as regulated—by the government, and scientific findings do and should inform political decisions. The wrench in the works here is Trump, however, and his seeming disregard for scientific consensus, which promises to disrupt the already complex relationship between science and politics “in potentially destructive ways.”

Filmmaker Oscar Boyson explores “The Future of Cities” in an 18-minute video essay with The Nantucket Project. Proceeding from the premise “that when density is done right, it’s the best—if not the only—solution to our growing climate crisis,” Boyson highlights innovations cropping up in cities around the world to confront both the challenges and the opportunities of increasing urbanization. “If you care about people,” he thinks, “this is the defining question of our time.”

Ben Potter features “the ultimate in precision agriculture”: a robot designed to apply herbicide directly to weeds. Currently in development, the technology demonstrates the vast potential of precision farming to drive up yields while also reducing herbicide use, in this case by up to 94 percent.

Jayson Lusk

Stephanie Romañach