Wisdom of the Clouds?

Stuck inside the last few cold weeks of spring, I spent a lot of time hanging out with ChatGPT—swinging between fretting about how it would change journalism and delighting in the weird, eerily accurate, and unintentionally revealing passages I could make it spit out of the cloud. I’ll leave debates about what’s ahead for media to all the other fine professionals who spent early 2023 similarly occupied. In this issue, I’d like to focus on the unintentionally revealing.

After I was done asking ChatGPT to summarize some books I’ve claimed to have read, I decided to see whether it would tell me what it thought about Breakthrough and the work we do. Although its answers to the narrowest questions were accurate enough, the technology often stumbled when it came to broader concepts—especially those wrapped in normative values. Again and again, the bot would offer definitions of environmental ideas that captured the conventional wisdom, but that missed the actual meaning of the term—and in ways that would do more to harm debate than help it.

So, for this issue, I’ve decided to host a face-off between several of my colleagues at Breakthrough and ChatGPT. In a series of nine articles, human authors have dissected a relevant word. Each take is paired with a definition from ChatGPT. The comparisons, I think, say a lot about the way we humans build and spread knowledge.

Up first, Breakthrough founder Ted Nordhaus takes on the phrase “climate emergency,” tracing the various terms the environmentalist movement has put forward to try to spur climate action. Where he sees an abject marketing failure, ChatGPT sees the opposite.

Next, the institute’s academic liaison, Jennifer Bernstein, looks at the “Anthropocene,” a word often used to encompass the age we are now living in. Although Bernstein argues that the term is laden with assumptions and value judgments about humans’ place in the universe, AI appears to have something more neutral and scientific in mind.

Yet even “science” is not an idea our human experts and ChatGPT can agree on. In his essay, Breakthrough Climate and Energy Co-Director Patrick Brown traces the origins of the phrase “the Science says” to show how most of what we call “science” in climate discussions isn’t that at all. His interlocutor, though, understands “the Science says” somewhat differently, noting that it “is used to assert the importance of evidence-based decision-making in the face of conflicting opinions and conflicting political or ideological interests.”

Perhaps ideas like science are too big for algorithms. What about more specific mechanisms like “tipping points”? Here, ChatGPT gets the definition basically right—“these thresholds can be thought of as points of no return, beyond which the natural system in question is altered in such a way that it cannot be restored to its previous state”—but there remains a problem. As Breakthrough Climate and Energy Co-Director Seaver Wang points out, the metaphor isn’t apt for most climate impacts. There is often neither any real tipping (a rapid unbalancing like a cart toppling over) nor any point (a single, precise, known threshold).

As Alex Trembath, the institute’s deputy director, explains, there also aren’t any “villains” of the kind the environmentalist movement (and, perhaps not coincidentally, AI) likes to blame. For example, Big Oil has long been a convenient baddy in the fight to decarbonize, yet the evidence points in a different direction.

Another apparent source of evil in climate discourse is “extractivism,” or as ChatGPT puts it, “the economic model of resource extraction, where natural resources…are extracted from the earth for export and profit. Extractivism is often characterized by the exploitation of resources by foreign companies or governments in developing countries, leading to environmental degradation, social conflict, and economic inequality.” Not so, rebuts writer Leigh Phillips.

Turning to agriculture, Dan Blaustein-Rejto, Breakthrough’s director of Food and Agriculture, and Alex Smith, an analyst on the same team, take on our next two selections, juxtaposing “conventional,” as in “conventional farming,” and “natural.” The problem with both terms is that even though observers may have something specific in mind when using them, what has been considered “conventional” or “natural” has varied so much by location and over time that those words mean very little. That is an issue AI was not able to solve.

The ninth phrase, finally, is “power,” as in “solar power.” In environmentalist discourse, writes Breakthrough correspondent Matthew Wald, “power” and “energy” are too often conflated, prompting misleading reports—and ChatGPT discussions—about the ability of solar installations to “power hundreds of homes” or replace nuclear plants (they can do neither).

As a whole, the pieces collected for this issue’s special feature are a reminder that knowledge creation and dissemination encompass lots of processes AI will struggle to get just right. Definitions—even those that seem the most technical or neutral—don’t spring fully formed from truth. ChatGPT’s output for this issue may purport to be fact, but what it really offers is a synthesis of a particular outlook on the world put forward by a particular portion of that world’s writership.

But that is not truth. Rather, it’s an uncanny echo of our own flawed human way of coming to knowledge: Somewhere, deep beneath the surface, lie facts. We find the facts we like or can’t ignore and combine them into bigger ideas, articulated through words and phrases that are full of assumptions and values, leaps of logic and faith. Sometimes, our assorted meanings are close enough that we can claim a consensus—or at least a couple of main schools of thought. And sometimes, based on those, we can agree on enough other ideas to form a plan or a policy or a way we’d like to better the world.

But sometimes, actually most times, a human will take a look at all of this from the outside and pinpoint where something went crooked. Unless AI gets really skilled at that part, I feel pretty good about my Breakthrough colleagues’ job prospects. So I’m concluding this introduction with a tenth word: “breakthrough.” My algorithmic pal hints at a breakthrough’s suddenness, its singularity, its almost violent overturning of whatever came before.

I would offer, though, that Breakthrough’s real job—and the job of anyone reading—is to be more plumb-line than sledgehammer. As we build ecomodernism, we should look for where things went askew, offer course corrections, and invite anyone who is willing—not only those whose outlook is represented in whatever consensus ChatGPT digs up—to add bricks.