Why I Left Breakthrough to Work on Climate Prediction Markets
Prediction markets can represent the best collective knowledge about our energy and climate future.
I recently departed from The Breakthrough Institute to assume the role of Head of Climate Analytics at Interactive Brokers. Leaving The Breakthrough Institute was very difficult for me, as I have greatly appreciated the intellectual environment and believe that emphasizing the principles of Ecomodernism offers significant value to the energy and climate dialogue.
One of the aspects I valued most about being at Breakthrough was that it provided me the freedom and the opportunity to write about issues that I believe are pernicious and widespread in the development of knowledge on climate change.
In particular, I have written extensively about how social and professional incentives in academic climate science create self-reinforcing feedback loops that skew the overall output of academic literature toward negative predictions regarding future climate impacts while downplaying the potential adverse effects of restrictive energy policies. Essentially, although the ideal scenario is to have “evidence-based policy," I’ve argued that the reality involves a significant amount of “policy-based evidence," where various selection effects lead prudent researchers to frame their studies and results in ways that support their preexisting policy preferences, such as the temperature limits established by the Paris Agreement. Further, even when there is no such bias from the researchers involved, there is a strong tendency for scientific models to become “spherical cows” where the most relevant dynamics of a problem are assumed away or held constant because they are the most difficult to model.
However, identifying problems is much easier than solving them. I have felt the itch to work on projects that would improve the situation, but I have been unsure where my efforts would best be placed.
It was at this point that Thomas Peterffy began recruiting me to Interactive Brokers to work on prediction markets in the climate and energy sectors. While prediction markets may seem unconventional (much more on that below), I view them as a genuine opportunity to tackle many of the issues I have highlighted and ultimately to create some of the clearest representations of collective human knowledge regarding our energy and climate future.
The Problem of Prediction
Making predictions is a ubiquitous and crucial human activity—a part of decision-making in all walks of life, including business strategy, politics, and science. However, it has always been and continues to be remarkably challenging to do well.
For many people, the default strategy for obtaining the most reliable predictions is to seek out experts with specialized knowledge on the topic at hand. The assumption is that in-depth subject matter knowledge correlates well with forecasting abilities and that subject matter experts agree with one another, as they are presumed to be logical and rational, basing their reasoning on agreed-upon facts.
Contrary to most people's intuition, however, expertise often fails to yield accurate predictions in practice. Studies have shown that confident, high-profile designated experts frequently perform no better than chance and, in many instances, worse than simple statistical models in their predictions. Forecasting errors arise from various sources, including cognitive biases, narrative fallacies, and the tendency to perceive patterns in randomness. Long-term prediction is particularly challenging due to path dependence, which causes small initial errors to accumulate over time and allows low-probability, high-impact events to shape the course of history. One challenge, therefore, is not merely to predict more effectively but to acknowledge the structural limits of prediction itself and to design systems that explicitly recognize and incorporate inherent uncertainty rather than artificially side-stepping it.
Many of the same issues arise when it comes to expert predictions on matters specifically related to energy and climate. Well-understood fundamental physics makes certain aspects of climate change predictable, such as the global average temperature projected over decades in response to known changes in greenhouse gas and aerosol emissions. However, when economic and technological dynamics come into play, as they inevitably do, predictions can often be consistently and embarrassingly inaccurate.
But what is the alternative? One option might be to let the private sector evaluate climate risk. There has certainly been significant growth in private climate consulting firms, or climate service providers, in recent years, many of which employ expert PhDs in energy and climate science. In theory, competition among firms should incentivize accuracy and lead to the distillation of the best information. However, these entities do not necessarily have the right incentives to provide the most accurate information either. They are also motivated to exaggerate the extent of change we are witnessing (and the accuracy of their predictive abilities) in order to sell their proprietary information. Competition should provide a check on this, but a major problem with both academic and private climate service provider predictions is a lack of accountability due to the difficulty of verifying claims. This is partly because the predictions tend to be very difficult to falsify, as they embed so many assumptions that are not considered to be part of the prediction.
In fact, the Intergovernmental Panel on Climate Change (IPCC) goes out of its way to protect itself from falsifiability by stating that it rarely makes climate predictions but instead makes climate projections that are based on assumptions about, for example, “future socio-economic and technological developments." The IPCC does not make probabilistic claims about these assumptions and simply states they “may or may not be realized.”
When forecasts are unfalsifiable, users of forecasts “...are not prepared to pay for quality they cannot verify, and sellers are unwilling to invest in quality they cannot demonstrate. Under these conditions, it is more rational for forecast providers to focus on presentation and the user-friendliness of their portals than on the accuracy of their forecasts.” (Roulston and Kaivanto, 2024). This applies to both private climate service providers and the production of academic publications, where there are strong incentives to spend more effort on the clean, aesthetically pleasing presentation of results (both visually and linguistically) than on improving unverifiable accuracy.
Therefore, three keys to improving the state of predictions are:
- Not blindly relying on perceived experts.
- Ensuring that predictions are verifiable and thus falsifiable.
- Motivating prediction producers to be as accurate as possible.
Prediction Markets for Climate and Energy
It turns out there is a mechanism that goes a long way toward meeting all three of the above criteria: Prediction Markets. Prediction markets enable participants to buy and sell contracts based on their expectations of the outcomes of well-defined events.
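To make the mechanism concrete, here is a minimal sketch, in Python, of the arithmetic behind a binary event contract, assuming the common design in which a contract pays a fixed amount if the event occurs and nothing otherwise. The prices and beliefs in the example are illustrative assumptions, not actual market data.

```python
# Minimal sketch of a binary event contract. Assumption: the contract settles
# at $1.00 (100 cents) if the event occurs and $0 otherwise, so the market
# price can be read directly as an implied probability.

def implied_probability(price_cents: float) -> float:
    """Convert a contract price in cents to an implied event probability."""
    return price_cents / 100.0

def expected_profit(price_cents: float, believed_probability: float) -> float:
    """Expected profit, in cents per contract, for a buyer with a given belief."""
    payout_if_yes = 100.0  # settlement value in cents
    return believed_probability * payout_if_yes - price_cents

# Hypothetical example: a contract on an event such as "2 C breached by 2040"
# trading at 35 cents implies a 35% market probability. A trader who believes
# the true probability is 50% expects a profit from buying, which pushes the
# price (and hence the implied probability) upward toward that belief.
print(implied_probability(35))    # 0.35
print(expected_profit(35, 0.50))  # 15.0
```

Whether such beliefs are well founded is then settled by the event itself: contracts pay out only when the outcome is verified, which is what ties market prices back to falsifiable claims.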
Prediction markets are not a new concept or something emerging from outside academia. Numerous studies have examined their accuracy, and they even have their own dedicated academic journal. This body of research demonstrates that prediction markets have consistently performed as well as or significantly better than alternatives, and they have even been used to identify which academic studies are of low quality.
Fundamentally, prediction market probabilities aggregate the collective wisdom of participants in the market, and it turns out that this crowdsourcing of information often provides our best representation of reality—in many cases better than the views expressed by designated experts. This is a main theme in James Surowiecki’s 2004 book, The Wisdom of Crowds: Why the Many Are Smarter Than the Few and How Collective Wisdom Shapes Business, Economies, Societies, and Nations, as well as Cass Sunstein's 2006 book, Infotopia: How Many Minds Produce Knowledge. Perhaps the most famous anecdote illustrating the wisdom of crowds, detailed in Surowiecki’s book, is Sir Francis Galton's discovery in 1906 that the average estimate of an ox's weight made by 800 villagers was almost perfectly accurate and better than any of the individuals' guesses.
While crowds possess the potential for wisdom, Surowiecki articulates that the phenomenon of crowd wisdom materializes only when specific conditions are satisfied. These conditions include that individual contributions should be: 1) decentralized, enabling individuals to leverage local knowledge and unique information; 2) diverse, incorporating unconventional and eccentric viewpoints; and 3) relatively independent, ensuring that opinions are not overly influential upon one another.
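The statistical intuition behind these conditions can be shown with a toy simulation: independent errors largely cancel out in an average, while a shared bias does not. The numbers below are invented for illustration and are unrelated to Galton's actual data.

```python
# Toy simulation of why independence matters for crowd wisdom. All quantities
# are made up for illustration.
import random

random.seed(0)
TRUE_VALUE = 1200   # the hypothetical quantity being estimated
N_GUESSERS = 800

# Independent crowd: each guess = truth + independent noise, so errors cancel.
independent = [TRUE_VALUE + random.gauss(0, 150) for _ in range(N_GUESSERS)]

# Herding crowd: everyone anchors on the same rumor, adding a shared bias of
# +250 that no amount of averaging can remove.
SHARED_BIAS = 250
herding = [TRUE_VALUE + SHARED_BIAS + random.gauss(0, 50) for _ in range(N_GUESSERS)]

def average_error(guesses):
    """Absolute error of the crowd's mean estimate."""
    return abs(sum(guesses) / len(guesses) - TRUE_VALUE)

print(f"independent crowd error: {average_error(independent):.1f}")  # small
print(f"herding crowd error:     {average_error(herding):.1f}")      # near 250
```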
When the conditions outlined by Surowiecki are not met, crowds do not exhibit the traits of wisdom but instead produce popular delusions and the madness of crowds. As a species that has relied on social cohesion for survival throughout our evolutionary history, humans are inherently predisposed to being heavily influenced by our peers. Thus, social contagion, information cascades, and a herd mentality can make group behavior much less logical than the reasoned judgment of any individual within the group. (Note that my ongoing references to Wikipedia represent a tacit endorsement of the crowdsourcing of information).
The situations where groups cannot be trusted arise when they are overly socially connected, which both stems from and contributes to a lack of diversity and independence. This presents a challenge for expert knowledge production, as designated experts often come from a relatively small number of elite PhD programs and spend their careers within rather insular groups. Moreover, the synthesis of academic research is frequently conducted by committees working closely together, which can further encourage herding and orthodoxy, especially when social incentives favor signaling that an individual expert belongs to a particular epistemic team rather than prioritizing the production of the most accurate information or predictions.
Prediction markets address these incentive problems by forcing clear questions and directly compensating participants for their predictive accuracy. They also empirically identify true experts through trial and error. One challenge of designating experts is that credentialed individuals can present plausible arguments on all sides of an issue, leading to the inevitable question of who the true experts are. Prediction markets select these people, not based on credentials and name recognition, but based on performance: those who continually make bad predictions will be incentivized to find ways to improve or exit the market. Furthermore, the financial incentives encourage all participants to invest time and resources into enhancing information and predictions.
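As an aside, the same performance-based selection can be made explicit outside a market with a standard accuracy measure such as the Brier score (the mean squared error of probability forecasts, where lower is better). The forecasts below are invented purely to show how persistent differences in accuracy become visible; within a market, this selection happens implicitly through profit and loss rather than through any published score.

```python
# Brier score: a standard measure of probabilistic forecast accuracy (lower is
# better). The forecasts and outcomes below are invented for illustration.

def brier_score(forecasts, outcomes):
    """Mean squared error between forecast probabilities and 0/1 outcomes."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

outcomes      = [1, 0, 0, 1, 1]                  # whether each event occurred
careful       = [0.8, 0.2, 0.3, 0.7, 0.9]        # calibrated forecaster
overconfident = [0.99, 0.9, 0.05, 0.2, 0.5]      # confident but often wrong

print(f"careful:       {brier_score(careful, outcomes):.3f}")        # 0.054
print(f"overconfident: {brier_score(overconfident, outcomes):.3f}")  # 0.341
```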
The famous 1980 bet between economist Julian Simon and biologist Paul Ehrlich on the future prices of five metals, made to settle a dispute over their differing views on human innovation and its effects on resource scarcity, is an early example of quasi-prediction markets at work in the study of climate and energy. The bet shares characteristics with prediction markets in the environmental realm in that it compelled Simon and Ehrlich to avoid vague proclamations (or academic obscurantism) and replace them with concrete claims in specific, measurable terms. Moreover, putting financial skin in the game ensured that there was a true penalty for being wrong, giving Simon and Ehrlich strong motivation to put forth their best understanding of reality. The negotiation of terms between just two people with divergent views reveals some information, but prediction markets scale these dynamics by aggregating hundreds, thousands, or even millions of views.
So far, pilot programs in climate prediction markets have shown great promise, demonstrating accurate predictions on variables as diverse as UK monthly rainfall, average daily maximum temperature, annual wheat yields, monthly El Niño indices, annual Atlantic hurricanes, and annual US hurricane landfalls.
At Interactive Brokers, I will work on both high-level event questions of general interest and more local, specific event questions that have more direct applications to participants.
At a high level, for example, we will have event questions such as, “Will the annual global temperature breach the 2°C Paris Agreement temperature limit by 2040?” At first glance, this may seem like a narrow scientific calculation that is best left to experts, but initial impressions can be wrong. First, there is significant uncertainty in just modeling the physical climate system, so there is value in aggregating diverse opinions on that question alone. Second, answering this event question actually goes well beyond physics and requires information on the future trajectory of economics, technology, demographics, and geopolitics. It is thus a quintessential example of a question that would benefit from aggregating diverse information dispersed broadly across society in a prediction market.
The vision is that the market opinion on big-picture questions like these will provide the best available information to the public and decision-makers and help people understand where consensus lies, and how strong it is, on various topics.
These markets also democratize participation, so that those on both ends of the spectrum of climate change questions can no longer claim that their views are being marginalized. Those who are dismissive of the human origins of contemporary global warming are allowed to participate just as much as those who foresee the near-term collapse of society. Additionally, those within the climate movement should be supportive of the project, as research has shown that participating in climate prediction markets increases concern about global warming.
In the context of more local, practical applications, I hope that these markets will mitigate climate vulnerability by facilitating more efficient resource allocation. For instance, if a contract on whether “California will experience more than 1 million acres of area burned” in the coming year shows a high probability before the start of the fire season, state and federal agencies could respond by hiring more seasonal firefighters or contracting for additional equipment.
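As a purely hypothetical illustration of how such a market-implied probability could feed into a resource-allocation decision, here is a simple expected-cost comparison; the contract, probability, and dollar figures are assumptions made up for the example, not real estimates.

```python
# Hypothetical sketch: act early when the expected avoided loss exceeds the
# up-front cost of acting. All figures are illustrative assumptions.

def should_preposition(market_probability: float,
                       prevention_cost: float,
                       avoidable_losses: float) -> bool:
    """Return True if expected avoided losses outweigh the cost of acting now."""
    return market_probability * avoidable_losses > prevention_cost

# Example: a ">1 million acres burned" contract implies a 40% probability, extra
# seasonal staffing and equipment would cost $50M, and acting early is assumed
# to avoid $200M in losses if the bad fire season materializes.
print(should_preposition(0.40, prevention_cost=50e6, avoidable_losses=200e6))  # True
```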
These markets can also be extremely responsive to important new information. Thus, if the market for “Will a major hurricane make landfall in Miami-Dade County in 2025?” shifts dramatically in the hours following a storm's formation, that information could be used to inform decisions on evacuations, emergency declarations, and the prepositioning of disaster resources. These markets would not be alternatives to existing hurricane models but rather a way of combining information from multiple models (e.g., traditional numerical weather prediction models and newer models like the GraphCast AI model created by Google DeepMind) via human minds into a single probability.
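For a sense of what combining information into a single probability can mean, one textbook approach is a weighted average in log-odds space. The weights and model probabilities below are assumptions chosen for illustration, and in an actual prediction market this blending happens implicitly through trading rather than through any fixed formula.

```python
# Illustrative log-odds pooling of probabilities from several forecast sources.
# The inputs and weights are made-up assumptions, not real model output.
import math

def logit(p: float) -> float:
    return math.log(p / (1 - p))

def inv_logit(x: float) -> float:
    return 1 / (1 + math.exp(-x))

def pool(probabilities, weights):
    """Weighted average of probabilities in log-odds space."""
    blended = sum(w * logit(p) for p, w in zip(probabilities, weights)) / sum(weights)
    return inv_logit(blended)

# Hypothetical landfall probabilities from a traditional NWP-based model, an
# AI weather model, and a statistical baseline, with assumed weights.
print(round(pool([0.12, 0.20, 0.08], weights=[0.5, 0.3, 0.2]), 3))  # ~0.13
```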
Like all human endeavors, there is a risk of bad actors manipulating the system for their own gain. However, singling out prediction markets as particularly susceptible to influences that undermine their predictive power is misguided, especially when compared to alternatives. For example, traditional climate information is shaped by a complex web of funding incentives, peer review gatekeeping, and institutional pressures that create their own types of bias—biases that are hard to detect and rectify. In contrast, climate prediction markets are open and decentralized, with prices determined by a broad range of independent participants rather than a small group of gatekeepers with aligned incentives. These markets are also no more vulnerable to manipulation than traditional financial markets, where significant sums of money are at stake, yet prices remain highly efficient. Critically, any attempt to artificially distort climate prediction markets presents a profit opportunity for informed participants, who can exploit and correct such manipulation by taking the opposite position. This self-correcting mechanism means that, over time, prediction markets should provide information that is more robust and resistant to bias than the alternatives.
Overall, prediction markets offer a way to discipline thinking by defining concrete event questions, and they empirically select the best forecasters while continuously incentivizing improvements in accuracy. They also provide a means of aggregating diverse, widely distributed knowledge into a single probability or price.
I believe that scaling up these prediction markets in the climate and energy sector will bring clarity that has often been lacking from both academic and private sector climate predictions, ultimately leading to a better understanding of the world and more effective solutions to problems.