CO2 Scorecard Misrepresents and Misunderstands Efficiency Rebound Research

Recent posts by the CO2 Scorecard group claim to have discredited the analysis of rebound effects in industrial sectors of the US economy presented in one of my recent papers--let me here call it "Saunders." The authors offer an analysis of their own said to "devastate" the results I have reported there. Herewith is my response.

It is worth reminding readers of the stakes here. The energy consumption forecasts relied on by the IPCC, the IEA and McKinsey ignore rebound effects, or--to be maximally generous--treat them very inadequately. To the extent that ignoring rebound effects results in underestimates of future energy use, we have less time than is generally believed to devise climate change solutions. This is surely problematic, but no serious individual would dispute the contention that uncomfortable reality must always trump wishful thinking. I believe rebound effects are significant and quite large, and I believe the peer-reviewed literature, including my own extensive contributions to that literature, supports this view. Unfortunately.

And to be absolutely clear: energy efficiency is a good thing (for one thing, it increases economic welfare) and must be aggressively pursued; this has always been my position. It's just that it may not deliver the large reductions in energy use many (myself included) would hope for.

Problems with the CO2 Scorecard Analysis

In light of the above, the CO2 Scorecard posts on this subject (1 and 2) are disappointing and disheartening. But they require a response, even if only to defend the honor of my fellow scholars in this field. A complete dissection of the CO2 Scorecard analysis would make this post too long. Rebound analysis, done properly, is a highly technical undertaking. The approach here is to show a handful of the serious problems with the authors' analysis by listing five points, each linked to an appendix containing its technical foundation. Those interested in evaluating that foundation can consult the technical appendix; those interested only in the claims made here can skip the full technicalities. Either way, as you will see, it is difficult to escape the conclusion that the authors of the CO2 Scorecard analysis are guessing at what they hope are problems with the Saunders analysis, without bothering to check whether their guesses are actually right.

1. CO2 Scorecard Authors Misrepresent the Saunders Data

This point is so easily made that it does not even need a technical discussion. The authors claim the Jorgenson et al. dataset used in Saunders...

"...is a dataset on the monetary value of energy expenditure by industrial sectors at a highly aggregate level. This means that the numbers in the dataset are current values of energy cost reported in million dollars--not in BTU or MWh or Joules, the conventional measures of energy use. As a result, it is impossible to directly estimate energy efficiency, productivity or intensity in physical units."

It is embarrassing to reveal that the authors evidently did not actually follow their own posted link to these data. Had they done so, they would have discovered that the dataset contains both monetary data and detailed quantity information, including for energy inputs--quantities expressed in physical units. These are the data used in my paper.

More on this in the technical appendix below.

2. CO2 Scorecard Authors Confuse Energy Intensity with Energy Efficiency

The authors, no doubt upon reading a quotation from me on this subject posted by Roger Pielke, Jr., have decided to punt on engaging this disturbing flaw in their work:

"From our perspective, further discussion of differences between energy intensity, energy efficiency and energy productivity is a moot point. If readers wish to have further clarification, we encourage them to write us."

"Moot," in its real meaning of "debatable" (as opposed to "irrelevant"), one might give them. But unfortunately, these distinctions speak to the core of their case. In their previous post, which claims to undercut my reported results, they make the fundamental error of assuming that energy intensity trends are the equivalent of physical/technical energy efficiency trends, failing to understand the fact that energy efficiency must be measured as the change in energy services provided per unit of energy input, which my analysis measures. Theirs does not. In their current post, proceeding under the illusion that my analysis is not physical quantity-based, they attempt to use this misconception to divert readers from their serious error.

Energy intensity trends are a fundamentally flawed measure of energy efficiency gains. Further, how the authors proceed from this assumption to forcefully delivered declarations of rebound effect magnitudes without considering the necessary counterfactuals is simply staggering in its overreach.

More on this in the technical appendix below.

3. CO2 Scorecard Model is Grossly Simplistic

The unfortunate fact is, models are an absolute requirement for understanding rebound effects. And, yes, this means appealing to economic theory, empirical methodology, and datasets (see Point #1 above for how their critique of the dataset runs afoul of objective reality).

While the authors graciously acknowledge that I have been at pains to delineate the limitations of my analysis, they are curiously not as forthcoming in stating the limitations of their own. In fact they have employed a mathematical model of their own; it is just an egregiously simplistic one. They don't want readers to realize that it is a model, but it is. And it is a mathematical model that makes truly heroic (albeit hidden) assumptions, such as:

  • The ratio of energy input to output produced (energy intensity) reveals all that matters to understanding rebound effects;
  • Energy prices are irrelevant;
  • Prices of other inputs to production are irrelevant;
  • Efficiency gains for other inputs to production don't matter;
  • There is no need to ask counterfactual questions such as: "What would energy use have been in the absence of efficiency gains," or "What would energy use have been if energy efficiency gains had 'taken' on a one-for-one basis?";
  • Longstanding microeconomic principles can be dismissed as irrelevant.

It is certainly possible to devise a model, as the authors have, based on these hidden-from-view assumptions. But astute readers of the conclusions proffered from such a model must surely be left wondering how such strict assumptions affect the results reached. Better models, such as the one in my paper, relax these heroic assumptions and are thus, by any standard, more credible.

More on this in the technical appendix below.

4. CO2 Scorecard Misunderstands Flexibility in Production Processes

Having evidently not taken the time to understand Saunders, the authors claim that I fail to comprehend the limitations of adaptability and flexibility in production processes, a driver of rebound I first introduced to the literature in 1992. They say:

"Additionally, Saunders suggests that production systems are 'adaptable and flexible' and can seamlessly substitute energy for other factors of production. As we pointed out in the electric utility and mining sectors, once an operation starts, managers are often locked into their production processes and technologies, sometimes for decades. This reality defies the image of flexibility that Saunders describes in this video clip (at 6.45 min). Many times producers are not at liberty to change their technology every year; they can tinker at the edges but frequent technological change towards energy efficient systems is not a norm--a point also described in Greening 2000."

The authors are exactly correct that productive capacity, once put in place, will stay there for decades, "locked," as they say, to a particular technology. But ironically, this is the precise reason my analysis is explicit about vintaging, an aspect of the study the authors oddly find troublesome (too "theoretical"?). The flexibility to make changes in production processes is greatest when producers are putting new production capacity in place. Old production capacity is more rigid. The results I report for rebound magnitudes rest on this very idea. Proper rebound analysis must account for vintaging. And by the way, Lorna Greening, whom they cite, understood all this even if the authors do not.

More on this in the technical appendix below.

5. CO2 Scorecard Authors Fail in their Analysis of the Sectors they Examine

Perhaps most egregious is the authors' "analysis" of the three sectors they examine. The fundamental problem, noted previously, is that energy intensity trends do not equate to energy efficiency gains; and rebound cannot be measured without considering two counterfactuals.

But beyond this fundamental flaw, they make specific errors. In metal mining, they assume away gains due to such things as automation. In electric power, they fail to understand the efficiency gains delivered by combined-cycle technology. They also equate generation with electric power delivery, which in reality relies on substantial transmission and distribution capacity. In primary metal, they equate a supposed over-estimate of energy efficiency gain with an over-estimate of rebound magnitudes, on god-knows-what basis.

The upshot is that their rebound analysis of these three sectors is not remotely credible.

More on this in the technical appendix below.

###

Technical Appendix

This appendix provides further technical detail on the points in the main text.

1. CO2 Scorecard Authors Misrepresent the Saunders Data

The Jorgenson et al. data (Dale W. Jorgenson, 2007-09-22, "35 Sector KLEM", available here) are a meticulously prepared and widely respected dataset containing full input-output information by sector--prices, value shares, and quantities. The "qi" variables denote quantities--standard microeconomics notation.

2. CO2 Scorecard Authors Confuse Energy Intensity with Energy Efficiency

The CO2 Scorecard analysts have clearly not done the hard work of examining the peer-reviewed literature in this field. And they apparently do not wish to do the work required to extract real rebound measurements from the data. This is a complex and very challenging undertaking. Unfortunate, but true. Instead, the authors take what they evidently believe is the easy way out and pretend (or delude themselves into believing) that a primitive analysis of energy intensity trends is sufficient to the task. In contrast, my analysis engages this challenge head on, wrestling with the subtleties that must be addressed to properly measure energy efficiency gains and quantify rebound magnitudes.

The critical point here is that it is fundamentally impossible to discern from intensity trends what energy efficiency gains have occurred. On top of this, to then believe it is possible to discern the rebound effects hidden in these trends could kindly be called a fool's errand.

There are too many drivers of energy intensity at work, all operating in different ways. For example, changes in energy intensity are driven not just by energy efficiency gains (i.e. improvements in energy services for each unit of energy input) but also by movements in energy prices. Worse, they are also driven by price movements in all other factors of production. Worse still, they are driven by technology gains for all other factors. Without knowing these, it is impossible to know how energy efficiency has evolved in any particular sector.
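
To see why, consider a deliberately stylized example (a constant-returns Cobb-Douglas technology with factor-augmenting technical change--an illustration of the point only, not the functional form used in Saunders). For a cost-minimizing producer, Shephard's lemma gives energy use per unit of output as

$$ \frac{E}{Y} \;=\; \frac{\alpha_E}{p_E}\, c(p,\tau), \qquad c(p,\tau) \;=\; \frac{1}{A}\prod_j \left(\frac{p_j}{\alpha_j \tau_j}\right)^{\alpha_j}, $$

where the $\alpha_j$ are cost shares, the $p_j$ are factor prices, and the $\tau_j$ are factor-augmenting efficiency terms. In growth rates,

$$ \widehat{E/Y} \;=\; -\,\hat{p}_E \;+\; \sum_j \alpha_j \left(\hat{p}_j - \hat{\tau}_j\right) \;-\; \hat{A}, $$

so an observed intensity trend bundles together every factor price, every factor's efficiency gain, and any neutral technology term--with the energy efficiency gain $\hat{\tau}_E$ entering only with weight $\alpha_E$. A flat intensity trend, by itself, pins down none of these.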

But the thorniest problem is that one cannot measure rebound effects without evaluating two counterfactuals: what energy use would have looked like in the absence of any energy efficiency gains, and what energy use would have looked like had energy efficiency gains "taken" on a one-for-one basis. Only with these in hand can one make any definitive statements about rebound magnitudes. One certainly cannot do it by looking at a single trajectory of energy demand, let alone a single trajectory of energy intensity.
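
One common way to formalize this (an illustration of the definition, not necessarily the precise construction in Saunders): let $E_0$ be the counterfactual energy use with no efficiency gain, $E_1$ the counterfactual had the gain "taken" one-for-one, and $E_A$ the actual observed energy use. Then

$$ \text{Rebound} \;=\; 1 - \frac{E_0 - E_A}{E_0 - E_1} \;=\; \frac{E_A - E_1}{E_0 - E_1}, $$

which equals zero when the full engineering savings materialize, one when no net savings materialize, and exceeds one in the case of "backfire." Neither $E_0$ nor $E_1$ is observable in a single historical trajectory of energy use or energy intensity; both must be modeled.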

3. CO2 Scorecard Model is Grossly Simplistic

First let it be said that my model is not without limitations. A quick look at the paper will show that I have been at pains to delineate these as exhaustively as possible. The purpose in doing so is to advance the field, to motivate others to devise ways to overcome these limitations, and to clearly point to where advancements are needed. This is what scholars do.

But the supposed theoretical deficiencies of my model identified in the initial CO2 Scorecard study provide a weak basis for the authors' finding that rebound is an "illusion." The authors complain that the model overlooks a laundry list of forces allegedly affecting rebound, including "corporate vision," "strategic behavior," and even "social norms." One wonders if the authors, to deem a rebound model credible, would also require that it consider "personal self-esteem" of economic agents.

But the real problem with this critique is that it applies equally to their own conclusions. The authors find "no evidence-based support" for my rebound estimates and denigrate "blind faith in the results of such mathematical models." Yet in reaching this conclusion, the authors in fact rely on a model of their own--one that claims to find no evidence for rebound in energy intensity trends. It is a model, but an egregiously simplistic one (which yet presumably somehow accounts for forces such as "social norms").

In contrast, the model reported in my paper is designed to employ the minimum of assumptions needed to correctly measure rebound effects. None of the model's components is gratuitous; each is necessary. The authors would like readers to believe the assumptions employed are arbitrary, theoretical and so abstruse as to lack credibility. But sadly, this identical critique applies even more strongly to their own model; they just don't want you to know it.

Yes, I do employ "duality" principles to move back and forth between prices and quantities. There is simply no other way to generate quantitative rebound magnitudes; anyone wishing to conduct an actual econometric measurement of rebound in productive sectors is required to do the same. I take the quantity data from Jorgenson et al., convert them to the dual price/cost space for econometric measurement, and then re-convert the results back to quantity terms, again using duality principles. This is no trivial task, but serious economists will recognize its necessity. If the authors truly want to understand what is involved in the duality aspects of rebound, they would be well advised to consult Saunders (2007).
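
For readers unfamiliar with the duality machinery, the workhorse result is Shephard's lemma, stated here generically (the specific flexible functional form estimated in Saunders is documented in the paper itself): if $C(p, Y)$ is the minimum cost of producing output $Y$ at factor prices $p$, the cost-minimizing factor demands and cost shares are recovered as

$$ x_i \;=\; \frac{\partial C(p, Y)}{\partial p_i}, \qquad s_i \;=\; \frac{\partial \ln C(p, Y)}{\partial \ln p_i}. $$

This is what makes it legitimate to estimate in price/cost space and then map the results back into the quantity space where rebound is defined.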

What will advance this discussion is a serious, scholarly examination of my results that matches the depth of the paper itself.

Many of us eagerly await the CO2 Scorecard working paper, said to be forthcoming, that promises "startling" findings concerning shortcomings of my paper. This latest post cannot be what they promise. Judging from the above analysis of what they have produced thus far, the CO2 Scorecard group might be well served by communicating with me prior to its release. Merely a suggestion.

4. CO2 Scorecard Misunderstands Flexibility in Production Processes

My model comprehends capital vintaging. This is required to distinguish the flexibility of new additions to productive capacity from the inflexibility of productive capacity already in place--an aspect of the analysis the authors curiously cite as wanting, undoubtedly because they did not actually read the paper "equation by equation," as they claim to have done. The model further comprehends the fact that not all production will operate at full capacity in any one year. If the authors (or others) are truly interested in understanding what is required to extract rebound measurements given this important aspect of the physical reality of economic activity, they will take the time to read Appendix A of Saunders.

Without accounting for vintaging, rebound results rightfully could be challenged as overlooking a key characteristic of real-world physical plant that affects rebound calculations. Few, if any, rebound studies account for vintaging. This is a strength, not a limitation, of my analysis.
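
For concreteness, here is a minimal sketch of the vintaging idea--my own illustration with hypothetical numbers, not the model in Saunders (readers should consult Appendix A for that). Each vintage of capacity locks in the energy intensity of its build year, so the intensity of the installed stock responds only slowly to improvements at the frontier:

```python
# Minimal vintaging sketch (illustration only; hypothetical numbers, not the Saunders model).
# Capacity installed in year v is "locked in" at the energy intensity prevailing in year v;
# only new additions embody frontier efficiency, so the stock average adjusts slowly.

def stock_average_intensity(additions, frontier_intensity, lifetime=30):
    """Capacity-weighted average energy intensity of the surviving stock, year by year.

    additions[v]          -- capacity installed in year v (hypothetical units)
    frontier_intensity[v] -- energy per unit output embodied by vintage v
    lifetime              -- years a vintage stays in service
    """
    averages = []
    for t in range(len(additions)):
        surviving = [v for v in range(t + 1) if t - v < lifetime]
        total_capacity = sum(additions[v] for v in surviving)
        averages.append(
            sum(additions[v] * frontier_intensity[v] for v in surviving) / total_capacity
        )
    return averages

if __name__ == "__main__":
    T = 40
    additions = [100.0] * T                        # steady build-out (hypothetical)
    frontier = [0.97 ** v for v in range(T)]       # frontier improves 3%/yr (hypothetical)
    stock = stock_average_intensity(additions, frontier)
    # By year 40 the frontier has fallen to ~0.30 of its starting value, but the
    # installed stock still averages ~0.49: old vintages dominate observed intensity.
    print(f"frontier: {frontier[-1]:.2f}   installed stock: {stock[-1]:.2f}")
```

New capacity can respond to an efficiency gain; the bulk of the stock cannot--which is exactly why rebound must be measured with vintaging in view rather than read off aggregate intensity.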

5. CO2 Scorecard Authors Fail in their Analysis of the Sectors they Examine

Not only do the authors err in their depiction of energy efficiency gains by treating them as equivalent to energy intensity trends, but energy intensity trends are in fact determined by numerous drivers that extend well beyond gains in energy efficiency. A nearly ideal example is provided by the authors' examination of the Metal Mining sector. Shown below are the underlying data used in determining rebound in the US Metal Mining sector:

Figure 1. Historical Energy Intensity in the US Metal Mining Sector

[Note that energy intensity is here expressed in physical quantity units]

This chart bears a remarkable similarity to the authors' Exhibit 5A in their original post, which plots the energy intensity trend for this sector based on Canadian data. As seen both here and there, the energy intensity trend has remained relatively stable over time. The authors use this observation to (falsely) assert that energy efficiency gains in this sector have been essentially zero.

But critically, the actual measured energy efficiency gain from these data is nonetheless 4.44% per year. What the authors overlook in their too-simplistic approach is that efficiency gains apply not just to energy, but to other production inputs (factors of production) such as capital, labor, and materials.

In a rather large irony, their purported foundation for the claim of zero energy efficiency gains is the observation of persistent declines in ore grades. Roger Pielke, Jr. in his post shows qualitatively the flaws in their reasoning, but the real irony comes from knowing that the measured technology gain for materials (primarily ore) in the Metal Mining sector is actually negative: -0.92% per year (Table B-1 of Saunders). The technology for extracting metal from ever-lower-grade ores has evidently not kept pace with declines in grade. But this does not mean that the energy applied to this extraction has not experienced efficiency gains. Based on econometric analysis (not "eyeball" analysis of energy intensity trends), it has. Further, there were gains in both capital efficiency and labor efficiency in this sector.
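
Purely to illustrate the arithmetic, the stylized decomposition sketched under point 2 above can reproduce this pattern. The +4.44% per year energy figure and the -0.92% per year materials figure come from Saunders (Table B-1); the cost shares and the capital and labor rates below are hypothetical, and prices are held fixed:

```python
# Back-of-the-envelope check (hypothetical cost shares and capital/labor rates; only the
# energy and materials augmentation rates come from Saunders, Table B-1). With prices held
# fixed, the stylized Cobb-Douglas decomposition gives energy-intensity growth equal to
# minus the cost-share-weighted sum of the factor augmentation rates.

shares = {"energy": 0.05, "materials": 0.55, "capital": 0.20, "labor": 0.20}  # hypothetical
gains = {
    "energy": 0.0444,      # +4.44%/yr energy-augmenting gain (Saunders, Table B-1)
    "materials": -0.0092,  # -0.92%/yr materials technology change (Saunders, Table B-1)
    "capital": 0.005,      # hypothetical
    "labor": 0.010,        # hypothetical
}

intensity_growth = -sum(shares[f] * gains[f] for f in shares)
print(f"energy-intensity growth: {intensity_growth:+.2%} per year")  # about -0.02%/yr
```

A nearly flat intensity trend emerges even with a large energy efficiency gain, simply because the large-share materials term pulls in the other direction.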

There are two problems with their analysis of the Electricity sector.

One, while it is true that efficiency gains in steam turbines and gas turbines themselves come slowly, and are arguably approaching practical thermodynamic limits, the advent of combined-cycle technology has enabled a significant improvement in power plant fuel efficiencies over the time horizon of their data (a stylized calculation follows these two points). Ironically, this is why vintaging is required analytically.

And two, the results for Electric Utilities reported in Saunders comprehend not just generation (the subject of the authors' Exhibit 5B), but transmission and distribution as well. Efficiency gains in these segments are complex, for one thing involving tradeoffs between capital and energy. High voltage lines suffer lower losses than do low voltage lines but are much more expensive. As power use grows, greater advantage can be taken of higher voltage systems. Further, there have been energy efficiency advances in transformers, sub-station control systems and metering systems.
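
To put a rough number on the combined-cycle point above (a stylized calculation with hypothetical but representative efficiencies, not figures drawn from Saunders): if a gas turbine converts a fraction $\eta_{GT}$ of fuel energy to electricity and a bottoming steam cycle recovers the exhaust heat at efficiency $\eta_{ST}$, the idealized combined-cycle efficiency is

$$ \eta_{CC} \;\approx\; \eta_{GT} + (1 - \eta_{GT})\,\eta_{ST}, $$

so with, say, $\eta_{GT} \approx 0.37$ and $\eta_{ST} \approx 0.33$, $\eta_{CC} \approx 0.37 + 0.63 \times 0.33 \approx 0.58$--a large step up from either cycle alone, even though neither turbine technology improved much by itself.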

But fundamentally, the problem with the authors' analysis of this sector is the same as with their analysis of the Metal Mining sector--they conflate energy intensity trends with energy efficiency gains.

A nearly identical problem exists with their analysis of the Primary Metal sector, where they attempt to equate energy efficiency gains with percent changes in energy intensity. Moreover, they here make some obscure but decidedly heroic leap from a supposed over-estimate of energy efficiency gain to an over-estimate of rebound magnitudes. There is no logical train of reasoning I can think of leading to a conclusion that larger energy efficiency gains are associated with larger (or smaller, for that matter) rebound.