Why Aren’t We Decarbonizing?
Because we told the electricity companies not to.
What the world needs now is rapid, radical innovation in how we produce energy. To decarbonize the United States, for example, we will need to approximately triple electricity generation by mid-century, to substitute kilowatt-hours for gallons of gasoline and diesel and for all the natural gas used for home heating and running industries. At the same time, we will need to retire about 60 percent of the existing electricity generating system, which runs on coal and fossil gas.
Right now, this isn’t happening. It is simply not what the people who build and operate power plants are choosing to do.
But how do they decide what to build? And what to operate? Clearly, the decision is not guided by what the world needs if it is going to decarbonize (after all, we’ve shut down perfectly good zero-carbon nuclear plants and hesitated to build new ones). Rather, it is informed by what the producers and operators can reasonably expect to make money on, under a financial system that was established by public policy. And the system isn’t working.
How did we set the wrong incentives for the energy system?
Over the years, there have been two main factors in decisions on electricity generation, see-sawing in importance:
- What resources and technologies are available to meet the need?
- What does the financial structure of the market prefer?
For the last two decades, the second one has been in ascendance. And that’s one of the reasons that decarbonization is so hard. The financial system now in place encourages short-term thinking, prioritizing plants that are quick to build and have the lowest up-front costs—that is, it encourages gas, gas, and more gas. Gas can be helpful in weaning us off worse fuels, but we won’t be using a lot of gas in a zero-carbon system.
Glommed on to this underlying financial structure are incentives for solar and wind. These have encouraged deployment that is now producing weird market effects, like negative pricing, and overproduction that requires operators to unplug generators that could have made electricity for free. It’s a double shame because those generators were expensive to build.
To be sure, none of this was what the policymakers intended, and not much of it is compatible with our climate goals. But understanding how we got here requires a quick tour of the economic history of the grid.
Once upon a time, choosing the resources and technology available to meet energy needs meant balancing considerations like where we can build a dam for hydroelectricity, or find coal to mine, or gas to burn, convenient to a grid connection and cooling water.
Electric companies were given a monopoly, and they generated, transmitted, distributed, and sold the energy they produced with the resources they could tap. In exchange, public service commissions looked over their shoulders and were called upon to approve all major decisions, like what generation to build. The financial structure rested on investor-owned utilities persuading regulators that their plans for spending the public’s money were justified.
Over the years, a few tweaks were made, including taking into account the costs of dealing with soot and acid rain. And then came quotas and subsidies for politically popular forms of generation (what politician can ignore the appeal of harvesting energy from sunshine and breezes?).
These changes were meant to address real problems; indeed, the old system of regulation had notable failures. The electricity companies were guaranteed a profit, called a “rate of return,” based on the money they’d invested, and this encouraged them to invest more and more. If companies overbuilt, the costs fell on ratepayers, not shareholders.
And so, in the 1970s and early 1980s, many utilities went on a construction binge. They had seen enormous jumps in electricity demand in the 1970s, as households added air conditioners and other large electricity-consuming appliances, and utility planners expected more of the same. But when oil price shocks threw the economy into recession and raised energy prices, demand growth slowed dramatically.
Then came a great experiment in reform, which is now coming back to bite us. It is known as “deregulation,” though it is mostly regulation in a different kind of system. In the era when Congress deregulated trucking and airlines, the electric system seemed a logical next target.
The idea was that a monopoly still made sense on city streets; only one set of wires would go to each house and business. Long-distance transmission, likewise, would be carried out by a single set of wires.
But feeding electricity into the high-voltage grid? Any generator could play. Private companies (or, more often, unregulated subsidiaries of regulated utilities) would build power plants and sell their output. If a plant ran reliably, the owners could make a lot of money. If it ran badly, or if the owners entered a market that didn’t need that much energy or capacity, they would lose money. And if your generator was still usable but newer, cheaper, more efficient generation could be built, your plant might be squeezed out of the market and have to retire.
Consumers would get the necessary energy most efficiently, the thinking went, and would be protected from the follies of the market.
But the new regulations introduced big challenges. Ever since the dawn of the electric age, when a second power plant was first added to an electricity network, a key problem for utilities has been determining which plant to operate. Or, more realistically, which to keep running 24/7, which to start up as demand rises in the morning and peaks in the afternoon, and which to shut down first as demand falls.
When monopoly utilities ran the show, they knew what their generating costs were at each power station, and they simply started them up or shut them down in price order. In later years, the utilities linked together so they could swap energy, using capacity that was idle in one region to meet peak needs in another, without having to build enough power plants to keep each city self-reliant. If company “A” knew that, in the next hour, it was going to need 500 megawatt-hours of energy at a cost of $20 per megawatt-hour to generate itself, but that a neighboring utility, company “B,” could make the same energy for $18, then they’d swap, at a price of $19. Company A saved money, company B made money, and everybody went away happy.
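The swap above is just a split-the-difference trade: the two utilities share the $2 gap between their costs. A minimal sketch, using the article’s illustrative numbers (the company names and figures are the example’s, not real market data):

```python
def swap_price(buyer_cost: float, seller_cost: float) -> float:
    """Split the savings evenly: price is the midpoint of the two
    utilities' per-MWh generating costs."""
    return (buyer_cost + seller_cost) / 2

# Company A would pay $20/MWh to generate itself; company B can
# generate for $18/MWh. They trade at the midpoint.
price = swap_price(20.0, 18.0)
print(price)  # 19.0: A saves $1/MWh, B earns $1/MWh above its cost
```

Any price between $18 and $20 would leave both sides better off; the midpoint simply divides the benefit equally.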
But under “deregulation,” utilities were told to sell their generating stations to independent companies, and the plants became “merchant” producers. A new entity, called an Independent System Operator (ISO) or a Regional Transmission Organization (RTO), took over control of the grid. (There is a difference between an ISO and an RTO, but it’s not important for understanding the new process.)
To answer the question of which plants should run when, ISOs hold an auction—although it’s a kind of auction that few consumers have ever heard of, using something called a “clearing price.” In the familiar “going once, going twice, SOLD!” system, the top bidder gets the product at the price of the bid. But in a clearing price system, everybody gets the product, but the price is set by the most expensive producer that is chosen.
At set intervals (in some places as short as five minutes, because solar and wind producers cannot reliably predict their output very far into the future), a computer figures out what demand is likely to be and solicits supply bids from every generator connected to the grid. The computer lines up the supply offers in price order.
Solar and wind plants, some hydro plants, and some nuclear plants must run, so they do not set a price below which they will shut down; at some points during the day, such generators may offer their energy for zero dollars. But coal- and gas-fired plants can cut their output, or in some cases shut down entirely, and so they do have a bottom-line price for turning on.
In this bidding system, the computer takes all the zero bids, and all the low-price bids, and the higher-price bids. It accepts bids in price order, until it has enough generation lined up to satisfy demand in the next time period—and everyone gets paid the price of the highest accepted bid.
At 7 AM, demand is relatively low, and thus price is low. The highest bid might be $20, for example, and a solar company would be paid that amount even if it had listed a $0 bid. At 9, when a lot of businesses start up, demand will rise, and the computer will have to go further up the list of bids to satisfy that demand. If the most expensive generator needed to meet 9 AM demand wants $25, then everybody gets $25.
By 5 in the afternoon (depending on region and time of year), air conditioning demand may be very high, and people may be starting to return home to turn on their big-screen televisions and microwave ovens, even though all the lights and air conditioning and computers are still running at the office. So demand is high, and the price for the last increment of energy needed might jump to $40. Again, all generators get that amount.
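The mechanism the last few paragraphs walk through can be sketched in a few lines of code. This is a toy model, not how any real ISO clears its market (real markets add ramp rates, transmission constraints, and reserve products), and the bid stack below is invented to match the article’s $20, $25, and $40 examples:

```python
def clearing_price(bids, demand_mw):
    """Uniform-price auction: accept bids in price order until demand
    is met; every accepted generator is paid the price of the highest
    accepted (marginal) bid."""
    supplied = 0.0
    for offer_price, capacity_mw in sorted(bids):
        supplied += capacity_mw
        if supplied >= demand_mw:
            return offer_price  # the marginal bid sets the price for all
    raise RuntimeError("not enough supply to meet demand")

# (price $/MWh, capacity MW): zero bids from must-run solar, wind, and
# nuclear; positive bids from fossil units. Hypothetical numbers.
bids = [(0, 200), (20, 200), (25, 200), (40, 300)]

print(clearing_price(bids, 250))  # 7 AM demand: marginal unit bids $20
print(clearing_price(bids, 600))  # 9 AM demand: marginal unit bids $25
print(clearing_price(bids, 800))  # 5 PM demand: marginal unit bids $40
```

Note that the zero bidders are paid $20, $25, or $40 too, which is exactly the scarcity revenue the next paragraph describes.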
Generators bidding in at zero (solar, wind, hydro that lacks reservoir storage, and, in many cases, nuclear) collect revenue based on scarcity: on the price paid for the last increment of electricity needed to satisfy demand. That makes the whole system acutely sensitive to the price offered by the “marginal” generator, the one next in line to be turned on or off. That is generally natural gas, whose price is highly volatile.
The resources bidding in at zero do help moderate other prices, because without them, the computer would have to go higher up the price stack to find that last bid.
But the system creates some market distortions, as we will see in the next update.