On the eve of the Second World War, Woody Guthrie wrote an ode to the Grand Coulee Dam, which was nearing completion and was then the world’s largest concrete structure. It has since been modernized, but it remains to this day the largest generating plant of any kind in the United States.
Uncle Sam took up the challenge in the year of thirty three
For the farmer and the factory and all of you and me.
He said, “Roll along Columbia”, you can roll down to the sea
But river, while you’re rambling, you can do some work for me.
Now from Washington and Oregon you can hear the factories hum
Making chrome and making manganese and white aluminum.
Now roars the Flying Fortress, for the fight for Uncle Sam
On the howling King Columbia, by the big Grand Coulee Dam.
Guthrie managed not only a song that captured the work, risk, and ingenuity of the new dam and worked in a passing reference to one of his other Bonneville Power songs, but one that sets up some lessons in modern economics as well.
First, sometimes you get what you pay for. Guthrie was almost never paid to write songs, but the Bonneville Power Administration paid him to write in 1941. “Grand Coulee Dam” and “Roll on Columbia” are two of the best-known results. Many of history’s most beloved and famous artists wrote, composed, sang, painted, or sculpted not simply out of a love of the form, but also because they were paid to do so. It did not happen a lot for Woody Guthrie, but it did here.
Second, modern economies run on electricity. Indeed, electricity is arguably what makes an economy “modern”. As profoundly as we tend to believe that digital technologies have transformed life during the past few decades, this transformation does not begin to compare with the impact on farmers and factories of rural and urban electrification. In most economies, industrialization and electrification are roughly the same thing.
Finally, some industries are extraordinarily dependent upon cheap electric power. As Guthrie notes, aluminum is one of those industries. Economically, if not metallurgically, aluminum is nothing but congealed electricity. Cheap power, more than cheap bauxite, determines the low-cost producer of this commodity (Jamaica happens to have both at the moment). Boeing still builds airplanes outside of Seattle because in the 1940s, when we suddenly needed a lot of aluminum to build warplanes, the Northwest had abundant hydroelectric power. Why? Because Uncle Sam took up the challenge in the year of thirty three.
Another sector that is extraordinarily dependent upon low-cost electricity is the Internet — which now seems to touch a very broad spectrum of economic activity. Writing in the New York Times, the remarkable John Markoff describes the massive facilities being built by Google in, yes, the Pacific Northwest on the banks of the Columbia River. He describes the latest addition to the Googleplex as:
…a computing center as big as two football fields, with twin cooling plants protruding four stories into the sky. (The complex will) house tens of thousands of inexpensive processors and disks, held together with Velcro tape in a Google practice that makes for easy swapping of components. The cooling plants are essential because of the searing heat produced by so much computing power.
The fact that Google is behind the data center, referred to locally as Project 02, has been reported in the local press. But many officials in The Dalles, including the city attorney and the city manager, said they could not comment on the project because they signed confidentiality agreements with Google last year.
“No one says the ‘G’ word,” said Diane Sherwood, executive director of the Port of Klickitat, Wash., directly across the river from The Dalles, who is not bound by such agreements.
“It’s a little bit like He-Who-Must-Not-Be-Named in Harry Potter.”
What? The saintly Googlers who admonish themselves to “Don’t Be Evil” suddenly compared with Voldemort? Shocking.
What is truly shocking, perhaps, is the scale of infrastructure that Google has assembled. The size and growth of the platform are staggering, and the platform now constitutes a huge competitive barrier for anyone even thinking of launching a search engine.
The rate at which the Google computing system has grown is as remarkable as its size. In March 2001, when the company was serving about 70 million Web pages daily, it had 8,000 computers, according to a Microsoft researcher granted anonymity to talk about a detailed tour he was given at one of Google’s Silicon Valley computing centers. By 2003 the number had grown to 100,000. Today, if you Google “how many servers does Google have?”, the answer comes back 900,000, spread across nine US data centers and six more around the world.
Each of these data centers is located on an Internet trunk line and near low-cost power. As Markoff noted, the center on the Columbia River in Oregon is at The Dalles, a town small enough that you can easily spot the Google center from Google Earth. Not only do these centers each run tens of thousands of servers and consume unimaginable quantities of power, but they do so with a latency and reliability that we all take for granted. This is not optional: Google has found that for search engines, every additional millisecond it takes to give users their results leads to lower satisfaction. So the speed of light ends up being a constraint, and the company wants to put significant processing power close to all of its users.
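To see why the speed of light matters here, consider a back-of-the-envelope sketch. It assumes signals propagate through optical fiber at roughly two-thirds of the vacuum speed of light — a common rule of thumb, not a figure from Google:

```python
# Back-of-the-envelope propagation delay: why data centers sit close to users.
# Assumes fiber carries signals at ~2/3 the vacuum speed of light.

C_KM_PER_MS = 300.0      # speed of light in vacuum, km per millisecond
FIBER_FRACTION = 2 / 3   # typical propagation speed in optical fiber

def round_trip_ms(distance_km: float) -> float:
    """Round-trip propagation delay over fiber, ignoring routing and queuing."""
    return 2 * distance_km / (C_KM_PER_MS * FIBER_FRACTION)

for km in (100, 1_000, 10_000):
    print(f"{km:>6} km away: {round_trip_ms(km):.1f} ms round trip, minimum")
```

Even before any switching, queuing, or server time, a user 10,000 km from a data center pays about 100 milliseconds per round trip — which is exactly why Google spreads processing power around the globe rather than concentrating it in one place.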
These data centers serve rapidly changing technology requirements. According to a recent Wired article, Google’s engineers realized that “if each of the world’s Android phones used the new Google voice search for just three minutes a day, the company would need twice as many of these billion-dollar-plus data centers.
“Rather than double its data center footprint, Google instead built its own computer chip specifically for running deep neural networks, called the Tensor Processing Unit, or TPU. ‘It makes sense to have a solution there that is much more energy efficient,’ says Norm Jouppi, one of the more than 70 engineers who worked on the chip. In fact, the TPU outperforms standard processors by 30 to 80 times in the TOPS/Watt measure, a metric of efficiency.”
Google, Amazon, Microsoft and others need not only very large quantities of electric power, they need very high quality power as well. Internet geeks speak of “three nines”, or 99.9% uptime. Sounds good until you realize that this represents almost nine hours of unplanned electrical outage a year. Unplanned outages are horrible. Servers hate to be unplugged without notice, and the databases running on those servers like it even less. The Electric Power Research Institute, based in Palo Alto, California, estimates that 80% of the power glitches that wreak havoc on an electronic system last less than a few seconds, barely noticeable to the eye.
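The arithmetic behind the “nines” is simple enough to make explicit. A quick sketch, assuming an 8,760-hour year:

```python
# How much unplanned outage each level of "nines" availability allows per year.

HOURS_PER_YEAR = 24 * 365  # 8,760 hours

def allowed_downtime_hours(nines: int) -> float:
    """Hours of outage per year permitted at the given number of nines."""
    unavailability = 10 ** (-nines)   # e.g. three nines -> 0.001
    return HOURS_PER_YEAR * unavailability

for n in (3, 6, 9):
    hours = allowed_downtime_hours(n)
    print(f"{n} nines: {hours:.6f} hours/year ({hours * 3600:.3f} seconds)")
```

Three nines permits about 8.76 hours of outage a year; six nines, about 32 seconds; nine nines, only about 32 milliseconds — the milliseconds-a-year standard The Economist invokes.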
Companies manage this problem by building electrical power with incredible redundancy. One Internet company that I know particularly well houses its servers in a facility served by two independent electric power companies tied to completely separate grids to ensure redundancy. In case both of these fail, the facility has large rooms full of massive lead acid batteries charged and ready to kick in. In the event of an earthquake or other disaster, generators stand by to pick up the load. The facility maintains multiple contracts for diesel fuel and tests all systems at least monthly. The facility itself is highly secure with both fingerprint and retinal scanners to prevent unauthorized entry. Banks, airlines, and others who require perfect reliability use facilities like this.
This facility recently had an unplanned 20-minute outage — the one thing they promise will never happen. The Economist noted some years ago that
“…microprocessor-based controls and computer networks demand at least 99.9999% reliability, or “six nines”, amounting to no more than a few seconds of allowable outages a year. And that is just a start. The report estimates that the quality of electrical power must reach “nine nines”, meaning only milliseconds of faults a year, before the digital economy can truly have the right quality power to mature.
“Many technologies are being developed to improve electrical reliability. Batteries and backup generators often do not kick in instantly and are expensive to maintain. Specialized power chips are under development that split electrical power into packets “that can be reconditioned in order to switch between grid and stored power in milliseconds to ensure uninterrupted power.” On top of that, superconducting electromagnets and ultra-capacitors “can deliver bursts of power and be recharged quickly and without any degradation.” Flywheel systems may enable power providers to store electricity cheaply.”
“High-nines reliability is expensive. With each additional “nine” of reliability, costs can rise ten- to 100-fold. At “three nines” a kilowatt-hour costs about 10 cents. At “five nines”, it will cost $20 when investment is taken into account. At “six nines”, it could cost $1,000. Yet not all computer systems need such faultless reliability. Indeed, any company considering an investment in power cleaning should work out exactly how much a glitch would really cost its operations compared with the investment involved. The differences might be a powerful incentive for making do.”
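Taking the article’s dollar figures at face value, one can back out the implied cost multiplier per additional nine. A small sketch — the prices are the article’s, the per-nine factors are derived:

```python
# Implied cost multiplier per additional "nine", from the quoted $/kWh figures.

costs = {3: 0.10, 5: 20.0, 6: 1000.0}  # nines -> dollars per kWh

for lo, hi in [(3, 5), (5, 6)]:
    # Geometric-average multiplier for each extra nine between quoted levels
    factor = (costs[hi] / costs[lo]) ** (1 / (hi - lo))
    print(f"{lo} -> {hi} nines: ~{factor:.0f}x per additional nine")
```

Going from three to five nines implies roughly a 14-fold cost jump per nine, and from five to six nines a 50-fold jump — both comfortably inside the ten- to 100-fold range the article quotes.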
I like to think that Woody Guthrie would be pleased at the use to which his beloved Columbia River is today being put. Its power is turning our darkness to dawn in ways he could never have imagined.