

  That Other Line

  So that is what is going on with scientific and technological progress—but Teller wasn’t done drawing his graph for me. He’d promised two lines, and he now drew the second, a straight line that began many years ago above the scientific progress line but since then had climbed far more incrementally, so incrementally you could barely detect its positive slope.

  “The good news is that there is a competing curve,” Teller explained. “This is the rate at which humanity—individuals and society—adapts to changes in its environment.” These, he added, can be technological changes (mobile connectivity), geophysical changes (such as the Earth warming and cooling), or social changes (there was a time when we weren’t okay with mixed-race marriages, at least here in the United States). “Many of those major changes were driven by society, and we have adapted. Some were more or less uncomfortable. But we adapted.”

  Indeed, the good news is that we’ve gotten a little bit faster at adapting over the centuries, thanks to greater literacy and knowledge diffusion. “The rate at which we can adapt is increasing,” said Teller. “A thousand years ago, it probably would have taken two or three generations to adapt to something new.” By 1900, the time it took to adapt got down to one generation. “We might be so adaptable now,” said Teller, “that it only takes ten to fifteen years to get used to something new.”

  Alas, though, that may not be good enough. Today, said Teller, the accelerating speed of scientific and technological innovations (and, I would add, new ideas, such as gay marriage) can outpace the capacity of the average human being and our societal structures to adapt and absorb them. With that thought in mind, Teller added one more thing to the graph—a big dot. He drew that dot on the rapidly sloping technology curve just above the place where it intersected with the adaptability line.

  He labeled it: “We are here.” The graph, as redrawn for this book, can be seen on the next page.

  That dot, Teller explained, illustrates an important fact: even though human beings and societies have steadily adapted to change, on average, the rate of technological change is now accelerating so fast that it has risen above the average rate at which most people can absorb all these changes. Many of us cannot keep pace anymore.

  “And that is causing us cultural angst,” said Teller. “It’s also preventing us from fully benefiting from all of the new technology that is coming along every day … In the decades following the invention of the internal combustion engine—before the streets were flooded with mass-produced cars—traffic laws and conventions were gradually put into place. Many of those laws and conventions continue to serve us well today, and over the course of a century, we had plenty of time to adapt our laws to new inventions, such as freeways. Today, however, scientific advances are bringing seismic shifts to the ways in which we use our roads; legislatures and municipalities are scrambling to keep up, tech companies are chafing under outdated and sometimes nonsensical rules, and the public is not sure what to think. Smartphone technology gave rise to Uber, but before the world figures out how to regulate ride-sharing, self-driving cars will have made those regulations obsolete.”

  This is a real problem. When fast gets really fast, being slower to adapt makes you really slow—and disoriented. It is as if we were all on one of those airport moving sidewalks that was going around five miles an hour and suddenly it sped up to twenty-five miles an hour—even as everything else around it stayed roughly the same. That is really disorienting for a lot of people.

  If the technology platform for society can now turn over in five to seven years, but it takes ten to fifteen years to adapt to it, Teller explained, “we will all feel out of control, because we can’t adapt to the world as fast as it’s changing. By the time we get used to the change, that won’t even be the prevailing change anymore—we’ll be on to some new change.”

  That is dizzying for many people, because they hear about advances such as robotic surgery, gene editing, cloning, or artificial intelligence, but have no idea where these developments will take us.

  “None of us have the capacity to deeply comprehend more than one of these fields—the sum of human knowledge has far outstripped any single individual’s capacity to learn—and even the experts in these fields can’t predict what will happen in the next decade or century,” said Teller. “Without clear knowledge of the future potential or future unintended negative consequences of new technologies, it is nearly impossible to draft regulations that will promote important advances—while still protecting ourselves from every bad side effect.”

  In other words, if it is true that it now takes us ten to fifteen years to understand a new technology and then build out new laws and regulations to safeguard society, how do we regulate when the technology has come and gone in five to seven years? This is a problem.

  Let’s take patents as one example of a system that was built for a world in which changes arrived more slowly, explained Teller. The standard patent arrangement was: “We’ll give you a monopoly on your idea for twenty years”—usually minus time to issue the actual patent—“in exchange for which people will get to know the information in the patent after it expires.” But what if most new technologies are obsolete after four to five years, asked Teller, “and it takes four to five years to get your patents issued? That makes patents increasingly irrelevant in the world of technology.”

  Another big challenge is the way we educate our population. We go to school for twelve or more years during our childhoods and early adulthoods, and then we’re done. But when the pace of change gets this fast, the only way to retain a lifelong working capacity is to engage in lifelong learning. There is a whole group of people—judging from the 2016 U.S. election—who “did not join the labor market at age twenty thinking they were going to have to do lifelong learning,” added Teller, and they are not happy about it.

  All of these are signs “that our societal structures are failing to keep pace with the rate of change,” he said. Everything feels like it’s in constant catch-up mode. What to do? We certainly don’t want to slow down technological progress or abandon regulation. The only adequate response, said Teller, “is that we try to increase our society’s ability to adapt.” That is the only way to release us from the society-wide anxiety around tech. “We can either push back against technological advances,” argued Teller, “or we can acknowledge that humanity has a new challenge: we must rewire our societal tools and institutions so that they will enable us to keep pace. The first option—trying to slow technology—may seem like the easiest solution to our discomfort with change, but humanity is facing some catastrophic environmental problems of its own making, and burying our heads in the sand won’t end well. Most of the solutions to the big problems in the world will come from scientific progress.”

  If we could “enhance our ability to adapt even slightly,” he continued, “it would make a significant difference.” He then returned to our graph and drew a dotted line that rose up alongside the adaptability line but faster. This line simulated our learning faster as well as governing smarter, and therefore intersected with the technology/science change line at a higher point.

  Enhancing humanity’s adaptability, argued Teller, is 90 percent about “optimizing for learning”—applying features that drive technological innovation to our culture and social structures. Every institution, whether it is the patent office, which has improved a lot in recent years, or any other major government regulatory body, has to keep getting more agile—it has to be willing to experiment quickly and learn from mistakes. Rather than expecting new regulations to last for decades, it should continuously reevaluate the ways in which they serve society. Universities are now experimenting with turning over their curriculum much faster and more often to keep up with the change in the pace of change—putting a “use-by date” on certain courses. Government regulators need to take a similar approach. They need to be as innovative as the innovators. They need to operate at the speed of Moore’s law.


  “Innovation,” Teller said, “is a cycle of experimenting, learning, applying knowledge, and then assessing success or failure. And when the outcome is failure, that’s just a reason to start the cycle over again.” One of X’s mottos is “Fail fast.” Teller tells his teams: “I don’t care how much progress you make this month; my job is to cause your rate of improvement to increase—how do we make the same mistake in half the time for half the money?”

  In sum, said Teller, what we are experiencing today, with shorter and shorter innovation cycles, and less and less time to learn to adapt, “is the difference between a constant state of destabilization versus occasional destabilization.” The time of static stability has passed us by, he added. That does not mean we can’t have a new kind of stability, “but the new kind of stability has to be dynamic stability. There are some ways of being, like riding a bicycle, where you cannot stand still, but once you are moving it is actually easier. It is not our natural state. But humanity has to learn to exist in this state.”

  We’re all going to have to learn that bicycle trick.

  When that happens, said Teller, “in a weird way we will be calm again, but it will take substantial relearning. We definitely don’t train our children for dynamic stability.”

  We will need to do that, though, more and more, if we want future generations to thrive and find their own equilibrium. The next four chapters are about the underlying accelerations in Moore’s law, the Market, and Mother Nature that define how the Machine works today. If we are going to achieve the dynamic stability Teller speaks of, we must understand how these forces are reshaping the world, and why they became particularly dynamic—beginning around 2007.

  THREE

  Moore’s Law

  Lives are changed when people connect. Life is changed when everything is connected.

  —Qualcomm motto

  One of the hardest things for the human mind to grasp is the power of exponential growth in anything—what happens when something keeps doubling or tripling or quadrupling over many years and just how big the numbers can get. So whenever Intel’s CEO, Brian Krzanich, tries to explain the impact of Moore’s law—what happens when you keep doubling the power of microchips every two years for fifty years—he uses this example: if you took Intel’s first-generation microchip from 1971, the 4004, and compared it with the latest chip Intel has on the market today, the sixth-generation Intel Core processor, you would see that Intel’s latest chip offers 3,500 times more performance, is 90,000 times more energy efficient, and costs about 60,000 times less. To put it more vividly, Intel engineers did a rough calculation of what would happen had a 1971 Volkswagen Beetle improved at the same rate as microchips did under Moore’s law.

  These are the numbers: Today, that Beetle would be able to go about three hundred thousand miles per hour. It would get two million miles per gallon of gas, and it would cost four cents! Intel engineers also estimated that if automobile fuel efficiency improved at the same rate as Moore’s law, you could, roughly speaking, drive a car your whole life on one tank of gasoline.
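  To make the compounding concrete, here is a small, purely illustrative Python sketch. The doubling function shows how quickly a fixed two-year cadence snowballs, and the Beetle lines simply apply Intel’s published multipliers to assumed 1971 baseline figures (roughly 80 mph, 25 mpg, and $2,000), which are stand-ins for illustration, not Intel’s actual inputs:

```python
# Illustrative arithmetic only, not Intel's methodology: their 3,500x /
# 90,000x / 60,000x figures are composite engineering estimates rather
# than a single 2**n calculation.

def growth_factor(years, doubling_period=2):
    """Multiplier after `years` of doubling every `doubling_period` years."""
    return 2 ** (years / doubling_period)

# Raw doubling every two years from 1971 to 2015 (~44 years, 22 doublings):
print(f"{growth_factor(44):,.0f}x")  # ~4.2 million-fold

# Applying Intel's published multipliers to assumed 1971 Beetle baselines:
top_speed_mph = 80 * 3_500        # ~280,000 mph ("three hundred thousand miles per hour")
fuel_economy_mpg = 25 * 90_000    # ~2.25 million mpg ("two million miles per gallon")
price_dollars = 2_000 / 60_000    # ~$0.03 (the "four cents" in the text)
print(top_speed_mph, fuel_economy_mpg, round(price_dollars, 2))
```

  The point is not the exact outputs but the shape of the curve: twenty-two doublings is already a factor of about four million, which is why intuition built on linear change fails so badly here.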

  What makes today’s pace of technological change so extraordinary is this: it’s not only the computational speed of microchips that’s been in steady nonlinear acceleration; it’s all the other components of the computer, too. Every computing device today has five basic components: (1) the integrated circuits that do the computing; (2) the memory units that store and retrieve information; (3) the networking systems that enable communications within and across computers; (4) the software applications that enable different computers to perform myriad tasks individually and collectively; and (5) the sensors—cameras and other miniature devices that can detect movement, language, light, heat, moisture, and sound and transform any of them into digitized data that can be mined for insights. Amazingly, Moore’s law has many cousins. This chapter will show how the steady acceleration in the power of all five of these components, and their eventual melding into something we now call “the cloud,” has taken us somewhere new—to that dot drawn by Astro Teller, the place where the pace of technological and scientific change outstrips the speed with which human beings and societies can usually adapt.

  Gordon Moore

  Let’s begin our story with microchips, also known as integrated circuits, also known as microprocessors. These are the devices that run all of a computer’s programs and memory. The dictionary will tell you that a microprocessor is like a mini computational engine built on a single silicon chip, hence its shorthand name, the “microchip,” or just the “chip.” A microprocessor is built out of transistors, which are tiny switches that can turn a flow of electricity on or off. The computational power of a microprocessor is a function of how fast its transistors can turn on and off and how many of them you can fit onto a single silicon chip. Before the invention of the transistor, early computer designers relied on bulb-like vacuum tubes, the kind you used to see in the back of an old television, to switch electricity on or off to create computation. This made those early computers very slow and hard to build.

  And then suddenly everything changed in the summer of 1958. Jack Kilby, an engineer at Texas Instruments, “found a solution to this problem,” reports NobelPrize.org.

  Kilby’s idea was to make all the components and the chip out of the same block (monolith) of semiconductor material … In September 1958, he had his first integrated circuit ready …

  By making all the parts out of the same block of material and adding the metal needed to connect them as a layer on top of it, there was no more need for individual discrete components. No more wires and components had to be assembled manually. The circuits could be made smaller and the manufacturing process could be automated.

  A half year later, another engineer, Robert Noyce, came up with his own idea for the integrated circuit—an idea that elegantly solved some of the problems of Kilby’s circuit and made it possible to more seamlessly interconnect all the components on a single chip of silicon. And so the digital revolution was born.

  To develop these chips, Noyce cofounded Fairchild Semiconductor in 1957 (and later Intel) along with several other engineers, including Gordon E. Moore, who held a doctorate in physical chemistry from the California Institute of Technology and would become director of the research and development laboratories at Fairchild. The company’s great innovation was developing a process to chemically print tiny transistors onto a chip of silicon crystal, making them much easier to scale and more suitable for mass production. As Fred Kaplan notes in his book 1959: The Year Everything Changed, the microchip might not have taken off if it hadn’t been for big government programs, notably the race to the moon and the Minuteman ICBM. Both needed sophisticated guidance systems that had to fit inside very small nose cones. The demands of the Defense Department started to create economies of scale for these microchips, and the first person to appreciate that was Gordon Moore.

  “Moore was perhaps the first to realize that Fairchild’s chemical printing approach to making the microchip meant that they would not only be smaller, more reliable, and use less power than conventional electronic circuits, but also that microchips would be cheaper to produce,” noted David Brock in the 2015 special issue of Core, the magazine of the Computer History Museum. “In the early 1960s, the entire global semiconductor industry adopted Fairchild’s approach to making silicon microchips, and a market emerged for them in military fields, particularly aerospace computing.”

  I interviewed Moore in May 2015 at the Exploratorium in San Francisco for the fiftieth anniversary of Moore’s law. Although he was eighty-six years old at the time, all of his microprocessors were definitely still functioning with tremendous efficiency! In late 1964, Moore explained to me, Electronics magazine asked him to submit an article for their thirty-fifth-anniversary edition predicting what was going to happen in the semiconductor component industry in the next ten years. So he took out his notes and surveyed what had happened up to that time: Fairchild had gone from making a single transistor on a chip to a chip with about eight elements—transistors and resistors—while the new chips just about to be released had about twice that number of elements, sixteen, and in their lab they were experimenting with thirty elements and imagining how they would get to sixty! When he plotted it all on a log scale, it became clear they were doubling every year, so for the article he took a wild guess and predicted this doubling would continue for at least a decade.
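  A property worth making explicit here: on a logarithmic vertical axis, anything that doubles at a fixed interval plots as a straight line, which is why the trend jumped out of Moore’s notes. A minimal sketch of that idea, using an invented year-to-count series rather than Moore’s actual data:

```python
# Minimal illustration of why plotting on a log scale reveals doubling:
# exponential growth is a straight line there. The data points below are
# invented for the sketch; they are not Moore's actual 1964 notes.
import matplotlib.pyplot as plt

years = list(range(1959, 1966))
elements = [2 ** (y - 1959) for y in years]  # 1, 2, 4, ..., 64: one doubling per year

plt.semilogy(years, elements, marker="o")
plt.xlabel("Year")
plt.ylabel("Elements per chip (log scale)")
plt.title("Annual doubling appears linear on a semilog plot")
plt.show()
```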

  As he put it in that now famous Electronics article, which appeared on April 19, 1965, entitled “Cramming More Components onto Integrated Circuits”: “The complexity for minimum component costs has increased at a rate of roughly a factor of two per year … There is no reason to believe it will not remain nearly constant for at least ten years.” The Caltech engineering professor Carver Mead, a friend of Moore’s, later dubbed this “Moore’s law.”

  Moore explained to me: “I had been looking at integrated circuits—[they] were really new at that time, only a few years old—and they were very expensive. There was a lot of argument as to why they would never be cheap, and I was beginning to see, from my position as head of a laboratory, that the technology was going to go in the direction where we would get more and more stuff on a chip and it would make electronics less expensive … I had no idea it was going to turn out to be a relatively precise prediction, but I knew the general trend was in that direction and I had to give some kind of a reason why it was important to lower the cost of electronics.” The original prediction looked at ten years, which involved going from about sixty elements on an integrated circuit to sixty thousand—a thousand-fold extrapolation over ten years. But it came true. Moore realized that pace was not likely to be sustained, though, so in 1975 he updated his prediction and said the doubling would happen roughly every two years and the price would stay almost the same.
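  The arithmetic behind that thousand-fold extrapolation is simply ten doublings, since 2^10 = 1,024. A quick sketch:

```python
# Moore's 1965 extrapolation as plain arithmetic: doubling every year
# for a decade multiplies the element count by 2**10 = 1,024, taking
# roughly 60 elements to roughly 60,000.
elements = 60
for year in range(1965, 1976):
    print(year, elements)
    elements *= 2
# Last line printed: "1975 61440", the "sixty thousand" in the text.
```

  His 1975 revision to a roughly two-year cadence halves the exponent: over any given decade, the multiplier drops from 2^10 = 1,024 to about 2^5 = 32, a slower but still relentlessly compounding curve.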