On the UCLA campus, in Boelter Hall, the engineering building, there is a room that has been converted into a small museum. It was once a research lab led by Professor Leonard Kleinrock, where in 1969 the first “e-mail” was sent across an electronic communications system (then known as ARPANET, now known as the internet) between UCLA in Los Angeles, CA and the Stanford Research Institute (SRI) in Menlo Park, CA. As a practical application of his team’s seminal research in computer science and electronic communications, it was the beginning of what became a messaging system among colleagues in the academic world, a modest, quick and easy way to share research among peers working at a distance.
A quarter century later, the internet became available to everyone. It was commercialized and popularized as the “world wide web.” Websites were created, dotcom became a thing, we got the cell phone and apps and all that. Start-ups became big tech, and the world changed. “Making the world a better place” was often the stated motive, and profit was the incentive. Since 1995, in the fields of computer science and digital technology, both the motive and the incentive have driven innovation.
But all that innovation—the appearance, dissemination and now ubiquity of so-called “high tech” (or “smart technology,” driven forward, we are led to believe, by smart people)—was preceded by a century of behind-the-scenes, blind-alley wandering, fits-and-starts enduring, plodding and grinding, incremental innovation, driven mostly by the desire to expand the boundaries of scientific and technological understanding and knowledge. At places like UCLA, a public university, and Stanford, a private university with plenty of postwar federal funding, the foundations were laid for all that came later.
Earlier, in the 1960s, over 400,000 people, employed either directly by the federal government or by companies under contract to it, worked individually and in teams to rocket a man to the moon and back. It was a heavy lift. It was expensive, paid for by the American taxpayer, and not at all a sure bet. At the government’s highest levels it was perhaps motivated primarily by politics and ideology (to best the communists), but the dividends since then have more than made up for the initial investments. With the technology now firmly established, the private sector (with typical fanfare bordering, as always, on hubris) has taken up the task of commercializing and popularizing space travel, once again ostensibly “for the sake of mankind” and, just as surely, to make money.
This is an American story, and not a new one. Abraham Lincoln got Congress to pay private companies to build the transcontinental railroad in the 1860s and expand the railroad system, which in turn spawned the railroad industry, “civilized the west” and enriched high-performing, highly motivated individuals (of which Leland Stanford was one). Taxpayers paid for roads and highways in the 1950s, and General Motors, Ford and Chrysler thrived. (Sometimes it worked in reverse: in the early 20th century Andrew Carnegie paid for more than 1,600 libraries across the country, while local governments ran them. And in the early-to-mid century, health care in America started out as an entirely private affair, led by innovative companies like Kaiser Steel, whose empire was built on America’s investment in WWII, but now seems headed toward a public system, optional or otherwise.) We mostly honor innovation now, but never in this country has innovation occurred outside the systemic framework of public investment.
Around 1980 we began to believe that innovation was an entirely individual thing, that we didn’t need the supporting framework of public investment (“government is the problem, not the solution”). All we needed were highly motivated, high-performing, incentive-seeking individuals and a “free” market to push innovation forward. We could do it all on our own, and the market would make it so. This, we now know, is not supported by the evidence. Not even Bill Gates (who certainly made a profit off years of prior public investment) believes it to be entirely true, especially in the areas where he has focused his philanthropy, such as education and healthcare. (Although his billions are a mere drop in the bucket compared to what we as a public could invest and, in turn, achieve.)
As architects, how many individuals have we met through our work with government agencies—counties, cities, school districts—who, quietly and without much monetary incentive, do what they do because they want to make the world a better place? How much innovative thinking have we witnessed within local government and school systems where the only incentive is the desire to do a better job? How seemingly thankless is the job of a librarian or a teacher, who must know, but rarely gets to experience directly, the influence of their work? And yet, how many times have we heard from a tech billionaire or a prize-winning author that it was the local librarian or that special teacher who changed their life?
There are some systems—our social infrastructure—where the rules of the market do not apply, where innovation lives unceremoniously and without celebrity, free of market incentive, and where, nevertheless, the foundations are laid for markets to thrive and our society and our economy are made possible. Our systems of public education—our schools and libraries, our network of community, civic and cultural institutions—are perhaps foremost among them. If architecture is a way both to reflect and to enable our priorities as a society, then it is for this reason that we as architects have chosen to do what we do, and chosen with whom we do it. What is architecture as an art form if not, after all, mostly public?