“All That is Solid Melts into Air”: Historians of Technology in the Information Revolution
Published in Technology and Culture, Vol 41, Number 4 (October 2000): 641-68. Note: This version is slightly different from the published one.
Introduction
“I hate traveling and explorers,” declares Claude Levi-Strauss at the outset of Tristes Tropiques. “Yet here I am proposing to tell the story of my expeditions.” With similar self-surprise, much as I dislike the reminiscences of university administrators, I find myself proposing to recount some of my own voyages in this realm. Five years ago, when I was offered the job of Dean of Students and Undergraduate Education at MIT, I realized that I was in an unusual and potentially fortunate situation. As a cultural historian of technology, I saw that doing administrative work at MIT would not simply be a diversion from my scholarship, but might even contribute to it. [1]
There must be an easier way to do research, I have since decided, given the turmoil and even tragedy that have transformed MIT student life in the past few years. Nevertheless, my scholarly discoveries have been considerable; what has also been transformed in recent years is my understanding of engineering and technology, based on new life experiences as an administrator.
As a historian of technology, I grew up, in scholarly terms, with the distinctive concepts, vocabulary, and concerns of a discipline that emerged in the 1960s in close relationship with the profession of engineering. Early historians of technology were typically men (I use the word deliberately) who had studied or even practiced engineering (like Thomas P. Hughes), or historians (like Melvin Kranzberg) closely involved in the education of engineers. Indeed, the very concept of a separate society for technological history emerged at a meeting of the American Society for Engineering Education. [2] Founding historians of technology shared with engineers a cluster of seemingly self-evident assumptions. They assumed that “technology” (the grand, key concept) re-created the material world in ways that were useful to human progress. They assumed that technological activities were expressed above all in engineering. Finally, they assumed that engineers were men who worked for industry using rational analysis to design useful things at the lowest cost.
My experiences as an MIT administrator have repeatedly challenged these assumptions. In my administrative life, “technology” has no all-embracing grandeur. Instead of referring to the totality of the human-built world, now it just means “software.” Instead of being the driver of historical progress, technology is now driven by the market and in turn drives human beings as an unstoppable and inevitable force. As for engineers, they no longer have a monopoly on technology: all sorts of other people are now in on the act. While the boundaries of the engineering profession have expanded dramatically, the profession has lost its self-conscious moral stance. Engineers are still busy trying to design things that are faster, cheaper, and smaller. This mission, however, is no longer necessarily associated with materiality, utility, rationality, industry, or historical progress.
This essay is an effort to understand the contrast between my scholarly and administrative lives. To be sure, viewing the history of technology through the lens of contemporary MIT raises evident epistemological issues. First, the academic view of engineering includes only a small part of the whole: much of what is conventionally thought of as engineering goes on in business (both corporate and entrepreneurial) and in government (civil and military engineering) rather than in the academy. Second, present changes in engineering practice and in concepts of technology have no obvious or necessary relevance to our understanding of the past.
The first limitation is one of which we at MIT are highly self-conscious. In discussions about promotion and tenure decisions, we often asked ourselves whether the type of engineering done by the candidate is appropriate for a university, or whether it more properly belongs to industry. In not a few cases, we concluded that the “real action” was taking place outside of academe. Awareness of this division between academic and non-academic engineering is acute right now, because of a widespread sense that universities have become so focused on engineering science that they have become too distant from engineering practice. Engineering schools all want faculty members who do more than just build things, who display theoretical insight into design problems. However, these schools also understand the need to connect academic engineering with the actual practice of building things. The resulting compromises lead to grumbling from both sides of the divide. Engineers from the science-based tradition worry that practice-oriented colleagues lack scholarly depth. Hands-on engineers complain that their supposedly scholarly colleagues use fancy equations to dress up unexciting content, and lack any feel for inventive design.
Every engineering school is trying to close this gap. Some are supplementing “regular” faculty of the engineering science persuasion with adjunct faculty from industry, while others are trying to subsume the dichotomy of engineering science vs. engineering practice in a higher order synthesis such as “systems engineering.” At MIT, some departments have stayed with an emphasis on engineering science, while others are redefining their mission in practice-oriented terms. For example, MIT’s Department of Aeronautics and Astronautics has adopted what it calls a “CDIO” approach (“Conceive, Design, Implement, Operate”), which focuses on teaching students to build real things that perform useful functions. The department has rallied enthusiastically around the CDIO approach, which, it implies, is “real” engineering.
But what is “real” engineering anymore? While my engineering colleagues discussed how MIT should reconcile its academic approach to engineering with engineering practice, or how to reconcile engineering science with the CDIO approach, from my administrator’s point of view I saw a much more significant tension between all these varieties of engineering, on the one hand, and reengineering on the other. The term “reengineering” was popularized during the 1990s by management consultants James Champy (MIT ’65) and Michael Hammer, who published Reengineering the Corporation in 1993. It was one of many such management approaches of the decade that promised greater productivity through greater efficiency. In the case of reengineering this was to be accomplished through the creation of cross-functional teams organized around fundamental business processes.
At MIT, reengineering was defined as “the fundamental rethinking and radical redesign of support processes to bring about dramatic improvements in performance.” Reengineering began at MIT in March 1994, when an eight-member Core Team analyzed key administrative processes and eventually recommended that eight of them be redesigned: facilities, mail, supplies, management reporting, information technology, appointments, student services, and human resources. In some cases, such as Graphic Arts, work was outsourced and the unit closed. In others, such as Facilities, the labor force was reduced through reorganizing work into “zones.” But by far the most time, money, and effort went into installing a new financial system, SAP R/3, which replaced MIT’s general ledger, accounts payable and receivable, and procurement systems. SAP is complex and difficult to learn, but once mastered it serves as a powerful management tool by providing consistent data and integrated administrative processes. Out of total one-time costs of $65.2 million spent on reengineering at MIT, $41.8 million was spent on upgrading MIT’s financial system. [3]
When I became dean, it was with the understanding that I would help carry out student services reengineering. I was made a member of the Reengineering Steering Committee, which exercised general oversight over all the major reengineering teams in its weekly or biweekly meetings. While the MIT administration was heavily invested in reengineering, the reaction of the rank-and-file staff was much more mixed. Some were highly enthusiastic. Others resented the intrusion of consultants and teams that they considered incompetent, while fearing that underneath it all, reengineering was just about eliminating jobs.
The view of the MIT faculty was even more negative. Although they were not threatened with job loss, and although most were not directly involved with reengineering teams or work reorganization, they regarded the whole effort with considerable skepticism and often with outright disdain. They certainly did not accept the administration’s belief that reengineering had an MIT-like flavor in its name and its emphasis on the practical restructuring of work through technology. Instead, they considered it at best a distraction from, and at worst an assault upon, “real” engineering. Unfortunately, they too often projected their dislike of reengineering onto the MIT staff, some of whom were deeply invested in the success of reengineering, and all of whom had to take it seriously, whatever their personal feelings about its wisdom.
For me, as a historian, reengineering presented a rich and complex case study in technological change. Above all, it made me rethink conventional understandings of “technology” and “engineering.” Over the historical long haul, the activities we retrospectively think of as having to do with “technology” cover a wide range of endeavors (sailing, hunting, weaving, plowing, fighting, cooking, traveling, mining…) and have engaged nearly all human beings (male and female, free and slave, educated and uneducated, sea-going and land-dwelling, urban and rural…). In the last two centuries, however, “technology” has become strongly identified with engineering. The Massachusetts Institute of Technology trains engineers: the coupling seems self-evident.
Reengineering challenges this identification. The particular example of reengineering, as it took place at MIT, is indicative of a much larger social process. While reengineering officially ended at MIT in June 1999, the same goals of improving “customer service,” cutting costs, and increasing administrative control continue to drive team projects and continue to generate meetings of oversight committees. As a recent headline in the MIT Faculty Newsletter announced, “Reengineering is over but change is not.” In this capitalist version of endless revolution, technological change is carried out by consultants and staffers whose titles are always in flux (a linguistic habit of reengineering is to rename people as “team leaders,” “process owners,” “facilitators,” and the like) but who are certainly not engineers by any conventional definition. For conventionally defined engineers at MIT—and for many historians of technology, who identify with engineers and cultivate an engineering-centric view of the world—reengineering is not “technology” at all but a “business” or “management” application of technology.
Processes like reengineering point to the breakdown of this and other conventional distinctions. For the people who are actually involved in reengineering it defines technological change in our times: this is where technological history is being made. For the people who pay for reengineering, it is equally about technology, or more precisely about a “technology-driven” (in their words) redefinition of work, which may or may not save money in the long run but which is necessary to move from one technological base to another. All these people who are involved in reengineering affirm over and over, often with deep conviction, that this transition involves just as much “cultural change” as “technological change.” In this overall process, then, how is it possible to make tidy distinctions among “technology,” “management,” and “culture”? If all these people are convinced that this is where technological history is being made, should not historians of technology listen to their voice of experience?
Historians and engineers alike are living in a world churning with technological change, much of it not generated within the traditional realm of engineering. Technology has burst the modern shell of engineering. Technological change is now something that happens to engineers as much as it is produced by them. At MIT, this situation could be described as the irony of history, but it feels more like history’s poignancy. Since its doors opened in 1864, MIT has been a driver of what, in more self-confident times, was routinely called technological progress. MIT engineers and scientists have created and developed material inventions and systems intended to make human life more comfortable, secure, expansive, and productive. In particular, in the twentieth century MIT has been a world leader in the development of digital tools that use calculations and communications to control other processes: cybernetics, radar, computers, operations research, telecommunications, and artificial intelligence are all elements of this consistent and dominant theme in MIT research. This is what the Institute continues to think of as “engineering” and “technology.” With justifiable pride, it sends these inventions, and the people who will develop them and create similar tools, into the great wide world.
There is also a reverse movement, however. The waves of technological creativity and enterprise go forth from 77 Massachusetts Avenue, but they eventually return—and, in returning, the waves of change threaten to engulf the social environment that produced them in the first place. In particular, technologies of communication and control reflexively return to the Institute in the form of complex systems. Although these systems were often developed and marketed through the work of MIT alumni and alumnae, they appear barely recognizable, and barely manageable, to those who played a significant role in their origins. As a faculty member commented in a meeting about educational technology, “We need to catch up with the impact we’ve had on the world.” The producer becomes the consumer. The empire strikes back.
In Marxist terms, this process exemplifies the alienation inherent in a capitalist economy: rationality, once reified (another key Marxist term), imprisons and eventually kills the creative vitality that generated it. Marxist language is generally not used or appreciated at MIT, but I have heard many comments that express the same underlying idea: that the modern world is diminishing the very qualities that have made MIT so special. Accordingly, MIT finds itself less than comfortable in a world which it has had such a large role in shaping. At a 1995 symposium honoring Elting Morison, a founder of STS studies at MIT, visiting and former MIT professor Thomas P. Hughes quoted the observation of American historian Perry Miller that we Americans find ourselves “unequal to a universe of our own manufacture.” [4] At MIT, the universe of engineering has expanded so dramatically that it now feels bewildering and troubling to some of its own creators.
This observation then brings us to the second logical difficulty in commenting upon the history of technology from the vantage point of contemporary MIT. If the universe of technology is expanding and changing so rapidly, what relevance does the present have to the past? This is what “engineering” and “technology” look like at MIT in the 1990s: what does this have to do with the history of engineering and technology? Every historian is warned against the fallacy of “presentism,” the inappropriate projection of present concerns and assumptions onto earlier times. Every historian also knows that presentism is inevitable. The only platform we have from which to view the past is the present: there is no perspective that lies outside of time. The usual solution is to be acutely conscious of presentism. Present concerns need not lead the historian into illusion. They may instead lead her to keener awareness of elements of the historical record that have been neglected.
In this sense, the whole enterprise of the history of technology has been one grand exercise in presentism. It has raised to a level of consciousness and dignity the creation and maintenance of the built world, and the production of useful systems and objects, from the perspective of a century in which these activities have become dominant as never before in history. The history of technology therefore arises from what Joan Scott has called an “emancipatory impulse,” the desire to liberate underappreciated aspects of the past, to bring what had been subordinate to a new level of awareness and appreciation—the same impulse, for example, that has led to the development of women’s history. [5] In the case of technological history, the impulse arose from the conviction that the mechanics, artisans, workers, and engineers who created the material stuff of civilization had been too long ignored. They were to be emancipated from the shades of time so they could be honored as much as men on horseback, daring explorers, or political leaders.
What is different in the case of the history of technology is the instability of the central category compared to, say, the category “women.” The very word “technology” is a relatively new addition to Western conceptual vocabulary. (The naming of MIT represents one of the first public uses of a term that was introduced in an 1828 lecture by the Boston botanist and physician Jacob Bigelow, who later became an MIT trustee.) But in the early days of the history of technology, the mutability of the key term was not fully appreciated. More often it was assumed that “technology” was necessarily and reliably associated with rationality, materiality, functionality, and engineering. [6]
In the past decade, however, the meaning of “technology” has changed dramatically. As we shall soon see, it has been drastically reduced to “software.” Historians of technology may complain about this conceptual reductionism, but we are not likely to alter the overwhelming popular usage. I am not convinced we should even try. We could take the view that our scholarly task is to bring stable categories to this mess and help people sort through their confusion. I suggest instead that we listen to the confusion and consider what it tells us about an inherently unstable concept. To argue that the narrower definition of technology is mistaken is itself mistaken because it assumes that ideas are independent of material change.
The concept of technology has mutated dramatically because the dominant type of machinery in our time has changed dramatically. What dominates now is the manipulation not of inarticulate matter but of symbols and language. In what is now commonly called the information revolution, many devices and systems are invisible, fantastic, and non-utilitarian. This technological-history-in-the-making challenges all the conventional associations of technological history with rationality, materiality, functionality, and engineering. In this situation, historians have an unusual, maybe unrepeatable, opportunity to understand in a much deeper and fuller way the nature of technology. When the built world is changing so rapidly and so dramatically, when the variety and flexibility inherent in the very concept of technology are so apparent, paying attention to the present does not shut down our thinking, but opens it up.
We are bound to do the history of technology differently in the midst of a technological revolution. Instead of projecting the present back into the past, we are more likely, if anything, to overstate historical discontinuities. We challenge, rather than assume, linkages between past and present when the present appears unprecedented. When our central category is evolving, we will be especially alert to our conceptual vocabulary and governing assumptions. In all these ways, radical transformations in the present can heighten historical awareness. By attending to the history of the present, we may develop new insight into the history that begins where memory leaves off. Times like these force the deeper questions to the surface.
The Evidence of Language: Technological Consciousness at MIT
In contrasting my experiences as a historian and as an administrator, I am comparing two rather different sets of evidence: books and articles I have read as a historian, and documents and discussions I have encountered as an administrator. In the latter role, the oral history has been considerably more important than the written documents: it is far fresher, more spontaneous, and more honest. Much of modern technological activity takes place in the social context of meetings. Practically everyone at MIT contrasts meetings with “actually getting things done,” just as they contrast “real engineering” with reengineering. Nevertheless, meetings appear to be a primary, irreplaceable locus of modern technological action. For all the complaints about meetings as not being real work, no one seems to have found a substitute for them—yet another sign (as if we needed it) that the work of managing technologies has become more demanding than the work of inventing them.
In my case, most of my meetings were one-on-one or small group meetings, informal discussions rather than formal presentations, and most of them directly related to managing a large office. There were three other types of meetings, however, that were less frequent but particularly helpful to me in understanding contemporary technological consciousness at MIT. The first were Academic Council meetings that dealt with promotion and tenure cases, especially those of engineers, which would almost always lead to rich discussions of the engineering profession today. The second were meetings of reengineering and similar teams, dealing especially with computer systems and related business processes. Finally, there were numerous meetings related to engineering education and particularly to the role of new technologies in education. Of these, the most prominent was the Task Force on Student Life and Learning, appointed by President Vest in 1996, which issued its report two years later.
All these discussions are confidential to some degree, and some (especially those of Academic Council) are highly so. Therefore my account will usually be a composite one. Quotations will be unattributed and are intended to be personally unattributable. Even within these constraints, however, the oral history of administration provides rich evidence of the changing technological consciousness at MIT. As an intellectual historian, I was repeatedly surprised and impressed by the sensitivity of my MIT colleagues, especially those on the staff, to changing habits of language. Many of them were, and are, keenly aware that technological change involves the introduction not only of new software, but also of a host of linguistic and related cultural practices that they might find offensive, or amusing, or useful, depending on the occasion: facilitation, team-building exercises involving props like rubber balls, “stickies” to arrange on poster paper, and new terminology so that, for example, students are redefined as “customers” and faculty as “clients.” Such habits are intended to build a new “culture” of trust to replace the “old” one, and so they are introduced with great deliberation and consciousness. As one staff member marveled, “Five years ago I never heard of facilitation, and now it’s all we do!”
There was a good deal of reflection upon these changing habits of language and behavior, because they so much contradicted faculty habits. Staff were constantly trying to translate, as one of them put it, “corporate yuk” into acceptable faculty language. In one email exchange on which I was copied, various staff members tried out the idea of consciously varying their terminology so that faculty would be addressed in “scientific terms,” and administrators in “plain speak” or “jargon,” depending upon the personality of the administrator. Such sensitivity was necessary because “Our language has become a barrier to success,” one of the emailers concluded.
The habits of speech most telling to me, however, were less conscious. They involve the recurring use of a cluster of deceptively simple words: “technology,” “culture,” “change,” “space,” “time,” “community,” and “engineering.” In focusing my discussion here on this cluster, I am of course drawing upon an honorable tradition in intellectual history, as exemplified by Raymond Williams in his pathbreaking book Culture and Society. There Williams highlighted the reflexive or circular relationship in modern times between key terms of social analysis (such as culture, society, industry, and art) and the social changes they were supposed to analyze. Williams showed that such words acquire meaning, to some degree, from the very social changes they are intended to analyze. In addition, key concepts tended to cluster, so that the meaning of each shifted in response to alterations in the others. [7]
This is the pattern I have seen, or rather heard, repeatedly in my MIT meetings. Discussions focus on a cluster of key terms; the terms are highly interdependent, so that they are defined in relation to each other; and their meanings are unstable and reflexive. This is especially true for the central term technology, which has particular weight at MIT, of course, given its pioneering role in publicizing the still-novel term when the institution assumed its name in 1861. In the subsequent half century, as Leo Marx has described, the term “technology” emerged as an expansive alternative to older phrases such as the industrial or practical arts, which had a reasonably clear referent in “everyday work, physicality, and practicality.” “Technology,” on the other hand, was a word disconnected from materiality and specificity and so became “a causative factor…in every conceivable development of modernity.” [8]
Posted on the Great Dome, “technology” sweepingly embraces the grand mission of the Institute. But in everyday meetings held under the Dome, the compass of the term has been drastically reduced: almost invariably, it is now a synonym for software. For example, an administrative report on student information policy begins with the statement, “The primary motivation to revise the policy was the recognition that technology has transformed the means by which we collect, manage, and provide access to student information.” Such usage was found over and over again in documents and conversation. More than once, in sketching business processes or organizational models, someone would write “the technology,” with a circle around it, to represent an independent and significant element of an administrative system. No one ever had to ask what was meant by the word, or complained that its meaning was too narrow. No adjectives are needed to explain that “technology” now refers primarily to transaction, not to production.
In describing the emergence of technology as an abstract, singular noun with no clear referent, Leo Marx worried that it had become a “hazardous concept,” the subject of active verbs and therefore “an autonomous agent, capable of determining the course of events….” [9] In MIT usage, “technology” is no longer a free-floating term, abstract and referentless. Technology now has a referent: software. Nevertheless, it is still an autonomous, determining historical agent. Most historians condemn technological determinism as a dangerous fallacy, but my MIT staff colleagues are convinced that it is simply true. They fit the definition of “hard determinists” as presented in a recent collection of scholarly essays on the subject: “In the hard determinists’ vision of the future, we will have technologized our ways to the point where, for better or worse, our technologies permit few alternatives to their inherent dictates.” [10] In the language of the MIT staff, this means: the new releases will come; we will have to adopt them; the only question is how much we adapt the Institute’s culture to fit the software.
This process of adaptation is neither automatic nor straightforward. The “dictates” of technology trigger not unthinking obedience, but a complicated set of transactions of accommodation. Indeed, one of the salient features of information technology is the extent to which it raises the process of adaptation to technology to a conscious level. In case after case, introducing or upgrading software forces people and organizations to be more explicit about their existing culture: it surfaces habitual, unexamined practices that must now be examined in order to be related to technical protocols.
For example, consider the apparently simple goal of drawing up class lists, in order to allow electronic distribution of class materials to students. The project became less straightforward when it uncovered the not uncommon practice among professors of allowing a few students or colleagues to sit in on a class informally. Were they officially students, and should class materials be distributed to them? A memo on this project described the dilemma well: “As we move from a mostly analog world in the classroom to an increasingly digital world, some aspects of the physical world need to become more explicit. By becoming more explicit, we often shine a light on inconsistencies between practices, behaviors, and policies.” The memo concluded with this question: “When a Faculty member’s ‘view’ of the classroom comes into conflict with explicit MIT policy, or de facto MIT rules, what is the MIT philosophy for resolving such disputes?”
In the digital world, technological consciousness and cultural consciousness are simultaneously heightened. The writer of the memo is not overstating the case in asking for a “philosophy” to elucidate the connection between them. MIT staff and faculty repeatedly noted such conflicts between software and habits, and almost always described them as conflicts between “technology” and “culture.” Indeed, MIT staff and faculty discuss the relationship between “culture” and “technology” at least as often as any reader of this journal—and often with much more emotion. This is because, for my MIT colleagues, “culture” is understood not as a “context” for technology, much less as something that might “socially construct” the technology. Instead, it is a world of value that is in conflict with the world of technological change, and the relationship between the two is one of constant trade-offs.
Like any world, the cultural one requires effort to construct and maintain. Both “technology” and “culture” are products of labor: changing them requires effort. Cultural work is necessary to support technical work, and it requires serious investment in budget, space, and staff. There is nothing “soft” about culture. Just as much as “technology,” it is a realm of labor and investment. Because cultural change requires such effort, its costs and benefits must constantly be measured against the costs and benefits of technological change. Technological change may save money, but it may require modifying deeply held cultural values. Cultural change is undertaken when the institution can no longer afford to maintain homegrown technologies that support the existing culture. To use terminology familiar to historians of technology, “culture” and “technology” have become interchangeable parts. Both are forms of manipulation of the social world; both are needed to accomplish that manipulation; and both entail effort and cost.
Trade-offs are painful, and this is why MIT debates about “technology” and “culture” are often so emotionally charged. In particular, it explains the high degree of emotion, for and against, surrounding MIT’s reengineering effort. As already noted, by far the largest part of that effort involved the implementation of new software systems. Like many other large institutions with deep traditions, MIT has a host of legacy software systems custom-designed to express prevailing habits in research and education. To use words that would never be used in staff discussions, the systems reified existing cultural practices. When there were not many products on the open market, and when the Institute had such internal resources for generating its own systems, this approach made sense. For example, in the early 1990s, MIT developed a student information system based on a commercial product (Banner) but heavily modified to handle unusual features of the MIT curriculum such as first-year “hidden grades” and different ways of counting General Institute Requirements (as whole subjects) as opposed to departmental requirements (in credit units).
All these homegrown systems are now under scrutiny because they are expensive to design and maintain compared to off-the-shelf ones, which have become readily available. The Institute is broadly engaged in cost-benefit analyses that compare the cost of doing cultural work (that is, changing how the Institute operates) against the cost of doing technological work (that is, maintaining legacy systems). We are close upon a tipping point at which the financial benefits of adopting off-the-shelf software seem greater than the cultural benefits of maintaining the homegrown systems. In the words of one senior officer, “If the vendor’s worldview matches yours, you’re OK. If not, one or the other has to change.”
Already the decision has been made to move from legacy accounting systems to an integrated system from an outside vendor, SAP. This move entailed substantial cultural changes in (for example) the accessibility of previously confidential information related to laboratory purchasing, changes that challenged prevailing lab culture and led to considerable unhappiness and complaints. In this case, though, and in many others, the cultural change was made because the alternative, changing the software system, was too expensive. Obviously there are some cultural habits that are important enough that any new system will have to be adapted to them. However, more and more it is assumed at MIT that in the trade-offs between culture and technology, technology will win. The thousands of small customizations will have to go. The major systems will have to be integrated.
Resistance to change is sharply criticized because belief in the inevitability of change is so strong. Since MIT prides itself on a culture oriented to experimentation and innovation, it was startling to hear one reengineering proponent declare, “We have to break the old culture here.” There may be a much closer fit between information technology and MIT culture than there is, say, among the Brazilian natives studied by Levi-Strauss; however, as a comment like this indicates, even at MIT, “technology” can be seen as an outside force that invades and disrupts the indigenous MIT world. This is because of the reflexive nature of modern technology. It is not simply an assemblage of devices sent forth from centers of invention to “impact” less advanced cultures. Technology reflects back, in complicated and unpredictable ways, on the inventive centers too.
The backwash is called “change,” and the process of coping with it “change management.” In reengineering circles, an important role is that of “change management process owner.” To some degree, change was accepted as inevitable and even welcome. In reengineering particularly, “change management” was regarded as a positive thing, while those who “resisted change” were routinely criticized for their inability to adapt. In fact, in some discussions this resistance was described as a “risk factor,” as an impediment to success. In others, the resisters were described in mechanistic terms as a source of friction, an impediment to forward progress, because they were “not on board.” To help overcome this friction, a consultant in one instance advised using the concept of the “change journey.” When I inquired about the destination of this journey, I was told that it might be thought of as arriving at a point of comfort with change.
Over and over again, people talked about the inevitability of change. Sometimes the word referred to something trivial, but far more often it referred to a sweeping, all-encompassing process of technological and social change. Any number of times I have been in a conversation where information technology is being discussed and someone pauses for a moment, and then adds, with emphasis and no little awe, “It really is at least as important as the steam engine,” or “The Information Revolution really is up there with the Industrial Revolution.” But for all the scale and scope of its impact, technological change was never referred to as progress. I heard the word “progress” used in relation to scientific discovery, but I cannot remember hearing it used, unironically, to refer to technology. Technology was the realm not of progress but of change, understood as a series of adjustments, an endless catch-up game, with no clear direction and no end in sight, in the double sense of no limit and no purpose.
The evangelism of change found among some staff members (for example, reengineering leadership) is not widely shared. More often, people complain about the degree and pace of change. They often assume, with considerable cynicism, that it simply means being asked to work harder. Its inevitability was more often discussed in a tone of resignation than of jubilation. In a discussion of the effects of information technology on education, involving both MIT and Microsoft personnel, a Microsoft participant commented, “Education is the building and technology is the earthquake.” The discussion continued along the lines of how we could strengthen the “building” to make it more flexible and better prepared to withstand the tremors. In another discussion about organizational change, one colleague remarked, “We are suffering from innovation.”
The burden of change seemed heavy because it diminished the two fundamental dimensions of human life: time and space. Most people at MIT would be as bewildered by the language of phenomenology as by that of Marxism. Nevertheless, I have heard so much discussion of “time” and “space” as central issues for MIT that it seems everyone there is speaking phenomenology without knowing it. Over and over again, people said that time and space were the scarcest resources of all, and, in some vague but decisive way, they attributed this scarcity to technology. In other words, there was a sense, confused but powerful, that technological change alters not just the material contents of human experience, but its very dimensions.
A sense of overcrowding is exactly the opposite of what one would expect in an age when we are all supposed to be vaulting into cyberspace. Nevertheless, this sense is acute at MIT. Students feel they have inadequate social space, and in student residences the number of “crowds” (three students assigned to rooms intended for two, for example) regularly runs well over 150 (out of 2,691 undergraduates in Institute housing). Employees are giving up offices for cubicles. Faculty seem to have an insatiable need for research space. The libraries are running out of shelves and seating. Obviously “technology” is not the only reason for these spatial scarcities, but in discussions of the problem most people assume that it is a significant cause.
Again, it comes down to trade-offs. The cost of building or renovating physical space now has to compete with the cost of installing network drops and other infrastructure related to virtual space. Furthermore, the cost of physical space increases dramatically because it now has to encompass the costs of preparing for the digital age. Classroom renovation budgets, for example, are much higher now that they include the cost of wiring the floors, outfitting them with video projectors as standard equipment, and providing lighting and switches that will accommodate a host of audiovisual uses. Library study space now has to be complemented by computer cluster space. Wiring the campus does not reduce the need for renovations: it increases that need (because we need to integrate the new virtual space with existing physical space) and so places yet more demand on limited resources for renovations.
Even dearer than space is time. In their public statements, the co-chairs of the Task Force (two of the busiest faculty members on campus) over and over again emphasized the key finding of the Task Force report: “The scarcest resource at MIT is not space, not money, but faculty time and student time.” They could have added staff time to the list. MIT’s collective life is deformed, sometimes severely, by scarcity of this invisible but vital resource. Again and again faculty and staff complain that shortage of time is the greatest obstacle to institutional and personal accomplishment. Everyone budgets his or her time with incredible precision and competes fiercely for each other’s time, and for the students’ time, each faculty member being convinced that he or she needs to outbid the others to be taken seriously in the weary student’s time budget.
This is not a new phenomenon. Engineers have always been hardworking and time-conscious. Calls for more time for reflection and synthesis go back to the origins of the Institute. What is new is the sense that “technology” is one of the primary sources of time scarcity. For example, people are convinced that a primary culprit is email: the endless flood of messages to which they feel compelled to respond, even to the trivial ones. No one feels they can control this glut (“Email is killing us” is a typical remark). Faculty members who used to have a secretary are now expected to type their own reports and proposals, thanks to word processing. It is the principle of “more work for mother” in the electronic workplace: technological changes promoted as timesaving end up being time-consuming.
More generally, the problem is the proliferation of communication technologies that grossly overproduce information. This overproduction depletes time just as surely as overgrazing depletes pasturage. In one Task Force discussion, when a faculty member was marveling at a laser printer that could copy a large book in a short time, a senior administrator at the meeting remarked, “What we need are not laser printers, but laser readers.”
There is another, more subtle reason why “technology” is blamed for time scarcity. Whether technological or cultural, “change” is work, and it requires large amounts of time. Either form of change generates meetings, which take more and more time to schedule as faculty participants increasingly organize their professional lives around distant rather than proximate colleagues. Anyone who appreciates irony should reflect upon the role of meetings in dealing with time scarcity. At the end of one day-long reengineering retreat, when one of the leaders passed around “stickies” so participants could write down what the current “transition support team” could do to help them most, the wall was soon plastered with “stickies” bearing the single word: TIME.
In sum, information technologies have contributed to scarcity of human-scale space and human-scale time. The resulting cascade of negative effects damages the institution, because any institution needs time and space to flourish. Most people at MIT do not analyze the situation in institutional terms, however. They are not used to thinking of institutions as social inventions that are sometimes supported by and sometimes damaged by technical inventions. What most MIT people say they want, what they would regard as genuine progress, is more “community.” “Community” has become something of a mantra at MIT these days. In its 1998 report, the Task Force highlighted “community” as one of its primary recommendations. It concluded that MIT education has long had two pillars, research and academics, and that MIT should now add a third, “community,” as an equally essential element. It further stated that finding space and especially time for “community” is a major, unsolved challenge for the Institute.
If “culture” signifies existing modes of social interactions, “community” signifies a longed-for state, one in which time and space are available to support regular, meaningful human interactions. “Community” and “technology” are in tension to the extent that technological change is responsible for time-scarcity and space-scarcity. “Technology” is necessary and inevitable, but it degrades institutional culture, time, space, and community. It is powerful, it may even be unstoppable—but it is not necessarily linked with a civilizing mission, with human progress.
The Dissolution of Engineering
As already noted, while “technology” as an overarching concept refers to activities engaged in by all human beings, since the eighteenth century the term has been closely identified with the engineering profession. Engineers produce progress. And so it is striking that in discussions about technological change at MIT, the word “engineering” seems as disconnected from “technology” as the word “progress.” Of course, there are countless conversations at MIT about engineering—as a School, as departments, as faculty appointments, as undergraduate and graduate education. These discussions, however, are largely separate from those of technological change and its implications. People might turn to the Sloan School of Management for advice in handling technological change, but not to the School of Engineering. If technology and engineering were joined at birth, they have apparently since become separated.
This does not mean that engineering is disappearing. Engineering is still by far the largest and most dominant of MIT’s five schools (the others being Architecture and Planning; Humanities, Arts, and Social Sciences; Science; and the aforementioned Sloan School of Management). Two-thirds of MIT’s undergraduates major in an engineering discipline, and fully a third overall major in the Department of Electrical Engineering and Computer Science. The boundaries of engineering keep growing. If the range of reference of the word “technology” has been dramatically reduced, that of “engineering” has expanded. Until the recent past, the major varieties of engineering were electrical, mechanical, chemical, and civil. The first three are still the largest departments at MIT, but the former department of electrical engineering now embraces computer science and software engineering. Similarly, the department of civil engineering has expanded to include environmental engineering.
But this is only the start of the proliferation of new varieties of engineering. The School of Engineering has recently established two new divisions, bioengineering and systems engineering. These cut across traditional departmental boundaries so that faculty may have a “two-key” appointment (one in the division, one in a department). Outside of the School of Engineering entirely, Sloan faculty are involved in “financial engineering.” In all these ways, the traditional disciplines of engineering are breaking down into interdisciplinary mixes that merge with each other and with various sciences (physical, biological, and information) in novel ways. The controlling unit in engineering is no longer a discipline, but a project, a unit of innovation. In the words of MIT’s former provost John Deutch, who argues that the distinction between science and engineering has disappeared, “The world is now dominated by application, not by technology generation…. We are faced not with disciplines but with situations.”
The dissolution of engineering described by Deutch has been going on for decades. First, the postwar march of engineering science blurred the boundary between engineering and science. This boundary has since become even more blurred with the proliferation of various mixes of science, applied science, and engineering. More recently and dramatically, the boundary between “technology” and “society” has also begun to dissolve, as engineers are encouraged to take into account the entire technological system. In MIT parlance, they are doing “Engineering with a big E.” Old-fashioned “little e” engineering focuses on technical design; big E Engineering encompasses the whole technical system, including human and natural as well as technical elements.
The expansion of engineering poses a challenge for faculty who are trying to earn tenure. Engineering faculty have long been working in interdisciplinary areas, of course, but they used to keep one foot in a base discipline, as a marker of academic respectability and of scholarly depth. Now faculty are increasingly coming up for tenure who work exclusively on interdisciplinary “situations,” to borrow Deutch’s expression, and who do not necessarily have a solid scholarly base in an engineering discipline. They are regularly promoted, but a design- or systems-based case is typically less straightforward than a typical engineering science case based on a conventional publications trail.
The broadening of the definition of engineering also poses real challenges to engineering students. They are less and less sure what they will actually be doing if they become engineers. In the first place, they have to make the transition from a lay understanding of engineering as the design of nifty gadgets to the academic version of engineering that stresses underlying abstract principles. One recent MIT graduate has perceptively described his bewildering encounter with this “language barrier”:
Fresh out of high school. . .my concept of electrical engineering was primarily framed by such terminology as compact disc players, radio, stereos, television, computers, Game Boy and Game Gear, the space shuttle, video cassette recorders, video cameras, solar-powered vehicles, magnetically levitated vehicles, stealth bombers, walkie-talkies, Nintendo, lap-top computers, night-vision goggles, lasers, and of course, the Turbo Panther. Yet the electrical engineer and computer science department course descriptions in the annual bulletin spoke of structures and interpretations of computer programming, signals and systems, computational structures, digital signal processing, communications, control and signal processing, devices and circuits, electrodynamics, stochastic processes, signal detection and estimation, multivariable control systems, acoustics, and probabilistic systems analysis. [11]
As he examined other possible engineering majors, the student found a similar “language barrier.” For example, in turning to the aeronautics and astronautics section of the catalog, “I. . .found no explicit mention of F-14 swing-wing Tomcat fighters, the space shuttle, 747s, or the Wright brothers. Instead, I read the words—computational fluid dynamics, high-speed aerodynamics, transition and turbulence, and structural mechanics.” The gap here, of course, involves far more than language: it is the motivational gap between the “fun things” built by engineers, which so often draw students to a place like MIT in the first place, and the abstractions of science and of large-scale systems that dominate their engineering studies.
Like many engineering departments today, those at MIT are trying to improve the balance between abstract and concrete, science and design, rigor and fun. As a result, of all the varieties of crowding at MIT today, none is more acute than the overpacking of engineering curricula, as more and more heterogeneous elements are crammed in. The commitment to engineering science runs so deep that more practice-oriented experiences are often added on rather than substituted. The amount of technical knowledge keeps growing. So does the desire to include ever more professional but non-technical training, notably in communications and teamwork. On top of all this, there is a huge demand on the part of the undergraduates themselves for education in management and business-related economics. According to a recent survey, half the MIT undergraduate student body would minor in management if such an option were available. The Sloan School of Management does offer a major, which is now the fifth most popular major at the Institute (behind the three largest engineering majors—electrical engineering and computer science, mechanical, and chemical—and biology).
The bewildered student quoted above (who eventually majored in Electrical Engineering and Computer Science) noted that all the engineering departments, for all their difficulty in communicating their intellectual content to freshmen, did clearly advertise starting salaries, typical geographic locations, and research and employment opportunities for undergraduates. Engineering has always had a laudable, even noble, role as a route of upward mobility for smart but socially disadvantaged young people. They could begin as engineers, move up to management, and do extraordinarily well in attaining economic and social success.
What is different now is the way entrepreneurship has dissolved the line between engineering and management. Of all the ways that engineering has expanded to encompass other elements, none is more attention-grabbing than the “hot mix” of engineering and management. [12] Few MIT graduates intend to work for a large company as an engineer and then advance up the management ladder. Three-quarters of them expect to leave their first job within three years. They seek a hybrid education so they are technical managers from the start, or they use their technical skills to go into business from the start, or they develop their own mixes of managing, marketing, and designing. Thirty percent of recent (1991-99) MIT graduates aspire to be entrepreneurs, while only 21 percent want to be technical leaders (defined as senior scientist, lead designer, or research fellow). [13]
The mixture of engineering and business education at MIT is even more pronounced at the graduate level. In the past decade, there has been a proliferation of joint programs involving departments of engineering and the Sloan School of Management. For example, the Leaders for Manufacturing Program (LFM) offers participants two master’s degrees, one in engineering and one in business. Similarly, the Department of Chemical Engineering recently inaugurated a new doctoral program in Chemical Engineering Practice, which includes a good deal of economics and management, in distinction from the traditional research doctorate oriented around engineering science. In these mixes, the dominant partner is business, not engineering. For example, although the goal of the LFM program was to give management background to manufacturing engineers, most of the LFM graduates migrate into management, where compensation is generally higher. From undergraduates on up, the engineering degree is increasingly seen as a useful means to the end of a successful business career, not as an end in itself. In the “hot mix,” the university forms a link between venture capital and manufacturing, and students are eager to become part of this vital connection.
In an older MIT, in an older industrial world, the distinction between engineers and management was an important part of the professional ethos. Growing up in a family where almost all the males were engineers, I well remember the after-dinner conversations about engineers’ place in society. In them, my engineer relatives often expressed their pride—which sometimes bordered on self-righteousness—in the moral superiority of their profession. Unlike managers and politicians, they emphasized, engineers understood objectivity, hard work, and the common good. Engineering was as much about creating a moral world as a world of knowledge or of things—or, more precisely, the three elements of engineering creation were interdependent. My father and grandfather and uncles sometimes reminded me of housewives, complaining that the world failed to appreciate their contributions, but proud of their intrinsic value. In particular, I remember the theme that while engineers worked with the profit system, they were not driven by the profit motive: their motive was rather the good of the community and the progress of civilization. Little e engineering may have been narrow and stubborn, but its ethos was distinctive and proud.
In the past half century, the ethos and identity of engineering as a profession have dissipated. On one side, engineering has intertwined with science; on the other, with society and business. The process of enlargement could be interpreted as a triumph. Instead of being a narrow discipline, engineering has become an inflationary universe, expanding to include first applied science and now entire sociotechnological systems. But the same process could also be interpreted as a regrettable diffusion of the professional core. One branch of engineering has become indistinguishable from applied science, the other indistinguishable from technical management. The center has not held. If technology indeed drives history, engineers are not at the controls. Karl Marx famously predicted in The Communist Manifesto that under modern capitalism all that is solid melts into air. In some advanced economies, this may include even the professional source of so much capitalist wealth: engineering.
Implications for the History of Technology
At the threshold of the new millennium, we historians of technology should be on top of the world. We are in the midst of a major technological revolution, the “information revolution.” We live in a “technological society,” we are constantly reminded. Technology stocks are going through the roof. Social conversation invariably veers towards the Net.
Technology seems to be ruling the world everywhere but in historiography. In the midst of a “technological society,” historians of technology should feel like successful venture capitalists. Instead, we sound like engineers approaching retirement age, insecure and self-scrutinizing and unhappy. Through an unselfaware process of transference, we have assumed a mindset similar to that of the supposedly neglected engineers. We feel we are the housewives, the worker bees of history, lacking the appreciation and influence we deserve. In one particularly public expression of this sensibility, his 1996 Presidential Address to the Society for the History of Technology, Alex Roland asked, “Does the history of technology matter?” and “Why have we had so little impact outside our own small community?” [14] The collective self-examination set off by this address led to further discussion in the Society Newsletter. There Robert C. Post, who succeeded Roland as the Society’s president, took up the theme, expressing his regret that “In the main we haven’t staked much of a claim to having insights of substantial value for the world at large.” [15]
In recent years, the insights most frequently claimed by historians of technology involve the “social shaping” or “social construction” of technological artifacts by the larger society. We emphasize a contextual framework–whatever elements of society do the “construction”–and examine how this influences the design of something material. We routinely do this by carrying out a case study that shows the “messy complexity” of processes of social and technological interaction. Along with engineers themselves, historians of technology have shifted from little e to Big E Engineering–that is, from studying particular artifacts or inventions to studying technological systems, defined to include social and natural elements. The case study demonstrating complex social construction has become our standard model.
But this model has strikingly little in common with the views about technology expressed at MIT by my lay colleagues. For them, technology is much less socially constructed than it is determinative and autonomous. They see its social setting in terms of trade-offs, not context. Life may be messy and complex, but technology is not: it drives human beings to work harder as it devours time and space. They assume “technology” is largely disconnected both from engineering and from progress. They are fascinated by the major technological revolution, the “information revolution,” that is transforming their collective and individual lives, but they often find this transformation deeply troubling. The lay reaction to the historical standard model of technology would generally be–so what?
My lay colleagues are intelligent and thoughtful people who are centrally involved in technological change. Their disengagement should be a wake-up call. Unless historians confront and clarify the world of contemporary technological experience–a world that is compelling, complicated, and deeply felt–we will continue to feel marginal, not because we are providing incorrect answers, but because we are failing to ask the important questions. As historians, “We find ourselves unequal to a world of our own manufacture.” Like engineers bewildered by reengineering, we find ourselves swamped by multiple, cascading varieties of technology, many of them involving activities beyond our conventional definition of engineering. We keep the mindset of neglect and marginalization in a world where technology has moved to the center of social and economic action, and where much of that action is no longer carried out by engineers. We keep identifying with engineers at a time when technology has long since overflowed the banks of engineering.
To escape from our conceptual confinement, we need to question two key assumptions of our standard model: first, that technology is material, and second, that society decisively shapes this material.
Most historians of technology are not full-fledged materialists in the Marxist sense. Most of us leave plenty of room in history for non-material agency, but this is typically put into the category of “context,” understood as all the factors (social, cultural, and so forth) influencing the design of the technological core thing. In an exchange in a recent SHOT Newsletter, a European colleague challenged Alex Roland to be more expansive in the definition of “technology” that Roland used in his 1997 SHOT presidential inaugural address. Roland demurred, explaining that, “A technology without tools or machines is a technique. It may be interesting, even fascinating. And it may relate to, even illuminate, our enterprise. But it is not what we are about.” [16]
This response suggests one of the sharper anxieties of historians of technology: if we let go of material tools and machines, we will lose our professional identity, becoming indistinguishable from a larger and more amorphous horde of cultural, intellectual, social, or economic historians. “What we do,” what makes us special as historians, is our interest in lifting the hood, as it were, or opening the black box to see how the thing is designed and how it runs.
The heart of technological history (as it were) is a machine. The next layer out is (in Roland’s terminology) “technique,” a regulating scheme that allows power to act through a tool or machine to manipulate material: examples of technique are the sun-and-planet gear in Watt’s steam engine, or software. Finally, even further out and even less material, is the cultural context, where “technology” may be used in a strictly metaphorical sense (such as the “technology of justice”). In Roland’s words, “Our goal as historians. . .is to study the relationship between technology and culture; we must resist the illusion that technology is culture.” [17]
The most significant features of contemporary technology challenge this conceptual model. The computer is inherently cultural; the very purpose of this type of machine is symbolic manipulation. In the age of information technology, technology is culture, and the illusion is that matter is necessarily mute. From a materialist perspective, “information technology” is an oxymoron. When the purpose of a tool is to handle information, its primary manipulation is not of the material world but of human consciousness. The effects of information handling on the human body, all too often negative, are only side effects. What counts as “practical” or “good design” in the language-based (and therefore highly socialized) realm of information is defined very differently than in the realm of physical labor. Good design in information technology involves prods on the screen as much as chips, servers, or cables. The ultimate goal is to be (in the telling phrase) “user friendly.”
Designing the human interface requires conceptual and communication skills traditionally associated with the arts and humanities. Writing code is now a key element of engineering design, and there are good reasons why the operative verb is writing. Just as war has become too important to be left to the generals, IT design has become too important to be left to the engineers. Accordingly, many other people besides engineers (as they are conventionally defined) are now involved in technical invention and development. In the software business, “techies” have to share development resources with the “humies” (the expressions are those of Michael Dertouzos, head of MIT’s Laboratory for Computer Science). [18] For many software enterprises, the major challenge is to get “techies” and “humies” working together. For others, it is to educate individuals who combine the previously distinct roles of engineer, designer, and artist.
In a similar way, “systems engineering” is also an oxymoron from a materialist perspective. Like “information,” a system is not a thing but an expression of human consciousness, a symbolic rather than a material manipulation of the world. As the MIT student trying to choose a major discovered, the language of engineering is highly conceptual. The tools of systems engineering are mathematical symbols, which are used to create models that cannot be apprehended by the senses. Not surprisingly, systems analysts often come not from traditional engineering disciplines, but from mathematics or sciences where they learn more abstract approaches to the world.
Both engineers and historians of technology have gotten used to a common-sense, human-scale view of the world. Long ago, science broke with the human scale, to expand its range from the unimaginably small to the unimaginably vast, from the quark to the cosmos. Engineering is beginning to take the same leap beyond common-sense experience. The MIT student trying to choose a major finally gave up on the abstractions in the course catalog and turned to friends for advice. “Eventually, someone explained to me that I should choose a major based on whether or not I wanted to work with things that were bigger than me, about my size, or smaller than what I could see.” [19] The stuff of engineering now ranges from single electrons and living cell membranes to logistical systems that cannot be apprehended by the senses but that have enormous significance in daily life. If we historians of technology confine ourselves to black boxes, we risk shallowness and irrelevance. We need the conceptual equivalents of microscopes and telescopes to expand our inquiry beyond common-sense experience.
Science has gone one step further, too, in discovering that the matter of the universe cannot be divorced from abstract categories of space and time: the same is true for technology. Decades ago, in Technics and Civilization (1934), Lewis Mumford declared that the clock–not the steam engine–is the most fundamental invention of the Industrial Revolution, because everything else in the industrial workplace depended upon a high degree of time-consciousness. The universe of technology has never been a Newtonian one, in which devices and tools inhabit a constant framework of space and time. As we have seen at MIT, technological change alters not just the material stuff of human experience, but also the very frameworks of that experience. When technological design is redefined as writing code, it means that entirely new systems can be created, if not quite at “the speed of light” (Bill Gates’s words), then in a matter of months, thereby shifting the whole temporal framework of human life. And, once again, the role of machines in altering these dimensions challenges the notion that the realm of “culture” lies outside and beyond them.
The blurring of technology, technique, and culture, and of thing and context, challenges the prevailing methodology–“theory” seems too grandiose a term–of the Social Construction of Technology (SCOT). One of the founders of SCOT, Wiebe Bijker, explains that in the Netherlands, at least, SCOT was launched by “worried scientists” who wanted to ensure that the fruits of their labors would be used for the general social benefit. [20] Social constructivism, Bijker has written, assumes people can control the technologies they create. “Determinism,” on the other hand,
inhibits the development of democratic controls of technology because it suggests that all interventions are futile….if we do not foster constructivist views of sociotechnical development, stressing the possibilities and the constraints of change and choice in technology, a large part of the public is bound to turn their backs on the possibility of participatory decisionmaking, with the result that technology will really slip out of control. [21]
In other words, technological determinism is unacceptable as an historical theory because it would tend to discourage democratic action.
In subsequent work, Bijker and his colleagues have moved on from this “bracketing” of technological determinism in order to return to it in a more conceptually sophisticated way. [22] Nevertheless, not a few historians of technology instinctively share Bijker’s discomfort with technological determinism for its supposed anti-democratic implications. If we quote the well-known inscription from an art deco statue at the 1933 Chicago Century of Progress exposition–“Science Finds–Industry Applies–Man Conforms”–it is to confirm that we have outgrown such benighted attitudes. [23] But what if our belief in social construction, and our disbelief in technological determinism, are contradicted by experience? If we insist upon the social construction of technology in an unsophisticated way, we take off the table one of the deepest questions of our field–that of technological determinism–at the very start. We need to examine this assumption rather than keep doing case studies intended to confirm it.
Recall the comments of my MIT colleagues, based on their repeated experiences: “We have no choice but to go with the upgrades.” “If our culture isn’t built into the software, we will have to change our culture.” “Email is a huge problem and there is nothing we can do about it.” As a historian, I wince to hear such undemocratic capitulation to technological fate. As a manager, however, I have to agree. I too see no choice but to adopt the next upgrade, to adjust our culture to the software, and to keep plowing through email. Most MIT staff, and not a few faculty, are convinced that “technology” determines their working lives, and they certainly do not think it is under their control. Are they suffering from a sort of theoretical delusion, a mass attack of false consciousness?
The historian side of my brain says to the administrative side: of course technologies are socially constructed! Is there any other way? They are created by human beings, not by nature. But technologies are no less determinative of human behavior for having a human origin. Indeed, in many cases they are designed precisely to maximize control of human behavior. Social construction is not necessarily a refutation of technological determinism, but may be its very source. The social aim may be to assert control over other people, to leave no room for democratic intervention. If the latest Windows system is designed so Lotus will crash on it, this is both social construction and technological determinism.
Of course (my historian self continues to muse), this is really market determinism rather than technological determinism. As a historian, I would remind the MIT staff that this is what they are really complaining about: the tyranny of the market–the endless releases and upgrades, the planned obsolescence of the whole IT business–that forces them to make the trade-offs between “technology” and “culture” that they do not want to make. To focus on the socially constructed design of any one product is to miss the point, which is the larger determinism of market-driven technology that profits from change for the sake of change.
For centuries now, engineering has evolved in a capitalist marketplace. Well into this century, however, the state as well as the market gave overall direction to engineering activities, sponsoring both civil and military engineering projects. Even in the private sector, much engineering practice was one step removed from the market, because it focused on industrial production–that is, on the design of machines to transform raw materials into consumer products. What is different today is that technology itself is now often, and profitably, the product. It has risen to a position of dominance not as the servant of historical progress, but as the driver of capitalist acquisition.
One of the more intriguing questions today is whether technological development can continue to flourish indefinitely in a market environment. While the market exerts tremendous power in creating technology-based products, it also places tremendous limitations on technological development–especially in information technology, where the capacity of any one company to develop systems of such complexity is limited. The movement for open source code suggests that the whole concept of a “product” may not be appropriate for some types of technology. There may be inherent limits to the development of software in the form of a commodity. Open source advocates believe that truly robust software can be built only when it is treated as a form of scientific knowledge, so that incremental contributions accumulate indefinitely through a system of peer review. The only sure way to keep Lotus from crashing on Windows is to write the code differently, which implies a significant intrusion into the “free market.” [24]
For the time being, however, technology is primarily constructed not by a non-existent actor called “society,” not by democratic politics, not by anarchistic free-lance programmers, but by capitalists seeking a share of the market. Technological change is fueled by money pouring into product development from interlocking corporations, some of them with virtually unlimited resources and with global reach. The key ambiguity of our “technological age” is the contradiction between our faith in solving social and environmental problems through technology–which implies social control of technology–and our faith in the free and uncontrolled market as the engine of technological development.
Historians of technology cannot resolve the contradiction. We must recognize it, however, rather than insisting upon social construction as an article of faith. Then we will begin to address the deep questions in our field: what are the historical forces that drive technological development as an ensemble, as opposed to particular machines? That reshape language, time, space, and consciousness through technological means? That explain major shifts in technological practice and consciousness, such as the present “information revolution”? And that account not only for the design and production of new technologies, but also for their reflexive action and reactions on the designers and producers?
Does the History of Technology Have a Future?
Since the founding days of the history of technology, the need for a separate subdiscipline of the historical profession focusing on technology has been contested. At the outset, the historians of science were the ones who wondered why the history of technology could not find room under the intellectual tent they had pitched. More recently, we historians of technology ourselves ask whether it is wise and necessary to gather under our own tent. In a 1991 review of a collection of essays honoring founding father Melvin Kranzberg, Leo Marx impertinently asked, “If we grant the claims of the contextualists, how can we justify segregating the history of technology, for the purposes either of teaching or research, from the history of the societies and cultures that shape it?” [25]
The Marx-Kranzberg exchange that followed still reverberates. Not surprisingly, most historians of technology would prefer to retain a separate identity. The process of obtaining it in the first place was long and hard. Along with many historian colleagues, I believe that the development of the history of technology as a subdiscipline has been a net benefit. Historians of technology have earned a self-conscious professional identity, supported by a well-defined skill set and array of canonical texts, and linked to the larger academic area of activity generally known as science and technology studies. The question is where to go from here. As scholars, we should not really care very much about the merits of a separate subdiscipline. We should ask only whether intelligent and perceptive scholars are advancing the understanding of the role of technology in human life. If the continued existence of a separate discipline encourages intellectual vitality, then it is a good thing; if not, not.
It is hard to maintain this lofty and disinterested perspective for long, however. As graduate students are particularly aware, anxieties about disciplinary identity and status are not, so to speak, purely academic. They also involve very practical desires to maintain a none-too-secure niche in the historical ecology. If nothing else, the professional label “historian of technology” differentiates some scholars from the larger horde when it comes to job-hunting and -holding, at a time when there are precious few slots for historians of any species.
This seems a short-sighted strategy, however, at a time when the engineering profession itself is losing its clear and separate identity. Departments and schools and even separate institutions of engineering will continue to exist, to be sure. But the broadening of engineering education is not likely to recede. As the technical hard core becomes relatively less dominant, the rationale for separate schools of engineering begins to dissolve–and along with them, the jobs in engineering education traditionally relegated to historians of technology.
Beyond this pragmatic consideration, two forms of empirical evidence suggest that too much specialization will hurt rather than help the history of technology. First, “outsiders” have provided at least as much intellectual vitality as “insiders.” Self-identified historians of technology regularly and ruefully note that the most influential writing about technological history is often done by scholars who drop into the field for a while and then depart again. The influence of “outsiders” is most evident in the canonical reading lists of graduate students in the field. David Harvey, Michel Foucault, Anthony Giddens, and Donna Haraway are not historians of technology, in any conventional sense, but they and others like them have had more influence on our thinking about the history of technology than most card-carrying SHOT members.
Instead of complaining about this situation, we should leverage it. The pattern is widespread and persistent enough to convince me that it is not temporary and will not be overcome by further professionalization and maturation of the subdiscipline. It seems a more enduring characteristic. What is the peculiar fit between the subject matter of technological history and its appeal to non-specialists?
The answer, I believe, lies in the special role of passion in creating the best history of technology. What is most absent from the professional history of technology today is not a skill set, not a canon of agreed-upon texts, but the emotional engagement that welds them together into work of insight and depth–often anger, sometimes love, typically a mixture of the two, but never a cool neutrality. I think of Lewis Mumford’s passionate diatribes against the “Pentagon of Power,” and his exultation in the liberating possibilities of Neotechnics; of Tom Hughes’s own enthusiasm about technological enthusiasm, and his only apparently contradictory condemnation of what he alone has the courage to call “technological sin”; or Marshall Berman’s white-hot description of modern technology as at once the very best and the very worst thing that has happened to human civilization. In the history of technology, passion serves an epistemological purpose. Strong emotion acts as a probe. It takes historians deeply into a subject and motivates them to keep digging more deeply.
Scholars are by definition committed to rational standards and procedures. If they forgo these, they are no longer scholars. But rationality does not suffice to tell you what to write about, or how to come at the problem, or why it matters. And rationality does not explain why other people get so passionate about technology. I have already remarked upon the contrast between the strong emotions about technology I confronted as an administrator and the detached voice of many historian colleagues, who discuss technological development as complicated, ambiguous, and full of unintended consequences. Unless we historians understand the depth of the feelings aroused by technological change, we are missing important evidence.
A particularly wise MIT colleague once remarked, in describing staff resistance to reengineering, “Nothing is more real than feelings.” Both in the present and in the past, feelings are very much part of the human reality we need to understand. They have been out of style in the cool intellectual twilight of post-modernity, where the prevailing habit is to dissect with irony and the prevailing fear is of making naïve claims. Historians need to risk being uncool.
Putting together these two sets of evidence, I would argue that the history of technology is an unusual if not unique case. In no other branch of history is the central term so complex, amorphous, and contested. Technology is not self-defining. The scholar needs to set the conceptual boundaries, to make a working definition as part of his or her work. This cannot be done from within the subdiscipline. The initial confrontation with technology must come from without, from other experience, from other research, from other concerns. As Leo Marx is fond of quoting Heidegger, “The essence of technology is by no means anything technological.” In that case, the best work in the history of technology is bound to be relational. It will explore the essence of technology from the vantage point of literature, law, economics, science, the arts, business, or politics. Like quarks (except those in the primordial plasma soup), technology cannot exist in isolation, but only bound to other things. Or, to use Heideggerean terms again, scholars need to bring an intellectual structure with which to enframe technology.
These observations suggest some practical outcomes. For example, historians of technology could deliberately work to lower the fence between “inside” and “outside.” This means, most obviously, setting less store by the distinction between historians of technology and other historians. It also means setting less store by the distinction between academics and non-academics. To our credit, this openness to other professions has long been a trait of the field. From the start, engineers have played a major role in developing the history of technology, and in more recent years the subdiscipline has welcomed museum professionals, public historians, secondary school teachers, and other practitioners.
But I hope we go further and reduce not just these sociological boundaries but also the inner ones. More of us who call ourselves historians of technology could cultivate a life rhythm of alternating practice and reflection, of engagement and contemplation, as mutually enriching experiences. Such a model of engaged scholarship would entail some risks–both in terms of career choices, and also in terms of the intellectual courage needed to take positions on matters that are not technological in order better to understand technology. It would also require acute self-awareness about our emotional bearings. If we want to bring something to “technology,” we need to know where we are coming from. If our best work depends upon the angle of vision we bring to the field, we have to avoid an unselfcritical transference from personal experience to scholarly research.
Despite these potential problems, however, there are great potential strengths for our field if more of us deliberately alternate the accumulation of experience with the development of theory. The first generation of technological historians brought engineering experience to the field. The second generation (myself included) brought the tools of cultural criticism to help frame contextual questions. I like to imagine that the next generation will bring an even richer variety of perspectives to the history of technology: from other academic disciplines, from the arts, from law, from politics, from the profession formerly known as engineering, and even from unanticipated detours into academic administration.
1. The following units report to the Dean of Students and Undergraduate Education: Admissions, Athletics, Campus Activities Complex (including the Office of Campus Dining), Career Services and Preprofessional Advising, Counseling and Support Services, Office of Minority Education, Residential Life and Student Life Programs, and Student Financial Services.
2. John M. Staudenmaier, S.J., Technology’s Storytellers: Reweaving the Human Fabric (Cambridge, Mass., and London: Society for the History of Technology and The MIT Press, 1985), p. 3.
3. Janet Snover, “Reengineering is Over but Change is Not,” MIT Faculty Newsletter, (November 12, 1999), pp.2, 6, 9.
4. Thomas Parke Hughes, Elting Morison symposium at STS Program, MIT (December 1, 1995), quoting Perry Miller (notebook I, p. 32).
5. Joan Scott, as described in private communication with Philip Scranton, Detroit, Michigan, October 8, 1999.
6. Leo Marx, “Technology: The Emergence of a Hazardous Concept,” Social Research, Vol. 64, No. 3 (Fall 1997), pp. 967, 994.
7. Marx, p. 967.
8. Marx, p. 978.
9. Marx, p. 968.
10. Leo Marx and Merritt Roe Smith, “Introduction,” Does Technology Drive History? The Dilemma of Technological Determinism, ed. Marx and Smith (Cambridge, MA and London: The MIT Press, 1994), p. xii.
11. Lawrence K. Chang, “Multivariate Control of the Fire Hose” (unpublished mss., July 3, 1999), p. 169.
12. In an MIT-Microsoft discussion about educational technology, Professor Woodie Flowers referred to the “hot mix” of engineering and management.
13. Data are from the McKinsey 1999 Engineering Alumni Career Decisions Survey (overall vs. MIT findings, January 2000). Among somewhat older MIT alumni/ae (1971-90), 25% want to be entrepreneurs and 22% technical leaders. Data for overall responses are respectively 24% and 29% for recent engineering graduates (1991-96), and 22% and 18% for older graduates (1971-90). N.B. Data are still confidential, and I will need McKinsey’s permission to publish them.
14. Alex Roland, “What Hath Kranzberg Wrought? Or, Does the History of Technology Matter?,” Technology and Culture, Vol. 38, No. 3 (July 1997), p. 712.
15. Robert C. Post, “Post Script,” SHOT Newsletter, No. 78, n.s. (January 1998), p. 2. See also SHOT Newsletter, No. 79, n.s. (April 1998), pp. 1-3.
16. Alex Roland, Presidential Address, Technology and Culture (July 1997); Roland, response to Henry Bjork, SHOT Newsletter, No. 82, n.s. (January 1999), p. 4.
17. Personal communication, 5 March 2000.
18. Michael L. Dertouzos, What Will Be: How the New World of Information Will Change Our Lives (New York: HarperCollins, 1997), pp. 310-16.
19. Chang, p. 170.
20. Personal communication, WTMC Conference, Rolduc, Netherlands, June 1999.
21. Wiebe E. Bijker, Of Bicycles, Bakelites, and Bulbs: Toward a Theory of Sociotechnical Change (Cambridge, MA and London: The MIT Press, 1995), p. 281. A similar view is expressed by Pamela W. Laird in her response to Robert Post’s call for comments on reasons for the marginality of the history of technology: “Accepting inevitability absolves citizens and consumers of responsibility. Inevitability is easy to live with in many ways, but the price is withdrawal and alienation….our stories of technologies’ contingent nature promise to raise questions in other people and stir citizens and consumers to rethink their alternatives in the face of apparently relentless and daunting multinational trends” [SHOT Newsletter, No. 79, n.s. (April 1998), p. 3].
22. The term “bracketed” is used by Bijker in private correspondence (18 February 2000). See especially the development of the concept of technological determinism in Wiebe E. Bijker and John Law (eds.), Shaping Technology/Building Society: Studies in Sociotechnical Change (Cambridge, MA: The MIT Press, 1992), and in Bijker, Of Bicycles, Bakelites, and Bulbs, pp. 279-88.
23. For example, quoted in Staudenmaier, Technology’s Storytellers, p. xvii.
24. See the stimulating discussion in Lawrence Lessig, Code and Other Laws of Cyberspace (New York: Basic Books, 1999).
25. Leo Marx, review of In Context: History and the History of Technology, ed. Stephen H. Cutcliffe and Robert C. Post, Technology and Culture, Vol. 32, No. 2, Part 1 (April 1991); Melvin Kranzberg and Leo Marx, “Communications,” Technology and Culture, Vol. 33, No. 2 (April 1992), pp. 406-7.