Eglash, R. “Computing Power.” pp. 55-64 in Matthew Fuller (ed) Software Studies: A Lexicon. MIT Press, 2008

When Terabyte Makes Right: Social Power and Computing Power

Computational power plays an accelerating role in many powerful social locations. Simulation models, for example, sneak into our medical decisions, speak loudly in the global warming debate, invisibly determine the rates we pay for insurance, locate the position of a new bridge in our city, plot the course of our nation’s wars, and testify in the courtroom both for and against the defense. Other applications in which computing power matters include molecular biology, communication surveillance, and nanotechnology. Social scientists concerned with the relations of power and society commonly examine who has money, who owns property, and who owns the means of production. But the ownership of computing power is more elusive, and far less probed. This paper will outline some of the ways in which we might begin to examine the relations between computing power and social authority.
 
1) The need for alternatives to the realist critique

One of the most common analyses of the relations between computing power and social power is what I call the “realist critique.” This analysis goes something like the following: the computer representation of X is used to substitute for the real X, but since it is an artificial version it has certain bad effects (it prevents us from seeing injustice, from being in touch with people or nature, etc.). There are indeed moments in which some form of realist critique is applicable. But the critique has been overused in ways that are quite problematic.

When we blindly start putting categories of the Real on the ethical side and categories of the Unreal on the unethical side, we imply a system of morality that mimics the Christian story of the fall from the Garden, or Rousseau’s dichotomy between the nobility of the natural and the evils of artifice. We imply that computer simulations are unethical simply because they are unnatural. Similar moral assumptions have been used in attacks on the civil rights of gays and lesbians (“unnatural sex” as a violation of God’s plan), in arguments for purging Germany of its Jews (because they were not “natural” to Germany), and in denying citizens the right to birth control. Notions of the Real or Authentic have been used in colonialism to differentiate the “real natives” who stayed on their reservations from the “inauthentic natives” who could thus be imprisoned for their disruptions (seen again in recent times during the American Indian Movement of the 1960s, when activists were dismissed as “urban Indians”). Thus, when we read critiques that condemn digital activities as “masturbation” (e.g., Hacker 1990), we need to think not about artificial worlds as pathologies, but rather about how innocent sexual activity has been used to pathologize and control individuals.

Even in cases where scholars of computing have been very aware of the suspect ethics of realism, it can creep in. Take, for example, computer graphics representations of the human body, such as the Visible Man project. Such anatomical simulations are immediately interrogated for all the right reasons – how the social construction of the technical happened, who benefits, how it influences the viewer’s experience, and so on. But inevitably there arises what Wahneema Lubiano calls “the ghost of the real” – we are haunted by some element of the pre-virtual past (almost literally in this case, by the donor of the body, a 39-year-old prisoner executed by lethal injection in Texas; cf. Waldby 2000). Despite the best intentions of these writers, in the end their simulation critique often implies an ethics of the Real. Even Sandy Stone, well known for her commitment to virtual communities and identities, ends her often-cited essay with the line “No refigured virtual body, no matter how beautiful, will slow the death of a cyberpunk with AIDS.” Again the real haunts us; critiques of simulation accuracy or realism tend to move us toward an organicist framework.

Even when a realism critique is warranted – in the case, for example, of a corporate-sponsored simulation that attempts to dupe the public into a false sense of environmental or health security – an exclusive concern with issues of accuracy can be problematic in that it focuses on symptom rather than cause. Ostensibly, one could correct the inaccuracy, and then we would have nothing to complain about. But most critics have a loftier goal in mind: they are really trying to show how social elites have managed to manipulate the power of computing to support their own interests. By focusing on the accuracy or realism of the simulation, we lose sight of the original goal: we focus on getting the American Petroleum Institute to use the right equations rather than asking how it managed to control the truth-making abilities of computing in the first place. How can we get at a more fundamental understanding of the relationship between social power and computing power, and how might we change those relations?
 
2) Three dimensions of computing power: speed, interactivity, and memory

Let us begin with the technical definitions of computing power. On the one hand, the mathematical theory of computation has precisely defined what we mean by saying that one system is computationally more powerful than another. The least powerful system is a finite state automaton, the most powerful is a Turing machine, and in between we find machines such as the push-down automaton. But such formal definitions of computing power, collectively termed the Chomsky hierarchy, are essentially absent from the world of commercial computing. There are two reasons for this disconnection. First, there is the quite sensible and responsible observation that real-world computing systems have multiple physical constraints that are poorly represented by such abstract assessment; features that matter a great deal in the real world, such as the amount of time it takes to complete a calculation, are absent from the traditional computational models of the Chomsky hierarchy (cf. Sloman 2002). But there is also the rather suspect way in which the social authority of computing power requires an unfettered ability to make its claims. Let us now look at three categories for this slippage: speed, interactivity, and memory.
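
To make the hierarchy’s ordering concrete, here is a minimal sketch (in Python; the toy languages, state names, and code are my own illustration, not part of the formal theory). A finite state automaton can recognize a pattern such as “any number of a’s followed by any number of b’s,” but recognizing “n a’s followed by exactly n b’s” requires the unbounded memory of a push-down automaton, whose stack is simulated below by a simple counter.

```python
# A sketch of the power gap between two rungs of the Chomsky hierarchy.
# The languages and state names here are illustrative, not from the chapter.

def fsa_accepts(s):
    """Finite state automaton for the regular language a*b*:
    finitely many states, no memory of how many a's were seen."""
    state = "A"                      # start state
    for ch in s:
        if state == "A" and ch == "a":
            state = "A"
        elif state in ("A", "B") and ch == "b":
            state = "B"
        else:
            return False
    return True

def pda_accepts(s):
    """Push-down automaton (stack simulated by a counter) for a^n b^n,
    a language provably beyond any finite state automaton."""
    stack = 0
    seen_b = False
    for ch in s:
        if ch == "a" and not seen_b:
            stack += 1               # push
        elif ch == "b" and stack > 0:
            seen_b = True
            stack -= 1               # pop
        else:
            return False
    return stack == 0

print(fsa_accepts("aaabb"), pda_accepts("aaabb"))   # True False
print(fsa_accepts("aabb"),  pda_accepts("aabb"))    # True True
```

The point is the ordering itself: each rung of the hierarchy can compute everything the rung below it can, and more.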

Speed: consider the simulations which produce special effects for Hollywood movies and television commercials. Computing power here is almost entirely a question of processing speed, due to the computational requirements of high-resolution graphic simulation. Movies like Terminator 2 and Jurassic Park were milestones in visual simulations of physical movement, so much so that they are treated like NASA projects whose “spin-offs” are for the general benefit of humanity. Special effects wizards have now become frequent speakers at mathematics conferences; for example, the creator of the wave in the movie Titanic was featured as a speaker for National Mathematics Awareness Week. Often the visual spectacle of their virtual realism is a much greater audience selling point than plots or acting; in fact, it is precisely this uncanny ability to (apparently) manipulate reality that becomes the proof of computing power. When the Coca-Cola Corporation spends 1.6 million dollars on 30 seconds of airtime during the Super Bowl, it is no surprise that supercomputing is at the center of its message. Like the Marxist observation that “money is congealed labor” (Haraway), special effects are congealed computing. The power to command reality to do your bidding is sexy, even if it is only a virtual reality. Marshall McLuhan’s theme that the medium is the message was always too deterministic for my taste, but I am willing to make an exception in the case of computational advertising, where the cliché that “sex sells” has been augmented by the sexiness of simulacra.

Interactivity: We can find a similar account of simulation’s sex appeal in the rise of multimedia computing, particularly for websites. Here the measure of computing power is most often presented in terms of “interactivity.” Yet formal assessments of interactivity, such as could be produced through the Chomsky hierarchy, are never brought to bear. To understand this, it is useful to first examine similar questions about the assessment of intricate behavior in simple biological organisms. Spiders are not taught how to spin a web; the behavior is genetically programmed. Even semi-learned behaviors such as bird songs are often characterized as the result of a “serial pattern generator.” Tightly sequenced behaviors such as spider webs and bird songs can be modeled as finite state automata (cf. Okanoya 2000), because they require little adaptive interaction with the environment. They may appear to be complicated, but they are in fact a “pre-programmed” sequence of actions. This stands in strong contrast to animal behaviors that require spontaneous interaction, as we see, for example, in the social cooperation of certain mammals (wolves, orcas, primates, etc.). Even lone animals can show this kind of deep interactivity: a raccoon learning to raid lidded trash cans is clearly not clocking through a sequence of prepared movements.
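
A toy version of such a serial pattern generator can be written in a few lines (a sketch only; the states, syllables, and transition probabilities below are invented for illustration, loosely in the spirit of the finite-state syntax Okanoya describes). The machine clocks through its internal states and emits whatever each state dictates, without ever consulting its environment.

```python
import random

# Toy "serial pattern generator": a finite state automaton that emits song
# syllables. States, syllables, and transition probabilities are invented
# for illustration; the point is that the sequence is driven entirely by
# the machine's internal state, never by anything in its environment.

SONG_FSA = {
    "intro": [("a", "motif", 1.0)],
    "motif": [("b", "motif", 0.6), ("c", "trill", 0.4)],
    "trill": [("d", "trill", 0.7), ("e", "end", 0.3)],
}

def sing(max_syllables=30):
    state, song = "intro", []
    while state != "end" and len(song) < max_syllables:
        r, cumulative = random.random(), 0.0
        for syllable, next_state, p in SONG_FSA[state]:
            cumulative += p
            if r <= cumulative:
                song.append(syllable)
                state = next_state
                break
    return "".join(song)

print(sing())   # e.g. "abbcddde" -- intricate-looking, but pre-programmed
```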

In the same way, our interactions with websites can vary from “canned” interactions with a limited number of possible responses – pressing on various buttons results in various image or sound changes – to truly interactive experiences in which the user explores constructions in a design space or engages in other experiences with near-infinite variety. Such deep interactivity does not depend on the sophistication of the media. The 1970s video game Pong, with its “primitive” low-resolution graphics, has far greater interactivity than a website in which a button press launches the most sophisticated 3D fly-through animation. As Fleischmann (2004) points out in his analysis of web media, rather than measure interactivity in terms of two-way mutual dependencies, commercial claims for interactivity depend on an “interrealism effect” that substitutes flashy video streaming or other one-way gimmicks for user control of the simulation. Such multimedia attempts to create the effect of interactive experience without relinquishing the producer’s control over the simulation. At least speed, for all its elitist ownership, has a quantitative measure that allows us to compare machines; for interactivity we have only the rhetoric of public relations. Even in cases in which we are not duped by this interrealism effect, and strive for deep interactivity, the informational limits of interactive computing power – the bandwidth of the two-way communication pipeline – are carefully doled out in accordance with social standing, with the most powerful using the high-speed fiber-optic conduits of Internet II, lesser citizens using cable connections on Internet I, and the poorest segments of society making do with copper telephone wires – truly a “trickle-down” economy of interactivity.
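
To put the contrast schematically (a toy sketch, with responses and commands invented for the purpose): in the “canned” case every user input indexes one of a handful of pre-authored responses, however flashy each may be; in the deeper case the user’s choices accumulate into a state the producer never scripted.

```python
# Two toy interaction loops, to make the contrast concrete.
# Responses, commands, and parameters are invented for illustration.

# "Canned" interactivity: each input indexes one of a few
# pre-authored responses, however elaborate each response might be.
CANNED = {"play": "stream the 3D fly-through", "stop": "show the logo"}

def canned_site(user_input):
    return CANNED.get(user_input, "ignore")

# Deeper interactivity: the user's choices accumulate into a state the
# producer never scripted; here, a point wandering a design space.
def explore(state, command):
    x, y = state
    moves = {"left": (-1, 0), "right": (1, 0), "up": (0, 1), "down": (0, -1)}
    dx, dy = moves.get(command, (0, 0))
    return (x + dx, y + dy)

state = (0, 0)
for cmd in ["right", "right", "up", "left"]:
    state = explore(state, cmd)

print(canned_site("play"))   # always the same pre-authored response
print(state)                 # (1, 1): a position no designer pre-scripted
```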

Memory: Third and finally, there is computing power in terms of access to memory. Increasingly the user’s local hard drive has become augmented or even superfluous as internet companies such as MySpace or YouTube shift to the “Web 2.0” theme of the internet as operating system. In terms of individual use this is a move towards democratization through lay access, but in terms of business ownership it is a move towards monopolization, since only large-scale corporations such as Google can afford the economies of scale that such memory demands place on hardware (cf. Gilder 2006). Memory also plays a constraining/enabling role in the professional utilization of large databases. Consider, for example, the agent-based simulations that allow massively parallel interactions, such as genetic algorithms based on Darwinian or Lamarckian evolution. The epicenter for this activity has been the Santa Fe Institute, where mathematicians like James Crutchfield have been admonishing researchers in the field of “Artificial Life” for their supposed willingness to put public acclaim over formal results (Helmreich 1999). Crutchfield is on the losing side of the battle: he is forgetting that science is a social construction, and thus those who are able to best exploit computing power – in this case the artificial life folks – will be able to exploit the social power that can define the contours of the field. To take another example, science historian Donna Haraway expressed great surprise when she learned that critical sections of the Human Genome Project were being run out of Los Alamos labs: what in the world was the modernist location for transuranic elements doing with the postmodern quest for trans-species organisms? The answer was computing power: whether modeling nuclear reactions or nucleic acids, the social authority of science requires the computational authority of machines. From the MySpace of lay users to the gene space of molecular biologists, memory matters.
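
For readers who have not encountered one, a genetic algorithm can be sketched in a handful of lines (the population size, mutation rate, and toy fitness function below are illustrative assumptions, nowhere near the scale of actual artificial life research). The relevant point here is memory: the entire population, and often its history, must be held at once, and that is where computing power asserts itself.

```python
import random

# A minimal genetic algorithm (toy scale). Population size, mutation rate,
# and the fitness function are illustrative assumptions; real artificial-life
# and agent-based runs hold vastly larger populations (and their histories)
# in memory, which is where computing power bites.

GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION = 20, 50, 40, 0.02

def fitness(genome):
    return sum(genome)               # toy objective: count the 1s

def mutate(genome):
    return [1 - g if random.random() < MUTATION else g for g in genome]

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]           # selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print("best fitness:", fitness(max(population, key=fitness)))
```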

In sum, these three factors – computing speed, computing interactivity, and computing memory – define both the technical dimensions of simulation’s computing power and its social counterparts. Indeed, we can think about them in terms of informational equivalents: computing memory is comparable to social memory, interactivity is comparable to social discourse, and computing speed is comparable to social rhetoric. Thus we see the rhetorical power of special effects, the discursive power of interactive websites, and the mnemonic power of large-scale lay constructions and professional simulations.

3) Elite versus lay public access to computing power

What can be done about this alliance between computing power and social authority? Looking at the changes in computing power over time, we can see both stable and unstable elements. For example, the public face of computing power is typically portrayed as a steady increase in computing speed per dollar, often encapsulated in Moore’s Law, which posits that the number of components (i.e., transistors) on a chip will double every 18 months. But privately, chip manufacturers agonize over strategies to maintain this pace (cf. de Geus 2000).
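
Stated as arithmetic, the claim is that component counts grow by a factor of 2^(t/18) over t months; a few lines make the pace concrete (the starting count of 10,000 transistors is an arbitrary assumption, chosen only for illustration).

```python
# Moore's Law as stated above: component count doubles every 18 months.
# The starting count (10,000 transistors) is an arbitrary assumption,
# chosen only to make the doubling pace concrete.

def components(months, start=10_000, doubling_period=18):
    return start * 2 ** (months / doubling_period)

for years in (3, 6, 9):
    print(years, "years:", round(components(12 * years)))
# 3 years: 40000   6 years: 160000   9 years: 640000
```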
 
Contrasting elite versus lay public access to computing power through time makes this precarious stability even more apparent. The earlier modeling efforts secured elite access through expertise: even if laypersons were offered access to a timesharing system, they preferred the shallow learning curve of a word processor; it was the user-unfriendly interface of text-based UNIX that separated the hackers from the hacks. This barrier was not compromised until the advent of the graphical user interface (GUI) in the late 1970s. During the mid to late 1980s this sparked an unusual moment of lay access, hence the creation of popular “toy” simulations such as SimCity during that time. But by the early 1990s a gradient of computing power began to re-solidify in which the “cutting edge” of elite computer simulations could leverage truth claims in ways unavailable to the “trailing shadow” of the lay public’s computing power. The introduction of techniques such as agent-based modeling and genetic algorithms has established trajectories which tend to re-stabilize this relation between the cutting edge and the trailing shadow. Yet new technological opportunities continue to arise. We have recently seen the birth of the Free and Open Source Software movement, Napster’s challenge to the recording industry, Wikipedia, and other quasi-popular appropriations. How might similar challenges to the social authority of the cutting edge take place in the domain of computing power?
 
In the early 1990s I had lunch one day with some graduate students in computational mathematics at the University of California at Santa Cruz. They were abuzz with excitement over the use of supercomputers for the design of a yacht that might win the America’s Cup. For them, this was an exciting “popular” application, one that was neither military nor academic big science. But I was struck by the ways in which computing power and financial power had managed to stick together, even in this ostensibly non-professional exception. What did the yacht owners have that made their problem more attractive than poverty, racism, sexism, and other pressing humanitarian problems? The answer, I believe, is that they had good problem definition. Yes, it is true that the people associated with the Yachting Club of America are generally more flush with cash than, say, those of the Southern Poverty Law Center, but half the challenge is getting problems defined in ways that high-end computing power can address. We need organizations like the National Science Foundation to support research specifically directed at the challenge of problem definition in the application of supercomputing power to non-elite humanitarian causes.

The other half of the challenge is computing access. A breakthrough in access to supercomputing power came as a result of the Berkeley Open Infrastructure for Network Computing (BOINC). The system was originally created for SETI@home, which analyzed data from the Arecibo radio telescope in hopes of finding evidence of radio transmissions from extraterrestrial intelligence. Ordinary lay users installed software that allowed the BOINC system to run in the background, or to run while their computer was not in use, providing spectral analysis for small chunks of the 35-gigabyte daily tapes from Arecibo and uploading the results back to BOINC, where they are integrated together. With over five million participants worldwide, the project is the world’s largest distributed computing system to date. In upgrading to the BOINC system the programmers also called for broadening applications to include humanitarian projects. However, none of the current projects seems directed at humanitarian causes for specifically non-elite groups, with the possible exception of Africa@home’s Malaria Control Project, which makes use of stochastic modeling of the clinical epidemiology and natural history of Plasmodium falciparum malaria.
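
The underlying pattern is simple enough to sketch (this is a schematic in Python, not the actual BOINC client or server software; the function names and the stand-in “analysis” are my own assumptions): the server splits a large dataset into small independent work units, volunteer machines process whichever units they receive during idle time, and the server integrates the returned results.

```python
# Schematic of the volunteer-computing pattern described above.
# This is NOT the BOINC API; the function names and the toy "analysis"
# (summing values as a stand-in for spectral analysis) are assumptions.

from concurrent.futures import ProcessPoolExecutor

def split_into_work_units(data, chunk_size):
    """Server side: cut a large dataset into small, independent chunks."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

def analyze(work_unit):
    """Volunteer side: process one chunk during idle cycles.
    A real project would run e.g. spectral analysis here."""
    return sum(work_unit)

def integrate(results):
    """Server side: fold the returned results back together."""
    return sum(results)

if __name__ == "__main__":
    data = list(range(1_000_000))                 # stand-in for telescope tapes
    units = split_into_work_units(data, 10_000)
    with ProcessPoolExecutor() as pool:           # stand-in for volunteer machines
        results = list(pool.map(analyze, units))
    print(len(units), "work units; integrated result:", integrate(results))
```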

What other kinds of problem definition might allow greater computing power to be applied to the challenges of survival and sustainability for those at the margins of social power? Consider, for example, flexible economic networks (FENs). First observed in the revitalization of regional European economies (Sabel and Piore 1990), FENs allow small-scale businesses to collaborate in the manufacture of products and services which they could not produce independently. These networks rapidly form and re-form in response to market variations, creating spin-off businesses in the process which then give rise to further FEN growth. More recently the Appalachian Center for Economic Networks (ACENet) has demonstrated that this approach can be successfully applied in a low-income area of the U.S. But ACENet found that it was hampered by a lack of information about both the resources of potential participants and the potential market niches to be exploited. Similar problems in establishing “virtual enterprise” cooperatives for large-scale industrial production – collaboration between multiple organizations and companies for the design and manufacture of large, complex mechanical systems such as airframes, automobiles, and ships – have been addressed through the application of cutting-edge computing (cf. Hardwick et al. 1996). Why not apply similar techniques to generate FENs for low-income areas in either first- or third-world contexts?
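
One hypothetical problem definition would treat FEN formation as a covering problem: which small businesses, taken together, supply all the capabilities a given market niche requires? The sketch below uses invented shop names and a standard greedy set-cover heuristic; it is not a description of how ACENet actually works, only an indication of the kind of computation that serious computing power could scale up across thousands of participants and niches.

```python
# Hypothetical FEN "matchmaking" as a set-cover problem: which small
# businesses, together, cover all the capabilities a market niche needs?
# Shop names and capabilities are invented; greedy set cover is a standard
# approximation, not a claim about how ACENet actually operates.

SHOPS = {
    "metal shop":   {"welding", "machining"},
    "wood shop":    {"carpentry", "finishing"},
    "design co-op": {"CAD", "finishing"},
    "electronics":  {"wiring", "testing"},
}

def form_network(required):
    """Greedily pick shops until the niche's requirements are covered."""
    uncovered, network = set(required), []
    while uncovered:
        shop, caps = max(SHOPS.items(), key=lambda kv: len(kv[1] & uncovered))
        if not caps & uncovered:
            raise ValueError(f"no shop offers: {uncovered}")
        network.append(shop)
        uncovered -= caps
    return network

print(form_network({"welding", "CAD", "carpentry", "testing"}))
# ['metal shop', 'wood shop', 'design co-op', 'electronics']
```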

In conclusion, the social authority of computing power follows the gradient of cutting edge and trailing shadow, stabilizing what might be gains for popular use by always deferring the promise of equality to the near future. But we can also see ruptures in both the technical and social dimensions of these relations – new opportunities to reconfigure both social and computational power.

 
References
de Geus, Aart J. “To the Rescue of Moore’s Law.” Keynote address, 20th Annual Custom Integrated Circuits Conference, Santa Clara, May 11-14, 2000.

Fleischmann, Kenneth R. “Exploring the Design-Use Interface: The Agency of Boundary Objects in Educational Technology.” Doctoral dissertation, Department of Science and Technology Studies, Rensselaer Polytechnic Institute, 2004.

Gilder, George. “The Information Factories.” Wired, vol. 14, no. 10, pp. 178-202, October 2006.

Hacker, Sally L. Doing It the Hard Way. Boston: Unwin Hyman, 1990.

Hardwick, M., D. Spooner, T. Rando, and K. Morris. “Sharing Manufacturing Information in Virtual Enterprises.” Communications of the ACM, vol. 39, no. 2, February 1996.

Helmreich, Stefan. Personal communication, 1999.

Okanoya, Kazuo. “Finite-State Syntax in Bengalese Finch Song: From Birdsong to the Origin of Language.” Third Conference on the Evolution of Language, Paris, April 3-4, 2000. Available online at http://www.infres.enst.fr/confs/evolang/actes/_actes52.html

Sloman, Aaron. “The Irrelevance of Turing Machines to AI.” Pp. 87-127 in Matthias Scheutz (ed.), Computationalism: New Directions. Cambridge, MA: MIT Press, 2002.

Stone, Allucquère Rosanne. “Will the Real Body Please Stand Up?: Boundary Stories about Virtual Cultures.” In Michael Benedikt (ed.), Cyberspace: First Steps. Cambridge, MA: MIT Press, 1991.

Waldby, Catherine. The Visible Human Project: Informatic Bodies and Posthuman Medicine. New York and London: Routledge, 2000.

Note on “non-elite”: I have qualified this as non-elite because simply saying “humanitarian” often allows the loophole of limiting the studies to those humanitarian causes from which elites themselves benefit, such as the development of expensive medical treatments, expensive new prosthetics, or types of pollution that affect affluent suburbs.