Although incrementalism has enjoyed widespread acceptance within political science, it has not spawned a lively research tradition leading to cumulative refinement and amplification of the core concepts. Nor has it provided much guidance for policy making, in part because scholars never attempted to clarify how decision makers could become better incrementalists. This is due in part, we suggest, to the fact that understanding of the concept of "incrementalism" has become extremely muddied, conceivably to the point where the term may have outlived its usefulness; but the problems which motivated the early scholarship remain at the heart of political theory and practice.
To encourage a fresh look at the subject, we revisit the seminal contributions to recall their essential spirit and fundamental purposes. We then summarize the main criticisms, using them to help reconceptualize the questions most worth asking about political decision making in light of incrementalist insights. The key, we suggest, is to aim for a political decision theory focusing on how to make sensible collective choices in the face of sharp limitations on human understanding. Thus reframed, the incrementalist tradition can become both more useful to practitioners and more generative of future scholarship on political decision making.
I. INCREMENTALISM'S ORIGINAL AIMS
Lindblom, Simon, and the other early contributors to empirically-based theories of decision making started from a recognition that human problems are extraordinarily complex, while our analytic capacities and resources are quite limited (Simon 1955; Lindblom 1959). Among other obstacles, we lack sufficient knowledge of cause-and-effect to understand complex social problems, and there is not enough time and money even to conduct most of the partial studies that are feasible. People do not know all their goals or the tradeoffs they are willing to make among them. Humans disagree about almost everything, and have no satisfactory analytic method for resolving disparate perceptions and priorities into collective choices (Arrow 1951; Braybrooke and Lindblom 1963).
How to proceed sensibly in the face of such serious obstacles was a central concern of early work in political decision theory. Recognizing that analytic techniques alone could not determine how to bound the scope of a complex problem, Simon found that problem solvers actually proceed according to a cognitive strategy he called "bounded rationality." Limitations of analysis also mean that decision makers ordinarily must "satisfice" rather than optimize in choosing among policy options (Simon 1955; March and Simon 1958).
Extending these and related cognitive and organizational insights to governmental settings, Lindblom investigated the strategies available for coping with complexity, uncertainty, disagreement, and the costliness and other limitations of analysis. While democracy has been prized less for its good sense than for its ability to constrain tyranny, Lindblom offered a point-by-point refutation of the notion that central decision makers ordinarily will make better decisions than a more decentralized or democratic system. Interaction often serves as a social method of analysis, he showed, an alternative to ratiocination for taking account of numerous considerations that ought to be incorporated into an intelligent collective outcome. There is a theoretically based "intelligence of democracy" -- although the conditions for its proper working are approximated quite imperfectly by contemporary political systems (Lindblom 1965, 1977). Interactive adjustment also achieves a weighing of tradeoffs in the normal course of its operation, something that usually is impossible using analysis alone (Braybrooke and Lindblom 1963).
Just as entrepreneurs and consumers can conduct their buying and selling without anyone attempting to calculate the overall level of prices or outputs for the economy as a whole, Lindblom argued, so in politics. Under many conditions, in fact, adjustments among competing partisans will yield more sensible policies than are likely to be achieved by centralized decision makers relying on analysis (Lindblom 1959, 1965). This is partly because interaction economizes on precisely the factors on which humans are short, such as time and understanding, while analysis requires their profligate consumption. To put this differently, the lynchpin of Lindblom's thinking was that analysis could be -- and should be -- no more than an adjunct to interaction in political life.
While subordinate to interaction, analysis obviously plays important roles in intelligent policy making -- partisans guide their interactions partly on the basis of analysis, for example. Sensible analysis, Lindblom argued, does not aspire to the impossible task of trying to overcome complexity and human limitations by brute analytic strength; rather it proceeds strategically. One among several (yet to be fully specified) forms of strategic analysis is disjointed incrementalism, consisting of: (1)
1. Limitation of analysis to a few somewhat familiar policy alternatives;
2. Adjustment of objectives in light of the policies potentially available, rather than considering ends in the abstract;
3. More preoccupation with ills to be remedied than positive goals to be sought;
4. A sequence of trials, errors, and revised trials;
5. Exploration of only some, not all, of the important possible consequences of a considered alternative;
6. Fragmentation of analytical work to many partisan participants in policy making (Lindblom 1988, 239).
The above guidelines came to be distilled into the misleading aphorism, "take small steps," an issue to which we shall return. Since a great many management schools, as well as policy analysts trained in economics and the physical sciences, still virtually ignore these simple truths about the limits of analysis and the strengths of interactive problem solving, it remains important to periodically reassert and update the basic points. But most political scientists now take some form of disjointed incrementalism and partisan mutual adjustment as a given, "Muddling Through" has been reprinted in dozens of anthologies, and there have been several thousand citations to the seminal works. The insights would seem to have been about as well incorporated into the discipline as could be hoped for any set of ideas.
It is surprising, therefore, to find that among political scientists only Wildavsky has made a sustained effort to refine issues insufficiently addressed by the pioneers. (2) Wildavsky and several collaborators mounted significant empirical research projects on budgeting in the U.S. and Britain, showing that both individual political actors and the system as a whole operated pretty much as Lindblom described (Wildavsky 1964; Davis, Dempster, and Wildavsky 1966; Heclo and Wildavsky 1974). More recently, Wildavsky has studied policy making about environmental and other risks, contrasting the desire for trials without error with the ubiquity of trial-and-error in human life (e.g., Wildavsky 1988). These genres of research aimed more at establishing the plausibility of using incrementalism or trial-and-error at all, however, than at extending or refining incrementalism theoretically.
It apparently has come to be taken for granted that the original insights were important, but that there is not much to be gained from further efforts in the directions Lindblom, Wildavsky, and others began to chart. (3) In fact, some of the most interesting scholarship has examined ways of getting around the limitations identified by incrementalist theorists. Thus, many have criticized incrementalism for being too hard on analysis (e.g., Dror 1964; Etzioni 1967; Etzioni 1986; Forester 1984; Gershuny 1978; Self 1974; Wiseman 1978; Goodin and Waldner 1980); and some of these have sought ways (e.g.,"mixed scanning") of adapting systematic analysis to the limits of human cognitive capacities and time limitations.
A related set of research, initiated by Schulman's work on "large-scale" policy making, especially the space program, has argued that there are circumstances under which non-incremental policy making is inevitable and perhaps desirable (Schulman 1975, 1980). Lustick analyzed policy contexts in which disjointed incrementalism and partisan mutual adjustment would be relatively less useful, where the utility of analysis and centralized action therefore would be enhanced, such as where especially damaging errors could result if policy makers do not get the right answer the first time (Lustick 1980). Studying several such policy arenas, including air traffic control and aircraft carriers, La Porte and his colleagues have tried to understand the organizational requirements for what Wildavsky terms "trials without error" (La Porte et al. 1988; Wildavsky 1988). Hochschild has argued that white flight from cities engaged in school busing could have been avoided only by non-incremental policy making (Hochschild 1984).
These and other works offer important insights, but they attempt to carve out highly selected exceptions more than they refine and extend the essential insights of incrementalism. To help refocus the discipline's attention on that more fundamental task, it may be helpful to clear away some of the underbrush that has grown up and obscured the deeper aims of incrementalism.
Most of the enduring criticisms of incrementalism fall into four broad categories. First, it is alleged to be insufficiently goal oriented and ambitious, inviting "complacent acceptance of our imperfections" (Arrow 1964, 588), and justifying "a policy of 'no effort'" (Dror 1964, 155). Incremental steps are said to mean proceeding "without knowing where we are going" (Forester 1984, 23), "leading nowhere" (Etzioni 1967, 387), guided by "ill-defined" themes (Pava 1986). Moreover, incremental learning is "strictly a posteriori and passive" (Grandori 1984, 199).
Nothing in the logic of incrementalism would lead to such conclusions. Political participants obviously have goals, use analysis where convenient, formulate policy trials as best they can given their partisan aims and skills, engage in learning, and try to improve outcomes that matter sufficiently to them. Yet something about Lindblom's formulation encouraged or allowed a large number of scholars to waste a great deal of time over a matter on which no thoughtful person could possibly disagree, a point to which we return.
A second criticism holds that incrementalism is an overly conservative approach, which "would tend to neglect basic societal innovations" (Etzioni 1967, 387), and would limit social scientists' ability to serve as a source of social innovation (Dror 1964, 155). It is said to favor organized elites over the poor and disorganized, because weaker actors are not able to protect values that stronger actors choose to discount (Lustick 1980; Forester 1984). More generally, incrementalism does not take sufficient "account of crucial factors that are not powerfully represented in the bargaining process, e.g., the future" (Logsdon 1986, 105).
Those are very serious problems. But they are caused by prevailing distributions of political power, not by disjointed incremental analysis. No alternative decision strategy would be any less afflicted, given the institutions and authority relations of market-oriented polyarchal societies. The conservatism critique also seems mistakenly to suppose that incremental analysis and partisan mutual adjustment were imagined to be the only inputs to societal policy making. Along with incremental analysis of the type done by mainstream government officials and those who seek immediate influence over them, a complex society obviously needs a wide array of both professional social inquiry and lay inquiry. It should be "broad-ranging, often highly speculative, and sometimes utopian" (Lindblom 1979, 251; Lindblom and Cohen 1979; Lindblom 1990).
A third criticism holds that incrementalism is appropriate in only a narrow range of decision situations: where the environment is stable, no crisis is impending, the organization's survival is not at stake, available resources are not desperately short, and where current policy problems resemble previous ones with which the organization has experience (Lustick 1980; Ahrari 1987; Bourgeois and Eisenhardt 1988). These conditions undermine the applicability of disjointed incrementalism, Dror argues, because "Many of today's qualitatively most important problems are tied up with high speed changes in levels of aspirations, the nature of issues, and the available means of action, and require therefore a policy making method quite different from 'muddling through'" (Dror 1969, 154; also see Eisenhardt 1989; Fredrickson and Iaquinto 1989).
Certainly there is good reason to suppose that the above conditions make policy making more difficult. But will not analysis also be thornier under such circumstances -- indeed, perhaps fundamentally unreliable? It is by no means clear that unstable decision contexts would systematically disadvantage disjointed incremental analysis relative to realistic alternative methods of analysis and policy making. Unless we assume away the very limitations that incrementalism is designed to address -- such as limited knowledge of causal relations -- the more turbulent the problem-solving situation, the more important it would be to rely on a decision strategy rather than on unreliable ideas temporarily passing as "knowledge."
Finally, threshold and sleeper effects are said to undermine the usefulness of incrementalism. Serial adjustment to revealed error presupposes that errors are reversible or compensable, and that the resources required to reverse or compensate for them are not out of line with the original cost of the program (Goodin and Waldner 1979). This assumption can be violated if threshold effects show up suddenly. Sleeper effects "deprive the incremental decision maker of prompt negative feedback" (Lustick 1980, 348), and may produce unacceptably large errors before a program can be halted.
Again, there is little question that decision making faces problems of the sort identified here. The only mystery is what the critics think they have to do with incrementalism per se. There is nothing in the concepts developed by Lindblom that abjures information about threshold and sleeper effects, and nothing in other decision approaches offers any guarantee against such effects. Going faster and not responding to whatever feedback becomes available obviously would not be the answer. Threshold and sleeper effects are part of reality, and no decision approach copes very well with them.
In sum, the main criticisms of incrementalism have been wide of the mark. But simply refuting the critics is not enough, for something pretty clearly is amiss. If political scientists are to orient part of our work around strategic coping with the human predicament of small brain/big problems, we need to feel comfortable with the task; and the misperceptions reviewed above clearly indicate that something about the original formulation of disjointed incrementalism confused and disturbed a lot of scholars. Similarly indicative of problems is the field's general unwillingness or inability to pick up the task Lindblom tried to lay out. The story Lindblom told apparently was good enough to appear seamless and complete, not inviting elaboration and testing, yet also difficult to use, or even to fully believe.
An even more fundamental misperception about incrementalism is an extraordinary confusion arising in users' minds over the past three decades among (1) incrementalism as an analytic strategy, (2) political processes tending to foster the strategy of disjointed incrementalism, and (3) policy outcomes consisting of small steps. In the original formulation, these were intertwined but analytically distinct (see Lindblom 1979), but they now have become jumbled together.
First, reconsider the notion of "small steps." Like most shorthand views, this one has a measure of applicability. Lindblom certainly did argue that human understanding is too limited for there to be good prospects of success in undertaking very large political changes; when moving into unknown policy terrain, one way to protect against unacceptably large errors is to proceed gradually. He also pointed out that the clash of democratic politics often blocks grandiose proposals, as do various veto powers in many political-economic systems. In contrast, central decision makers in non-democratic systems are freer to cook up grand plans, such as removing all Kulaks from the land to make way for collective farming, or starting a Cultural Revolution.
But there is a great deal more to disjointed incrementalism than "small steps," as demonstrated by the six component strategies summarized in the previous section. In fact, there is nothing in the denotation of incrementalism that rules out large steps: the key method is successive limited comparisons among alternative policies, potentially including comparisons among marginally differing policy moves of a radical nature. The unfortunate maxim concerning small steps has turned out to be a miserable condensation symbol for incrementalism, turning attention away from the fundamental purpose of the concept, and leading to silly arguments, pilloried by Dempster and Wildavsky as concerning "the magic size for an increment" (Dempster and Wildavsky 1979). We suggest, therefore, that the small-steps notion be greatly qualified or abandoned. (5)
A second confusion concerns the relation between the interactive process of political decision making known as partisan mutual adjustment, and the analytic strategy of disjointed incrementalism. (6) Perhaps partly because the two concepts were designed to serve many of the same functions (particularly explaining how policy making can proceed somewhat satisfactorily despite scarce and unreliable analytic capacity), the two ideas have tended to blend together for subsequent commentators, who frequently conjoin "planning, centralization, and comprehensive analysis," and compare this with "incremental actions...that emerge from interactive processes" (Lustick 1980, 342, 350).
But Lindblom never claimed that incremental analysis would be useful only in the context of policy processes relying on partisan mutual adjustment. To see the mistake in such an assumption, one needs merely to ask, "Does incremental analysis make sense only in political systems utilizing representative democracy or other decentralized reconciliation processes?" Clearly incremental analysis has wider applicability. Indeed, since reconciliation processes in even feebly democratic systems will tend to narrow the scope of feasible options in much the same way that a disjointedly incremental strategy would, incremental analysis arguably is more important in polities that are less democratic. For it is precisely where authority is monopolized by a small number of actors that bold, poorly framed actions are most likely. Since feedback processes usually are worse in non-democratic systems, moreover, error correction will be slower and the total cost of mistakes may be larger. (7) Hence, even a sensible dictator would use incremental analysis. (8)
The converse question then arises: in a polyarchy, where partisan mutual adjustment is being used, is there really any need for incremental analysis? If political power is shared sufficiently widely, among actors with sufficiently diverse interests, will potentially unbearable policy moves be headed off, because bound to offend some group with sufficient political influence to veto the proposal? Will the flaws in human understanding, together with constraints on time and other resources necessary for analysis, more or less automatically nudge all the partisans toward behavior that manifests "the intelligence of democracy"?
No one has observed ideal-typical partisan mutual adjustment in action, of course; we have only its debased, real-world forms of elite-dominated pluralism or corporatism to go by, sometimes producing policy steps that come to be widely perceived as huge blunders, such as the U.S. savings and loan bank debacle. (9) Interaction among competing partisans clearly does tend to force consideration of additional angles on a problem, and of additional problems; it also reduces the frequency of agreement on policy moves greatly disadvantaging large numbers of the politically influential. But there is no careful scholarship estimating the magnitude of these effects, and casual observation suggests that the success of partisan mutual adjustment in this regard varies markedly across polities, eras, and policy areas.
It is essential, then, to keep our ideas about analysis and political process distinct theoretically, even though they necessarily intertwine. In the remainder of the paper we focus not on partisan mutual adjustment, but on the analytic aims of the incrementalist tradition.
How can the underlying issues that provoked "Muddling Through" be made more accessible and more tractable? The confusions reviewed above suggest that the political problematique arising from cognitive and other constraints on policy making has not yet been approached in ways that invite cumulative scholarship. To help jump-start such a renewed collective inquiry, we suggest at least temporarily setting aside the particular conceptual means Lindblom and others were exploring, and refocusing more directly on their underlying questions: How can individuals, organizations, and societies cope as well as possible with political issues too complex to fully understand, given the fact that actions initiated on the basis of inadequate understanding may lead to significant regret? In answer, along with the particulars of their ideas, Lindblom, Wildavsky, and others working in the incrementalist tradition have been offering a simple, but important and underemphasized, response: humans rarely can proceed satisfactorily except by learning from experience; and modest probes, serially modified on the basis of feedback, usually are the best method for such learning. Now that there is broad recognition of the fact that comprehensive analysis is impossible for complex social problems, and with the consequent admission that we all must use strategies of some kind for bounding our inquiries, judgments, and actions, it is time to move on to the deeper question: How can individuals, organizations, and polities develop and use better strategies for proceeding in the face of uncertainty?
Much of social science bears on the problem of coping with uncertainty, or potentially can be brought to bear on that huge area of inquiry. But very little research to date has aimed directly at understanding what tends to go awry in trial-and-error learning. If political participants are condemned to some form of trial-and-error learning from experience, is it possible to specify institutional arrangements, procedures, and strategies to make errors less damaging and to accelerate learning? Under what conditions do partisans (and aggregations of them) target their interactions so as to cope better than usual with uncertainty, limited time, and so forth? (10) And under what conditions, or with what approaches, do they do worse than average?
As an illustration of how research along these lines might be crafted, consider the three main pitfalls in trial-and-error learning: (1) A misguided policy trial may produce unbearably costly outcomes; (2) Policy moves may retain too little flexibility, preventing errors from being corrected readily; (3) Learning about errors may be very slow.
Potentially Unacceptable Risks
One of the criticisms of incrementalism reviewed above was the possibility that policy trials could produce unbearable errors, before error-correction could occur. While the problem afflicts all decision theories, not just incrementalism, it is well worth addressing. What do we know about how policy making does and should cope with potentially unbearable risks?
Even in highly uncertain endeavors, it is possible at the outset partly to foresee and protect against some of the worst risks. Homeowners, for example, do not have to calculate the likelihood of their house burning down; merely knowing that it is an unacceptable possibility is enough to warrant obtaining insurance as an initial precaution against catastrophic loss. Likewise, rather than relying entirely on preventing all accidents, U.S. nuclear decision makers required containment buildings around civilian reactors; most of the radioactivity at Three Mile Island was thereby prevented from entering the environment. If the Soviets had taken this precaution instead of assuming impeccable performance by their nuclear plants, the 1986 accident at Chernobyl probably would have had less serious consequences (Morone and Woodhouse 1989). Other tactics would be appropriate for other types of problems, but the basic idea is to take some kind of initial precautions rather than merely hoping for the best. The precautions will not prevent errors, but will make them less costly.
If uncertainty is high and consequences are potentially severe, moreover, it makes sense to take especially stringent precautions. Thus, in 1976-1978 the U.S. had to decide whether to take action on potential depletion of stratospheric ozone by fluorocarbons. There was no solid, direct evidence that such depletion was occurring, and the American Chemical Society complained that proposed legislation would constitute "the first regulation to be based entirely on an unverified scientific prediction" (Morone and Woodhouse 1986, 82). Nevertheless, Congress and EPA acted to ban most aerosol chlorofluorocarbon sprays, even though few other nations did so.
Another aspect of proceeding cautiously is to put the burden of proof on advocates of risky activities. Whereas government once had to go to court to prove a pesticide unsafe after it had produced substantial damage, manufacturers now are required to demonstrate prior to marketing that their products do not pose "an unreasonable risk." This tactic is imperfectly applied in current pesticide regulation, but the burden of proof has shifted significantly toward proponents of potentially risky chemicals (Morone and Woodhouse 1986).
A second problem with trial-and-error learning is that by the time serious flaws become apparent, a policy may have become quite resistant to change -- deeply enmeshed in implementers' careers, in organizational routines, and in the expectations of those comprising a policy network. In framing policy moves, therefore, partisans who actually seek to solve a social problem (11) can improve their odds by developing policy options capable of being altered fairly readily, should unfavorable experience warrant.
For example, flexibility is higher when a policy's costs are borne gradually, allowing expenditures to be redirected as learning develops. Pressman and Wildavsky characterize this as "payment on performance" (1974, p. 159). In contrast, if payment has to be made in advance -- as through large, up-front capital investments -- when a program does not work out, investment typically will be irrecoverable, and future options are likely to be unduly limited. NASA's space shuttle illustrates the problem: a launch regime relying on expendable rockets would have been much easier to revamp (Logsdon 1986; Byerly and Brunner 1989; Collingridge 1992).
If Pressman and Wildavsky had been asked to assess a proposed policy for job creation in Oakland, they could have pointed out that heavy capital investment was an ill-advised strategy; for funds might be expended without the promised jobs actually materializing. If government instead helped businesses to meet their wage bills for each suitably created job, then payment would depend on how many people are employed -- and could be adjusted every week if necessary. Subsidies also could be increased to give greater incentives for new job creation, or decreased if the budget gets tight, if the program's partisan support shifts, or if new priorities appear. The greater flexibility of the pay-as-you-learn approach is obvious, not because it is free of bureaucratic obstacles, but because it is relatively easier to change course without writing off huge sunk costs.
Flexibility also can be enhanced in many other ways. Phasing in a policy during a learning period is a common practice in business, for example, as is experimenting in a limited geographical area and/or for a delimited client base. The states' role as "laboratories" used to be taken as a standard article of faith by commentators on American federalism. Other mechanisms of preserving flexibility include simultaneous trials of two or more alternative approaches, using an existing bureaucracy instead of creating a new, dedicated organization with permanent staff, and many other tactics. Some of these are discussed in existing literatures, others await systematic study; none has yet been carefully integrated into the incrementalist tradition. And standard texts on organizational behavior and public administration do not highlight the need for building in flexibility in order to facilitate learning from experience (e.g., Moorhead and Griffin 1992; Fesler and Kettl 1991).
The particular tactics by which flexibility can be achieved obviously vary greatly among policy contexts, and different partisans will find some tactics more advantageous to them than others. Especially for political reasons but also to some extent because of the nature of space exploration, for example, NASA is said to have had to work at a large scale, or not at all (Schulman 1980). Local policies for acid rain or ozone depletion or desertification may not be very sensible (compare Lustick 1980). And Hochschild argues that there are times when policy should be rapidly introduced across large geographical areas to prevent "learning" and adjustment by opponents -- as in the case of school desegregation, subverted by whites with resources to move to other districts (Hochschild 1984).
This points forcefully to the partisan component in considerations of appropriate flexibility, which we readily grant. There may be times when gains from flexibility are outweighed by the possibility that opponents would exploit flexibility to undo what the original partisan majority considered a crucial program. But a rapid, large-scale policy failure can be extremely costly, and not just in dollars; even partisans who wish to marshal opponents out of the policy implementation process, then, will do well to frame policy options so as to preserve room for learning whenever feasible.
Barriers to Learning
Incrementalism as originally presented did not give much attention to the difficulty of learning -- and the consequent need to prepare for it actively. How can organizations learn more swiftly, from their own experiences and from those of others that deal with similar decision problems (Argyris and Schon 1978)?
Among many other considerations, feedback from policy trials needs to rapidly reach those with authority to make a change. All too often, however, feedback takes too long, allowing accumulation of unfortunate results. Thus, the harmful effects of DDT were not persuasively documented for a quarter century after the pesticide's initial use; it took many years before there was clear evidence that high-rise public housing complexes have a destructive effect on many residents (Collingridge 1992); and a long period had to elapse before researchers could hope to determine whether the Head Start program would produce lasting educational improvements in the children who participated.
One response is to rue the misfortune, but to consider it an immutable fact of life. Alternatively, partisans can select among policy options partly on the basis of how long it will take to learn whether their effort is on the right track. Since there rarely is enough funding, time and attention, or other resources to tackle even all the pressing issues or proposals in a domain, it sometimes is sensible to favor those problems offering a potential for quick learning. While public policy can hardly match private business in this regard -- airlines now alter some fares within a few days if they do not produce the expected changes in travellers' choices -- policy making could put far more emphasis on the time lag required for learning (Derthick 1990).
School reformers, for example, typically have on their agenda a plethora of ills and a bewildering variety of partially contradictory remedies, with no prospect of knowing in advance how well a given proposal will work. Yet policy debates do not give priority to those meritorious ideas whose results could be determined fairly quickly. Nor are most other policy domains more attentive to the problem of long-lagged learning, despite the obvious fact that error correction cannot be attempted until feedback emerges, and error correction ordinarily will be necessary to evolve good policy outcomes.
Some regulatory endeavors do make efforts to speed up learning, however. After numerous bad experiences with chemicals such as PCBs, vinyl chloride, and DDT, the Toxic Substances Control Act of 1976 decreed that all new commercial chemicals would have to be approved by EPA prior to marketing, partly on the basis of toxicology testing. The Food and Drug Administration long has required elaborate premarket testing and approval of new pharmaceuticals, and medical devices now are subject to such screening. We do not usually think of these requirements as part of an intelligent trial-and-error process; formally, however, the testing is simply a way of speeding up negative feedback instead of waiting for it to emerge naturally, over a longer period and with greater damage. Less radical ways of speeding up feedback obviously might be devised in other policy areas.
Is it a utopian hope to suppose that these and other strategies of coping with uncertainty might come to be employed somewhat systematically? Each of the elements of intelligent trial-and-error actually is already being applied in various policy areas, though typically not in an explicit or coordinated way. Perhaps the most thorough application to date was in early research on recombinant DNA, the scientific procedures which led to the emerging biotechnology industry. (For reviews of the controversy, see Krimsky 1982; Morone and Woodhouse 1986; and Wright 1986).
Scientists organized a voluntary moratorium on the potentially risky research in the early 1970s, and worked out a regulatory strategy through the National Institutes of Health. Six classes of especially risky experiments were prohibited altogether, and precautions were adopted for the others, varying in stringency according to the degree of risk each type of experiment was believed to pose. The aim was essentially to make the research forgiving of error: special laboratory facilities were used to prevent bacteria from escaping from the research building; and intentionally enfeebled strains of an especially well known bacterium were used for most of the research, so that even if bacteria escaped they would have great difficulty surviving outside the favorable conditions of the lab.
Recombinant DNA researchers proceeded to learn from experience, partly via worst-case experiments aimed for example at finding out whether virulent new organisms might accidentally be created. There was some disagreement about interpretation of some tests, but the great majority of observers found reassurance from the priority testing. Close monitoring of hundreds of ordinary rDNA experiments also provided reassurance. As uncertainty was reduced, more experiments were allowed at lower levels of containment; by the early 1980s, most of the containment requirements were dropped, and no experiments remained altogether prohibited.
The consequences of neglecting these strategies can be seen in a number of large-scale, hazardous technologies. Civilian nuclear power embraced potentially catastrophic safety and financial risks, with inadequate precautions. Learning was bound to be slow, with significant time lags before receipt of persuasive feedback, and trials of such incredible complexity -- up to 10 million pieces of paper for a single nuclear reactor -- that interpretation of errors was almost impossible. And the endeavor was extremely inflexible, with most payments required in advance and massive inertia from a host of supporting public and private institutions, including uranium mining and processing, reactor vendors, utility companies, regulatory agencies, and a combination of government, business, and university R&D for reactor design, development, and radioactive waste handling (Collingridge 1983). It took several decades to find out that giant nuclear power plants would be politically and economically unacceptable in most nations, by which time hundreds had been constructed throughout the world at a cost of several hundred billion dollars. The error was irreversible, learning slow, and the cost enormous. Policy makers could have pursued much smaller reactors, using different designs that would have been less expensive, more flexible, and apparently incapable of catastrophic meltdown (Morone and Woodhouse 1989).
Similar problems can be found in large-scale irrigation projects (Collingridge 1992), military research and development (Woodhouse 1990), high-rise public housing (Collingridge and James 1990), and the U.S. space endeavor (Brunner and Byerly 1989; Collingridge 1990). The common ingredient is that learning is slow and costly when partisans do not press for initial precautions to head off unbearable errors, flexibility to allow error correction, and deliberate preparation for learning from experience.
The above problems and possibilities apply throughout political life. First, since we do not want to step over a cliff while learning from experience, it makes sense to protect against unacceptable risks where feasible. Second, since learning usually takes a while under the best of circumstances, it makes sense to arrange policy so that it can be changed fairly readily when negative feedback is perceived to warrant it. Third, because people and organizations do not automatically learn to do better -- indeed, we often have great difficulty learning -- it makes sense to prepare deliberately for learning.
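For readers who find a stylized model helpful, the logic of these three precepts can be illustrated with a toy simulation. It contrasts a single large, irreversible commitment with a sequence of small, reversible steps corrected by noisy feedback; the options and payoff numbers are purely hypothetical assumptions, not anything drawn from the cases above.

```python
import random

# Hypothetical policy options: two that turn out badly, one that works.
# (Illustrative values only; nothing here is estimated from real policies.)
OPTIONS = [-1.0, -1.0, 1.0]

def big_bet(rng, scale=10.0):
    """Commit all resources at once to a blindly chosen option;
    the choice cannot be revised once made."""
    return scale * rng.choice(OPTIONS)

def trial_and_error(rng, steps=10, step_size=1.0, noise=0.2):
    """Commit resources in small steps, observing noisy feedback on the
    current option and switching options whenever feedback turns negative."""
    current = rng.randrange(len(OPTIONS))
    payoff = 0.0
    for _ in range(steps):
        feedback = OPTIONS[current] + rng.gauss(0.0, noise)
        if feedback < 0:                      # error detected: correct course
            current = rng.randrange(len(OPTIONS))
        payoff += step_size * OPTIONS[current]
    return payoff

def mean_payoff(policy, runs=2000, seed=1):
    """Average payoff of a decision strategy over many independent runs."""
    rng = random.Random(seed)
    return sum(policy(rng) for _ in range(runs)) / runs

if __name__ == "__main__":
    print("single big bet:   %+.2f" % mean_payoff(big_bet))
    print("trial and error:  %+.2f" % mean_payoff(trial_and_error))
```

Under these assumptions the incremental strategy averages a positive payoff while the single blind bet averages a loss, simply because rapid feedback and preserved flexibility make early errors cheap rather than fatal.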
Both in principle and in practice, then, intelligent trial-and-error appears to be a workable strategy for many types of decisions. It is not, however, an automatic process that specifies exactly what should be done in any given situation; all the ordinary work of policy analysis and political choice still must go on. Moreover, there frequently is a tradeoff between intelligent trial and error as characterized here and the cost of options: those which promote learning may be more expensive in the short run. How far to go in employing coping strategies obviously is a political judgment.
We have argued that it is time to return to the original concern that motivated the pioneers of political decision theory: coping with the terrible difficulties posed for social problem solving by humans' limited ability to understand the complexities of social life. A generation of commentary on the early ideas has produced disappointingly little progress, in part because virtually every commentator has seized on the particular means Lindblom explored rather than on the larger task that incrementalism was intended to illustrate and begin.
Our analysis is also an illustration, and, we hope, a new beginning. The ideas advanced here attempt to contribute to a reinvigorated and expanded political decision theory, while also responding to the main criticisms made about incrementalism, including the misperceptions reviewed earlier that have stood in the way of further work in the incrementalist tradition.
To get on with the task, one important prerequisite is to clear up the confusion between the political process of partisan mutual adjustment and disjointed incrementalism or other strategies for coping with humans' limited capacities for analysis. The strategies discussed here for cutting the cost of errors and for speeding up learning can be recommended to policy makers in virtually any type of decision process, from the most hierarchical and centralized decision context to the least, from tightly authoritarian to maximally decentralized and pluralist systems.
Second, to quell the notion that incrementalism is anti-analytical, the strategies discussed here clearly emphasize the importance of approaching political judgments armed with strategic analysis; and nothing recommended herein need displace whatever analytic tools would ordinarily be employed.
Third, responding to the misperception that incrementalism is overly conservative and biased toward the status quo, intelligent trial-and-error eliminates the notion of small steps; and all the strategies clearly can be used to whatever degree is judged warranted.
Fourth, responding to the (accurate) claim that incrementalism ignores threshold and sleeper effects, as do other decision strategies, intelligent trial-and-error explicitly advocates initial precautions against potentially unacceptable errors. Specific precautions can be taken to mitigate the severity of foreseeable risks, and more generic, procedural precautions (such as built-in flexibility to facilitate quick error correction) can be taken even against errors that cannot be foreseen.
Fifth, regarding the perception that incrementalism is of use in an unduly limited range of decision contexts, intelligent trial-and-error aims at nearly universal applicability.
Finally, to reassure those who perceive incrementalism as insufficiently goal-oriented, intelligent trial-and-error centers around the goal of learning from experience, at acceptable cost, to attack whatever problems any partisan wishes to place on the political agenda.
Thus, the concept of intelligent trial-and-error supplements and focuses that of disjointed incrementalism. (12) More generally, this refinement is intended to illustrate and contribute to a much larger project, alluded to above but not yet spelled out: reframing political theory (and political action) as a whole, to take better account of the cognitive and other limits on human decision-making capacities. For all the attention given to the work begun by Lindblom and others in the 1950s, there actually has been little integration into workaday political science -- congressional studies, say, or international relations -- of the notion that political life can be seen as a strategic undertaking centered around experience-based, evolutionary problem solving.
To put it bluntly, the genre of political thought represented by incrementalism has been confined in something of an intellectual ghetto, at great loss to political science more generally. Among many other possibilities for using neo-incrementalist insights to strengthen political theory, leavening diverse inquiries without constricting them, are the following:
* For institutional analyses, regulatory and bureaucratic
politics, and organization theory, what disincentives
and other barriers built into institutional structures
and processes now discourage strategic coping with human
cognitive and other limitations?
* Can instances of conspicuous success and failure in
international relations and defense policy be explained
in part by the extent to which actors (unwittingly?) abide and fail to abide by the requirements for
intelligent trial-and-error learning? If wars almost
never live up to the fond hopes entertained at their outset, for example, is it in part because war-making
activities tend to call for competences well beyond
those actually available?
* For political philosophy: Can we conceive of ways of
evolving moral choices in politics iteratively through the
trial-and-error process humans actually use, instead of
through the logico-deductive process intellectuals try to
use? What fresh insights about legitimate authority can
be gleaned by looking at all knowledge claims and all
authoritative behaviors as experiments, asking what
relations among citizens and leaders need to obtain in
order to have such claims and behaviors subjected to
effective monitoring, interpretation, and reasoned
revision based on learning from experience?
* For rational choice theory, are there conditions in
which political actors oriented toward instrumental
problem solving would have incentives to avoid using
strategies required for intelligent trial-and-error?
* In comparative studies of environmental or other
regulation, can relatively successful endeavors be
distinguished from relatively unsuccessful ones partly
on the basis of the extent to which policy trials were
set up flexibly?
* For organizational research, presently overwhelmed by
myriad studies threatening to replicate the complexity of
the phenomena being studied, might a more strategic
research focus be in order to cope with scholars' limited
cognitive capacities, time, and so forth? Thus, could it
be sensible to sacrifice many other lines of research to
concentrate on understanding what factors promote learning
from experience, and how administrators can modify their
working relations to promote such learning?
Similar questions can be posed for political economy, legislative studies, and virtually every other domain of politics. Some scholars may protest that it is unnecessary to do so, because there already is a fair bit of research going on concerning coping with uncertainty, learning, and other topics raised by neo-incrementalist decision theory. Just so. But not in an explicit, focused, or cumulative way, we suggest. And not in a way that connects very well across the many domains of political science.
Most generally, we propose that political scientists borrow a methodological technique from economics: assume, for heuristic purposes, that governmental institutions are problem-solving mechanisms engaged in strategic behaviors designed to produce experiential learning by coping with uncertainty at reasonable expense. Then, parting company with economics, analyze actual political behavior in terms of how closely the ideal is approached, highlighting obstacles and seeking ways of overcoming them. Much of the same scholarship now being conducted could go on, but it would be integrated into a framework that would allow us to see a forest, not merely interesting trees. And we might actually help humans learn to improve political institutions.
In sum, our intent has been to suggest that great gains can be achieved by moving beyond obligatory genuflection to incrementalism. Both the discussion of intelligent trial-and-error and the brief suggestions of other lines of political science research are intended as illustrations of how diverse scholars, working in diverse ways, can develop a political decision theory incorporating the original aims of incrementalism. For better political theory and wiser public policy surely depend in part on enhanced attention to the task of learning to cope more effectively and more equitably with the sharp limitations on humans' capacities for understanding.
REFERENCES
Ahrari, M.E. 1987. "A Paradigm of 'Crisis' Decision Making: The Case of Synfuels Policy." British Journal of Political Science 17: 71-91.
Argyris, Chris and Donald Schon. 1978. Organizational
Learning: A Theory of Action Perspective. Reading,
Arrow, K.J. 1951. "Alternative Approaches to the Theory of
Choice in Risk-Taking Situations." Econometrica 19:
Arrow, K.J. 1964. "Review of A Strategy of Decision by
Braybrooke and Lindblom." Political Science Quarterly
Behn, R.D. 1988. "Management by Groping Along." Journal of Policy Analysis and Management 7: 643-663.
Bourgeois, L.J., III and K.M. Eisenhardt. 1988. "Strategic
Decision Processes in High Velocity Environments: Four
Cases in the Microcomputer Industry." Management Science
Braybrooke, D. and C.E. Lindblom. 1963. A Strategy of Decision.
New York: The Free Press.
Brunner, Ronald and R. Byerly. 1989. "The Space Station
Program: Defining the Problems." Center for Space and
Geoscience Policy, University of Colorado, Boulder.
Byerly, R. and Ronald Brunner. 1989. "Future Directions for
Space Policy Research." Pp. 175-190 in R. Byerly (ed.)
Space Policy Reconsidered. Boulder, CO: Westview.
Collingridge, David. 1992. The Management of Scale: Big Organizations, Big Technologies, Big Mistakes. London:
Collingridge, David. 1990. "Technology, Organizations, and
Incrementalism: The Space Shuttle." Technology Analysis
and Strategic Management 2: 181-200.
Collingridge, David. 1983. Technology in the Policy Process: Controlling Nuclear Power. London: Frances Pinter.
Collingridge, David, and Peter James. 1990. "Technology,
Organizations and Incrementalism: High-Rise System
Building in the UK." Technology Analysis and
Strategic Management 1: 79-97.
Collingridge, David and Peter James. 1991. "Energy Policy
in a Rapidly Changing Market." Long Range Planning 24:
Davis, Otto, Michael Dempster, and Aaron Wildavsky. 1966. "A Theory of the Budgetary Process." American Political
Science Review 60: 529-547.
Dempster, M.A.H. and A. Wildavsky. 1979. "On Change: Or,
There is No Magic Size for an Increment." Political Studies 27: 371-389.
Derthick, Martha. 1990. Agency Under Stress: The Social
Security Administration in American Government.
Washington, D.C.: Brookings.
Diver, C.S. 1983. "The Optimal Precision of Administrative Rules." Yale Law Journal 93: 65-109.
Dror, Y. 1964. "Muddling Through - Science or Inertia?" Public Administration Review 24: 153-157.
Dryzek, J. 1987. "Complexity and Rationality in Public
Life." Political Studies 35: 424-442.
Eisenhardt, K. 1990. "Speed and Strategic Choice," California
Management Review 32: 39-55.
Eisenhardt, K. 1989. "Making Fast Strategic Decisions in a High
Velocity Environment." Academy of Management Journal 32:
Etheredge, Lloyd S. 1985. Can Governments Learn? American Foreign Policy and Central American Revolutions. New York:
Etzioni, Amitai. 1967. "Mixed Scanning: A Third Approach
to Decision Making." Public Administration Review 27:
Etzioni, Amitai. 1986. "Mixed Scanning Revisited." Public
Administration Review 46: 8-15.
Fesler, James W., and Donald F. Kettl. 1991. The Politics
of the Administrative Process. Chatham, NJ: Chatham
Forester, J. 1984. "Bounded Rationality and the Politics of
Muddling Through." Public Administration Review 44: 23-31.
Fredrickson, J.W. and A. Iaquinto, 1989. "Inertia and Creeping
Rationality in Strategic Decision Processes," Academy of
Management Journal 32: 516-42.
George, Alexander. 1980. Presidential Decisionmaking in
Foreign Policy: The Effective Use of Information and Advice. Boulder, CO: Westview Press.
George, Alexander, and R. Smoke. 1989. "Using Case Studies
in Deterrence Theory." World Politics 41: 239-254.
Gershuny, J. 1978. "Policymaking Rationality; a Reformulation."
Policy Sciences 9: 292-316.
Goodin, R. and I. Waldner. 1979. "Thinking Big, Thinking
Small, and Not Thinking at All." Public Policy 27: 1-24.
Goold, M. and J. Quinn. 1990. "The Paradox of Strategic Controls", Strategic Management Journal 11: 43-57.
Grandori, A. 1984. "A Prescriptive Contingency View of Organizational Decision Making." Administrative Science Quarterly 29:192-209.
Hayes, Michael T. 1992. Incrementalism and Public Policy.
New York: Longman.
Heclo, Hugh, and Aaron Wildavsky. 1974. The Private
Government of Public Money: Community and Policy Inside
British Political Administration. London: Macmillan.
Hochschild, Jennifer L. 1984. The New American Dilemma:
Liberal Democracy and School Desegregation. New Haven:
Yale University Press.
Jervis, Robert. 1976. Perception and Misperception in
International Relations. Princeton, NJ: Princeton
Johnson, G. 1988. "Rethinking Incrementalism." Strategic Management Journal 9:75-91.
Lewis, D. and D. Wallace (eds.). 1984. Policies into Practice. London: Heinemann.
Lindblom, Charles E. 1959. "The Science of 'Muddling
Through'." Public Administration Review 19: 79-88.
Lindblom, Charles E. 1965. The Intelligence of Democracy.
New York: The Free Press.
____. 1977. Politics and Markets: The World's Political-
Economic Systems. New York: Basic.
____. 1979. "Still Muddling, Not Yet Through." Public
Administration Review 39: 517-526.
____. 1988. Democracy and Market System. New York:
Norwegian University Press.
____. 1990. Inquiry and Change: The Troubled Attempt to
Understand and Shape Society. New Haven: Yale University
Lindblom, Charles E. and David K. Cohen. 1979. Usable
Knowledge: Social Science and Social Problem Solving.
New Haven: Yale University Press.
Logsdon, J.M. 1986. "The Decision to Develop the Space Shuttle." Space Policy 2:103-119.
Lovell, R.D. and B.M. Turner. 1988. "Organizational Learning,
Bureaucratic Control, Preservation of Form."
Knowledge:Creation, Diffusion, Innovation 9:404-425.
Lustick, I. 1980. "Explaining the Variable Utility of Disjointed
Incrementalism: Four Propositions." American Political
Science Review 74: 342-353.
March, James G., and Herbert A. Simon. 1958. Organizations. New York: Wiley.
Mason, R. and I. Mitroff. 1981. Challenging Strategic Planning
Assumptions. New York: Wiley.
Mayntz, R. 1983. "The Conditions of Effective Public Policy:
A New Challenge for Policy Analysis." Policy and Politics
Mintzberg, H. and J. Jorgensen. 1987. "Emergent Strategy for
Public Policy." Canadian Public Administration 30: 214-229.
Mintzberg, H. 1987. "Crafting Strategy." Harvard Business Review, July-August: 66-75.
Moorhead, Gregory, and Ricky W. Griffin. 1992. Organizational
Behavior: Managing People and Organizations. Third edition.
Boston: Houghton Mifflin.
Morone, Joseph, and Edward J. Woodhouse. 1986. Averting
Catastrophe: Strategies for Regulating Risky
Technologies. Berkeley: University of California Press.
Morone, Joseph, and Edward J. Woodhouse. 1989. The Demise
of Nuclear Energy?: Lessons for Democratic Control of
Technology. New Haven: Yale University Press.
Nice, D.C. 1987. "Incremental and Nonincremental Policy
Responses: The States and the Railroads." Polity 20:
Pava, C. 1986. "New Strategies of Systems Change: Reclaiming
Nonsynoptic Methods." Human Relations 39: 615-633.
Premfors, R. 1981. Review article: "Charles Lindblom and
Aaron Wildavsky." British Journal of Political Science,
Pressman, Jeffrey L., and Aaron B. Wildavsky. 1973.
Implementation. Berkeley: University of California
Quinn, J. 1980. Strategies for Change: Logical Incrementalism.
Homewood, IL: Irwin.
Reid, D. 1989. "Operationalising Strategic Planning."
Strategic Management Journal 10: 553-67.
Schulman, Paul. 1980. Large-scale Policy Making. New York:
Schulman, Paul R. 1975. "Nonincremental Policy Making: Notes
Toward an Alternative Paradigm." American Political
Science Review 69: 1354-1370.
Schwenk, C. 1989. "Explaining Strategic Change." Journal of
Management Studies 26: 177-88.
Self, P. 1974. "Is Comprehensive Planning Possible?" Policy
and Politics 2: 193-203.
Shapiro, M. 1965. "Stability and Change in Judicial
Decision-making: Incrementalism or Stare Decisis?" Law
in Transition Quarterly 2: 134-157.
Simon, Herbert A. 1955. "A Behavioral Model of Rational
Choice." Quarterly Journal of Economics 69: 99-118.
Steinbruner, John. 1974. The Cybernetic Theory of
Decision. Princeton, NJ: Princeton University Press.
Weiss, Andrew, and Edward J. Woodhouse. 1993. "Refocusing
on the Essential Character of Incrementalism: A
Constructive Response to the Critics," Policy Sciences
Wildavsky, Aaron. 1964. The Politics of the Budgetary
Process. Boston: Little Brown.
Wildavsky, Aaron. 1988. Searching for Safety. New
Brunswick, NJ: Transaction Publishers.
Wiseman, C. 1978. "Selection of Major Planning Issues." Policy Sciences 9: 71-86.
Woodhouse, E.J. 1990. "Is Large-Scale Military R&D Defensible
Theoretically?" Science, Technology, & Human Values 15:
NOTES
1. The idea of a family of non-synoptic decision strategies, of which disjointed incrementalism is one, is hinted at in Lindblom 1959; it is discussed more directly, but still briefly, in Lindblom 1979. This list of strategies in disjointed incrementalism is quoted and paraphrased from Lindblom 1979.
2. In addition to the scholars discussed below, significant single works include Hayes 1992, Behn 1988, Dryzek 1987, Nice 1987, Diver 1983, Mayntz 1983, Shapiro 1965; for reviews of
parts of Lindblom's and Wildavsky's oeuvre, see Spread 1985 and
3. Incrementalist research has gone further in management studies, but it is of sharply limited usefulness for politics: see, e.g., Quinn 1980; Mason and Mitroff 1981; Fredrickson 1983; Mintzberg 1987; Johnson 1988; Reid 1989; Schwenk 1989; Eisenhardt 1990; Goold and Quinn 1990.
4. This section is based in part on a more extended review and appraisal of the criticisms of incrementalism by Weiss and Woodhouse 1993.
5. For further argument on this point, incorporating into incrementalist thinking the need to reach tradeoffs between errors of commission and errors of omission, see Weiss and Woodhouse 1993.
6. To make matters even more complicated, partisan mutual adjustment also is an analytic strategy. Interaction epiphenomenally solves a variety of tasks (such as calculating tradeoffs among competing values) that are difficult or impossible to accomplish through ratiocination alone. Few scholars have paid attention to this important feature of partisan mutual adjustment, or to how its capacities as an analytic device might be better exploited.
7. The term "error" obviously is a thorny one, since so much in politics is in the eye of the beholder. A given action may be an error according to some partisans, and not according to others. Partisans may change their minds, moreover, considering a policy successful at one point, but unsuccessful later -- or vice versa. It is not clear how to address those realities while keeping on center stage the plain fact that governments frequently do undertake policies that are a good deal more damaging and less flexible or correctable than needs to be the case.
8. Conversely, if elites agreed to use incremental analysis, could a political system dispense with partisan mutual adjustment? Clearly not, since interaction can be useful in many situations where even incremental analysis is too time-consuming or otherwise is impracticable.
9. One can dispute whether the savings and loan problem should be seen as an outcome of partisan mutual adjustment or as a failure actually to utilize partisan mutual adjustment.
10. This is a simplified version of a larger set of questions, among others: (1) How can analysis be made helpful to individual partisans in playing their interactive roles; (2) How can analysis be made more helpful to interest groups and other political aggregations; and (3) How can the political system overall endeavor to develop the capacities of individuals and groups/organizations for undertaking and using helpful analysis?
11. Obviously, many partisans have goals other than social problem solving, for which they neither need nor want a decision strategy that promotes intelligent problem solving.
12. It does not substitute for it -- indeed, there is no effective substitute for disjointed incrementalism (or for intelligent trial-and-error) in most decision contexts, as far as we have been able to think the matter through.