Donald T. Campbell was a psychologist whose heyday was the 1970s. During this time, the belief emerged that society was a social engineering project that could be planned and evaluated. The general idea was that if you collected enough data, you could plan and control social change in a way that led to desired results. Economists from USAID believed this about economic development, military planners in Vietnam believed it, and sociologists in the War on Poverty believed it. But by 1976, Campbell wasn’t so sure…
The generation of social scientists Campbell critiqued ran around measuring poverty, illiteracy, disease, Communism, and other bad things. Thus in the 1970s you had Wars on Poverty, Smallpox, Illiteracy, Drugs, and so forth. There were also violent wars in Vietnam (for the Americans), and in Afghanistan (for the Russians). When I lived in Tanzania in the 1980s, the Tanzanian government had wars on Poverty, Ignorance, and Disease, all funded by international donors living out this paradigm. Planners in Washington, New York, Dar Es Salaam, and elsewhere calculated with statistical precision what was needed for victory in their “war,” and allocated government money to produce the desired victory. Their decisions were “data driven” and “evidence based,” to borrow two words common in policy making circles today.
Campbell was involved in such projects himself. He was so much a part of them that he wrote an unfortunately obscure paper, “Assessing the Impact of Planned Social Change,” which reflects on the psychology of planners. More interesting for this blog, though, is the fact that what he was really doing was taking the ethnographic temperature of number-obsessed planners. Campbell’s Law is as follows:
The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.
In Vietnam, Campbell pointed out, the quantitative social indicator was “enemy killed.” This measure, he noted, was soon corrupted: dead civilians were re-defined as “enemy,” and occasionally villages were invaded simply so that a unit would have a kill count that could be rewarded. Other social scientists have added examples of their own. Cardiac surgeons declined to operate on the seriously ill because such patients were more likely to die (duh). They did this because the state began issuing “scorecards” based on survival rates. Since the very sick were the most likely to die in surgery (or on their own), the doctors declined to operate on them and so preserved their high survival rates. Another example of Campbell’s Law comes from airline schedules. Airlines began to be scored on the basis of “on-time arrivals” in the 1980s. They responded by simply increasing estimated flight times, thereby driving up their “on-time” rates. Anytime you arrive early at a destination, thank Campbell’s Law; tailwinds probably did not have much to do with it!
Citing “Campbell’s Law” when critiquing the United States’ “No Child Left Behind Act” is something of a fad in education circles today. This is because high-stakes testing in reading and math drives decision-making about student promotion, teacher retention, and school closures. Thus, you get extensive test prep of students in reading and math, with a corresponding dilution of subjects like history, science, music, and the arts, which are not tested. And of course, the ultimate vindication of Campbell’s Law is the cheating scandals by schools and teachers concerned only about “succeeding” on test day.
Campbell’s Law also applies well to other bureaucratic endeavors, especially those of applied social scientists. My own experience is in Tanzania, where projects to assist refugees or villagers were created with quantitative goals and objectives to satisfy donors, independent of what was needed or wanted by the villagers (or refugees) being assisted. My favorite version of Campbell’s Law was the many broken diesel-powered water projects that littered western Tanzania in the 1980s and 1990s. Indeed, a book called Watering White Elephants was written about this phenomenon. Many of these projects were funded with the WHO’s bureaucratic “Health for All by the Year 2000” goal in mind. Quantitative reports duly showed that the project goals (villagers served, villages with pumps, etc.) had been met, even as the pumps themselves sat broken.
For the refugees I worked with in Tanzania between 1994 and 1996, a good example was the numerical goal established for birth control in the Rwandan refugee camps. This was right after the Rwanda genocide, and the UN was concerned about exploding birth rates and the costs they would impose on its child health programs. The result was a bright idea: condoms all around! In 1995-1996, four million condoms were distributed in record time by a USAID program, a quantitative result trumpeted at NGO meetings I attended. (Quick: four million condoms spread across 450,000 refugees means that USAID is assuming what about the frequency of refugee sex?)
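Since the parenthetical quiz is just arithmetic, a quick back-of-the-envelope sketch makes the point. The split of the camp population into adults and couples below is my own hypothetical assumption for illustration, not a USAID figure:

```python
# Rough back-of-the-envelope check on the figures above.
condoms = 4_000_000
refugees = 450_000

per_refugee = condoms / refugees  # condoms per person in the camps
print(f"condoms per refugee: {per_refugee:.1f}")  # ≈ 8.9

# Hypothetical assumption (mine, not USAID's): half the camp
# population are adults, paired off into refugees / 4 couples.
couples = refugees / 4
per_couple = condoms / couples
print(f"condoms per couple: {per_couple:.1f}")  # ≈ 35.6
```

Roughly nine condoms per refugee (of any age) in about a year, which suggests the target was set to look impressive in reports rather than derived from anything about camp life.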
The visiting anthropologist hired to evaluate the program, though, pointed to the corruption of the condom distribution program. The condoms, she found, were not used to prevent births, which continued to rise quickly, even nine months (or more) after the big distribution. Rather, the condoms became a marker of prowess for young men, who cut off the end of a condom and wore it as a bracelet to represent conquests. Campbell’s Law wins again!
Indeed, there is an ethnographic fieldworker’s version of Campbell’s Law, written by the sociologist Teodor Shanin in 1966 at the height of the Cold War. Central planners in Moscow, Washington, and Beijing were running around the world applying econometric models (Washington) or assumptions about central state planning (Moscow and Beijing) to Third World projects. The result was Campbell’s Law writ large: planners, with their emphasis on production targets, development plans, and so forth, created goals that implementers adjusted their programs to match. In the end, in places like the Congo, Vietnam, and Afghanistan, all the great powers were ultimately frustrated. Echoing Campbell’s Law, Shanin wrote about the corruption of quantitative social indicators in the following way:
“Day by day, the peasants make the economists sigh, the politicians sweat, and the strategists swear, defeating their plans and prophecies all over the world—Moscow and Washington, Peking and Delhi, Cuba and Algeria, the Congo and Vietnam” (Shanin 1966).
All of which points, in the end, to the strengths of the ethnographic method, since Campbell’s Law applies to quantitative measures, not qualitative ones. As long as ethnographers are the harmless fuzzballs on the wall, they are able to write about processes, interactions, relationships, and so forth that quantitative measures typically miss. After all, Campbell’s Law itself is ultimately an ethnographic conclusion about the nature of quantitative methods. The ethnographic method may not be grand, or easily adapted to managing large bureaucratic projects, but in its insight it can be used to describe the limitations of more quantitative projects.