The Eyewitness Fallacy: Are Studies of China Best Done in China, or the British Library?

Ethnographers love to travel. They will always assert that travel is necessary to understand a culture. You need to travel, to feel the culture. And without such exposure, we reason, what is written is less valid, because it cannot possibly be written with the critical perspective that local context provides. Or as Bronislaw Malinowski himself once wrote, field observation is necessary “to grasp the native’s point of view, his relation to life, to realize his vision of his world.” On top of that, fieldwork is the initiation ritual that gives academics “street cred” when preparing lectures about places exotic to their audience.

I like to travel too and, truth be told, am a sucker for Malinowski’s point that you need to get off the metaphorical mission verandah and into the village if you are going to understand what is going on in another social world. I put this into practice by encouraging my students to study abroad, join the Peace Corps, and seek out any and every opportunity overseas. So imagine my discomfort when I came across a comment in Cultures Merging: A Historical and Economic Critique of Culture by the historical economist Eric Jones.  Jones points out that “straining for street cred can lead to the ‘eyewitness fallacy’ in which foreign travel substitutes for deeper inquiry.” And that “one can learn more about [China] in the British library than by visiting the country…the best ticket is a library ticket, because things may be found in books that are not apparent on the ground, and books offer more ideas than most of us can dream up for ourselves” (pp. 33-34). Huh, could he be serious? Whaddyamean that a library ticket is better for understanding culture than an airplane ticket to Beijing? What about thick description? Emic and etic perspectives? The deep understanding of culture that comes from being on the ground? And of course the awe that casual mentions of your last malaria attack bring in the antiseptic developed world. I bet Jones never had malaria, so what kind of street cred can he possibly have?

But then I read further, and I found that Jones is not only uncomfortably correct, but in fact had a really good point to make about the relationship between field experience and the deeper inquiry best done in the library. This is because the individual participant observer’s view is always limited to the contacts they personally make, meaning that our personal contacts limit the ideas that we can dream up. How can a single observer, then, write about a society as vast as China (population 1.2 billion) based only on what they themselves see? Indeed, even tiny Liechtenstein (population 35,000) is too big. This is because even the best participant observer can come in contact with only an extremely limited number of people on their own. Jones went on to point out that libraries (and presumably the massive electronic databases that are their descendants) are a much better way to get to know a country—you come into contact not only with the people you know, but with many times that number as well. What is more, you are not limited to the views of your own friends and acquaintances, but can delve into those of people with whom you are not familiar, and even those who are dead.

The really embarrassing thing for those of us who romanticize the importance of travel is that much of the world’s great literature—and social science—has been generated by library jockeys. Indeed, Jones made his point particularly well by pointing out that one of the leading translators of Chinese poetry, Arthur Waley, never went to China, because he wanted to protect his “personal image of the scene.” Better known is Jules Verne, who wrote fantastic stories about the world without leaving France. Karl Marx did the bulk of his research in the British Library and various archives. Max Weber wrote much about the Protestant Ethic of the United States before leaving for his first (and only) trip to the United States. Charles Darwin never went back to the Galapagos after his only visit while on The Beagle, either.

The problem is that I still like to travel, as does, for that matter, Eric Jones, who himself has a well-used passport. And travel does continue to shape my thinking about the world, and I still encourage students to leave the United States and experience the world. Travel can still be a good corrective to worlds imagined up in the library. But does an airplane ticket really replace a library card? No, not yet. And neither does being an “eyewitness,” or even surviving malaria, automatically create a more valid viewpoint. Indeed, on the critical variables of wisdom and validity, I am afraid that the library ticket still trumps the plane ticket until proven otherwise.

Reference

Jones, Eric (2007). Cultures Merging: A Historical and Economic Critique of Culture.

Posted at Ethnography.com November 8, 2008

Rants, Ranting, Flame Wars, and the Like

Most of us like to rant now and then.  Usually we do this in the quiet of a bar, with the assumption that as long as we never run for political office, the rants stay in the bar.  But with the invention of the world wide web, there are new parameters to the dissemination of rants.  Witness what has happened here on www.ethnography.com during the last week, where Mark Dawson shot his virtual mouth off with the rant right below this posting.  Witness too the responses over at zeroanthropology.net.  Two guys in virtual bars a continent apart rip into each other, calling each other “moron” and “bigoted” across cyberspace, while the rest of us vicariously and anonymously enjoy the fireworks.  The good news for www.ethnography.com is that the two rants by Mark Dawson during the last month or so have sent the hit rate, the thing that counts in cyberspace, through the roof.  His first successful rant was an April Fool’s joke about the dissolution of the AAA, and in May there was the “butterfly” rant.  It seems that some people like rants much more than ethnographic commentary; I guess it takes us back to when we were eight years old.  In contrast, Mark has done some enchanting writing about the ethnography of clowns, and about a girl’s picture on his bedroom dresser, which has attracted fewer than 100 hits even after three years.  All people seem to care about are his rants—which can go into four digits within a few days of posting.

Rants by definition are rooted in opinion and emotion.  They are not logical or analytical.  Good rants make us look at the ridiculousness of life.  As Max Forte has implicitly pointed out, Mark Twain was a great ranter.  On the other hand, bad rants make us roll our eyes and mumble “there he goes again.”  Mark did this for me last week with his first rant about Anthropologists for Justice and Peace.  The rant was emotional and made a big deal about other people who were making a big deal over not much.  In other words, there was ranting about others’ ranting.  Big deal.  This type of rant is common on talk radio.  If you want to hear more such ranting from the right, I recommend Sean Hannity, Rush Limbaugh, and Glenn Beck.  On the left you can go to a Michael Moore movie.  Depending on your political views, you will find them funny or not (for the record I typically put on rock and roll when Hannity intrudes into my evening commute).

But to Mark Dawson’s credit, he caught himself in a boring rant, and posted a mea culpa about butterflies and the Anthropologists for Justice and Peace.  This riposte in my view was a really good rant, and had me laughing.  I laughed because it made more general fun of cultural anthropology’s tendency to put its own political views at the center of the discipline.  Max Forte has in turn responded with an astute and thoughtful paragraph about the contagion of laughter, and what it might (or might not) mean about the one person in the room who is not laughing.  If you want to read it, scroll down into the comments section of Forte’s blog—it is thoughtful.

Anyway, to stick to Mark’s version of ranting, I have seen the political self-absorption described in Mark’s rant in any number of disciplines in the academic world, and agree that it is a great thing to make fun of.  Much such ranting is on the left, but over in the Business and Engineering schools, there are plenty of people doing it on the right.  Perhaps I like hearing cultural anthropology made fun of because the condition is worse there, but I doubt that it is any worse than in Physics, Business, English, Biology, Sociology, or anywhere else.  Or maybe I enjoy seeing cultural anthropology made fun of for a more selfish reason, i.e., because my own application for graduate study was rejected in 1987-1988.  Whatever. Like I mentioned earlier, rants are not about analysis, and certainly not about self-analysis.  But, speaking of Mark’s butterfly posting, judging from the hits the site has taken since the revised version was posted last Wednesday, lots of people are laughing with us, since they have been linking it to their Facebook accounts to share with their friends and family.  In the blogosphere this is a definition of success, so whoop-ti-do, and good for Mark.

I will admit to wishing that my more academic and boring comments on www.ethnography.com would be a bit more popular.  I would really like it if readers posted them to their Facebook accounts like they do the rants that Mark writes.  For that matter, Mark would appreciate it if you read his ethnography of clowns, and the piece about the girl’s picture on his bedroom dresser.  But fair warning: such posts tend to describe ethnographic techniques and research methods, cite guys like Erving Goffman, and talk about the British Library rather than ranting about morons, fascists, and bigots, words which I think should be excised from ranting vocabulary.

Bottom line: such serious ethnographic postings get far fewer hits than rants.  All I can hope for is that Mark’s rants, besides making some of us laugh, point people to the more serious and boring stuff that Mark, Cindy, Donna, Jennifer, and I have posted to www.ethnography.com over the last 5 or 6 years.  But I have little hope.  In our post-modern world rants work, and Malinowski doesn’t.  Just ask Glenn Beck over at Fox News.  He never cites Malinowski!


“Teach Like You Do in America,” While Still Doing it the Tanzanian Way!

The first time I was told to “teach like you do in America” was in 2003-2004 in Tanzania, where I was a Fulbright Scholar in the Sociology Department at the University of Dar Es Salaam (see Waters 2007). UDSM is a large, sprawling African university, spread across “The Hill” near the Indian Ocean coast. UDSM prides itself on having schooled presidents from Tanzania, Uganda, Congo, and South Sudan, and on its many graduates who played critical roles first in the decolonization of Africa, and now in the political leadership of many countries.

When I was at UDSM, the university suffered from the common shortcomings of African higher education, including old facilities, limited computer capacity, a dated library collection, inadequate faculty staffing, low salaries, and the occasional strike by students. And despite UDSM’s record of creating much of eastern Africa’s elite, it made little dent in the ranking systems highlighted by The Economist. After all, creating a future for an area of the world that is growing rapidly is not a metric in such ranking systems.

The pedigree of UDSM in 2003 was inherited from both the British colonial rulers and, more importantly, the rapidly expanding Tanzania of the 1990s and 2000s, when ambitious students were swept into the university far faster than faculty were hired. In this context I was told to “teach like you do in America!” But I was told it would also be nice if I included the revolutionary Frantz Fanon, who wrote Wretched of the Earth, on the reading list for my Race and Ethnicity class (I was also asked to include Marx, who some of the better-read Tanzanian graduate students insisted was not an atheist!). Fanon fortunately gave me an African example, which was far better than “teaching like I did in America”; that would have meant illustrations rooted in studies of U.S. American minority groups, which lacked resonance for my east African students.

Tanzania, certainly, has ethnic divisions, based in religion, merchant minorities, and most salient of all, “tribal” identification. But tribal identification was tricky for a foreigner to navigate in 2003, because during the pre-1961 days of British colonialism, such identities were a basis for political, legal, and professional discrimination. And so tribal identification was “banned” in independent Tanzania, although of course such identities persisted, and do persist. But how to talk about this in a 90+ student race and ethnicity class? Indeed, when I first raised the issue of tribes, I received another visit from assertive students pointing out that tribes were an inappropriate subject in Tanzania, since the categories no longer existed and “we are all Tanzanian.” It was nevertheless pointed out that I was free to use east Africa’s merchant minorities (Arabs and Indians) as examples. This was particularly the case, I learned, if I reinforced the stereotypes of a student body steeped in family lore about how the greedy Arab and Indian merchant minorities took advantage of black Africans. And they still insisted on carrying my briefcase and books!

But for me, the most difficult task in the Tanzanian system was managing the large classes in a hot, humid climate using blackboards with dusty chalk.  There were no computers in the classroom, nor could I distribute course materials by email. Everything was done with a blackboard and a piece of chalk, the dust turning to chalk-mud on my skin and clothing in Dar Es Salaam’s muggy climate. Projecting an Excel spreadsheet, much less requiring students to access computers, was out of the question. The culturally appropriate t-test (How many spoonfuls of sugar do males and females like in their tea?) I worked out on a dusty blackboard, and students copied, copied, and copied with pen and paper.
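For readers curious what such a blackboard computation looks like, here is a minimal sketch of a pooled two-sample t-test in Python. The sugar-spoonful numbers are invented purely for illustration; only the formula is the standard textbook one.

```python
# A minimal sketch of the blackboard-style two-sample t-test described above.
# The sugar-spoonful data below are hypothetical, invented purely for illustration.
import math
import statistics

males = [2, 3, 3, 4, 2, 3, 4, 3]      # spoonfuls of sugar preferred in tea
females = [1, 2, 2, 3, 2, 1, 2, 2]

n1, n2 = len(males), len(females)
mean1, mean2 = statistics.mean(males), statistics.mean(females)
var1, var2 = statistics.variance(males), statistics.variance(females)  # sample variances (n-1)

# Pooled variance and the classic two-sample t statistic
pooled_var = ((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2)
t = (mean1 - mean2) / math.sqrt(pooled_var * (1 / n1 + 1 / n2))
df = n1 + n2 - 2

print(f"t = {t:.2f} with {df} degrees of freedom")
# Compare |t| with the critical value from a t-table (about 2.14 for df = 14
# at the 5% level, two-tailed) to judge whether the difference is significant.
```

On the blackboard, of course, the same arithmetic was done by hand, step by step, which is exactly what the students copied with pen and paper.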

My classes were in large lecture halls—remnants of an impressive 1970s-era building boom—including an architectural masterpiece, Nkrumah Hall, which is featured on the back of Tanzania’s 500 shilling note. I gave just two tests, far fewer assignments than I give in the United States, where demands for student work in the form of homework sets and quizzes are considered to be pedagogic best practice. Following UDSM regulations, these tests comprised 40% of the overall grade, with a final exam worth 60% (in comparison, in California my final assignment is worth 25% or less). All assignments were written in longhand and needed to be hand-graded—no machine grading. I read every exam in my 400+ student social statistics class.

Student academic culture at UDSM was different as well—students came from diverse areas of Tanzania, and were supported financially by extensive family networks and a government loan system for the majority of students who did not have enough money to attend. Students were older than my American students, and certainly had less money—no cars in the student parking lot! The rich Tanzanian student might have a scooter. Tanzanian students also had their own study rhythms, with a strong emphasis on collaboration, which some of my expatriate colleagues defined as cheating. But collaboration also meant that in the muggy evening, when the weather cooled off just a little bit, students gathered under the electric street lamps, where one student read aloud from one of the few textbooks available, while the others listened. The culture of the university—and the future of Africa—emerge from such gatherings, more so than from my “American-style” teaching.

Student finance is what led to a student strike—a phenomenon unheard of in the United States in recent decades. Students receiving the “monthly” loan payments used them to purchase food and pay for on-campus accommodation. Payments were frequently late—which meant that students might start eating less food later in the month. How did I know this? The unspoken cultural cue was that the males started wearing neckties in the sweltering heat as meals became fewer—the ties, it was said, distracted attention from sallow cheekbones.

One morning in May 2004, I went to class as usual. But very few students showed up, because a student strike protesting policies regarding repayment of student loans was scheduled that morning. At 9:01 a.m., we heard the sound of the rushing strike coming, and my students politely asked to accompany me to my office—they told me that staying risked a beating from the striking students (for a description of a similar strike see Ernest 2011). A strike meant no classes, period, and striking students cleared the classrooms by waving tree branches. The university administration responded by summarily closing the university that afternoon, an order that was enforced by police on campus, with help from the army. Marching strikers were blocked from going into town on that hot day by tear-gas-wielding troops from the army’s “Field Force.” A whiff of tear gas later, I simply settled down…to mark stacks of papers. The shut-down lasted about two weeks, as I slowly made my way through the stacks of sweat-stained papers on our dining room table.

The final surprise in UDSM culture came as I prepared and administered my finals in late June. The course that I remember most clearly is the social statistics class. Over 400 students showed up to take the final, a grading task I was dreading. And then, surprisingly, the finals were whisked away from me—one of my Tanzanian colleagues did the first pass, which was then reviewed by an independent outside reviewer from South Africa. Unlike in the United States, professors in Tanzania do not have the final word on grades. Rather, grades there are the product of a consensus. In this way Tanzanian faculty hold themselves to internationally validated academic norms, in ways that professors in the United States do not.

The above is from my recent article published at Palgrave Communications: (2015) “‘Teach Like You Do in America’: Personal Reflections from Teaching Across Borders in Tanzania and Germany.”

Fatuous, Naïve, or Bold? The Wonderful World of Peer Review

 

Fair warning from an anonymous peer reviewer of one of my academic articles…

The author is hampered by an inaccurate, naïve, and highly simplistic understanding of the basic principles…which leads him to make ludicrous statements like the following…

Yes, that’s me: inaccurate, naïve, and highly simplistic! And so forth. If you share that sentiment, do not read further.

I posted a blog about peer review for the first time in July 2008 after being pummeled in the peer review process. Some anonymous yahoos out in peer-review land accused me of the above transgression and more. What can I say? Only that someone else later wrote the following about the same paper:

This is a strong paper that makes some interesting connections between advances in contemporary neural science and some early observations from America’s first sociologists. While it treads ground familiar to anyone who has taken introductory sociology (elementary patterns of socialization, affectivity as a social product, empathic understanding), the paper marries this more familiar work to recent ideas emerging from neural science. This makes it a novel contribution. In particular, the claim that sociologists have known of the imminently social character of human beings all along and didn’t need fMRI to discover it, strikes me as a bold one, but a claim that is worth making, especially since it reaffirms the value and relevance of sociological concepts to those beyond the discipline’s boundaries.

But even the journal receiving that review rejected the paper. My neural science paper may have been bold, but that review was not enough to get the paper accepted; for a long time, reviewers and editors were more into the “naïve” evaluation, and the paper went down in flames repeatedly. This phoenix of a paper actually went through four or five years of peer reviews, in the process collecting a range of laudatory and insulting reviews.

It is somehow believed that “peer review” is the gold standard of academic achievement. Really?

Here is what a couple of hotshot editors are reported by the non-peer reviewed Wikipedia to have said:

Drummond Rennie, deputy editor of the Journal of the American Medical Association and an organizer of the International Congress on Peer Review and Biomedical Publication, which has been held every four years since 1986, remarked:

There seems to be no study too fragmented, no hypothesis too trivial, no literature too biased or too egotistical, no design too warped, no methodology too bungled, no presentation of results too inaccurate, too obscure, and too contradictory, no analysis too self-serving, no argument too circular, no conclusions too trifling or too unjustified, and no grammar and syntax too offensive for a paper to end up in print.

Richard Horton, editor of the British medical journal The Lancet, said:

The mistake, of course, is to have thought that peer review was any more than just a crude means of discovering the acceptability—not the validity—of a new finding. Editors and scientists alike insist on the pivotal importance of peer review. We portray peer review to the public as a quasi-sacred process that helps to make science our most objective truth teller. But we know that the system of peer review is biased, unjust, unaccountable, incomplete, easily fixed, often insulting, usually ignorant, occasionally foolish, and frequently wrong.

But the assumption about the “gold standard” persists, even though well-publicized scandals involving well-funded high fliers in fields like physics, human cloning, and cancer research all indicate that peer reviewers at journals like Science and Nature are, if anything, sloppier than those at the social scientific journals I typically send my papers to. Indeed, so far as I know, none of the journals where I publish have such a sordid record (certainly SocJournal does not!), but the journals I send my papers to can indeed send me back nasty reviews.

Despite all this, I still think peer review does often add to the seriousness of academic publication. Plus, if you do not have peer review, as is often said, you are no better than a newspaper, a blog or (horrors) Wikipedia!

But that does not mean that peer review is always encouraging, nurturing, or even fair. Sometimes peer review is only tin beneath the gold plate. The cloak of anonymity permits insecure scientists the chance to level their artillery at potential competitors. Editors in turn do not always do their part by protecting writers from the more unreasonable attacks. Does this make for better science? Maybe. My own view is that in the long run peer review makes for a more careful and conservative science; whether that is “better” I’ll let you decide. But in the short run it often adds fuel to the insecurities of the most vulnerable in our midst—the graduate students, untenured faculty, and others who are kept at arm’s length by self-described anonymous tenured “gatekeepers.”

In short, peer review discourages challenges to the status quo, even though such challenge is what good science is about in the first place. Most crucially, writers without a thick skin are discouraged from pursuing ideas further (whether good or bad), all because some anonymous reviewer had a fight with their spouse or teenager that morning, and took it out in the peer review.

Scientific Publication—The Theory
But I still think that the ideal of peer review is good. The theory is that rational, unbiased, and anonymous experts evaluate the work of others to verify whether an idea is new, rigorous, and important enough for publication. You submit a paper to a journal, and then the editor selects unbiased experts within your discipline to read what you have written.

Anonymity is important to this process (ideally, neither the reviewer nor the reviewee knows who the other is), because there are friendship cliques (and elites) within the scientific community. Papers judged by editors as “possible for publication” are thus sent, without the name of the author, to reviewers selected for their expertise. The reviewers then submit their reasons for acceptance or rejection to the editor. Such reviews ideally entail 2-3 pages (single spaced) discussing the strengths and weaknesses of a paper’s data and argument, which are then sent anonymously back to the author. Often, suggestions are made about literature that is missing, irrespective of whether the paper is accepted or rejected.

Authors hope that this results in a “revise and resubmit,” though “reject” is more common. With revise and resubmit, a paper often has about five reviewers (plus the editor) who read and make anonymous comments. Because so many minds are focused on the development of the paper, the overall quality, rigor, and accuracy are often improved. Survive this, and you might get a final acceptance, which is important to an academic community that controls jobs, promotions, and the distribution of status.

Between first submission and the final arrival of a paper in print, months, and possibly years, may pass. This care is why your professor prefers to see you cite the American Journal of Sociology, American Anthropologist, or Social Forces, rather than Newsweek, CNN’s website, Wikipedia, or even Encyclopedia Britannica (Socjournal of course is in its own category, which will indeed be apparent to the PhDs graduating in 2020!). All of these sources may edit for content and style, though they may not go through the formal peer review process, which is so careful and conservative.

The result is a peer-reviewed literature which academics (and especially graduate students) pore over. The peer-reviewed literature is considered valid and reliable because it has been through a “rigorous” review process. Acceptance rates in the most prestigious journals are often less than 10%, meaning that only the peer-described “very best” is published. Often unsaid is that the rest is rejected but often submitted to a less prestigious journal, or perhaps finds its way into an “edited collection” prepared by a group of colleagues interested in a shared subject. Note that neither of these final two outcomes is all that bad, since they do indeed put a new idea “out there” for that diligent graduate student to find. Indeed, today “out there” includes the readily accessible world-wide-web open to everyone, while that prestigious publication is tucked behind a pay-wall accessible only to those with expensive library privileges. Still, the stamp of approval from a “prestigious” journal makes it more likely to be noticed by the “right” audience.

Scientific Publication—The Practice
Anyway, that’s the theory of peer review. As I indicated above in my 2008 blog, in 2007-2008 I went through the process with two separate articles and a book proposal four times in six months, and only sometimes did the process meet the ideal. The book proposal resulted in a contract, and eventually a book, Schooling, Childhood, and Bureaucracy (2012). The article on neural science was flat out rejected once within a few weeks, and then shortly after received from another journal a “rejection but you can submit again.” This “reject but resubmit” eventually resulted in another flat out rejection after another round of peer review. The third paper, about African history, was rejected, and the editor recommended I pay attention to one reviewer’s comments and submit to another journal. I did that, and it was published about a year later.

Altogether, the reviews during just those six months incorporated the opinions of six reviewers. Two reviews were brief, insulting, and without redeeming value. They dismissed my work in a few short lines. One was insulting, but made good recommendations about things that should be incorporated in the article. One was frustrated with my “sloppiness,” but the reviewer thought the paper was worth a “revise and resubmit,” which the editor did not give me. The fifth thought the paper was worthwhile, but needed to be fleshed out more, and the editor gave me the “reject but you can resubmit in a revised form.” The last was the “accept.”

In other words, three of the reviews were constructive, and reflect the very best of the peer review process. Two of them reflect the worst impulses found in the review process. The one which was insulting (called me naïve, etc.) still gave some good suggestions.

Here is a sampling, with some of my own ripostes, which six years later in 2014 still make me feel better!

…There is little that is based on original research and no substantial intellectual or theoretical content…I am sorry to be so negative, but this [paper] is simply a non-starter. (This comment was on the 40+ page African Studies paper, and the whole review was only about six sentences long. This reviewer has an ego problem and is lazy.)

The second review on the same paper was three pages long, and pointed out in excruciating detail a number of errors on my part:

Despite this rather frustrating sloppiness [which was pointed out in excruciating detail], I am willing to see the author revise and resubmit… (ok, ok, you got me this time…I will go back and fix things).

Comments on the neural science article included the following. First the extremely short dismissive review:

This leads him to highly fatuous arguments… (Not as fatuous as your silly review).

A second comment on the same paper:

The author is hampered by an inaccurate, naïve, and highly simplistic understanding of the basic principles…which leads him to make ludicrous statements like the following. (This review included some good references to what the reviewer thought were key to the discipline. I will cite the suggested references, but also note that they present an inaccurate, naïve, and highly simplistic understanding of basic sociological literature…which also leads to ludicrous statements. Except I will say this with more respect, and not anonymously).

The neural science paper was then resubmitted to another journal. After I fixed a number of the issues raised in the second review, I received the following comments back in 2010:

I’m very sympathetic to one of the paper’s central claims…but I don’t believe that the paper as a whole has a sufficiently clear and sustained focus… What exactly do the two ideas have in common (apart from a central metaphor) and how do they differ? What can we learn from the comparison… But to make a substantial contribution to this more general debate, it would need to canvas a range of examples,… and to break some ground; advance some new arguments or shed new light on old ones. (This comment ended in a rejection and resulted from the comment below from the editor. But thanks for the thoughtful comments!)

The editor in the rejection letter responded:

I agree with the reviewer’s opinion that the basic line of thought in this paper is interesting and plausible. But I think the reviewer is also probably right that these basic ideas need more sustained development… (Ok, you have a good point. I will do it, and in the submission to another journal incorporate some of the specific points raised—thanks for being encouraging even though this was not an acceptance!)

And finally, a note from the one acceptance out of the four submissions, on my book proposal about “Bureaucratizing the Child”:

I’m not sure if I have a plan to order things differently than they are currently ordered, but it strikes me as potentially a little awkward…(I think that this reviewer was probably right—but any type of acceptance after so much rejection makes me pretty happy!)

My own strategy for working with this range of commentary is to assume that anything complimentary is entirely correct, that suggestions for including other books as citations should generally be followed, and that any review that includes words like “fatuous,” “naïve,” or “ludicrous” means that I have a really good paper that justifiably ruffled feathers, and I should try again. As for the reviewer? That person is in need of psychiatric help.

What I like about Anonymous Peer Review
So there you have peer review, from the nasty to the constructive. If you are ever asked to do a peer review, I would urge you to avoid the nasty side—visit a therapist instead. Be constructive in your comments, even if your conclusion is to “reject.” Remember too, that many papers go through many iterations—papers are only rarely accepted on the “first try.” My own experience is that papers might be accepted on the second to fifth try. Or even the twelfth try.

My mirror neuron article holds my record, having been rejected by a motley collection of psychology, sociology, and biology journals between 2008 and 2012. Who would have guessed that it would eventually be accepted by a philosophy of science journal? But of all the papers I have written, in my view it is the most original—and perhaps this is why it was the most difficult to get published. First keystrokes were in 2007, first submission in 2008, and actual publication was in 2014!

Usually—though not always—the peer review process is a constructive part of developing a paper. There are many journals out there, and a rejection is sometimes the luck of the draw. How could the editor have known that the reviewer he met at a conference five years before had tortured frogs as a child, or was also going through a bad divorce? So ignore the comments about being naïve, simplistic, and ludicrous, which probably tell you more about the reviewer’s mental health than about the quality of your paper. Fix what is fixable, while recognizing that good papers by definition displease.

While peer review sometimes (but not always) eliminates some poor scholarship, in my view the greatest contribution peer review offers is its capacity to encourage and nurture good scholarship. Some of the more prestigious journals in sociology note this, telling reviewers that even though 90% of the submissions are not published, comments are important because eventually many papers are published elsewhere. What they don’t note is that they are also rejecting some of the best sociology, because of the inherently conservative nature of peer review.

Indeed, many of the most important and revolutionary ideas were first described in remoter areas of the academic literature. In part this happens because the papers were first nastily received at the prestigious “mainstream” journals which are so heavily vetted by the big shots. It is only after validation in the nether reaches of a discipline that the great ideas make their way into the more “prestigious” mainstream literature.

Which still doesn’t explain how fraudulent papers get through the “rigorous” process at places like Nature and Science. I still can’t figure out how the fraudulent writers managed to get their papers published when so many anonymous reviewers come after my papers with chainsaws!

 


 

Reference List

Waters, Tony (2014). Of Looking Glasses, Mirror Neurons, and Meaning. Perspectives on Science. Behind Paywall available here:

http://www.mitpressjournals.org/doi/pdf/10.1162/POSC_a_00152

Prepublication Version available here: https://www.academia.edu/5064752/Of_Looking_Glasses_and_Mirror_Neurons–Manuscript_Version

 

Waters, Tony (2012). Schooling, Childhood, and Bureaucracy: Bureaucratizing the Child. New York: Palgrave Macmillan.

Free Chapter Available Here. Also can recommend to libraries! http://www.palgraveconnect.com/pc/doifinder/10.1057/9781137269720.0005

 

Waters, Tony (2009). Social Organization and Social Status in Nineteenth and Twentieth Century Rukwa, Tanzania. African Studies Quarterly 11(1)

http://asq.africa.ufl.edu/files/Waters-V11Is1.pdf

 

Troping the Enemy: Culture, Metaphor Programs, and Notional Publics of National Security

By Robert Albro

American University

 

The Intelligence Advanced Research Projects Activity (IARPA) – established in 2006 in the spirit of the Pentagon’s DARPA to sponsor research for groundbreaking technologies to support an “overwhelming intelligence advantage over future adversaries” – is a little-known US agency that social and behavioral scientists (especially sociocultural anthropologists) should pay more attention to. This is because IARPA is notably social scientific in orientation, and has been developing, in specific ways and for use by the intelligence community (IC), concepts that US anthropology in particular is significantly and historically responsible for introducing to the social sciences, if in different ways: most obviously culture, its coherence and the extent of cultural consensus, and its relationship to society and to human agency.

At its inception IARPA was tasked with developing better ways, in USA Today-speak, to “help analysts measure cultural habits of another society.” And its portfolio continues to sponsor research intended to develop big-data-type tools to process the linguistic and cultural information of countries, societies, and communities of interest to US espionage. While there are anthropologists who work along the frontier between their discipline and the rapidly emerging computational social sciences, it is unlikely that many anthropologists would approach cultural analysis in the terms currently pursued by IARPA. The agency’s formulations of cultural problems likely strike most social scientists as well outside of, or at odds with, the standard or prevailing disciplinary usages of this concept, including the concept’s basic significance and what legitimately can be done with it. But it is often the case – and unfortunately so – that there is scant traffic of any kind between academic anthropology and the IC, even when there are clearly things to talk about, like the culture concept.

 

A Public Anthropology of the IC?

In the era of Wikileaks and Edward Snowden, journalists have increasingly sought to shine a light on “top secret America,” to borrow Dana Priest’s phrase. And public debate has in large part focused on the new circumstances of privacy (or the lack thereof), clandestine data collection, and the ethics of new largely internet-based and social media-derived means used by intelligence agencies to amass colossal troves of information while mining people’s online signatures. Much less often considered, if at all, is whether the sociological or anthropological theory – the tissue of ideas and concepts underwriting these programs – actually makes any sense.

Instead, the vast majority of attention is given to extolling and further exploring the possibilities for data collection opened up by new computational and social media technologies. Too often, wide-ranging and critically grounded academic discussion and debate has played virtually no part in how these programs are conceived and implemented. A lack of more substantive dialogue about the social science informing IARPA’s programs, and the possibility of skewed or flawed results built upon misguided or unexamined assumptions, is a serious problem with the potential to negatively and mischievously – but perhaps not altogether obviously – influence intelligence priorities in the US, and if indirectly, the country’s foreign policy footprint.

IARPA’s several culture-focused programs point to the need for more critical discussion among social scientists about IARPA-style social science and the related priorities of security and intelligence agencies, a discussion which could at once address and more trenchantly appraise the particular assumptions (and social scientific world view) underwriting such projects, their limits, and the ways that questionable or debatable concepts and practices with traction in the social science of the securityscape are potentially relevant, defensible, or ill-conceived. This is a conversation that should also include the IC itself. But, with a few exceptions, this is not a conversation that social scientists outside the IC are regularly aware of, want, or perhaps even know how to have.

When academic social scientists do address the social science of the securityscape, the prevailing approach is to take issue with the politics and ethics of social scientific involvement with the present version of the military industrial complex, advanced from a position well outside this work and often at a considerable distance from its specific details – and many of its implications. But we also need more grounded and zoomed-in discussion of the epistemologies, research designs, data, analysis, and conclusions drawn by this work, and of their implications, a discussion that takes account of the ways this realm of social scientific ideas and concepts also drives IC priorities and outcomes, sometimes constructively but perhaps at least as often problematically. If such discussions sometimes do take place, they need to be broader, deeper, more inclusive, and sustained.

A program to measure cultural habits suggests a quantitative approach to a hermeneutical problem, which, at the very least, takes for granted a very different conception of culture as a source of insight than the various ways that anthropologists usually engage with this concept. These differences are not trivial. “Culture” is a concept from which US anthropology notably retreated in the 1990s and which the discipline has continued to qualify in multiple ways, while for the IC interest in “socio-cultural factors” – often as cultural intelligence and as enlisted in exercises of prediction – has been notable since the mid-2000s, if to various ends. The reasons why the IC and academic anthropology appear headed in opposite directions vis-à-vis the culture concept would certainly be a timely discussion.

 

Metaphor for the IC

Several of IARPA’s programs have attracted at least some journalistic attention of late as well, such as 2011’s Open Source Indicators program. But here I consider instead IARPA’s Metaphor Program, also launched in 2011, because it is a particularly revealing example of the recent technologically-enhanced version of the cultural turn by the US intelligence community. Most simply, a “metaphor” is a linguistic relationship of similarity, where one experiential domain (the target) is understood by way of reference to another (the source). Astronomer Fred Hoyle coining the term “big bang” to refer to one theory for the origin of the universe is a case in point. IARPA’s program aspires to provide decision-makers with a more systematic understanding of the “shared concepts and worldviews of members of other cultures” by compiling a given culture’s metaphors and making these available to intelligence analysts.

My questions about this objective are several, if connected: As part of a larger IC project for culture, what does “metaphor” currently mean to the US intelligence community? And, given what “metaphor” means for the IC, how does this understanding influence the ways the IC might conceptualize specific cultures or foreign publics of interest? And what, in turn, might this mean for the footprint of the US intelligence community, as it offers policy decision-makers a particular account of global geopolitics, at least in part informed – if indirectly and in ways most likely invisible to any given decision-maker – by programs like this one?

IARPA’s solicitation for its Metaphor Program promotes the goal of a better understanding of “the tacit backdrop against which members of a culture interact and behave,” or the patterned “cultural norms” which compose the “worldviews of particular groups or individuals.” And metaphors are the program’s choice because they are both “pervasive in everyday language” and, IARPA assumes, metaphors “shape how people think about complex topics.” More importantly, IARPA understands metaphors to “reduce the complexity of meaning” because their usage is patterned.

As the program’s manager, Heather McCallum-Bayliss, observed, “Culture is a set of values, attitudes, knowledge and patterned behaviors shared by a group.” IARPA’s conception of metaphor is assumed to be a key to understanding cultures, in large part because cultures, in turn, are understood – channeling the ghost of Ruth Benedict – as patterned and shared group behavior. Such a preference in the broader military and security environment for disciplinarily obsolete but hyper-coherent conceptions of culture like Benedict’s is far from unique. And a consistent preference for such starting points is telling about IARPA’s objectives and the computational steps it intends to take to achieve them.

IARPA is investing in research on metaphors because it is convinced such research has the potential to uncover the “inferred meanings,” “conventional understandings” and “underlying concepts that people share,” thus allowing the intelligence community to gain better analytic purchase on identified “cultures of interest,” but more importantly for the agency, on the “decision-making and perception of foreign actors.” To this end, IARPA’s approach to metaphor is largely derived from one influential story about metaphor, most closely related to the species of cognitive linguistics associated with George Lakoff and colleagues. And Lakoff, it turns out, is, not coincidentally, a member of a research team now working to develop a multilingual metaphor repository with IARPA funding from its Metaphor Program.

 

Lakoff’s Tristes Tropes

So, first we need to know a few things about the Lakovian approach to metaphor, since there are other contenders in the scholarly field of trope theory. If beginning with his influential Metaphors We Live By, co-authored with Mark Johnson in 1980, in recent years Lakoff has also established a reputation as a public intellectual of sorts, applying his metaphor-heavy analytic hand to the US political landscape. In his most recent incarnation, Lakoff has used his approach to metaphor to support the progressive cause, and has often presented his work in the form of guides, handbooks and toolkits instead of as research. But, while Lakoff’s heart might lie with progressives, his conception of metaphor is deeply conservative, as I make the case below. And this has direct consequences for IARPA’s program, taking for granted as it does the Lakovian world view on metaphor.

Here’s Lakoff’s take on metaphor, in a nutshell. As he explains, conceptual metaphors – which typically employ a more abstract concept (e. g. politics) as a target and a more concrete topic (e. g. family) as a source – shape the ways we think and act, and underwrite a system of related metaphorical expressions that appear more directly on the surface of our language use, which Lakoff calls linguistic metaphors. If much more can be said about this, here’s the rub so far as IARPA is concerned: conceptual metaphors are the key for understanding how speakers – typically, members of the same “culture” – systematically map relationships between conceptual domains. Mapping in the Lakovian mode refers to the patterned set of correspondences that exist between source and target domains.
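To make “mapping” concrete, here is a minimal, purely illustrative sketch in Python of a Lakovian conceptual metaphor rendered as a data structure. The example, ARGUMENT IS WAR, comes from Metaphors We Live By, but the representation itself is my own shorthand for exposition, not Lakoff’s formalism and not anything IARPA has published.

```python
# Purely illustrative shorthand for a Lakovian conceptual metaphor;
# not Lakoff's own formalism and not anything IARPA has published.
from dataclasses import dataclass, field

@dataclass
class ConceptualMetaphor:
    target: str    # the abstract domain being understood (e.g. argument)
    source: str    # the concrete domain doing the explaining (e.g. war)
    mapping: dict = field(default_factory=dict)                # source -> target correspondences
    linguistic_metaphors: list = field(default_factory=list)   # surface expressions in everyday language

# ARGUMENT IS WAR, the textbook example from Metaphors We Live By (1980)
argument_is_war = ConceptualMetaphor(
    target="argument",
    source="war",
    mapping={
        "attacking a position": "criticizing a claim",
        "defending a position": "justifying a claim",
        "winning a battle": "winning the argument",
    },
    linguistic_metaphors=[
        "Your claims are indefensible.",
        "He shot down all of my arguments.",
    ],
)
```

The only point of the sketch is that, on this view, the correspondences form a stable, enumerable map, and it is exactly that enumerability which makes the theory attractive to a data-mining program.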

While there is good reason to assume that the map is not the same as the territory, one imagines that IARPA sees potential in such a mapping exercise because maps promise empirical predictability. Another probable attraction is that Lakoff’s work on metaphor has, in recent years, become increasingly slanted toward neuroscience. He now describes “neural metaphorical mappings,” where metaphors are “fixed in the brain” along “pathways ready for metaphor circuitry.” Lakoff’s marrying of cognitive linguistics to neuroscience has transformed a woolly term from the humanities – metaphor – into a building block for a new “neural theory of metaphor,” now presented as a scientific tropology, in ways conversant with a growing obsession across US military and security agencies with the potential of neuroscience.

 

Machines Learning Metaphors

IARPA’s Metaphor Program is, essentially, about combining emerging techniques and technologies in computational modeling with cognitive linguistic theories of metaphor like Lakoff’s. At the Proposer’s Day brief explaining its new metaphor program, IARPA described linguistic metaphors as “realizations of the underlying pattern or systematic association of abstract concepts” – a set of relationships IARPA assumes to be “defined by mapping principles.” IARPA would like to be able to data-mine online textual data on a large scale, as a “rich source for identifying cultural beliefs” about key societies of interest, and to develop new automated techniques to identify, map and then analyze the metaphorical language found in online native-language text. (I won’t take up here why online text – as a particular technological platform, set of expressive conventions, and kind of performance – is unlikely to be unproblematically representative of people’s cultural beliefs.)

What is critical for evaluating this project is making sense of the conviction that the relationship, for example, between a given metaphoric target and source (e.g. understanding “government corruption” as a “disease”) is conventional and predictably mappable; that unsupervised machine learning of such metaphor mappings is possible; and that this will then enable computational metaphor identification and categorization as part of a “metaphor repository,” a database IARPA would build and maintain for a given language, against which analysts will eventually and ideally be able to compare “real-life statements” to predict the intentions of people who may represent a threat to the US. (The agency has identified American English, Farsi, Russian and Mexican Spanish as initial languages of interest.)

For IARPA’s program to be successful, a basically Lakovian approach to metaphor has to be uncritically accepted as correct: linguistic metaphors, assumed to be representative and available in large numbers at the surface of online native-language texts, will be massively mined; their relationships of source to target, it is further taken for granted, will be able to be systematically and reliably mapped; these analogical maps, goes the reasoning, will enable identification of more fundamental conceptual metaphors among cultures of interest; and this will allow analysts to infer relevant cultural patterns informing the behavior of foreign nationals, and perhaps even help predict their likely decision-making on complex topics.
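To see how much weight each link in that chain must bear, here is a deliberately crude, hypothetical sketch of the reasoning in Python, reusing the “government corruption as disease” example mentioned above. The toy lexicon and frequency count stand in for the unsupervised machine learning the program envisions; this is a caricature for exposition, not a description of any actual IARPA system.

```python
# A deliberately crude caricature of the chain of inferences described above;
# NOT IARPA's pipeline. A toy lexicon stands in for unsupervised learning.
from collections import Counter

TOY_SOURCE_LEXICON = {"disease": "DISEASE", "cancer": "DISEASE", "plague": "DISEASE"}
TOY_TARGET_LEXICON = {"corruption": "CORRUPTION", "government": "GOVERNMENT"}

def extract_candidate_mappings(sentence):
    """Step 1: pair any target-domain word with any source-domain word in a sentence."""
    words = sentence.lower().split()
    targets = [TOY_TARGET_LEXICON[w] for w in words if w in TOY_TARGET_LEXICON]
    sources = [TOY_SOURCE_LEXICON[w] for w in words if w in TOY_SOURCE_LEXICON]
    return [(t, s) for t in targets for s in sources]

# Step 2: mine a (here, tiny) corpus of online text and aggregate the
# surface pairings into a frequency-ranked "metaphor repository".
corpus = [
    "Corruption is a cancer on the body politic",
    "We must cure the disease of corruption",
]
repository = Counter()
for sentence in corpus:
    repository.update(extract_candidate_mappings(sentence))

# Step 3: the contested leap, reading the most frequent mappings as a shared,
# predictive cultural pattern of the people who produced the text.
for (target, source), count in repository.most_common():
    print(f"{target} IS {source}: seen {count} times")
```

Everything after the counting is an interpretive leap: from surface co-occurrence to conceptual metaphor, from conceptual metaphor to shared cultural pattern, and from pattern to the prediction of decision-making.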

Lakoff on metaphor, in other words, has to be coded into the computational tools to be used to build the repositories before any such metaphors are even collected. And Lakovian-type metaphorical maps seem to be the extent of IARPA’s data-mining game. This is to say, the theoretical starting point and technological requirements of IARPA’s metaphor program are largely determinative of what “metaphor” can mean in this case. But, since a scholarly consensus about metaphor eludes us, and since one could choose to emphasize other features of the diverse work of metaphor, IARPA’s choices tell us perhaps more about its own world view than about anyone else’s.

 

Metaphor through the Looking Glass

Each metaphorical mapping in a given repository, we are told, will be validated using metrics designed to confirm “native-speaker knowledge of the metaphorical relations.” Such an idea would work only if each language were a reliably monoglot standard, underwritten by conventional metaphoric associations recognized as such, and in the same ways, by any typical and competent native speaker. And so, each metaphor is at once culturally specific – let’s set aside that languages and cultures are not the same – but also culturally entirely conventional. Yet the idea of native competence is an increasingly suspect one among linguists.

IARPA’s choices have consequences. As with its consistently topographical conception of culture, where patterned cultures can be organically decomposed into constituent and mappable relations of figure to ground, IARPA seemingly relies almost entirely upon the conventionality of metaphor. A consequence of its peculiar approach to metaphor, and to culture as a limiting condition upon how people think, is that IARPA’s working conception of its notional publics – the people it is trying computationally to figure out – is seriously limited. IARPA is all in with a conception of metaphor, we might say, as stuck in the mode of mechanical solidarity, giving its attention to what are otherwise called “dead metaphors,” which, it can be argued, are in fact no longer really metaphors at all.

IARPA’s metaphor repositories would be cross-cultural collections of metaphoricized commonsense, that is, composed of already recognized and accepted metaphoric relations, informing the predictable parameters – maybe more accurately, limiting frames – of analogic reasoning of members of a given culture. This has the potential to be perversely conservative, since IARPA would understand decision-makers as drawing upon an identifiable cultural aggregate of figurative relationships which are always already assumed to exist. Such a situation makes of prediction, paraphrasing Yogi Berra, an exercise in déjà vu all over again.

Given Lakoff’s fashionable redressing of his approach to metaphor in the terms of neuroscience, and the ways a technologically-enhanced culture concept is being engineered by IARPA’s Metaphor Program as a difference engine keyed to cultural consensus, the conventional, and metaphoric persistence, it would not be hard to imagine analysts, as beneficiaries of this data and when considering how the people they study make decisions, adopting an analytic shorthand that refers to the “Russian brain” or the “Farsi brain,” in ways reminiscent of a Cold War era fascination with American, Russian or German “modal personality types.” For many anthropologists, research scenarios like these are troubling because they raise a Lévy-Bruhl-like specter of “how natives think,” and troubling because they are also aggressively “othering.” A cynic might go even farther and suggest that programs such as this one are developing technologies for “enemy-making.”

 

Metaphor’s Multiple Futures?

Ignored or sidelined in IARPA’s efforts are competing conceptions of metaphor. Paul Ricoeur, to pick one, emphasized the ways that metaphors creatively transform language by revealing new ways to conceive of a referent. Metaphors generate and regenerate meaning. Max Black explored the open-endedness of metaphors, which he understood as too unstable to function referentially, but as introducing previously unavailable meanings in the dynamic interplay of figure and ground. Donald Davidson remained unconvinced that metaphors could function propositionally at all, insisting instead that it was a mistake to assume metaphors possess any particular or stable “meaning.” These several conceptions of metaphor point to the limits of consensus around the conventionality of metaphor and to the ways that backward-looking exercises in mapping and archiving metaphoric relations can fail to anticipate the future.

To take a case in point: Genetics historically has been a field shot through with metaphors. Metaphors describing the work of genes are particularly ubiquitous, including: map, code, blueprint, and recipe, where DNA is understood to “write” the hereditary possibilities for our biological future. The biologist Richard Dawkins’s influential concept of the “selfish gene,” for example, promotes a gene-centric theory of evolution, where human beings are mere vehicles for successfully self-propagating individual genes, as the architects of natural selection. But the success of Dawkins’s selfish gene metaphor is beginning to obscure the changing meaning of “gene,” including a growing variety of technical usages.

Researchers now emphasize the idea of a “post-genomic” biology, where combinations of networks of less selfish and more managerial genes are also influential, where “writing” can be less important than “reading,” and the relation of heredity to the environment appears increasingly complex and dynamic. But there are as yet no convincing off-the-shelf metaphors to describe what we continue to learn about the behaviors of genes. In other words, even given the technical and highly shared vocabulary among evolutionary biologists, the shape-shifting of genes under scientific inspection eludes easy description. And whatever might follow the selfish gene story is still emergent as a set of metaphors that cannot be mapped without significant distortion.

If sharply divergent from IARPA’s starting point, what these several conceptions of metaphor share is an attention to the arguments at the center of culture, to the work of metaphor in social shape-shifting, and to the ways identity is always in motion in relationship to – paraphrasing William James – the blooming buzz of experience. They attend to the translational and problem-solving work of metaphor, and to the ways metaphor might animate new inquiry. Conceived in such ways, metaphors do not so much express similarity as create new relations among “unlike things.”

Accounts like these foreground the properties of metaphor as extensive rather than conventional, and as emergent rather than underlying. Concerned as they are with the ways that metaphors, in the words of anthropologist James Fernandez, are strategic predications upon the inchoate – that is, predications upon frontiers of life and experience that elude our ready classification – these offer alternatives to the conception of metaphor currently being reinforced in the social science of national security. And these alternatives run devastatingly counter to any possibility for a predictive tropology of the near future.

My New Book: Schooling, Bureaucracy, and Childhood: Bureaucratizing the Child

Please ask your library to order my new book, Schooling, Bureaucracy, and Childhood: Bureaucratizing the Child. It is about the bureaucratization of our schools and the commodification of our children, and about the paradox of how our humanistic dreams for schools clash with the cold rationalism of the bureaucratic order. A sample chapter is available from the British web-site of the publisher here.

The way book publishing works means that the hardcover version, which is designed for libraries, is released on October 16. The cost is $90 directly from the publisher, and $77 from Amazon.com. My hope is that a paperback designed for classroom adoptions and individual purchases will be out in about a year. That’s how academic publishing works!

I have worked on the book for the last four years or so. Despite the ponderous title, much of the book reflects my thoughts about my own education, and that of my children. Much of it reflects my frustrations with mass public schooling, but more importantly it puts the subject of schooling into the larger perspective of the sociology of education in the modern world.

 

China and Wikipedia’s Top 100 Lists

I went to Linyi, China, in June because a chance to teach in Thailand suddenly evaporated due to the May crackdowns on the “Red Shirts,” leaving me with under-utilized air tickets. So I asked a colleague to arrange an invitation to lecture at Linyi University, in her home town. She has always apologized for her home town, which, as a demographer, she points out is “not that important” even in Shandong Province, which has several cities more important than Linyi. As for Linyi itself, it only has about ten million people. Unimportant though Linyi may be, that kind of population should be enough to put it among the top 20 largest cities in the world, i.e. somewhere between New York City and Los Angeles! Alas, a search of Wikipedia’s lists revealed that while Linyi does indeed have ten million people, such raw numbers are apparently insufficient for getting it onto any of the top-20, or even top-100, lists. Apparently raw numbers are not all that such biggest-cities lists are about. China is big, but why shouldn’t a city of ten million make some kind of list?

So what does a city of ten million that makes none of the lists look like? The answer is that it really looks new. Proud citizens of Linyi drove us around at night and during the day to show off the new construction. Most remarkable were the multi-story apartment buildings, of which there were scores, if not hundreds. Indeed, there were at least 100 thirty-story buildings under construction, and expansive hopes that China’s rural poor will soon fill them. Hundreds of other buildings of between about eight and thirty stories were already filled.

Equally remarkable were the many public plazas, parks, and beaches constructed during the last few years. Sand was imported to turn miles of the river shoreline into public beach front. Public art was erected in many locales, along with an impressive suspension bridge, and artistic lighting added enchantment to the bridges and TV tower.

I lectured at a sprawling university that was either new or still under construction. Linyi University dates from 1941, but construction on the new campus I visited began in the 1990s, and students first arrived in 2002. Today Linyi University has 30,000 students. The ninety or so students I lectured to (in English) were attentive and eager to listen to what I confess became a lot of very dry sociology. Children of China’s rural areas, they were eager to identify how sociology could fix what they call the “economic gap” between rich and poor. Anyway, they laughed at most of my jokes.

Linyi of course is part of China’s policy of rural transformation, which is about the government’s attempt to address that “economic gap.” In yet another giant leap, the city planners of Linyi are hoping to revolutionize life for the rural poor of Shandong, this time by pouring them into the modern new skyscrapers. In doing this they hope to re-create some semblance of the old life by keeping village groupings together, while serving the very new needs of what is hoped will be a new industrial economy. This is of course a high-risk plan—many earlier attempts have foundered on the limits of planned central change, and the laws of unintended consequences. Whether China’s newest attempt at rural transformation succeeds will depend on places like Linyi, and their capacity to absorb the hundreds of millions still living in China’s impoverished country-side.

In the context of such scale, it is of course easy to forget the point my colleague first made: Linyi is not that big or unusual in the context of a larger China. Indeed, broader questions abound. What will a Linyi of 15 or 20 million people look like in 2025? Or, more important, how many Linyi-sized cities will there be in China? As this happens, not only will China change, but so will the top ten, twenty, and hundred largest cities lists on Wikipedia.

Or just maybe this is another case of China and the eyewitness fallacy I wrote about before?