This Week in Ethnography: Writing Live Fieldnotes With Social Media: Towards a More Open Ethnography | Ethnography Matters

This Week in Ethnography, the big news was Mitt Romney “using” the word culture but that news is already very well described by Jason Antrosio.

So I found another hidden gem that came out this week: a great post on “Writing Live Fieldnotes”.  It describes a technique that could solve a challenge I am facing in a research project where I will be tracking the behavior of a group of high school students. My challenge is to generate solid data on their entire lives without actually following them (minors) off campus.

TECHNOLOGY: I’ve used diary techniques elsewhere, but I fear high school students will be less reliable than the college students I studied earlier.  In the post below, Tricia Wang describes a technique that got me thinking about a solution to my problem.  Technologically, I’m considering purchasing a number of iPod Touches, distributing them to the subjects, training them in some basic observation and self-reporting techniques, and seeing what happens.

METHOD: Shirley Brice Heath was the first person I ever heard use the phrase “guerrilla ethnography,” at a talk she gave at the U. Penn Ethnography in Education Conference in the mid-1990s.  Basically, she took a group of high school students and trained them to extend her observations at a high school.

What I am thinking of doing is having my subjects read Tricia Wang’s post and follow her lead.  I’ve added the first few lines of her post here, but I urge you to read the entire thing.  There is real knowledge there!

Writing Live Fieldnotes With Social Media: Towards a More Open Ethnography

I just returned from fieldwork in China. I’m excited to share a new way I’ve been writing ethnographic fieldnotes, called live fieldnoting. I spoke about live fieldnoting in a recent interview with Fast Company that also featured a slideshow of my live fieldnotes. I want to elaborate on the process in this post.

At one point in time, all ethnographers wrote their notes down with a physical pen and paper. But with mobiles, laptops, iPads, and digital pens, not all ethnographers write their fieldnotes. Some type their fieldnotes. Or some do both. With all these options, I have struggled to come up with the perfect fieldnote system.

I have experimented with the Livescribe Pen, a regular old notebook, and a laptop. The Livescribe digital pen didn’t work for me because it’s really uncomfortable to use after a half hour of writing, and its dependency on digital paper makes it inflexible for fieldwork outside of the US and for long-term extended fieldwork (my review of the pen appears elsewhere). The notebook seems like the most practical solution. But I can’t seem to find the “perfect” notebook. Do I use a really small one that fits in my pocket? A medium-size one that allows me to write more? If it’s too big then it looks like a “notebook.” And what should this notebook look like? Does a black Moleskine look too nice for my fieldsite? Does it look too official? Does my notebook allow me to fit in with teens? But the notebook with bears and hearts that I use around teens doesn’t work for my meetings with government officials. And in the end, no matter what kind of notebook I use, I still have to type all my notes into Evernote. So using a laptop is inevitable, as all notes eventually end up there and are cleaned up there.

But the problem with a digital pen, notebook, and laptop is that they are all extra things that have to be carried with you, or they add extra steps to the process. If I forget to charge the Livescribe or if it runs out of batteries, then I would have to remember to pack a backup notebook and pen. If I was in an area where I couldn’t get electricity, then I couldn’t charge my laptop or pen. If I’m in a situation where I can’t take out a notebook because it would distract from the situation or it would be too cumbersome, then I would have to memorize everything.

I still haven’t found the perfect fieldnote system, but I wanted to experiment with a new process that I call “live fieldnoting.”

via Writing Live Fieldnotes With Social Media: Towards a More Open Ethnography | Ethnography Matters.

Methods to Mind: Long or Short Term Approaches to Ethnographic Research

The following post from culturnicity got me thinking about the ongoing grudge match between those who demand a year in the field [imagine someone with a long beard in an arm-chair saying "to record a full record of experiences during the ecological annum"] compared to those who are more focused on the content and outcomes of the project.  In Ethnography as participant listening, Forsey drives this point home with the following point:

Defining ethnography according to its purpose rather than its method encourages participation in, and engagement with, the lives of our fellow human beings (Forsey, 2011: 569).

In the following post, Casey has moved beyond the debate by focusing on a comparison between what he calls a “Team-Based Categorical Model” and a “Team-Based Geographical Model”.  This is a great example of Forsey’s point and I urge you all to read the entire post, a tidbit of which follows:

Ethnographic Research – Long or Short Term Approach?

Traditional ethnographic research takes a long time, but the time is necessary for an accurate and in depth understanding of the culture or phenomenon under observation.  When an anthropologist embarks on a research project in a totally new area, a year of language study is often needed before detailed ethnographic research can begin.  Fieldwork often lasts 2 or more years.  Such longer term approaches to ethnographic research are crucial for accurate understandings of culture.  These published ethnographies have been the basis for many of the major theories that have arisen in cultural anthropology over the last 150 years.

While there is tremendous value to the long term approach to research, there are instances when a short term model can produce accurate and helpful results.  For example, anthropologists are more readily hired as consultants by companies looking for specific and focused research on a particular aspect of society.  Other times, an anthropologist may be employed to give a general overview of a culture with specific findings and suggested strategies for doing business in the area.

I’ve been involved with several of these short term ethnographic research projects.  In some instances I was the sole researcher.  In other instances I was part of a team commissioned to research and report on some culture or aspect of culture.  I’ve found that the short-term, team approach to ethnographic research can be a very helpful, time efficient means of understanding a culture.  Look at it this way – one researcher can spend two months in an area and put in about 400 hours of research.  A team of eight can spend less than a week in an area and put in the same number of man hours.  In this post, I want to give a brief overview of two approaches to the team based research method, along with pros and cons of each.

via culturnicity | Thinking about culture and all the ologies and icities that go along with it.

This Week in Ethnography: Second Digital Ethnography Week _ Trento 17-21 sept. 2012

There is not much to report in “This Week in Ethnography”, a segment I am inventing as a means of reporting on the global pulse of this most important subject.  The one item that jumped out of my feeds at me was that I missed the application deadline (of July 22, 2012) for the:

Second Digital Ethnography Week _ Trento 17-21 sept. 2012

The second “Digital Ethnography Week” (DEW) is an intensive week focused on the study of digital methods and digital ethnographic approaches. The DEW is intended for Ph.D. students and researchers interested in developing advanced methodological skills to account for the digital in contemporary social life.
As their website reports, this looks like a great opportunity for aspiring digital ethnographers.
Ethnography and Journalism
Be warned: The Data Journalism School in Rome is involved in this effort.  I know the conflation of ethnography and journalism is shocking to some. During my graduate training, I recall one of the senior faculty members of my anthropology program criticizing a student’s work by referring to it as “journalism”.  The context for this event was a thesis draft presentation based on ethnographic fieldwork in a doctoral colloquium.  I believe the Professor’s intention was to imply that the student was “only out for a story” and had “little theoretical or methodological reasoning” for how they had generated the data they were reporting on.
The irony of this situation was that this was a program in applied anthropology.  In any event, let us not “throw the baby out with the bath water” or in this case, throw good data or solid technique out with the researcher using it.  Data Journalism is a fantastic means of getting at reality.  For example, check out the following TED talk by David McCandless: The beauty of data visualization and try to tell me that this “journalist” is “only out for the story”.

“Culture” in the Science Fictional Universe of “Big Data”

As the Obama Administration’s new “Big Data Research and Development Initiative” has made clear, the “big data” era is officially upon us. The term “big data” has been used in multiple ways, but most generally refers to the avalanche of “raw data” generated by the internet and other new kinds of data-capturing sensor and digital technologies. Or, as one big data guru more pithily put it, it is “all the stuff we do online” – and more. With the “big data revolution” comes unflagging optimism regarding more comprehensive methods for the collection of vast new stores of technologically-produced data, enabling the pursuit of previously unanswerable questions, and carrying the promise of breakthroughs in how we access and understand the information composing our world. Time will tell.

The turn to “big data” represents a potentially exciting set of developments along multiple frontiers of advanced supercomputing, new software tools, other information collection technologies such as GIS, database management systems, and massive data sets, such as the exponentially expanding corpus of information generated by Web 2.0 social media. Government funding has followed a corporate lead, where in recent years the likes of Google, Facebook, Apple, and Amazon have turned a pursuit of “big data” into a major business proposition focused on gathering increasingly nuanced information about consumer behavior to better service and target customers. Making sense of the implications of all this will preoccupy us for some time.


As the press release from the White House Office of Science and Technology Policy explains, “big data” projects hold great promise for “scientific discovery, environmental and biomedical research, education, and national security.” The very early returns on “big data”-derived research are already turning heads, from predicting political upheavals like the Arab Spring, market volatility, or new epidemic outbreaks, to mapping emerging cultural trends or the evolution of languages.

And the attraction of “big data” hits a number of sweet spots. Most generally, “big data” is now carrying the torch for the whiz bang potential of the next Silicon Valley-derived infotech revolution for enhancing “innovation” – whatever that might specifically mean. For universities, it is a readily available advert for a more technologically-enabled higher education, which also happily relieves budgetary pressures to expand the physical holdings of campus libraries and other facilities.

“Big data” also has mass appeal: leveraging big medical data promises to help fix our broken healthcare system by making it less expensive; it has been presented as the newest super tool to combat global poverty; it also helps to power the imagination of urban planners hoping to incentivize new creative economies; for the security community, it beckons by offering “crystal ball”-like certainties of greater information dominance and more precise prediction; and in the spirit of C. P. Snow, it confers legitimacy on the so-called digital humanities in a cost-conscious era, as an apparent collaborative bridge for the “hard” scientists to bring more rigor to their colleagues in the humanities and “soft” (or social) sciences. Among other frontiers.

The “big data” train has left the station, with all the concomitant hyperbole and hoopla that so often appears to accompany promising new developments in science heralding paradigm shifts in research. However, from my perspective missing from the enthusiastic rush to adoption is a critically grounded accountability regarding what big data advocates are claiming as opposed to actually doing: attention not only to the benefits but also the costs, to its potential but also its limits. Unrelenting techno-futurist optimism does not nurture this.

Trained as a sociocultural anthropologist, I have been most interested in how “big data” has intersected with efforts to better leverage sociocultural information to different ends. Most notably, this includes the Google-powered development of the new “field” of culturomics, elliptically defined by some of its founding practitioners as “the application of high-throughput data collection and analysis to the study of human culture.”

This sounds promising, if not altogether clear. The novelty of culturomics is its potential “to investigate cultural trends quantitatively” by generating previously hidden “suitable data” from hitherto unavailable massive databases. Despite this potential, breathless claims about the unprecedented access offered by culturomics to our own cultural history or for the Isaac Asimov-style prediction of future cultural events have derailed more grounded attention to what the “culture” of culturomics actually corresponds to and what kind of knowledge it provides. More on culturomics presently.

Critically Engaging Data

In this era of teraflops, terabytes, and cloud computing, big data represents the future. But the field has so far also displayed a notable lack of interest in addressing what the term fundamentally references, what its relationships might be to other sorts of disciplinary and scientific pursuits, what these related developments might helpfully enable but – perhaps more importantly and most neglected – what “big data” either obscures or cannot meaningfully address.

The biggest problem with our conversation so far about the potential of “big data” efforts is that we are spending too much time enamored of the “big” – the prospect of the unprecedented and vast volume and scale of the collection, organization, and processing of mostly digital information, primarily through new data mining applications that rapidly amass unique digital data sets – and virtually no time thinking about what the “data” part might consist of – what the data essentially are. Often exhibiting a naïve digital positivism vis-à-vis “data,” in many ways the turn to “big data” is more like a return to the past. But we need to be much more scrupulous about what we mean by “data” here. What, in short, are the data of “big data” and what, basically, is their value?

What we mean by “data” for emerging “big data” fields like culturomics is an important question for a number of reasons. Big data projects are notably cross- or interdisciplinary. For example, the affiliated researchers at Harvard’s Cultural Observatory, where culturomics has been pioneered, include: several computer scientists and Google software engineers, mathematicians, evolutionary biologists, and one doctoral student in history.

Absent from the team is balance on the cultural end, or a range of disciplinary expertise likely to sustain fruitfully interdisciplinary back-and-forth, say, that might usefully problematize specific, perhaps directly competing, frameworks, perspectives, and characteristic forms of producing and evaluating knowledge, across different communities of computational and cultural research. Understandably, most computer scientists are at best only passingly aware of the characteristic methods and relationships to data among colleagues from the social sciences or humanities.

Its apparent “interdisciplinarity” is a big part of the enthusiasm the turn to “big data” has generated. Big data projects using computational techniques often involve carrying over methods from one disciplinary environment (e. g. the computer sciences) and applying them to often long-standing problems in other disciplines such as economics, hydrology, or in the applied humanities. Sometimes this is a good fit. But sometimes it is not. And, it is often hard to tell, since big data researchers often treat data questions as straightforward, with data presented as unproblematically readily available to collect and to manipulate.

However, when a computer scientist develops a new data mining tool to systematically harvest often vast quantities of online digital information, s/he is not simply collecting data. S/he is also carrying over specific assumptions about what “data” is, how it is identified and recognized, where it sits in a larger context or field of endeavor, how it is determined by an encompassing information ecology of concern to computer scientists, how it can be made legibly available for analysis, and what sorts of conclusions can be derived from it. We might say that this data carries a particular signature identifying it with its disciplinary source — a signature with technical, methodological, and meaningful consequences.

When asked about this, the Harvard team’s response was, “It’s irrelevant. What matters is the quality of the data…” But “data” is not all of a piece, varying simply in quality and quantity. Particular disciplines understand their knowledge production and their relationship to data in often starkly different – or even incompatible — ways. And culturomics relies upon a conception of data that makes particular sense for computer scientists but is not necessarily consistent with the ways different social sciences deal with the cultural data with which they work.

Different disciplines have historically specific relationships to data, which significantly express that discipline’s unique development and characteristic pursuit of problems. And “data” are not self-evident, universally fungible, or straightforwardly equivalent or comparable across these pursuits, say, in the same way as we might think of the circulation of currency in the global economy. But this is exactly how the NSF is talking about the “big data revolution.”

The data of “big data” are in fact a particular kind of data: largely digital in nature. And this has definite consequences. Early adopters of the techniques of culturomics are so far spending little time with the implications of this, instead opting to promote the seemingly limitless potential of such techniques. In part, the reason is because for them questions about data are more often than not technical problems to be solved (e. g. about building the platform architecture, writing computer codes and algorithms, or compatibility with one or another digital database) instead of more fundamental questions about the identity of “data,” the sources of knowledge, and – for culturomics – the relationship of culture to meaning.

Simply “plugging in” data collected and understood for use by one community of practitioners might, from another’s point of view, simply add up to: “garbage in, garbage out.” This problem can quickly lead to fundamental misunderstandings about what is being done with such work and about the potential it offers for better understandings of cultural questions.

Culturomics and Data

As the “big data” trend gains momentum, the concerns that have been raised have primarily revolved around two issues: privacy and transparency. On the one hand, debates have focused on the potential negative implications of the increased vulnerability of personal information as a result of the tremendous improvements in online data mining and technological surveillance. On the other hand, researchers have pointed to the lack of public availability of these massive data sets, often because they are corporately owned, which makes restudies or assessments of results based on these data almost impossible.

These are legitimate and important concerns, deserving attention. But, in themselves, they do not add up to a nearly robust enough discussion of these data. Culturomics is not the only “big data” front to apply comparable techniques to trying to make sense of sociocultural knowledge. We can also point to the rapid growth of attention to computational sociocultural modeling and simulation on the part of the security sector, which uses similar techniques. Given this incredible enthusiasm, much more critical scrutiny of these tools is required so that users can better determine their appropriate niche.

For the universe of culturomics, if we were briefly to characterize its “data” – to identify its particular disciplinary signature – we might point to a variety of factors. First, culturomics pursues a quantitative content analysis but on a colossal scale, using automated forms of collection derived from algorithms – computer code – designed to look for, and to sort through, particular properties of information already identified as a relevant data set, like Google Books, financial market indicators, twitter feeds, or country surveys. Its goal, in other words, is to record the frequencies or associations of key words and phrases over time and across these already structured sets.
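The core operation described here – recording frequencies of key words across an already-structured set – is simple to sketch. The mini-corpus, the yearly grouping, and the keyword below are all hypothetical, purely for illustration; real culturomics runs this over billions of tokens:

```python
from collections import Counter
import re

# A hypothetical, already-"structured" data set: documents tagged by year.
corpus = {
    2009: ["the market fell sharply", "voters expressed anger at the market"],
    2010: ["the market recovered", "a culture of optimism returned"],
}

def keyword_frequencies(docs_by_year, keyword):
    """Count occurrences of a keyword in each year's documents."""
    freqs = {}
    for year, docs in docs_by_year.items():
        # Tokenize all documents for the year and tally word counts.
        tokens = Counter(
            tok for doc in docs for tok in re.findall(r"[a-z']+", doc.lower())
        )
        freqs[year] = tokens[keyword]  # Counter returns 0 for absent words
    return freqs

print(keyword_frequencies(corpus, "market"))  # {2009: 2, 2010: 1}
```

Everything downstream – trend curves, “cultural” claims – is built on tallies of this kind over sets that were structured before the counting began.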

A “culturome” (yes, arrived at via analogy to the “genome”) has, therefore, been described as “the mass of structured data that characterizes a culture.” Like a “gene” or a “meme,” it seems to be largely taken for granted that the data of culturomics are standard, and comparable, bits of information. This claim is controversial for a contemporary sociocultural anthropology engaged with a diversity of forms of cultural expression, and for which cultural meanings are not generated in just one way.

Digitally, the data of culturomics largely are standard bits of information: they are frequency counts of 0’s and 1’s, that is, variables processed according to particular search and classification criteria that are themselves written into the search algorithm of the data mining phase of work. And yet, in the results stage, these variables are re-presented as “data,” but with an empirical and even positivist sensibility. They are presented as if preexistent “stuff” out there in the world waiting to be extracted, processed, and explained. This is a sleight-of-hand. They are in fact “variables.”

For the case of culturomics we might point to a close, even closed, relationship between a specific data mining and processing tool and the data it generates. Any work with Google Books, including Google’s N-gram viewer – created to allow researchers to generate frequency counts and distribution curves of words or phrases from the Google Books archive – of course ignores non-written, non-published words, and all non-linguistic expressions of culture. It is also limited to those books which have been scanned and digitized (approximately 4% of all published books), and works only where a book has been digitized with adequately extractable metadata tags (e. g. indicating publishing date, author, genre, etc.). Too, the Google Books project has been limited by other prevailing factors, such as legal limitations upon public dissemination presented by intellectual property restrictions.
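To make concrete what the N-gram viewer computes, here is a toy sketch of a relative-frequency curve. The two-entry “archive” is a hypothetical stand-in for the digitized Google Books corpus; nothing below reflects Google’s actual implementation:

```python
import re

# Hypothetical stand-in for a digitized archive: text grouped by year.
archive = {
    1990: "big data was not yet a phrase anyone used",
    2010: "big data is everywhere and big data is the future",
}

def ngrams(text, n):
    """Split text into overlapping n-word sequences."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def relative_frequency(archive_by_year, phrase):
    """Occurrences of `phrase` per n-gram, by year (the viewer's y-axis)."""
    n = len(phrase.split())
    curve = {}
    for year, text in archive_by_year.items():
        grams = ngrams(text, n)
        curve[year] = grams.count(phrase) / len(grams) if grams else 0.0
    return curve

print(relative_frequency(archive, "big data"))
```

Note what the normalization step quietly assumes: that each year’s digitized text is a comparable sample, which is precisely what the 4% coverage figure and the metadata constraints call into question.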

Why, then, would we even suppose that any results from a culturomics study using Google Books could “roughly represent the larger culture that produced it”? Or, more ridiculously, why are we hearing talk about the promise of culturomics to help identify “power laws for culture”? Books are particular kinds of cultural artifacts, not simply ciphers for them. But experts seem willing to suspend disbelief. Part of this suspension includes a lack of attention to the ways that culturomics data are notably prefigured – even determined – by the technical choices made, the platforms used, the algorithmic codes written to mine the data, as well as the digital availability and legibility of the already-formatted data in the first place.

Another way to say this is that, even as researchers treat culturomics data as interchangeable, we might suggest that the data of culturomics more accurately express the world view of culturomics. Culturomics researchers have acknowledged that their work is not intended to replace existing varieties of cultural analysis. But they refer only to the “close reading of texts,” presumably the activity of historians, literary critics, some semioticians or cultural studies scholars.  This is a kind of interpretive work also conversant with the largely digital textual landscape with which culturomics is concerned, but in no way exhaustive of other cultural research methods and kinds of interpretive attention. Minimally, we need more regular reminders of the partiality of such projects.

Culturomics: Market Trend

One of the techniques culturomics researchers are using is “tone analysis” or “tone mining.” The object is to establish whether a particular word, phrase, or text possesses a positive, negative, or neutral tone. Terms like tone, mood, style, or texture have long been mainstays of the lexicon of literary criticism, in particular for the “new critics” inspired by the work of I. A. Richards. Tone has also come to inform other interpretive approaches, including contemporary attention to “voice.” Often associated with the work of Mikhail Bakhtin, such work is distinguished by attention to the dialogic interactions between a speaker in a text and multiple other points of view, for which any particular utterance is always multi-voiced. In other words, tone has been a doorway for appreciating the ways that texts are variously embedded in and animate different social and cultural contexts.

But culturomics treats tone as a “metric,” which can be turned into computable numeric data. A recent project funded in part by NSF’s Extreme Science and Engineering Discovery Environment program used a database from the Open Source Center and Summary of World Broadcasts of approximately 100 million news articles between 1979 and 2011 to measure shifts in the “global news tone,” which retroactively appears to forecast the recent Arab Spring. Such forecasting tricks are impressive.

But it is exactly at this juncture that much more scrutiny of what is involved in “tone mining” (also called “sentiment” or “opinion mining”) is needed, if we hope to come to terms with what such forecasting or trend data in fact mean in cultural terms. Here it is important to understand where this computational attention to tone comes from – what the genealogy of this kind of data is.

Amazon, among others, pioneered the proliferation of digital apps which transmit an increasing variety and volume of consumer preference data back to retailers. And for several years now many Fortune 500 companies have utilized tone mining to monitor news coverage and social media activity associated with their products. These companies, of course, have an interest in learning as much as possible about what consumers are saying about their products and in identifying new demographics. Most often they would like to be able to map or to anticipate consumer responses to particular products.

The work of data mining for tone, sentiment, or opinion – incorporated into so-called culturomics 2.0 – basically works like this:

1. Identify precompiled dictionaries of “positive” and “negative” words against which other digital texts can be compared and scored.
2. Develop an algorithm as the basis for an automated computational method for mining tone data.
3. Record frequencies of these properties across so-called “opinionated texts,” as comparable items that compose an already “structured” online database or archive.
4. Assign a “value” to each so that it can be used as a variable to plot trend data.
5. For culturomics, take a leap of faith by treating these plots as meaningful indicators of cultural trends of one sort or another, often spanning decades or centuries.
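The five steps can be sketched in a few lines of code. The word lists, sample “articles,” and scoring formula below are hypothetical stand-ins for the large lexicons and proprietary archives actually used:

```python
import re

# Step 1: hypothetical precompiled tone dictionaries.
POSITIVE = {"gain", "growth", "hope", "stable"}
NEGATIVE = {"crisis", "unrest", "anger", "collapse"}

def tone_score(text):
    """Steps 2-4: score one 'opinionated text' as (pos - neg) / total tokens."""
    tokens = re.findall(r"[a-z]+", text.lower())
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    return (pos - neg) / len(tokens) if tokens else 0.0

def tone_trend(articles_by_year):
    """Step 5: average score per year, a plot later read as a 'cultural trend'."""
    return {
        year: sum(tone_score(a) for a in arts) / len(arts)
        for year, arts in articles_by_year.items()
    }

news = {
    2010: ["signs of growth and hope", "markets stable"],
    2011: ["unrest spreads amid crisis", "anger at collapse of talks"],
}
print(tone_trend(news))  # 2010 averages positive, 2011 negative
```

Notice that everything the output can say is fixed in advance by the two word lists: the “data” are artifacts of the dictionaries chosen, which is exactly the prefiguration the critique below describes.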

However, in the enthusiasm for culturomics we have been too quick to shake off the origins or history of these data. They are certainly not “raw data” of some sort. They are, instead, specific artifacts of digital business practice. Attention to “tone” or “sentiment” – as data – works well if you are invested in trying to figure out peoples’ preferences. But its meaningful or representative relationship to culture, or as any sort of expression of culture, requires much more unpacking and qualification than we are getting so far.

In interdisciplinary terms, this kind of quantitative knowledge about culture (read: products) might not be usefully complementary to other forms of cultural research, data, or analysis. It might simply be an entirely different sort of information, for which use of the word “culture,” or the field name “culturomics,” is in fact misleading and unconstructive.

I have emphasized briefly some of the ways that tone mining generates not “data” but a very particular kind of data significantly prefigured by the technological architecture of the tools used, organization of existing digital databases, and computer code supporting such tools. These are preconditions that queer the game, as it were, as doorways encouraging certain kinds of attention to information while rendering other kinds illegible or marginal. In their very form, we might say, culturomics data already answer the possible questions to ask.

But there’s more. Culturomics relies on an alarmingly consumerist, or neoliberal, theory of meaning, for which tone or sentiment is the product of choices by cultural agents (originally, consumers), only insofar as they take the form: pro/con, either/or, positive/negative, or similar variant. This makes perfect sense if you want to know what people think of a toaster or if you want to record distributions of “thumbs up” among Facebook or Twitter users – after all, the impetus for collecting such information in the first place.

Contesting Culture, Data, Meaning

The “culture” of culturomics expresses the organization of available, countable, compilable information, which can be systematically extracted from digitizable texts like books, newspapers, maps, and twitter feeds. In this way culturomics is itself an often very creative exercise in selective choice-making. But it is not in any way describing the shapes of previously undescribable macro-cultural landscapes.

Whatever “culture” is, to proceed as if it can be assembled from discrete and comparable units derived from algorithmically-assigned “values” of machine-processed digital information is to emphasize very particular structured properties available for a technically and commercially specific prior purpose. And it equates culture with consumer choice. But to reduce the meaning of cultural trends to the prodigious mass of opinion data generated online by consumers is to grossly reduce what “culture” is to a narrow market calculus. We are better off leaving the question of the sources for cultural meaning open-ended.

Despite frequent assertions to the contrary, “more – and better – data” does not automatically lead to “more robust results.” We need to temper our techno-futurist optimism with basic questions: What is meant by cultural data in the first place? What is significant about frequency counts of cultural “stuff”? How do we attribute meaning to cultural data? And what is their relationship to real-world referents? Among other relevant questions. Such a constructively skeptical approach should inform “big data”-type projects of all sorts.

Some early critiques of culturomics have complained that it cannot address the humanist “search for meaning.” But I have suggested that, with their focus on the interpretation of texts, such concerns are still located well within the culturomics world view. They represent a latter-day revival of C. P. Snow’s “two cultures” debate about science and the humanities, which sets up a goal of interdisciplinarity that assumes pride of place for the technologically enabled “sciences” (specifically, computer science) in making sense of the world.

Developments like culturomics have intriguing potential. But the claims associated with them – in this case about “culture” – can obfuscate and confuse. Sociocultural anthropologists also aspire to make sense of cultures. They typically do this ethnographically, in settings where cultural meanings are not simply latent and extractable but are instead emergently negotiated with counterparts (the people we encounter “in the field,” whom we used to call “informants”). The data are usually multivocal, polysemic, and perspectival, and not reducible to a pro/con or either/or-type choice.

The often serendipitous open-endedness of ethnography also contrasts with the technological and other prefigurements of the culturomics method. Because it stays closer to specific contexts of meaning-making, ethnography is likely better positioned to apprehend emergent ground truths, other cultural points of view, and the diverse ways difference travels through the world. It is not at all clear that culturomics is even compatible with, let alone complementary to, ethnographic apprehensions of culture. And this raises serious questions about the celebratory interdisciplinarity with which big data projects continue to be met.

Note: This post originally appeared here:

An HTS Debate

An experience I’ve been meaning to share since the end of December concerns the Human Terrain System (HTS). Dr. Henry Delcore at California State University, Fresno, invited me to act as a judge for a class debate. The question under debate was: “Should the American Anthropological Association (the main professional organization for anthropologists in the US) discourage anthropologists from working in the Human Terrain System program?”

The debate was part of the students’ final requirements for passing the course, and I thought it was a new and interesting way of engaging them. It was clearly effective in getting the students to research not only the HTS program itself but also the techniques and etiquette of formal debate. One reason it probably worked: if you weren’t fully prepared with a firm knowledge of the material, it would have been pretty embarrassing when your turn came to speak! The students showed excitement and healthy competition, and seemed sincerely interested in the topic and the task at hand.

I was very impressed because the students’ arguments were so good that I assumed they had been allowed to pick sides ahead of time, choosing the team that represented their own personal viewpoints. I found out after the debate that the students had in fact been asked their personal viewpoints beforehand and then purposely placed on the team arguing the opposite position! Kudos to the students for being so objective and convincing even while defending a viewpoint they did not personally support. I also found myself constantly assessing which team was in the lead, and the advantage swayed many times. In the end, the team arguing the negative came out ahead, but it was certainly a close call.

One major point of argument came when the team arguing the negative said that HTS is a new program and therefore has the opportunity to make positive changes in the military and to reduce harm. They argued that it was up to the anthropologists accepting positions on HTS teams to develop the program into one that is positive, transparent, and upholds high ethical standards. The affirmative countered that this was impossible given the environment and situation: the anthropologists would be associated with the military, dress in military attire, and have to carry weapons, all of which would undermine their ability to be seen as a neutral party. The debate went back and forth, with both sides making strong points.

I believe that activities like this are a great way to capture students’ attention and get them genuinely passionate about researching a topic. As a former and future student, I know that I am certainly more satisfied, even excited, when an instructor implements new kinds of graded activities rather than just sticking to the typical lecture, reading, examination routine. I was so impressed with the students’ enthusiasm that I even found myself wishing to join the debate!