De Landa – Synthetic Reason and Computational Simulation

DMF recently posted this video to Deterritorial Investigations. In it Manuel De Landa discusses various computational simulation techniques and how they influence scientific thought. I hadn’t read De Landa in a while, so it was refreshing to return to his mode of thought and, in fact, to find some parallels with my own research. Specifically, there are some similarities between what he describes as “synthetic reason” and the way that I found researchers using computational models in a scientific context.

Part of what De Landa is arguing here is that computer simulations function as a new kind of synthetic reason, and a new way of understanding the world we live in (I gather that this line of thought is developed in his recent book Philosophy and Simulation: The Emergence of Synthetic Reason, which I have not read). Instead of simply observing behavior in the natural world and drawing assumptions from those observations, simulations allow us to synthesize the conditions of our assumptions and test them in the space of a limited world. This can then lead to new insights and intuitions about the world that we can apply to our observations in order to develop new assumptions. The idea is encapsulated when he says:

“… what virtual environments are making possible is precisely to synthesize more and more ideas in that direction, to allow the machine to be kind of a probing head into a world that we don’t see because we have these blinders on.”

This synthetic approach was similar to the kind of science I encountered in my own research on the Chesapeake Bay watershed. I’ve talked about it in another post, but essentially, researchers were using models as tools for exploring aspects of the watershed that they couldn’t observe directly for various reasons, and for examining the limits of their understanding (i.e. where the models are wrong). The result is a continual feedback between simulation and empirical data that accelerates and intensifies engagement with the natural world. When models are part of this kind of feedback, I think they can be very productive tools for appreciating the complexity of ecological systems and recognizing the limits of our ability to understand them. This is important because computer modeling is often criticized for being too abstract, technological, and reductionist.

On the other hand, models can also reinforce our predefined assumptions, particularly when they are displaced from the simulation-data feedback. This was the case with the application of computational models to environmental management. That’s not to say that the models used were inaccurate or not scientific, but that the management context is displaced from the feedback system in which the models are constructed. In those cases, the models are often modified to make them more useful for the management staff – incorporating things like costs and benefits of various management approaches, and simplifying the model to make them easier to run quickly and easier to understand for non-modelers. As a result, certain assumptions get built into the models that simply reinforce the assumptions of the broader management context. In the Chesapeake Bay watershed, this meant that the models reinforced the neoliberal approach structured by the Total Maximum Daily Load (TMDL) method (costs and benefits, best management practices, etc.). As a result, alternative forms of engagement with the environment and non-quantitative ways of valuing environmental benefits are ignored. This leads into a whole other critique of the neoliberal management approach, which I won’t address here.

So essentially, my findings support De Landa’s synthetic reason argument to an extent. I think it’s important to recognize that context matters. Simulations can be really valuable tools for understanding the natural world and for breaking through our ingrained assumptions, but only when they are part of a broader context (e.g. science) in which that kind of work takes place. When removed from that feedback process (e.g. management), models can often serve to reinforce and replicate our ingrained assumptions, further preventing us from seeing alternatives and recognizing our limitations. These are things we need to be aware of and on the lookout for as modeling becomes increasingly prevalent in environmental science and management.

Temporal Dissolution in Hawksmoor’s London

To get back in the swing of writing blog posts, I’m going to try sharing some quick thoughts on the books that I’ve read recently. In the next week or so, I’ll talk about Mapping the Interior by Stephen Graham Jones, The Ark Sakura by Kobo Abe, Days Between Stations by Steve Erickson, and a few other things I’ve read since I finished my PhD this past summer. But today I want to talk about Hawksmoor by Peter Ackroyd.

Yes, I bought into Duncan Jones’s homage to his father, David Bowie, in the form of a mass reading group. Actually, I bought the book because I found it at a used book store in New Orleans at the beginning of the year. I wouldn’t have known about it if not for Jones’s reading group, but I wouldn’t have bought and read it if I hadn’t stumbled across it at the book store. So, it’s a little bit providence, I guess.

The book is a murder mystery or thriller that tacks back and forth between present-day (1980s) London and the London of the late seventeenth and early eighteenth centuries. The historical chapters are written in a quasi-archaic but decidedly readable style from the first-person perspective of the architect Nicholas Dyer (who is based on the real architect Nicholas Hawksmoor, designer of several London churches, and who is also the subject of Iain Sinclair’s Lud Heat). Dyer is a strange man who recounts in the first few chapters his unfortunate childhood in the wake of a plague that strikes the city and leaves both of his parents dead. As a youth he takes up with a cult figure and learns the secrets of a kind of black magic. Over time, Dyer becomes a well-known architect in the city and applies the occult principles he has learned to the design and construction of his churches.

The alternate chapters follow a series of unusual murders in 1980s London that all take place at the very churches that Dyer has designed. These murders are being investigated by a detective at Scotland Yard named… Nicholas Hawksmoor. As you can see, there is already an element of temporal slippage taking place here, but as the novel develops, the parallels between Dyer’s occult architecture and Hawksmoor’s investigation become clearer. Finally, there is almost a total blurring of temporality on the London landscape, with the churches forming a kind of focal point.

The book is beautifully written and extremely intelligent. The narrative elements in each alternate chapter contain references to one another so that you feel a sense of continuity, as if the past is never quite gone and the present was there all along. I am not very familiar with London, having only been there once as a teenager, but Ackroyd is well known for his flaneur-like accounts of the city and its people. Reading Hawksmoor, this intimacy with the city comes across, and the reader is left with a visceral sense of the temporal depth of the cityscape, and the nuances of every street corner. I would like to read his non-fiction books about London – particularly (given my interest in watersheds) his book Thames: Sacred River. If it carries any of the atmosphere of Hawksmoor, it might be the kind of dark environmental writing I’ve been looking for.

I have not kept up with whatever discussion is happening around the novel in Duncan Jones’s open reading group. For all I know the whole idea might have fallen off the map. But I know that many book stores were sold out shortly after Jones announced the first reading, which makes my encounter with the book all the more serendipitous. If you happen to be in a used book store in the near future, be sure to check the A section on the literature shelf. Maybe you will be pleasantly surprised.

We need more dark environmental writing…

Most of what I’ve been reading lately has nothing (directly) to do with the environment, which always causes me some anxiety. As an environmental anthropologist, I should be reading things about the environment and environmental issues! But then when I look at the literature that’s out there, it’s all very disappointing. I mean, it’s not all optimistic, bright green cheeriness, and there’s a fair amount of doom and gloom, but it’s all about how much we are fucking things up. That’s true, of course, and it’s important to be aware of, but there’s an underlying or unexamined romanticism to it all that makes it, ultimately, all too human – like, if we could just stop fucking things up, everything would be beautiful and happy. I think there’s a general failure to look at the gross, dark, dangerous, monstrous, and terrible aspects of the natural world – to encounter it on its own terms and really grapple with the non-humanity of it all. Like right now I’m reading this book River-Horse by William Least Heat-Moon, which is great – it’s funny, and interesting, and well written – but the natural world is depicted as this lovely, peaceful, wonderful space that is just being destroyed by humanity.

I don’t know exactly what I want, but I guess I would know it when I see it. I think the closest thing to it that I’ve read recently is the Southern Reach Trilogy by Jeff VanderMeer. What’s really amazing about that series is that Area X is so not human, and it’s threatening and scary. Also, the bureaucracy, the Southern Reach, is completely entangled with Area X, but not in any kind of deterministic or unilateral way. But it’s fiction, and weird genre fiction at that, so the effect is a bit dulled. What I want to read is a non-fiction Southern Reach trilogy about the uncanny, eerie, and strange aspects of the actual natural world, and its entanglements with the human.

Anyway, just a thought. If you have any suggestions, let me know.

What is a Watershed?

Image of a watershed as a network of streams and rivers

We think of watersheds as intricate networks of rivers, streams, and creeks that converge toward a single point. Thinking about my research and the ecological relationships that make up a watershed, I’ve come to realize that this image doesn’t really tell the whole story.

It’s better to think of a watershed as an inundation – a flow of water through the landscape rather than across it. Instead of a network, it’s better to think of molecular condensations percolating through the soil, coalescing gradually into the rivers and streams we usually think of when we imagine a watershed. The mud in your back yard is as much a part of the watershed as is the creek that runs through the woods or the river that cuts through town.

Now, thinking metaphorically, I wonder how it would change our framing of social relationships to think in terms of these molecular condensations. How would our lives be different if we considered ourselves part of the mud rather than nodes in a network of relations?

It’s just a thought – something I might explore more later…

Playing Games with Anthropology #2: Anthropological Futures

This weekend at the AAA Conference in Washington DC, we saw an unprecedented number of sessions related to various forms of anthropological futures. In a panel that I co-organized with several friends on Twitter, we had a great discussion about the relationship between science fiction, anthropology, and imagining a better world in the present, past, and future. In addition, there was an Anthropology Con, which focused on gaming, and included a “lounge” in which folks could come and play anthropological games, including an ongoing D&D campaign. It was one of the best conferences I’ve been to in a while!

Piggy-backing off of this, I want to share an activity I designed for the students in my biological anthropology course. I’ve talked about using games to teach anthropological concepts before, and shared a game that I designed for that purpose with mixed results in the classroom. This time, I’ve simplified the game and made it more about building a narrative than about winning or defeating other teams. I ran it last semester and it worked out great, so this semester I’m only making a few small tweaks before running it again in the final week of classes.

It goes like this. In the first session (download the PDF here), which I have students do after talking about environmental issues, they are asked to conceptualize a post-apocalyptic community. I give them an environmental scenario and ask them to make some basic choices about how their community is structured. I then ask them to think about how these choices help and/or hurt them in their given environment. Then I encourage them to creatively flesh out the details of their society, including events, rituals, religious and political practices, foods, games, etc.

In the second session (download the PDF here), which runs a couple of weeks after the first session, I ask students to imagine the future of their community and how they will respond to challenges. The students must first describe their community ten years after the first session and think about how it has changed in that time. Then I have them run through three rounds, each representing one decade of life in the community, in which they encounter some challenge and must adapt to it. They roll dice to see which challenge they encounter and how severe it is. They are also given “resilience points” (I’m not a huge fan of resilience theories, but it’s all I could think of to call them), which they can spend to reduce the effects of the challenge. After rolling the dice and spending resilience points, the students must narrate the effects of the challenge, describing the event, how it affects the community, and, if they spend resilience points, how they were able to adapt to the challenge.
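For readers who want to tinker with the mechanics, here is a minimal sketch of one challenge round in Python. The challenge list, dice range, and point values are all placeholder assumptions for illustration – the actual activity uses the handouts linked above.

```python
import random

def challenge_round(resilience_points, points_spent):
    """Simulate one decade-long challenge round (illustrative values only)."""
    challenges = ["drought", "disease outbreak", "conflict with neighbors",
                  "resource shortage", "flood", "migration pressure"]
    challenge = random.choice(challenges)   # roll to see which challenge occurs
    severity = random.randint(1, 6)         # roll a d6 for severity
    # Students can spend resilience points to reduce the effect, but can't
    # spend more than they have or more than the severity itself.
    spent = min(points_spent, resilience_points, severity)
    return {
        "challenge": challenge,
        "severity": severity,
        "remaining_points": resilience_points - spent,
        "effect": severity - spent,  # students narrate an outcome of this magnitude
    }

result = challenge_round(resilience_points=5, points_spent=2)
print(result)
```

After each roll, the group narrates what the resulting “effect” value means for their community, which keeps the emphasis on storytelling rather than on winning.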

As I mentioned in my talk at the AAA panel, I think the ability to imagine other worlds and futures is an important muscle that needs to be exercised regularly. My hope is that this activity gives students the opportunity to think about potential challenges we might face and various ways to confront them. Feel free to let me know what you think in the comments.

Welcome to the Simulocene

In my contribution to Cultural Anthropology’s Lexicon for the Anthropocene Yet Unseen, I suggested that, perhaps, another name for our epoch, among the many other names that have been proposed (anthropocene, capitalocene, chthulucene, etc.), could be the simulocene – a world made from models. In McKenzie Wark’s latest post on “general intellects” he discusses Bruno Latour’s most recent book Facing Gaia: Eight Lectures on the New Climatic Regime. I have not read Facing Gaia, but the dichotomy between Wark’s and Latour’s views on simulation is a familiar issue that I’ve been grappling with in my dissertation research.

As Wark describes it, Latour seeks a return to a premodern politics – turning away from the totalization and globalization of modern science. Specifically, Wark reminds us that we only know climate change and the other environmental catastrophes of our time as they are mediated through simulation. Simulation, Wark explains, merges first nature (the environmental background of modernity viewed as resource) with second nature (the built human environment) to produce a third nature, or what Wark refers to as the inhuman (the messy combination of human technologies and nonhuman agents). For Latour, this third nature is the problem, because it takes us away from a connection with a “lived world.”

This aversion to mediated forms of knowledge is quite common – Anna Tsing, for example, makes some similar statements in both Friction and The Mushroom at the End of the World. There is an almost intuitive sense that mediated knowledge is less reliable and valuable than knowledge derived from unmediated, direct encounters with the natural world. But Wark sees this not only as impossible, given that we are dependent upon computer simulations for our understanding of large and complex processes like climate change, but also as a reactionary move:

The decision here is whether to further develop the artifice of the sphere to include in the simulation its conditions of existence, or to think without it, and return to the territoriality of the past — and all that implies.

I agree with Wark to this extent: we cannot simply return to a world where all knowledge is first-hand and unmediated by technology and abstract quantification. We cannot simply exit the simulocene. But there are still clearly problems with simulation – not all models are created the same, and models can be used, not necessarily to distort ecological conditions (as the climate deniers would claim), but to promote a particular approach to ecological issues that reduces decision-making to a series of financialized cost-benefit analyses. In other words, despite a kind of methodological cooptation (Wark claims “Stack-based, information-based conflict puts one simulation up against another: earth science versus financialization”), neoliberal capitalism continues to have the upper hand. So it’s not as simple a choice as Wark makes it out to be. Rather, we have to ask what kind of sphere is being produced, and how (if at all) models can be used to make it differently.

The site of the Chesapeake Bay Hydraulic Model, demolished in 2011.

In what I describe as the simulocene, models are not only ways of constructing knowledge about the world. They are also relational artifacts and processes that contribute to the material production of the world. The iconic reminder of this for me is the physical model of the Chesapeake Bay that was constructed in the 1970s, but never used. The model was, very distinctly, a material artifact. It was housed on the shore of the Chesapeake Bay itself, drew water from the estuary to run through its concrete structure, and dumped that water right back into the Bay. These are the material relations that it embodied, but its relational architecture extended far beyond the shores of Matapeake, Maryland.

The model was constructed by the US Army Corps of Engineers which was the original agency mandated with managing water across state boundaries – primarily for navigational and military purposes. The fact that the model was never really used and became obsolete almost as soon as it was finished was not simply a matter of physical models being superseded by computational models at the time, as Christine Keiner describes in her history of the model. It also signals a shifting institutional order wherein the Army Corps was superseded by the Environmental Protection Agency as the primary agency for water management in the US, and water quality concerns began to take precedence over navigational and water quantity issues. In other words, the physical model was made obsolete not only by technological change, but also by economic and social changes taking place at the same time.

This, along with the rest of my research, suggests that we can think of computer models not simply as tools for understanding the world, but also as the embodiments of social/institutional frameworks that extend far beyond the technology itself. I’m not trying to suggest that there is a deterministic relationship between the technology of modeling and the social institutions in which it is utilized, but there are feedbacks and broader processes at work that make the two emerge alongside one another.

However, if we are not to go back to “the territoriality of the past,” then the question is whether models can be redeemed from the neoliberal institutional frameworks in which they are embedded. As I’ve hinted at in a prior post and will elaborate upon in future posts and articles, I think that they can be, but that the process of building and utilizing a computational model might take on a significantly different character along the way. In other words, we may not be able to exit the simulocene, but we might be able to figure out a way to use computer models to undermine neoliberal management and make a different world possible.

Computer Models and Neoliberal Environmentality

Do computer models inevitably lead to neoliberal forms of governance? This is one of the persistent questions that I’ve grappled with in my research on the use of computational models for environmental management in the Chesapeake Bay watershed. Over the years, I struggled with the idea that maybe computer models, despite their incredible power to help us understand complex issues, only end up making things worse. Models inevitably reduce that complexity to simplified numerical representations, and in so doing they feed into the kinds of neoliberal governmentality that subjectivizes us towards individualized economic rationality and away from the kinds of relational engagement that are needed to actually deal with the social and environmental problems we face. My question is: can computer models be redeemed? Can they actually contribute to relational engagement rather than liberal individualism? I think I have the beginning of an answer now. I’ve written the full argument up as a journal article, which is currently under review, but I’d like to share the basic structure of my thinking here so that others can start thinking through some of its implications.

Computer models – indeed all kinds of models – are inevitably reductionistic. It’s a fundamental characteristic of models that they must reduce the complexity of systems, either by reducing their size (i.e. in a scale model) or by reducing the number of factors affecting the system. The latter is what computer models do. They isolate those components of a system that are relevant to the issues at hand and represent only those factors as numerical equations. The result is a model that can take some input, like the quantity of nitrogen applied to the landscape, and generate an output, such as the amount of oxygen dissolved in the waters of the Chesapeake Bay. Between these two quantities there are a number of complex processes at work, but the computer model has reduced and simplified them to provide a baseline estimate that is more or less accurate depending on the quality of the simulation.
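As a toy illustration of this kind of reduction – emphatically not the actual Chesapeake Bay model, whose equations are far more elaborate – one could compress everything between nitrogen input and dissolved oxygen into a single made-up relationship:

```python
def dissolved_oxygen(nitrogen_load_kg):
    """Toy reduced model: more nitrogen -> more algal growth -> less oxygen.
    Both coefficients are invented for illustration; a real model would be
    calibrated against decades of monitoring data."""
    baseline_do = 8.0        # mg/L, hypothetical well-oxygenated baseline
    sensitivity = 0.00002    # mg/L lost per kg of nitrogen (made-up value)
    do = baseline_do - sensitivity * nitrogen_load_kg
    return max(do, 0.0)      # dissolved oxygen can't fall below zero

# Everything in between -- runoff, algal blooms, decomposition,
# stratification -- has been collapsed into two numbers.
print(dissolved_oxygen(100_000))
```

The point of the sketch is how much has disappeared: an entire estuarine ecology reduced to a baseline and a sensitivity coefficient.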

Chesapeake Bay Hydraulic Model
The original Chesapeake Bay Model was a massive physical model of the estuary. Despite its size, it was still a reduction from the actual system.

That process of reduction is not necessarily a problem in and of itself, since all cognition works through reductionism. But it can become a problem if it leads to other practices that are ultimately harmful. One possibility is that such a reductive and quantified process lends itself well to governance practices that reduce human decisions and interactions to market transactions – a form of governmentality known as neoliberalism. In order for neoliberalism to work, non-market values and complex human activities and relationships must be quantified and given a market value. Modeling serves this purpose very effectively, and is rightly implicated in the rise of neoliberal governmentality. So there is no doubt that computer models can play an important role in neoliberalism, but my question is whether they always do so, or if they can be part of a process that promotes non-market values and complex human and non-human relationships?

In my doctoral research, I conducted interviews and participant observation with computational modelers and environmental management staff throughout the Chesapeake Bay watershed. The modelers I spoke with included not only those who work for the Chesapeake Bay Program (CBP) in the context of environmental management, but also those who work in academic contexts. There is a lot of interaction between the two contexts, and it was important for me to understand how heterogeneous groups of modelers are assembled to work towards management goals. However, this also means that I have data that allows me to compare the two different contexts: modeling for management and modeling for science. It is this comparison that has provided some insight and the beginning of a potential answer to my questions about modeling and neoliberalism.

Confluence of Chenango and Susquehanna Rivers
The Chesapeake Bay watershed is a beautiful and complex socioecological system that can never be reduced to simple quantification.

What I found when talking with the scientific modelers was that there is an immediate recognition that models are inherently simplifications and, therefore, limited or “wrong” in some ways. The phrase “all models are wrong, but some are useful” demonstrates their pragmatic approach to modeling, and frequently came up in my discussions with them. But there’s more to it than that. In fact, it is this very “wrongness” of the models that makes them such useful tools for scientists. By engaging the models in a continual process of feedback between empirical data and simulation, the modelers are able to recognize the limits of our understanding, which drives further research to improve or expand upon existing models. Ultimately, this resulted in a high degree of appreciation for the complexity of natural systems and a recognition that they cannot be reduced to the quantitative inputs and outputs of the model.

Modeling in the management context was not entirely different. There is still a scientific drive to understand the complexity of the system and its processes, but ultimately it comes down to a question of applying the models to management decision-making. I should point out here that environmental management in the Chesapeake Bay watershed is decidedly neoliberal. The primary regulatory structure that guides management is the “total maximum daily load” or TMDL. Implementing a TMDL requires the EPA – in this case, by way of the CBP – to set an upper limit on the quantity of contaminants (in this case nitrogen, phosphorus, and sediment) that can be introduced into the system. The difference between the upper limit and the present load is then distributed as a load reduction to the various agents involved, and they are required to reduce their input to meet the TMDL requirements. In order to meet their load reductions, the agents are supposed to implement “best management practices” or BMPs, which will help to reduce the loads. As a result, the process is often reduced to an economistic cost-benefit analysis of trying to determine what BMPs will generate the greatest load reduction for the lowest cost.
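The arithmetic at the heart of the TMDL process can be sketched roughly as follows, with entirely hypothetical numbers and a simple proportional rule; the real allocation procedure involves negotiation among jurisdictions rather than a single formula:

```python
def allocate_reductions(tmdl_limit, current_loads):
    """Distribute the required load reduction proportionally across agents.
    current_loads: dict of agent -> current contaminant load (e.g. kg N/year).
    Purely illustrative -- actual TMDL allocations are negotiated."""
    total_load = sum(current_loads.values())
    required_reduction = max(total_load - tmdl_limit, 0)
    return {
        agent: required_reduction * (load / total_load)
        for agent, load in current_loads.items()
    }

# Hypothetical example: three sources must together shed 40 units of load.
loads = {"agriculture": 60, "urban_runoff": 30, "wastewater": 10}
reductions = allocate_reductions(tmdl_limit=60, current_loads=loads)
print(reductions)
```

Each agent then chooses BMPs to meet its share, typically by comparing estimated load reduction per dollar – the cost-benefit step described above.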

Computational modeling plays a significant role in this process, at least in the Chesapeake Bay watershed. The CBP has developed a complex model called the Chesapeake Bay Modeling System or CBMS. This model is used to identify the TMDL limit, distribute load reductions across the watershed, and track the implementation of BMPs and their effect on water quality in the estuary. On the whole, the CBMS is a scientific model and undergoes the same process of feedback that other scientific models undergo. It is very impressive to observe the development of the model and the discussions about how different processes are to be simulated. However, ultimately, this complexity must be made to fit within the management context. That means that the model must be understandable to management staff, who are generally not computer scientists or mathematicians, and that it must be useful to them in carrying out the management process. As a result, the enormous complexity of the model must itself be reduced to a simple set of factors that are relevant to the cost-benefit analysis of decision-making within the TMDL framework. In fact, in the management context, the model is sometimes referred to as an “accounting tool” because it allows management staff to calculate the costs and benefits of different BMPs within their region.

This comparison shows that, while computer models might contribute to neoliberal forms of governance, they do not necessarily do so. When they do, the models themselves are torn from the continual feedback process of scientific understanding and, in the process, reduced to simple accounting tools. In other words, there is hope that computer models might be redeemed for non-reductionist, non-market relationships with the environment. What still remains to be answered, and is perhaps for another research project, is whether the feedback process of scientific understanding can be generalized outside of academia. Is it possible, in other words, to engage management staff and members of the public in a modeling exercise that is not reductionistic – one that might even foster non-market values and complex human and non-human relationships? I will continue to examine this issue in further research, and as I do, I will be sure to share my findings here.

What’s the purpose of anthropology?

Are we to be “custodians” of a body of “cultural knowledge”? Or, rather, a body of knowledge about cultures written from a particular (white, colonial, male…) perspective?

Sure, I find it interesting and worthwhile to learn about the many different ways that people live and survive in a harsh world. And I think we have important stories to tell, as well as some useful insights we can share. But is the purpose of doing all of this ethnographic research simply to add to that pile of “cultural knowledge” so that we can preserve it for future generations and rattle it off in lectures to our students? I don’t know about you, but that’s not the anthropology I am interested in pursuing and building.

When I think about the purpose of anthropology, I always go back to an excellent lecture by Ghassan Hage (who has his own take on the Sahlins post) in which he suggests we are left with a question: “…how can we make a bad relation a good relation?” In other words, how can we work on improving relationships between and among the very different kinds of people who must share the world today? That’s as good a purpose for anthropology as I can come up with, and I think it’s the mode of anthropology that I see many of my colleagues striving for today.

In that sense, what is the role of that body of knowledge that we might love or despise, but with which we are nevertheless burdened and/or blessed? It either helps us in our relational work or it hinders us – maybe sometimes both. It’s stupid to bemoan the fact that graduate students are “ignorant” of something like “matrilineal cross-cousin marriage.” That, in itself, is not something to bemoan, since knowing about those cultural practices is not an end in itself. What would be truly tragic, however, is if anthropology students and their instructors lost interest in making a better world for all of the people who live those cultural lives today simply to preserve a static body of knowledge. Our task, then, is to understand the legacy of that knowledge, listen to the critiques coming from those who were the subjects of its production, and consider the kinds of relationships we compose and maintain by acting as its uncritical “custodians.”

Towards a radical acceleration, or, We have no idea what’s going on anymore…

In the past month, we have seen: two of the biggest storms ever recorded, one of the largest earthquakes seen in decades, wildfires consuming enormous portions of the Western US and other parts of the world, heat waves hitting the West Coast, and many other disasters that haven’t made it into the news. A new report says that global ocean currents could be collapsing due to climate change, and that article contains the most honest quote I’ve seen in weeks:

While geologists have studied events in the past similar to what appears to be happening today, scientists are largely unsure of what lies ahead.

In addition to all of these “natural” disasters, we’ve got two nuclear-armed world leaders playing a game of chicken with one another. We have a fascist in the White House and Nazis proudly roaming the streets. All of this was impossible to imagine just a year ago, but here we are, and we have no idea what’s going on anymore.

Paul Stoller suggests we take a lesson from the Songhay of Niger and slow down, rebuilding some of the “harmony” we have lost. I would say the idea of slowing down is incredibly inane right now. The harmony Stoller calls for was only ever localized and never involved the kinds of globalized geopolitical processes underway today.

At the same time, the accelerationists don’t really have a response to these kinds of threats to life and livelihood except to say “fuck it and we’ll see what happens.” It’s not a position I find very helpful or useful, but maybe total resignation to the inevitable flux of global events is the only response we really have anymore.

Or maybe it’s a question of what exactly is accelerated. Maybe we can start accelerating care and compassion. Maybe we can accelerate the process of building alternatives to the extractive and accumulative economy of Capitalism. I guess that’s a kind of left-accelerationist position, but I’ve not been convinced by any of the social-democrat plans I’ve seen so far. Maybe a more radical kind of left-accelerationism is needed to really confront the horrors of climate change and the violence of Capital accumulation.

We really have no idea what’s happening anymore. No idea what to do about any of it. I’ve written three blog posts in the past few days, and none of them seemed worth posting. I’m posting this one because sometimes “we are unsure what lies ahead” is the only thing anyone can really say…

Watershed Ethnography

This post is an early draft of a chapter that didn’t make it into my final dissertation. I’m sharing it here because I think it still carries an important line of thought that I hope to develop further in future research and writing.


I set out to do an ethnography of a computer model; I ended up doing a watershed ethnography.

When I began researching computational modeling in the Chesapeake Bay region, my initial goal was to understand the effects that modeling had on management and decision making at different levels. I had planned to do a comparative study looking at different modeling projects taking place in the region and exploring the ways that different modeling techniques and methods influenced the relationships that make environmental management possible. However, as I undertook the research, things changed dramatically, as often happens in ethnographic field work, and gradually the watershed began to emerge as a framework for my project.

Two key changes – one research driven, the other resulting from a personal shift – really brought the watershed into focus. First, as I attempted the comparative study, it became increasingly clear that there was no clear division between the different projects that I was researching. They all flowed together, much as the water in the various tributaries converges in the Chesapeake Bay. All of the projects I had set out to study – including one that was technically outside of the watershed – drew upon the same sets of data and the same modeling methodologies, and fed into the modeling taking place at the Chesapeake Bay Program. This meant that a comparative analysis, while not impossible, would have been confounded by the interconnections of the different projects. After a great deal of deliberation, I decided to continue studying the projects, but now as a reflection of how different modeling practices can flow together. As a result, the focus of my research became the Chesapeake Bay Program’s modeling efforts and the convergences that underlie them.

The second change was the result of a personal decision. In 2005, I moved from College Park, Maryland to Binghamton, New York to be with my wife, who had just started a PhD program there. While this move made it necessary for me to commute to conduct my research in Annapolis, it also opened up my understanding of environmental management and shifted my perspective from the Chesapeake Bay to the watershed as a whole. Despite the distance between the two cities, Binghamton is still part of the watershed, sitting at the confluence of the Susquehanna and Chenango rivers. My house is blocks away from the Susquehanna and within walking distance of the park where the two rivers converge. Living up here has enabled me not only to see more of the watershed itself, but to see the processes and practices of environmental management that ultimately affect the Chesapeake Bay. It has made me cognizant of water quality issues that I would not have been aware of if I had stayed in the DC area. And it has allowed me to understand the perspectives on modeling held by the people whose lives and livelihoods are affected by it.

It seems to me that theory and method often develop to take on the shape of the objects we are studying – especially in an environmental anthropology that is sensitive to ecological relationships and processes. This is evident in Anna Tsing’s work on matsutake mushrooms – she describes her writing style as a “riot of short chapters.” She then goes on:

“I wanted [the chapters] to be like the flushes of mushrooms that come up after a rain: an over-the-top bounty; a temptation to explore; an always too many… They tangle with and interrupt each other – mimicking the patchiness of the world I am trying to describe” (viii).

In this way, mushrooms are not only the objects of Tsing’s research, but also models for the process and practice of studying and writing about them.

Similarly, Laura Ogden’s ethnography of the socio-ecological processes that shape the Everglades takes on a “rhizomatic” quality that mirrors the rhizome of the vast and thickly tangled swamp itself. Describing her ethnographic approach, Ogden says:

“This book should be read as a part of the everglades entanglement, or better, as an experiment with the rhizome’s logic… I have allowed the rhizome to guide this book’s composition. Each chapter maps the course of a particular trajectory within the Everglades rhizome” (31).

Following from these examples, I think it makes sense that my own research and writing would take on some of the characteristics of a watershed. And so, in what follows I hope to explore what I have come to think of as watershed ethnography – one that “goes with the flows,” finds the places where they converge, and navigates the currents that emerge from the intensive variations in their viscosity.

Watershed Theory

The Chesapeake Bay’s watershed covers 64,000 square miles and includes approximately 18 million people. Conducting an ethnography of the watershed on this scale would be an enormous undertaking, far in excess of a doctoral dissertation and potentially even of the confines of ethnographic methods themselves. Instead, I characterize my research as a “watershed ethnography.” It is a subtle and, some may argue, small semantic difference, but there is good reason for inverting the terms. An ethnography of a watershed implies simply that the object of the research is a drainage basin, but “watershed ethnography” is something different: at the same time that it is shaped by the contours of the flow of water on the landscape, it also redefines our conception of the watershed itself.

A watershed is traditionally defined as “the area that drains to a common waterway, such as a stream, lake, estuary, wetland, aquifer or even the ocean” (EPA). For example, all of the water that falls on the Chesapeake Bay’s watershed ultimately flows into the Chesapeake Bay. Ecologists have come to recognize that there is more to a watershed than water drainage, and have begun to think of watersheds as complex, interconnected systems. This notion includes the myriad living organisms that depend on the water for their livelihoods, and, to varying extents, the social lives of the humans who utilize and affect water quantity and quality. Integrating human and non-human practices and processes in a way that adequately represents both presents a challenge to existing social and natural science theories and methodologies.

An ethnography of a watershed would apply an ethnographic methodology to the drainage basin and its inhabitants. In other words, an ethnographer might conduct interviews and participant observation with people who live in the watershed – especially those who are engaged in work to improve water quality. In more recent years, the ethnographer might incorporate some quantitative methods as well to develop a broader sense of the perspectives of the people who live throughout the watershed’s boundaries. The questions driving such research might concern the ways people think of themselves as part of the watershed, or the ways that their actions affect the water and other natural elements that constitute it. In this sense, ethnography itself is taken to be an abstract methodology that can be equally applied to any issue or cultural practice. Additionally, regardless of how “integrated” the natural and social systems are thought to be, the focus of ethnography will always be on the “social dimensions” of life in the watershed, because this approach fails to present the watershed itself as a product of social relations.

Several developments in the social sciences and humanities in the last few decades have made it possible to imagine a different kind of ethnography – one that is able to expand beyond the “social dimensions” and the dualistic view of natural and social systems. This is not meant as a way to usurp the role of ecologists and other environmental scientists. Because their methods were also designed with a particular object in mind, they are very effective for studying natural processes. What natural science methods are not well designed for is studying interconnections and the complex entanglements of humans and non-humans precisely because they must in some ways isolate themselves from social and political discourse. Attempting to study social practices objectively creates ethical and political conflicts because research is always socially situated and cannot be isolated in the same ways.

Recent approaches enable a better integration of natural and social systems because they transform our conception of the relationships between humans and nonhumans, including materials, organisms, and technologies. Specifically, these developments have come together around social science and humanities research that attempts to address the myriad political and ecological crises we face around the world today. We live in a time that many have called the “anthropocene” – a time when human activity has reached a point where it can be recognized as a global geological force. Although the timeline for the epoch is contested, its origins have been traced back to the 16th and 17th centuries. This period has been characterized by climate change, deforestation, and mass extinctions as well as global inequalities in the form of colonization, slavery, and economic exploitation. I would not be the first to point out the links between these processes, and some philosophers and social scientists have gone so far as to propose alternate names that reflect the social causes underlying both ecological destruction and human exploitation.

Haraway (2016), as a remedy for the anthropocene/capitalocene, proposes “tentacular thinking” in what she refers to as the Chthulucene: “…a name for an elsewhere and elsewhen that was, still is, and might yet be” (31) and “…an ongoing temporality that resists figuration and dating and demands myriad names” (51):

“The tentacular ones tangle me in SF. Their many appendages make string figures; they entwine me in the poiesis – the making – of speculative fabulation, science fiction, science fact, speculative feminism, soin de ficelle, so far. The tentacular ones make attachments and detachments; they make cuts and knots; they make a difference; they weave paths and consequences but not determinisms; they are both open and knotted in some ways and not others.”

In other words, tentacular thinking attempts to overcome the dualistic thought that created the anthropocene by exploring the entanglements of beings and the processes of sympoiesis – or making-with – in which we are all always engaged. These modes of thought force us to reconsider not only our relationships with nonhuman organisms, but also with the technologies we utilize, the knowledge practices in which we engage, and the institutions that govern us. And they challenge us to be cognizant of the kinds of relational systems, including our own, that are produced from these myriad interactions.

Within this conceptual framework, all forms of scientific research – including ethnography – take on a new character. The anthropocene requires a scientific practice that is not only oriented around the production of knowledge; it demands “… new research practices to excavate, encounter and extend reparative possibilities for alternative futures” (Manifesto, ii). This means that scientific practices must be considered as part of the social milieu in which they operate. Ethnographies of science began this work in the 1980s. In fact, much of the theory and practice that has developed into the kinds of tentacular thinking I have been describing emerged within the field of science studies.

Science poses a unique problem for social scientists. It is, without question, a social process. The knowledge it generates would not be possible without the social practices, norms, and institutions that foster it. But it is a social practice primarily oriented around understanding the material world. As a result, anthropologists and sociologists who study scientific practice can either relegate it to a unique category somehow outside of the social field, or incorporate it into our understanding of social practice. This is what sociologists of science refer to as first-order symmetry – treating all science, and not simply faulty science (e.g. Lysenkoism), as inherently social.

However, once we begin to treat science as a social process, a second problem emerges – the scientists resist. Science is not social, they argue; it is an objective process that produces objective knowledge. To say otherwise would be to reject the “facts” that science generates as mere “social constructs” – as if to stop believing in Boyle’s Law would cause airplanes to fall from the sky. In fact, this is exactly what happened in what came to be known as the “Science Wars” of the 1990s. The response on the part of some science studies scholars was to extend our conception of the social field through what came to be known as second-order or generalized symmetry.

Generalized symmetry maintains that science is indeed a social process, but that social processes always include nonhuman participants – the very materials, objects, technologies, and organisms that scientists themselves are tasked with studying. With this, science becomes a very different kind of practice whose primary project is not to develop accurate or truthful representations of the world, but rather to build effective relationships with the nonhuman beings with whom we share it. In other words, symmetry forces us into a non-representational position wherein practice and process are primary (Pickering).

Pickering advocates for what he calls a “performative idiom” for science and technology studies. The philosophy and sociology of science, he argues, has had an “obsession with knowledge” and performativity rebalances our understanding “toward a recognition of science’s material powers” (7). In a performative idiom, “science is regarded as a field of powers, capacities, and performances, situated in machinic captures of material agency” (7). Through what he calls the “dance of agency” scientists interact with the material world through the medium of machines in order to “capture” material agency and domesticate it. However, the outcome cannot be predicted or known in advance – the materials resist capture, and the machines must be remade:

“… we have no idea what precise collection of parts will constitute a working machine, nor do we have any idea what its precise powers will be. There is no thread in the present that we can hang onto which determines the outcome of cultural extension” (24).

Thus, the “dance of agency” – or “mangle of practice,” as he also refers to it – is temporally emergent. But this process of emergence and mangling is not simply a scientific procedure. Pickering describes a metaphysics of agency in which the world is “…continually doing things, things that bear upon us not as observation statements upon disembodied intellects but as forces upon material beings” (6). He calls to mind the weather – “winds, storms, droughts, heat and cold” – that not only affects us materially, but also in “life threatening ways” (6). Everyday life, he contends, involves “coping with material agency” that cannot be reduced to human causes. “My suggestion is that we should see science (and of course technology) as a continuation and extension of this business of coping with material agency” (Pickering; 6-7).

So we come back to the entanglement of human and nonhuman practices in the anthropocene, an entanglement that far exceeds the practices of science. This has resulted in a reconceptualization not only of the role of science, but also of the nature of society. Social practices, too, must be reconsidered as the entanglement of human and nonhuman agencies, leading to the further breakdown of the boundaries that define the social and natural sciences. Haraway’s cyborgs and companion species are two familiar examples wherein boundaries are breached – between organism and machine, and human and animal. Where Haraway differs from many science studies scholars is in recognizing that these boundaries are not simply oriented around objective knowledge production in the sciences, but also shape and are shaped by the asymmetries of power that operate within our broader social world. She advocates for a “situated knowledge” that is responsive to these asymmetries and responsible for the kinds of asymmetrical relationships in which it is engaged.

Making such a situated knowledge possible requires a thorough understanding not only of the entanglements of humans and nonhumans, but also of how these interactions combine to produce systemic asymmetries, and how scientific and ecological practices are in turn affected by these systems. Haraway, for her part, shows not only how science is entangled with the nonhumans it is tasked with understanding, but also with global capitalist markets, the military, and patriarchal society. In other words, in order to think about the entanglements of scientific practices, we must also think about the broader entanglements that make scientific research possible in the present era.

We seem to be operating in what Moore (2016) refers to as a capitalist “web of life” – a political ecology of exploitation that reduces both humans and nonhumans to their market value. This “web” is produced from the entanglements of humans and nonhumans and also reciprocally influences the contours that those entanglements take, including those of scientific practices. Thus, conceptualizing the web of global capitalism helps us think through our day-to-day entanglements and the possibility for building a different kind of political ecology.

But capitalism is not global by default. As Tsing (2005) points out, capitalism “spreads through aspirations to fulfill universal dreams and schemes” but “it can only be charged and enacted in the sticky materiality of practical encounters” (1). Thus, we must understand capitalism not as a global system, but as a series of “global connections” linked together by capitalism’s universalizing project. This enables us to recognize the localized entanglements in which we are all engaged, while also indexing the universal project of capitalism that influences many, if not all, of them.

To take universals at face value, Tsing argues, is to “erase the making of global connection” (7). She asks, “How can universals be so effective at forging global connections if they posit an already united world in which the work of connection is unnecessary?” (7). Universals are “…knowledge that moves… across localities and cultures” (7), but movement cannot happen without what Tsing calls “…friction: the awkward, unequal, unstable, and creative qualities of interconnection across difference” (4). Universals cannot simply travel unimpeded anywhere they please; rather, it is through friction that universals gain “grip” and become “charged and changed by their travels” (8). In other words, “friction gives purchase to universals, allowing them to spread as frameworks for the practice of power” (10).

Techno-science also works within and amongst these global connections and frictions. Edwards (2010), for example, refers specifically to the concept of computational friction, or “the expenditures of energy and limited resources in the processing of numbers” (112). However, his concept mirrors Tsing’s in many ways. He describes the history of the field of meteorology and the processes by which it became a global science able to chart climatological effects around the world. In the early days, such a science simply wasn’t possible. Material frictions, including limited computational power (“computers” at the time were people whose job it was to process complex equations by hand) and data collection (a lack of devices for tracking atmospheric processes), were compounded by the social frictions between nation-states that impeded the sharing of data and other resources (the fear was that the information could be used for military purposes).

It was by working through these material and social frictions that meteorology became a global science with an international body of scholars, the IPCC, keeping it functioning. At the same time, these processes were bound up with the same globalizing aspirations of capitalism itself.

These global connections and the processes that maintain them feed back into our everyday lives and influence the kinds of entanglements in which we choose to engage through a process of subjectification. Robbins (2007) describes how people become subjectified by turfgrass. Lawns, he argues, are not only aesthetically appealing surroundings for our houses; they draw us into a web of entanglements with turfgrass, weeds, other homeowners, the chemical industry, the real estate industry, and other influences that in turn shape how we think of ourselves and the kinds of engagements that are possible.

“The lawn as sculpted, immaculate, atemporal, and emerald green monoculture … only developed as a product of the economic growth conditions in suburban real estate development, tied to proselytizing that connected the lawn with a certain kind of desirable urban citizen and economic subject” (129).

Lawn people are anxious, he claims, anxious about the condition of their lawn and others’ perceptions of it, but also anxious about the economics of maintaining it, and the ecological harm resulting from the chemicals they must apply in order to maintain the idyllic lawn. Lawns “hail” into existence certain kinds of human subjects with ecological causes and consequences.

This is why attempts at education so often fail to alter behavior. As subjects of these broader political ecologies – particularly those of us who have little political power – we are unable to simply choose one action over another because of the kinds of entanglements in which we are engaged. Crafting a more sustainable and just political ecology will require more than simply the production of knowledge and the legislation of behavior. It will mean restructuring the entanglements that constitute the present political ecologies in which we are engaged through material and social reconfigurations and the production of new subjectivities.

Watershed Ethnography

How does one do a watershed ethnography? I have already pointed out that there is a difference between an “ethnography of a watershed” and a “watershed ethnography.” Maybe the case is belabored, but I want to press the idea that ethnography – in light of the theories of entanglement and the consequent redefinition of the project of science – can no longer be conceived as an abstract set of methods that can be almost universally applied with some modest modifications. Ethnography must be situated within the political ecologies in which it operates, and that means that it becomes a different practice in each case.

In this sense, perhaps we need a conception of “tentacular” ethnography. But in this case, I refer not only to the entanglements that tentacles suggest, but also to the extraordinary camouflaging capabilities of many cephalopods. Octopuses, in particular, are exemplary of this skill, able to change not only their coloration but also the texture of their skin to blend in with almost any surface. They accomplish this with the use of chromatophores – specialized pigment cells that expand and contract to expose different colors – along with the muscular papillae that reshape the skin’s texture. However, these cells are not simply color-changing cells; they are also sensory organs that operate through a network distributed across the octopus’s body. In other words, when the octopus changes its color and texture, it is not only attempting to match its surroundings, it is feeling the rocks, sand, plants, and shells around it. The octopus’s skin is its way of relating to the ecology that surrounds it.

Ethnography has always been this kind of sensing-camouflaging skin – ethnographers are chromatophores. The ethnographer’s skill is to immerse herself in the cultural ecology that she is attempting to understand. She must not only observe the activities and practices of the people, she must also participate, take part in those practices, and allow herself to be transformed by them. This was the original purpose of ethnography, and the reason for the trepidation among early ethnographers around the possibility of “going native.” Despite this, these early ethnographic practices were oriented around gaining some semblance of objective knowledge about the people rather than sensation as feeling and being.

In recent years, ethnographic methods have been reduced to simple data collection practices. Participant-observation and interviews allow the researcher to gather detailed and rich information about almost any topic, but what has been lost is a sense of immersion – of mimicry and camouflage as itself a process of sensation. Ethnography, in this sense, is not about entanglement with other modes of existence, but simply capturing what can be observed and documented in order to convey an abstracted understanding. The ethnographer – and the reader of the ethnography – are not transformed as a result of the encounter, and there is no risk of “going native.”

Tentacular ethnography is the way to bring ethnography into the anthropocene and confront the entangled problems we currently face. But this kind of ethnography must not be considered an abstract suite of methods that can be universally applied; instead, as Kim Fortun (2012) argues, it must make itself “‘appropriate’ to the historical conditions in which we find ourselves” (459). She claims that:

“Ethnography also has a record and habit of shifting in concert with the times, responsive to both historical conditions and internal critique (of the sort Writing Culture offered). And these conditions can be discerned ethnographically” (Fortun 2012; 451).

I take this to mean not only that the practice of ethnography as a whole must change to meet global historical conditions, but that it must also – octopus-like – situate itself with respect to the particular systems that ethnographers are attempting to resolve. Fortun, whose concern is the condition of late industrialism and the technological systems that pollute our atmosphere, land, and water, describes ethnography as itself a kind of technology – a “…means through which things are enabled” (450). In this sense, ethnography, like technology, can be “designed” and can “produce” various things.

In other cases, ethnographers have reconfigured their practice to align with other kinds of systems. I have already mentioned Anna Tsing’s work with matsutake harvesters, which takes on a patchy and “unruly” quality, and Laura Ogden, for whom “…the rhizome is not only a metaphor for thinking through the world’s relations, or in this case, theorizing the Everglades landscape, but also a model for producing landscape ethnography” (31, italics original). In both cases, the features of the system influence the character of the ethnographic process and define what features the ethnographer must be on the lookout for. Participant observation and interviews continue to form the core of all of these ethnographic practices, but these processes “tolerate, indeed cultivate, open-endedness” (Fortun 2012; 451). Done well, they are, as Tsing would put it, “arts of noticing” – a way of feeling, but also of being, that is attentive to the particular histories of the political ecologies in which we are engaged.

It is in this spirit that I propose the idea of watershed ethnography. A watershed is formed by flows. Water falls upon a landscape, percolates through the soil, condenses into drops and begins to pool. It flows over the land and gradually converges into increasingly larger creeks, streams, and rivers. Ultimately these cascade into a single confluence – a lake, estuary, wetland, or ocean. As I mentioned above, however, it is not only water that flows in a watershed, and it is the confluence of these disparate flows – the way they shape and reshape one another – that interests me.

In my research with the Chesapeake Bay watershed, I have followed the flows that constitute the political ecology of the landscape and sought out the sites of confluence. I was fortunate to begin my research with the modeling taking place at the Chesapeake Bay Program (CBP). It was while wading in the waters of this enormous techno-scientific environmental management framework that I began to see flows converging. From there I had only to follow them up and downstream to see what other confluences emerged and how different flows tended to influence one another. I spent a lot of time at the CBP attending meetings, because it was in these meetings that things tended to converge – although the streams themselves can be seen elsewhere. I conducted interviews to… I also jumped into the streams at various points where possible – learning to make simple computational models and requesting feedback on the models from my collaborators. In many ways, I tried my best to immerse myself in the various projects and processes taking place.

But the goal of tentacular ethnography, and of watershed ethnography as a manifestation of it, is not only to learn and produce knowledge about the political ecologies in which we are entangled. Nor can the goal be simply to become entangled oneself – to unreflexively “go with the flow.” The goal, as I mentioned earlier, must be to reconfigure the flows materially and subjectively in the hope of producing something more just, more sustainable. As Fortun describes:

“Our task now becomes creative. We must try, through the design of an experimental ethnographic system, to provoke new idioms, new ways of thinking, which grasp and attend to current realities. Not knowing in advance what these idioms will look and sound like” (Fortun 2012; 453)

in order to encourage

“…particular subject effects—subjects able and willing—even wanting, desiring—to become party to new ways of thinking about and engaging a particular problem domain, a domain that we have analyzed ethnographically to understand the discursive gaps and risks that characterize it” (458-459).

In my research, I have seen the tensions – the currents and gyres – that emerge from the various material and subjective confluences that constitute the watershed. My hope is that this ethnography serves as an “experimental system” in the way Fortun describes – that it reveals and even brings into existence new “frictions” that we must work through collectively. I hope that by channeling, redirecting, and potentially damming some of these flows, we might reconfigure the watershed system to produce new subjectivities and new political relationships in order to produce a more just and sustainable watershed political ecology.