U/Acc in Michael Cisco’s Animal Money

Acceleration, in physics, is the rate of change in velocity with respect to time. Importantly, I think, both velocity and (by extension) acceleration are vector quantities, meaning they require both magnitude and direction. A change in direction without a change in speed is an acceleration. Additionally, and interestingly for accelerationism, the change is not necessarily positive; slowing down is also a form of acceleration. That brings me to what has been described as “unconditional” accelerationism. Based on these definitions, acceleration can only be conditional (i.e. directional), and most of the variants I’ve seen do ultimately suggest some kind of directionality if not an outright teleology. In that sense, maybe “acceleration” is the wrong metaphor and needs to be dismantled itself in order to open up the unconditional. Instead of a change in velocity (direction and/or speed), perhaps the better way to approach the unconditional is through the rapid proliferation of vectors, i.e. of both directionalities and speeds.

That brings me to Michael Cisco’s Animal Money… This massive book is, perhaps, the most hyperstitional, psychedelic, non-linear, mind-warping text I’ve ever read, and I’ve read some crazy shit. It took me a year to get through it because I had to stop at intervals and read something that made sense just so I could give my brain a rest. Having finished it, I can’t say that I know entirely what it was about, but I can say the experience was worth it.

The plot – if one could be distilled from the morass of intersecting and diverging narratives and non-narratives – is that a group of economists, ostracized from their professional meeting due to various injuries and illnesses, devise (or are implanted with) the concept of animal money – a form of living currency that self-replicates, proliferates, and evolves as any living creature would. As an experiment, they break into a nearby zoo and create the first – I guess you could say “batch” – of animal money. What exactly animal money is, how it’s created, how it works – all of that is left to the imagination. But the effects of its creation are immense. The world is transformed, economies collapse and congeal, social systems disintegrate, new forms of life emerge, alien entities start to show up, and so on. In order to conceal their role in the creation of this disruptive currency, and to avoid a conspiracy to stop the currency from proliferating, the economists create a hyperstitional identity known as Assiyeh, an astrophysicist-errant whose experiments revolve around recovering the ghost of her father, whom she accidentally projected into space in an earlier experiment gone wrong. For much of the novel, Assiyeh is on the lam, traveling through the cosmos, taking refuge on various exotic planets, falling in love with robots, and having sex with what is essentially a literal body-without-organs.

As crazy as it is, the plot is subsidiary to the broader goal of the novel, which is the deterritorialization of literally everything. And Cisco does it with such beautiful and grotesque imagery that it is hard to turn away. For example, I can imagine whole volumes written on a small passage describing what is referred to as “The Fudge Machine”:

“Behind a facade of dull whiteboard meeting rooms there are sacred vaults that contain the Fudge Machine, a pile of cushions interconnected with hoses, hanging bladders, surrounded by curing meat and cheese like a delicatessen, a blue mold crust on everything and in the middle there’s a trench cut near and narrow into the stone floor, and the fudge pours into this trench from a concatenation of translucent bladders that look like sea anemone. The fudge is a creamy confection of tan rubber tepidity, made from the mucus residue left behind by the collision of human energies with the softly ablative baffles that are the basic substance of the organization. In an adjoining chamber, the fudge is sliced into marvelously uniform rectangular tiles and doled out, still sort of warm, neatly wrapped in wax paper pages to be gobbled up, paper and all, by accessories. The vaults smell like a candy store and there are candy dishes full of sticky, bland grandma candies, Jordan almonds, butterscotches, virtually flavorless peppermints in both barber pole and chalk hunk versions, drab pastel jellybeans with no black ones, of course.”

This is just one small part of the book and leads to nothing else, but the image has stayed with me as a metaphor for life within the structural confines of our social institutions. Such images are pervasive in the book, challenging our notions of everyday life and providing fodder for a surrealistic imaginary. You cannot come out of reading this book without experiencing a fundamental shift in your operating system.

To me, the book captures the spirit of the unconditional – a broad experimentation and the generative proliferation of multitudinous subject- and objectivities. In this framework, it’s not acceleration that matters – it’s not, fundamentally, about a change in speed or direction – but rather, it’s about proliferation, and the rapid production of alternate fictions, any of which might lead to a hyperstitional future. Whether it’s a “good” book or not is beside the point – the goal is not to please or enjoy, but to produce. Much like its eponymous currency, Animal Money the book is meant to live and evolve, and we are meant to absorb it and allow ourselves to be changed by it. What comes from such experimentation? Who knows… but that’s the point.

De Landa – Synthetic Reason and Computational Simulation

DMF recently posted this video to Deterritorial Investigations. In it Manuel De Landa discusses various computational simulation techniques and how they influence scientific thought. Having not read De Landa in a while, it was refreshing to return to his mode of thought, and, in fact, find some parallels with my own research. Specifically, there are some similarities between what he describes as “synthetic reason” and the way that I found researchers to be using computational models in a scientific context.

Part of what De Landa is arguing here is that computer simulations function as a new kind of synthetic reason, and a new way of understanding the world we live in (I gather that this line of thought is developed in his recent book Philosophy and Simulation: The Emergence of Synthetic Reason, which I have not read). Instead of simply observing behavior in the natural world and drawing assumptions from those observations, simulations allow us to synthesize the conditions of our assumptions and test them in the space of a limited world. This can then lead to new insights and intuitions about the world that we can apply to our observations in order to develop new assumptions. The idea is encapsulated when he says:

“… what virtual environments are making possible is precisely to synthesize more and more ideas in that direction, to allow the machine to be kind of a probing head into a world that we don’t see because we have these blinders on.”

This synthetic approach was similar to the kind of science I encountered in my own research on the Chesapeake Bay watershed. I’ve talked about it in another post, but essentially, researchers were using models as tools for exploring aspects of the watershed that they can’t observe directly for various reasons, and for examining the limits of their understanding (i.e. where the models are wrong). The result is a continual feedback between simulation and empirical data that accelerates and intensifies an engagement with the natural world. When models are part of this kind of feedback, I think they can be very productive tools for appreciating the complexity of ecological systems and recognizing the limits of our ability to understand them. This is important to understand because computer modeling is often criticized for being too abstract, technological, and reductionist.

On the other hand, models can also reinforce our predefined assumptions, particularly when they are displaced from the simulation-data feedback. This was the case with the application of computational models to environmental management. That’s not to say that the models used were inaccurate or not scientific, but that the management context is displaced from the feedback system in which the models are constructed. In those cases, the models are often modified to make them more useful for the management staff – incorporating things like costs and benefits of various management approaches, and simplifying the model to make them easier to run quickly and easier to understand for non-modelers. As a result, certain assumptions get built into the models that simply reinforce the assumptions of the broader management context. In the Chesapeake Bay watershed, this meant that the models reinforced the neoliberal approach structured by the Total Maximum Daily Load (TMDL) method (costs and benefits, best management practices, etc.). As a result, alternative forms of engagement with the environment and non-quantitative ways of valuing environmental benefits are ignored. This leads into a whole other critique of the neoliberal management approach, which I won’t address here.

So essentially, my findings support De Landa’s synthetic reason argument to an extent. I think it’s important to recognize that context matters. Simulations can be really valuable tools for understanding the natural world and for breaking through our ingrained assumptions, but only when they are part of a broader context (e.g. science) in which that kind of work takes place. When removed from that feedback process (e.g. management), models can often serve to reinforce and replicate our ingrained assumptions, further preventing us from seeing alternatives and recognizing our limitations. These are things we need to be aware of and on the lookout for as modeling becomes increasingly prevalent in environmental science and management.

Temporal Dissolution in Hawksmoor’s London

To get back in the swing of writing blog posts, I’m going to try sharing some quick thoughts on the books that I’ve read recently. In the next week or so, I’ll talk about Mapping the Interior by Stephen Graham Jones, The Ark Sakura by Kobo Abe, Days Between Stations by Steve Erickson, and a few other things I’ve read since I finished my PhD this past summer. But today I want to talk about Hawksmoor by Peter Ackroyd.

Yes, I bought into Duncan Jones’s homage to his father, David Bowie, in the form of a mass reading group. Actually, I bought the book because I found it at a used book store in New Orleans at the beginning of the year. I wouldn’t have known about it if not for Jones’s reading group, but I wouldn’t have bought and read it if I hadn’t stumbled across it at the book store. So, it’s a little bit of providence, I guess.

The book is a murder mystery or thriller that tacks back and forth between present day (1980s) and 17th century London. The 17th century chapters are written in a quasi-archaic, but decidedly readable style from the first-person perspective of the architect Nicholas Dyer (who is based on an actual 17th century architect named Nicholas Hawksmoor who designed and built several churches in London, and who is also the subject of Iain Sinclair’s Lud Heat). Dyer is a strange man who recounts in the first few chapters his unfortunate childhood in the wake of a disease that strikes the city and leaves both of his parents dead. As a youth he takes up with a cult figure and learns the secrets of a kind of black magic. Over time, Dyer becomes a well-known architect in the city and applies the occult principles he learns to the design and construction of his churches.

The alternate chapters follow a series of unusual murders in 1980s London that all take place at the very churches that Dyer has designed. These murders are being investigated by a detective at Scotland Yard named… Nicholas Hawksmoor. As you can see, there is already an element of temporal slippage taking place here, but as the novel develops, the parallels between Dyer’s occult architecture and Hawksmoor’s investigation become more clear. Finally, there is almost a total blurring of temporality on the London landscape with the churches forming a kind of focal point.

The book is beautifully written and extremely intelligent. The narrative elements in each alternate chapter contain references to one another so that you feel a sense of continuity, as if the past is never quite gone and the present was there all along. I am not very familiar with London, having only been there once as a teenager, but Ackroyd is well known for his flaneur-like accounts of the city and its people. Reading Hawksmoor, this intimacy with the city comes across, and the reader is left with a visceral sense of the temporal depth of the cityscape, and the nuances of every street corner. I would like to read his non-fiction books about London – particularly (given my interest in watersheds) his book Thames: Sacred River. If it carries any of the atmosphere of Hawksmoor, it might be the kind of dark environmental writing I’ve been looking for.

I have not kept up with whatever discussion is happening around the novel in Duncan Jones’s open reading group. For all I know the whole idea might have fallen off the map. But I know that many book stores were sold out shortly after Jones announced the first reading, which makes my encounter with the book all the more serendipitous. If you happen to be in a used book store in the near future, be sure to check the A section on the literature shelf. Maybe you will be pleasantly surprised.

We need more dark environmental writing…

Most of what I’ve been reading lately has nothing (directly) to do with the environment, which always causes me some anxiety. As an environmental anthropologist, I should be reading things about the environment and environmental issues! But then when I look at the literature that’s out there, it’s all very disappointing. I mean, it’s not all optimistic, bright green cheeriness, and there’s a fair amount of doom and gloom, but it’s all about how much we are fucking things up. That’s true, of course, and it’s important to be aware of, but there’s an underlying or unexamined romanticism to it all that makes it, ultimately, all too human – like, if we could just stop fucking things up, everything would be beautiful and happy. I think there’s a general failure to look at the gross, dark, dangerous, monstrous, and terrible aspects of the natural world – to encounter it on its own terms and really grapple with the non-humanity of it all. Like right now I’m reading this book River Horse by William Least Heat-Moon, which is great – it’s funny, and interesting, and well written – but the way the natural world is depicted is as this lovely, peaceful, wonderful space that is just being destroyed by humanity.

I don’t know exactly what I want, but I guess I would know it when I see it. I think the closest thing to it that I’ve read recently is the Southern Reach Trilogy by Jeff VanderMeer. What’s really amazing about that series is that Area X is so not human, and it’s threatening and scary. Also, the bureaucracy, the Southern Reach, is completely entangled with Area X, but not in any kind of deterministic or unilateral way. But it’s fiction, and weird genre fiction at that, so the effect is a bit dulled. What I want to read is a non-fiction Southern Reach trilogy about the uncanny, eerie, and strange aspects of the actual natural world, and its entanglements with the human.

Anyway, just a thought. If you have any suggestions, let me know.

What is a Watershed?

A watershed imagined as a network of streams and rivers.

We think of watersheds as intricate networks of rivers, streams, and creeks that converge toward a single point. But thinking about my research and the ecological relationships that make up a watershed, I’ve come to realize that this image doesn’t really tell the whole story.

It’s better to think of a watershed as an inundation – a flow of water through the landscape rather than across it. Instead of a network, it’s better to think of molecular condensations percolating through the soil, coalescing gradually to form into the rivers and streams we usually think of when we imagine a watershed. The mud in your back yard is as much a part of the watershed as is the creek that runs through the woods or the river that cuts through town.

Now, thinking metaphorically, I wonder how it would change our framing of social relationships to think in terms of these molecular condensations. How would our lives be different if we considered ourselves part of the mud rather than nodes in a network of relations?

It’s just a thought – something I might explore more later…

Playing Games with Anthropology #2: Anthropological Futures

This weekend at the AAA Conference in Washington, DC, we saw an unprecedented number of sessions related to various forms of anthropological futures. In a panel that I co-organized with several friends from Twitter, we had a great discussion about the relationship between science fiction, anthropology, and imagining a better world in the present, past, and future. In addition, there was an Anthropology Con, which focused on gaming and included a “lounge” in which folks could come and play anthropological games, including an ongoing D&D campaign. It was one of the best conferences I’ve been to in a while!

Piggy-backing off of this, I want to share an activity I designed for the students in my biological anthropology course. I’ve talked about using games to teach anthropological concepts before, and shared a game that I designed for that purpose, with mixed results in the classroom. This time, I’ve simplified the game and made it more about building a narrative than about winning or defeating other teams. I ran it last semester and it worked out great, so this semester I’m only making a few small tweaks before running it again in the final week of classes.

It goes like this. In the first session (download the PDF here), which I have students do after we talk about environmental issues, they are asked to conceptualize a post-apocalyptic community. I give them an environmental scenario and ask them to make some basic choices about how their community is structured. I then ask them to think about how these choices help and/or hurt them in their given environment. Then I encourage them to creatively flesh out the details of their society, including events, rituals, religious and political practices, foods, games, etc.

In the second session (download the PDF here), which runs a couple of weeks after the first session, I ask students to imagine the future of their community and how they will respond to challenges. The students must first describe their community ten years after the first session and think about how it has changed in that time. Then I have them run through three rounds, each representing one decade of life in the community, in which they encounter some challenge and must adapt to it. They roll dice to see which challenge they encounter and how severe it is. They are also given “resilience points” (I’m not a huge fan of resilience theories, but it’s all I could think of to call them), which they can spend to reduce the effects of the challenge. After rolling the dice and spending resilience points, the students must narrate the effects of the challenge, describing the event, how it affects the community, and, if they spend resilience points, how they were able to adapt to the challenge.
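For anyone who wants to tinker with the mechanics, the dice-and-points loop above can be sketched in a few lines of code. This is only an illustrative sketch: the challenge list, die size, and point values here are hypothetical stand-ins, not the ones in my actual activity handouts.

```python
import random

# Hypothetical challenge table -- the real activity's PDF defines its own.
CHALLENGES = ["drought", "disease", "flood", "conflict", "crop failure", "migration"]

def run_round(resilience_points, spend, rng=random):
    """One decade of community life: roll a challenge and its severity,
    then reduce the severity by the resilience points spent."""
    challenge = rng.choice(CHALLENGES)               # which challenge hits
    severity = rng.randint(1, 6)                     # d6 roll for how bad it is
    spend = min(spend, resilience_points, severity)  # can't spend more than you have or need
    return {
        "challenge": challenge,
        "severity": severity - spend,                # mitigated severity to narrate
        "points_left": resilience_points - spend,
    }

result = run_round(resilience_points=5, spend=2)
print(result)
```

The students' job is then the interesting part: narrating whatever the mitigated severity implies for their community.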

As I mentioned in my talk at the AAA panel, I think the ability to imagine other worlds and futures is an important muscle that needs to be exercised regularly. My hope is that this activity gives students the opportunity to think about potential challenges we might face and various ways to confront them. Feel free to let me know what you think in the comments.

Welcome to the Simulocene

In my contribution to Cultural Anthropology’s Lexicon for the Anthropocene Yet Unseen, I suggested that, perhaps, another name for our epoch, among the many others that have been proposed (anthropocene, capitalocene, chthulucene, etc.), could be the simulocene – a world made from models. In McKenzie Wark’s latest post on “general intellects” he discusses Bruno Latour’s most recent book Facing Gaia: Eight Lectures on the New Climatic Regime. I have not read Facing Gaia, but the dichotomy between Wark’s and Latour’s views on simulation is a familiar issue that I’ve been grappling with in my dissertation research.

As Wark describes it, Latour seeks a return to a premodern politics – turning away from the totalization and globalization of modern science. Specifically, Wark reminds us that we only know climate change and the other environmental catastrophes of our time as they are mediated through simulation. Simulation, Wark explains, merges first nature (the environmental background of modernity viewed as resource) with second nature (the built human environment) to produce a third nature, or what Wark refers to as the inhuman (the messy combination of human technologies and nonhuman agents). For Latour, this third nature is the problem, because it takes us away from a connection with a “lived world.”

This aversion to mediated forms of knowledge is quite common – Anna Tsing, for example, makes some similar statements in both Friction and The Mushroom at the End of the World. There is an almost intuitive sense that mediated knowledge is less reliable and valuable than knowledge derived from unmediated, direct encounters with the natural world. But Wark sees this not only as impossible given that we are dependent upon computer simulations for our understanding of large and complex processes like climate change, but also sees it as a reactionary move:

The decision here is whether to further develop the artifice of the sphere to include in the simulation its conditions of existence, or to think without it, and return to the territoriality of the past — and all that implies.

I agree with Wark, to this extent. We cannot simply return to a world where all knowledge is first-hand and unmediated by technology and abstract quantification. We cannot simply exit the simulocene. But there are still clearly problems with simulation – not all models are created the same, and models can be used, not necessarily to distort ecological conditions (as the climate deniers would claim), but to promote a particular approach to ecological issues that reduces decision-making to a series of financialized cost-benefit analyses. In other words, despite a kind of methodological cooptation (Wark claims “Stack-based, information-based conflict puts one simulation up against another: earth science versus financialization”), neoliberal capitalism continues to have the upper hand. So it’s not as simple a choice as Wark makes it out to be. Rather, we have to ask: what kind of sphere is being produced, and how (if at all) can models be used to make it differently?

The site of the Chesapeake Bay Hydraulic Model, demolished in 2011.

In what I describe as the simulocene, models are not only ways of constructing knowledge about the world. They are also relational artifacts and processes that contribute to the material production of the world. The iconic reminder of this, for me, is the physical model of the Chesapeake Bay that was constructed in the 1970s but never used. The model was, very distinctly, a material artifact. It was housed on the shore of the Chesapeake Bay itself, drew water from the estuary to run through its concrete structure, and dumped that water right back into the Bay. These are the material relations that it embodied, but its relational architecture extended far beyond the shores of Matapeake, Maryland.

The model was constructed by the US Army Corps of Engineers, which was the original agency mandated to manage water across state boundaries – primarily for navigational and military purposes. The fact that the model was never really used and became obsolete almost as soon as it was finished was not simply a matter of physical models being superseded by computational models at the time, as Christine Keiner describes in her history of the model. It also signals a shifting institutional order wherein the Army Corps was superseded by the Environmental Protection Agency as the primary agency for water management in the US, and water quality concerns began to take precedence over navigational and water quantity issues. In other words, the physical model was made obsolete not only by technological change, but also by economic and social changes taking place at the same time.

This, along with the rest of my research, suggests that we can think of computer models not simply as tools for understanding the world, but also as the embodiments of social/institutional frameworks that extend far beyond the technology itself. I’m not trying to suggest that there is a deterministic relationship between the technology of modeling and the social institutions in which it is utilized, but there are feedbacks and broader processes at work that make the two emerge alongside one another.

However, if we are not to go back to “the territoriality of the past” without models then the question is whether they can be redeemed from the neoliberal institutional frameworks in which they are embedded. As I’ve hinted at in a prior post and will elaborate upon in future posts and articles, I think that they can be redeemed, but that the process of building and utilizing a computational model might take on a significantly different character in the process. In other words, we may not be able to exit the simulocene, but we might be able to figure out a way to use computer models to undermine neoliberal management and make a different world possible.

Computer Models and Neoliberal Environmentality

Do computer models inevitably lead to neoliberal forms of governance? This is one of the persistent questions that I’ve grappled with in my research on the use of computational models for environmental management in the Chesapeake Bay watershed. Over the years, I struggled with the idea that maybe computer models, despite their incredible power to help us understand complex issues, only end up making things worse. Models inevitably reduce that complexity to simplified numerical representations, and, in so doing, feed into the kinds of neoliberal governmentality that subjectivize us toward individualized economic rationality and away from the kinds of relational engagement that are needed to actually deal with the social and environmental problems we face. My question is: can computer models be redeemed? Can they actually contribute to relational engagement rather than liberal individualism? I think I have the beginning of an answer now. I’ve written the full argument up as a journal article, which is currently under review, but I’d like to share the basic structure of my thinking here so that others can start thinking through some of its implications.

Computer models – indeed all kinds of models – are inevitably reductionistic. It is a fundamental characteristic of models that they must reduce the complexity of systems, either by reducing their size (as in a scale model) or by reducing the number of factors affecting the system. The latter is what computer models do. They isolate those components of a system that are relevant to the issues at hand and represent only those factors as numerical equations. The result is a model that can take some input, like the quantity of nitrogen applied to the landscape, and generate an output, such as the amount of oxygen dissolved in the waters of the Chesapeake Bay. Between these two quantities there are a number of different complex processes at work, but the computer model has reduced and simplified them to provide a baseline estimate that is more or less accurate depending on the quality of the simulation.

Chesapeake Bay Hydraulic Model
The original Chesapeake Bay Model was a massive physical model of the estuary. Despite its size, it was still a reduction from the actual system.

That process of reduction is not necessarily a problem in and of itself, since all cognition works through reduction. But it can become a problem if it leads to other practices that are ultimately harmful. One possibility is that such a reductive and quantified process lends itself well to governance practices that reduce human decisions and interactions to market transactions – a form of governmentality known as neoliberalism. In order for neoliberalism to work, non-market values and complex human activities and relationships must be quantified and given a market value. Modeling serves this purpose very effectively, and is rightly implicated in the rise of neoliberal governmentality. So there is no doubt that computer models can play an important role in neoliberalism, but my question is whether they always do so, or whether they can be part of a process that promotes non-market values and complex human and non-human relationships.

In my doctoral research, I conducted interviews and participant observation with computational modelers and environmental management staff throughout the Chesapeake Bay watershed. The modelers I spoke with included not only those who work for the Chesapeake Bay Program (CBP) in the context of environmental management, but also those who work in academic contexts as well. There is a lot of interaction between the two contexts and it was important for me to understand how heterogenous groups of modelers are assembled to work towards management goals. However, this also means that I have data that allows me to compare the two different contexts: modeling for management and modeling for science. It is this comparison that has provided some insight and the beginning of a potential answer to my questions about modeling and neoliberalism.

Confluence of Chenango and Susquehanna Rivers
The Chesapeake Bay watershed is a beautiful and complex socioecological system that can never be reduced to simple quantification.

What I found when talking with the scientific modelers was an immediate recognition that models are inherently simplifications and, therefore, limited or “wrong” in some ways. The phrase “all models are wrong, but some are useful” demonstrates their pragmatic approach to modeling, and frequently came up in my discussions with them. But there’s more to it than that. In fact, it is this very “wrongness” of the models that makes them such useful tools for scientists. By engaging the models in a continual process of feedback between empirical data and simulation, the modelers are able to recognize the limitations of our understanding, which drives further research into those limitations in order to improve or expand upon existing models. Ultimately, this resulted in a high degree of appreciation for the complexity of natural systems and a recognition that they cannot be reduced to the quantitative inputs and outputs of the model.
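That feedback loop has a simple computational core: compare simulation output against empirical observations, and let the places where they diverge direct further research. Here is a minimal sketch, with made-up numbers standing in for real monitoring data and an arbitrary threshold for what counts as a "large" error:

```python
def residuals(model, inputs, observations):
    """Difference between empirical data and model predictions.
    Large residuals mark where the model is 'wrong' -- and therefore
    where understanding is thinnest and new research is needed."""
    return [obs - model(x) for x, obs in zip(inputs, observations)]

model = lambda x: 2.0 * x        # hypothetical, very simple simulation
inputs = [1.0, 2.0, 3.0]
observed = [2.1, 3.9, 9.0]       # fabricated observations; the last diverges sharply

errs = residuals(model, inputs, observed)
flagged = [x for x, e in zip(inputs, errs) if abs(e) > 0.5]
print(flagged)  # [3.0] -- the region of the system that drives new questions
```

The point is not the arithmetic but the loop: residuals motivate new study, new study revises the model, and the cycle repeats.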

Modeling in the management context was not entirely different. There is still a scientific drive to understand the complexity of the system and its processes, but ultimately it comes down to a question of applying the models to management decision-making. I should point out here that environmental management in the Chesapeake Bay watershed is decidedly neoliberal. The primary regulatory structure that guides management is the “total maximum daily load” or TMDL. Implementing a TMDL requires the EPA – in this case, by way of the CBP – to set an upper limit on the quantity of contaminants (in this case nitrogen, phosphorus, and sediment) that can be introduced into the system. The difference between the upper limit and the present load is then distributed as a load reduction to the various agents involved, and they are required to reduce their inputs to meet the TMDL requirements. In order to meet their load reductions, the agents are supposed to implement “best management practices” or BMPs, which will help to reduce the loads. As a result, the process is often reduced to an economistic cost-benefit analysis of trying to determine which BMPs will generate the greatest load reduction for the lowest cost.
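The cost-benefit logic at the end of that process can be sketched in a few lines. All of the numbers here are hypothetical; actual BMP costs and reduction efficiencies vary enormously by practice and location.

```python
# Hypothetical BMP menu: (name, load reduction in kg N/yr, annual cost in $).
bmps = [
    ("cover crops",    1200, 15_000),
    ("stream buffers",  900,  8_000),
    ("manure storage",  400, 20_000),
]

target_reduction = 2000  # assumed gap between the current load and the TMDL cap

# Rank by cost-effectiveness (dollars per kg reduced), cheapest first,
# then adopt practices until the target is met -- the "accounting" in miniature.
ranked = sorted(bmps, key=lambda b: b[2] / b[1])
chosen, achieved, total_cost = [], 0, 0
for name, reduction, cost in ranked:
    if achieved >= target_reduction:
        break
    chosen.append(name)
    achieved += reduction
    total_cost += cost

print(chosen, achieved, total_cost)
```

This is, of course, precisely the reduction at issue: once the watershed is framed this way, any value that cannot be expressed as dollars per kilogram simply drops out of the decision.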

Computational modeling plays a significant role in this process, at least in the Chesapeake Bay watershed. The CBP has developed a complex model called the Chesapeake Bay Modeling System or CBMS. This model is used to identify the TMDL limit, distribute load reductions across the watershed, and track the implementation of BMPs and their effect on water quality in the estuary. On the whole, the CBMS is a scientific model and undergoes the same process of feedback that other scientific models undergo. It is very impressive to observe the development of the model and the discussions about how different processes are to be simulated. Ultimately, however, this complexity must be made to fit within the management context. That means the model must be understandable to management staff, who are generally not computer scientists or mathematicians, and it must be useful to them in carrying out the management process. As a result, the enormous complexity of the model must itself be reduced to a simple set of factors relevant to the cost-benefit analysis of decision-making within the TMDL framework. In fact, in the management context, the model is sometimes referred to as an “accounting tool” because it allows management staff to calculate the costs and benefits of different BMPs within their region.

This comparison shows that, while computer models might contribute to neoliberal forms of governance, they do not necessarily do so. When they do, the models are torn from the continual feedback process of scientific understanding and, in the process, reduced to simple accounting tools. In other words, there is hope that computer models might be redeemed for non-reductionist, non-market relationships with the environment. What remains to be answered, and is perhaps a question for another research project, is whether the feedback process of scientific understanding can be generalized outside of academia. Is it possible, in other words, to engage management staff and members of the public in a modeling exercise that is not reductionistic – one that might even foster non-market values and complex relationships between humans and non-humans? I will continue to examine this issue in further research, and as I do, I will be sure to share my findings here.

What’s the purpose of anthropology?

Are we to be “custodians” of a body of “cultural knowledge”? Or, rather, a body of knowledge about cultures written from a particular (white, colonial, male…) perspective?

Sure, I find it interesting and worthwhile to learn about the many different ways that people live and survive in a harsh world. And I think we have important stories to tell, as well as some useful insights we can share. But is the purpose of doing all of this ethnographic research simply to add to that pile of “cultural knowledge” so that we can preserve it for future generations and rattle it off in lectures to our students? I don’t know about you, but that’s not the anthropology I am interested in pursuing and building.

When I think about the purpose of anthropology, I always go back to an excellent lecture by Ghassan Hage (who has his own take on the Sahlins post) in which he suggests we are left with a question: “…how can we make a bad relation a good relation?” In other words, how can we work on improving relationships between and among the very different kinds of people who must share the world today? That’s as good a purpose for anthropology as I can come up with, and I think it’s the mode of anthropology that I see many of my colleagues striving for today.

In that sense, what is the role of that body of knowledge that we might love or despise, but with which we are nevertheless burdened and/or blessed? It either helps us in our relational work or it hinders us – maybe sometimes both. It’s stupid to bemoan the fact that graduate students are “ignorant” of something like “matrilineal cross-cousin marriage,” since knowing about those cultural practices is not an end in itself. What would be truly tragic, however, is if anthropology students and their instructors lost interest in making a better world for all of the people who live those cultural lives today simply in order to preserve a static body of knowledge. Our task, then, is to understand the legacy of that knowledge, listen to the critiques coming from those who were the subjects of its production, and consider the kinds of relationships we compose and maintain by acting as its uncritical “custodians.”

Towards a radical acceleration, or, We have no idea what’s going on anymore…

In the past month, we have seen: two of the biggest storms ever recorded, one of the largest earthquakes seen in decades, wildfires consuming enormous portions of the Western US and other parts of the world, heat waves hitting the West Coast, and many other disasters that haven’t made it into the news. A new report says that global ocean currents could be collapsing due to climate change, and that article contains the most honest quote I’ve seen in weeks:

While geologists have studied events in the past similar to what appears to be happening today, scientists are largely unsure of what lies ahead.

In addition to all of these “natural” disasters, we’ve got two nuclear-armed world leaders playing a game of chicken with one another. We have a fascist in the White House and Nazis proudly roaming the streets. All of this was impossible to imagine just a year ago, but here we are, and we have no idea what’s going on anymore.

Paul Stoller suggests we take a lesson from the Songhay of Niger and slow down, rebuilding some of the “harmony” we have lost. I would say the idea of slowing down is incredibly inane right now. The harmony Stoller calls for was only ever localized and never involved the kinds of globalized geopolitical processes underway today.

At the same time, the accelerationists don’t really have a response to these kinds of threats to life and livelihood except to say “fuck it and we’ll see what happens.” It’s not a position I find very helpful, but maybe total resignation to the inevitable flux of global events is the only response we really have anymore.

Or maybe it’s a question of what exactly is accelerated. Maybe we can start accelerating care and compassion. Maybe we can accelerate the process of building alternatives to the extractive and accumulative economy of Capitalism. I guess that’s a kind of left-accelerationist position, but I’ve not been convinced by any of the social-democrat plans I’ve seen so far. Maybe a more radical kind of left-accelerationism is needed to really confront the horrors of climate change and the violence of Capital accumulation.

We really have no idea what’s happening anymore. No idea what to do about any of it. I’ve written three blog posts in the past few days, and none of them seemed worth posting. I’m posting this one because sometimes “we are unsure what lies ahead” is the only thing anyone can really say…