Computer Models and Neoliberal Environmentality

Do computer models inevitably lead to neoliberal forms of governance? This is one of the persistent questions that I've grappled with in my research on the use of computational models for environmental management in the Chesapeake Bay watershed. Over the years, I've struggled with the idea that maybe computer models, despite their incredible power to help us understand complex issues, only end up making things worse. Models inevitably reduce that complexity to simplified numerical representations, and in so doing they feed into the kinds of neoliberal governmentality that subjectivize us towards individualized economic rationality and away from the kinds of relational engagement that are needed to actually deal with the social and environmental problems we face. My question is: can computer models be redeemed? Can they actually contribute to relational engagement rather than liberal individualism? I think I have the beginning of an answer now. I've written the full argument up as a journal article, which is currently under review, but I'd like to share the basic structure of my thinking here so that others can start thinking through some of its implications.

Computer models – indeed, all kinds of models – are inevitably reductionistic. It's a fundamental characteristic of models that they must reduce the complexity of a system, either by reducing its size (as in a scale model) or by reducing the number of factors affecting the system. The latter is what computer models do. They isolate those components of a system that are relevant to the issues at hand and represent only those factors as numerical equations. The result is a model that can take some input, like the quantity of nitrogen applied to the landscape, and generate an output, such as the amount of oxygen dissolved in the waters of the Chesapeake Bay. Between these two quantities there are a number of complex processes at work, but the computer model has reduced and simplified them to provide a baseline estimate that is more or less accurate depending on the quality of the simulation.
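To make that reduction concrete, here is a deliberately toy sketch (in Python) of the kind of reduced-form relationship I'm describing. This is not the Chesapeake Bay Program's model or anything like it; the function name, the parameters, and the numbers are invented purely to show how a pile of ecological processes gets collapsed into a couple of fitted quantities linking one numerical input to one numerical output.

```python
# A toy sketch only -- not the Chesapeake Bay Program's model or anything like it.
# It shows the basic reduction: many ecological processes collapsed into a couple
# of invented parameters linking one numerical input to one numerical output.

def dissolved_oxygen_mg_per_l(nitrogen_load_kg: float,
                              baseline_do: float = 8.0,
                              sensitivity: float = 2.5e-7) -> float:
    """Map an annual nitrogen load applied to the landscape to a rough estimate
    of dissolved oxygen in the estuary.

    The made-up `sensitivity` parameter stands in for everything the model
    leaves out: transport through the watershed, algal growth, decomposition,
    stratification, and so on.
    """
    return max(0.0, baseline_do - sensitivity * nitrogen_load_kg)

# Example: a hypothetical 10-million-kg nitrogen load yields a baseline estimate.
print(dissolved_oxygen_mg_per_l(10_000_000))  # 5.5 mg/L
```

Real watershed and estuary models chain together thousands of such relationships, but the structural move is the same: complexity in, a number out.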

Chesapeake Bay Hydraulic Model
The original Chesapeake Bay Model was a massive physical model of the estuary. Despite its size, it was still a reduction from the actual system.

That process of reduction is not necessarily a problem in and of itself, since all cognition works through reductionism. But it can become a problem if it leads to other practices that are ultimately harmful. One possibility is that such a reductive and quantified process lends itself well to governance practices that reduce human decisions and interactions to market transactions – a form of governmentality known as neoliberalism. In order for neoliberalism to work, non-market values and complex human activities and relationships must be quantified and given a market value. Modeling serves this purpose very effectively and is rightly implicated in the rise of neoliberal governmentality. So there is no doubt that computer models can play an important role in neoliberalism; my question is whether they always do so, or whether they can instead be part of a process that promotes non-market values and complex human and non-human relationships.

In my doctoral research, I conducted interviews and participant observation with computational modelers and environmental management staff throughout the Chesapeake Bay watershed. The modelers I spoke with included not only those who work for the Chesapeake Bay Program (CBP) in the context of environmental management, but also those who work in academic contexts. There is a lot of interaction between the two contexts, and it was important for me to understand how heterogeneous groups of modelers are assembled to work towards management goals. It also means that I have data that allows me to compare the two contexts: modeling for management and modeling for science. It is this comparison that has provided some insight and the beginning of a potential answer to my questions about modeling and neoliberalism.

Confluence of Chenango and Susquehanna Rivers
The Chesapeake Bay watershed is a beautiful and complex socioecological system that can never be reduced to simple quantification.

What I found when talking with the scientific modelers was an immediate recognition that models are inherently simplifications and, therefore, limited or "wrong" in some ways. The phrase "all models are wrong, but some are useful" came up frequently in my discussions with them and captures this pragmatic approach to modeling. But there's more to it than that. In fact, it is this very "wrongness" of the models that makes them such useful tools for scientists. By engaging the models in a continual process of feedback between empirical data and simulation, the modelers are able to recognize the limits of our understanding, which drives further research to address those limits and to improve or expand upon existing models. Ultimately, this results in a high degree of appreciation for the complexity of natural systems and a recognition that they cannot be reduced to the quantitative inputs and outputs of a model.
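For readers who haven't watched that feedback loop in action, the sketch below gives a minimal, invented picture of what it looks like in code: simulate, compare against observations, adjust, repeat. The observations, the lumped parameter, and the update rule are all hypothetical (reusing the toy reduced-form relationship from the earlier sketch); real calibration of estuary models is vastly more elaborate.

```python
# A toy illustration of the data-simulation feedback loop: run the model,
# compare its output to observations, and nudge a lumped parameter until the
# mismatch shrinks. All numbers and the update rule are invented.

# Hypothetical observations: (nitrogen load in kg/yr, observed dissolved oxygen in mg/L)
observations = [(10_000_000, 5.2), (20_000_000, 3.1), (30_000_000, 0.9)]

def predict(load_kg, sensitivity, baseline_do=8.0):
    """Toy reduced-form model from the earlier sketch."""
    return max(0.0, baseline_do - sensitivity * load_kg)

sensitivity = 1.0e-7  # initial guess for the lumped parameter

for step in range(500):
    # Mean difference between simulated and observed dissolved oxygen.
    error = sum(predict(load, sensitivity) - obs for load, obs in observations) / len(observations)
    if abs(error) < 0.01:
        break
    # Over-predicting oxygen means the lumped sensitivity is too low: raise it.
    sensitivity += 1.0e-8 * error

print(f"calibrated sensitivity: {sensitivity:.2e} after {step} steps "
      f"(remaining mean error {error:.3f} mg/L)")
```

The arithmetic is beside the point; what matters is the practice, in which every round of mismatch points back to something the model leaves out.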

Modeling in the management context was not entirely different. There is still a scientific drive to understand the complexity of the system and its processes, but ultimately it comes down to a question of applying the models to management decision-making. I should point out here that environmental management in the Chesapeake Bay watershed is decidedly neoliberal. The primary regulatory structure that guides management is the "total maximum daily load" or TMDL. Implementing a TMDL requires the EPA – in this case, by way of the CBP – to set an upper limit on the quantity of contaminants (in this case nitrogen, phosphorus, and sediment) that can be introduced into the system. The difference between that upper limit and the present load is then distributed as load reductions to the various agents involved, who are required to reduce their inputs to meet the TMDL requirements. In order to meet their load reductions, the agents are supposed to implement "best management practices" or BMPs, which help to reduce the loads. As a result, the process is often reduced to an economistic cost-benefit analysis of trying to determine which BMPs will generate the greatest load reduction for the lowest cost.
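The basic arithmetic of that allocation is simple enough to sketch. The cap, the jurisdictions, their loads, and the proportional allocation rule below are all hypothetical; the actual Bay TMDL allocations were negotiated through a far more involved process. The sketch only shows the "cap minus current load, distributed as reductions" logic described above.

```python
# A minimal sketch of the TMDL arithmetic, with entirely hypothetical numbers.

tmdl_cap_kg = 84_000_000           # hypothetical allowable nitrogen load
current_loads_kg = {               # hypothetical current loads by jurisdiction
    "Jurisdiction A": 40_000_000,
    "Jurisdiction B": 35_000_000,
    "Jurisdiction C": 25_000_000,
}

total_current = sum(current_loads_kg.values())
required_reduction = max(0, total_current - tmdl_cap_kg)

# Distribute the required reduction in proportion to each jurisdiction's share
# of the current load -- one of many possible allocation rules.
for name, load in current_loads_kg.items():
    share = load / total_current
    print(f"{name}: reduce by {share * required_reduction:,.0f} kg")
```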

Computational modeling plays a significant role in this process, at least in the Chesapeake Bay watershed. The CBP has developed a complex model called the Chesapeake Bay Modeling System, or CBMS. This model is used to identify the TMDL limit, distribute load reductions across the watershed, and track the implementation of BMPs and their effect on water quality in the estuary. On the whole, the CBMS is a scientific model and undergoes the same feedback process as other scientific models. It is very impressive to observe the development of the model and the discussions about how different processes are to be simulated. Ultimately, however, this complexity must be made to fit within the management context. That means the model must be understandable to management staff, who are generally not computer scientists or mathematicians, and that it must be useful to them in carrying out the management process. As a result, the enormous complexity of the model must itself be reduced to a simple set of factors that are relevant to the cost-benefit analysis of decision-making within the TMDL framework. In fact, in the management context, the model is sometimes referred to as an "accounting tool" because it allows management staff to calculate the costs and benefits of different BMPs within their region.
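To show what I mean by "accounting tool," here is a small hypothetical sketch of the kind of calculation that ends up being done with the model's outputs: ranking BMPs by cost per unit of load reduced. The practices named below are real categories of BMP, but the reduction estimates and costs are invented; in practice those figures would come out of the CBMS and local cost data.

```python
# A hedged illustration of the "accounting tool" framing: ranking BMPs by
# dollars per kilogram of nitrogen reduced. All figures are invented.

bmps = [
    # (practice, estimated nitrogen reduction in kg/yr, annual cost in dollars)
    ("Cover crops",               12_000,  90_000),
    ("Riparian forest buffers",    8_000, 100_000),
    ("Urban stormwater retrofits", 3_000, 150_000),
]

# Rank by cost-effectiveness, cheapest reductions first.
for practice, reduction_kg, cost in sorted(bmps, key=lambda b: b[2] / b[1]):
    print(f"{practice}: ${cost / reduction_kg:,.2f} per kg of nitrogen reduced")
```

Once the model's outputs are used this way, the feedback loop described above drops out of view: what remains is a ledger of costs and load reductions.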

This comparison shows that, while computer models might contribute to neoliberal forms of governance, they do not necessarily do so. When they do, the models themselves are torn from the continual feedback process of scientific understanding and, in the process, reduced to simple accounting tools. In other words, there is hope that computer models might be redeemed for non-reductionist, non-market relationships with the environment. What still remains to be answered, and is perhaps a question for another research project, is whether the feedback process of scientific understanding can be generalized outside of academia. Is it possible, in other words, to engage management staff and members of the public in a modeling exercise that is not reductionistic – one that might even foster non-market values and complex human and non-human relationships? I will continue to examine this issue in further research, and as I do, I will be sure to share my findings here.

3 thoughts on “Computer Models and Neoliberal Environmentality”

  1. as you point out, all human thinking/doing is always already reductive, and so it is with our tool making and use. The question, it seems to me, is: is there some way to build checks/reflexivity into systems so that they keep in mind (and otherwise address) the built-in limits/biases, or (as it seems to go) are we doomed to be creatures ruled by the tyranny of the means?

    1. Agreed. Though there is a distinction to be made between limits and biases in this context. The biases are handled pretty well by the review process all of these models undergo. It's possible that the review process is itself biased – a lot of climate deniers, etc., would argue that – but I tend to think it's dealt with reasonably well. The limits are another question, and I think they are always there – we can never fully grasp the complexity of a system, particularly one of which we are a part. So models will always be limited even if they are not (completely) biased. Part of the argument I didn't delve into more is that the relational practice is what addresses those limitations, in some cases by enforcing reflexivity (i.e. you must pay attention to the limitations!). Taking models out of that practice in the management context cuts out any potential for reflexivity.

      1. by bias I just meant that we have preferences/pre-judices about what we even notice, and about what to include and what to leave out (what matters and what doesn't, etc). I don't see how that can be eliminated, but it might be monitored and otherwise tested. It would be interesting, given the neo-lib focus you are choosing, to see how matters like costs/benefits are prefigured in the algorithms. You should check out:
        mathbabe.org
