Complexity theory, that is, the study of complex systems, can be traced back to the 18th century and the classical political economy of the Scottish Enlightenment, although the real pioneers of the field were 20th-century philosophers, economists, mathematicians and social scientists. It’s a rather young field, but it already covers quite a large number of topics (such as complex adaptive systems, chaos theory, non-linearity, emergence or self-organization) and influences other fields of science, like biology, sociology or economics. In this post, each time I mention a complex system I mean a “complex adaptive system” (CAS), which is adaptive (which is not the case for a non-linear system), non-deterministic (which is not the case for a chaotic system) and unpredictable (which is not the case for a simple or linear system). John Holland’s definition of CAS is:
A Complex Adaptive System (CAS) is a dynamic network of many agents (which may represent cells, species, individuals, firms, nations) acting in parallel, constantly acting and reacting to what the other agents are doing. The control of a CAS tends to be highly dispersed and decentralized. If there is to be any coherent behavior in the system, it has to arise from competition and cooperation among the agents themselves. The overall behavior of the system is the result of a huge number of decisions made every moment by many individual agents.
I think we can safely say that science, as a system of organized research within and outside certain institutions, exhibits a large number of properties attributed to CAS. Therefore, let’s assume that science is a complex system.
It’s important to remember that the behavior of a complex system may depend on a unique set of fundamental laws, but these are different from the models we use for practical purposes to describe this behavior. In other words, models of complex systems do not have to be reducible to unique laws. Let me pull out another quote, this time from this recent post by Wavefunction (emphasis mine):
A molecular mechanics model of a molecule assumes the molecule to be a classical set of balls and springs, with the electrons neglected. By any definition this is a ludicrously simple model that completely ignores quantum effects (or at least takes them into consideration implicitly by getting parameters from experiment). Yet, with the right parametrization, it works well-enough to be useful. There could conceivably be many other models which could give the same results. Yet nobody would make the argument that the behavior of molecules modeled in molecular mechanics is not reducible to quantum mechanics.
So, despite some people claiming to know exactly how science operates, and that we are all wrong with our analogies, we are free to make as many models of science as we wish, and there’s nothing wrong with that. Not only because laws and models are different: in many cases, emergent properties of the system cannot be derived from a set of underlying laws, so we use (often naive) models to capture these phenomena.
How many models of science can we build? And how many models are enough?
We could compare science to a multi-agent system, where researchers would compete for goods produced by science funders.
We could compare science to a culture, where research areas would rise and fall as a result of competition between memes. Researchers and science funders would be the agents of transmission.
We could compare science to a simple system with linear laws (such as “more money, more papers”), which becomes unpredictable due to inherent elements of randomness (scientific discoveries).
We could compare science to a social system, in which the behavior of researchers could be modelled by game theory.
We could compare science to a campfire, where people gather and tell stories.
We could make analogies to art, economics, sociology, or almost anything else. We could derive “laws” or “rules” based on these models, which can often (within certain boundaries) approximate the behavior of the system quite accurately.
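To make the first of these analogies a bit more concrete, here is a minimal sketch of a multi-agent model in which researchers compete for a fixed pool of grants. Every number, name and update rule below is an invented assumption for illustration, not an established model of science funding:

```python
import random

# Toy multi-agent sketch: researchers compete for grants handed out by
# funders. Funders pick winners with probability proportional to current
# productivity, and funding feeds back into productivity. All parameters
# (grant sizes, decay rate, etc.) are hypothetical.

random.seed(42)  # fixed seed so the run is reproducible

N_RESEARCHERS = 20
GRANTS_PER_ROUND = 5
ROUNDS = 50

productivity = [1.0] * N_RESEARCHERS  # rough "papers per round" per agent

for _ in range(ROUNDS):
    # funders award grants, favoring currently productive researchers
    winners = random.choices(range(N_RESEARCHERS),
                             weights=productivity, k=GRANTS_PER_ROUND)
    for w in winners:
        productivity[w] += 0.5  # funded researchers become more productive
    for i in range(N_RESEARCHERS):
        productivity[i] *= 0.98  # everyone decays slightly without funding

median = sorted(productivity)[N_RESEARCHERS // 2]
print(f"max productivity: {max(productivity):.2f}, median: {median:.2f}")
```

Even with such crude local rules, a rich-get-richer pattern (the Matthew effect) tends to emerge at the system level – an example of behavior that belongs to the model as a whole rather than to any single agent’s rule.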
However, asking which model is the best one is like asking which approximation of molecules is the best one. The answer is that it depends on the experiment. For protein structures, there’s a large spectrum of different approximations in use, depending on the task (rough structure comparison, structure modelling, molecular dynamics, docking of small compounds). For other complex systems the situation is quite similar – the practical purpose determines the choice of the model. This is often forgotten when moving to other fields.
There are also two other approaches – multi-model or multilevel modelling (represented roughly by multiscale modelling) and model-free modelling (represented roughly by neural networks) – but when these are chosen, it is for practical purposes, not because they represent “reality” better.
I’ve been thinking about the future of science and strategy for science for quite some time. It can be quite difficult already at a personal level (career strategy) and really hard at a larger level (for example, an open science strategy for Poland). What I’ve learned from Michael Nielsen is that if you want to make predictions about the future, you need to understand the present as well as possible. And I don’t know any better way of understanding something than constructing model after model (and testing them, if that’s possible).
However, if you look at the predictions people make, they usually revolve around the one or two ideas their authors like the most. People don’t test their predictions against different models, not to mention trying to combine models, or learning something from models incompatible with their own ideas.
But treating science as a complex system doesn’t mean only a slight update to our methodology, that is, testing different approaches. It provides us with a variety of tools to build and test our models (network analysis, multi-agent modelling, pattern-oriented modelling, cellular automata, game theory – and the list goes on). How to apply these tools to understand how science develops will be the topic of upcoming posts.
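As a small taste of one of those tools, here is a toy game-theory sketch: two researchers decide whether to share data or keep it private. The payoff numbers are made up to mimic a prisoner’s dilemma (mutual sharing beats mutual hoarding, but hoarding against a sharer pays best individually); they are purely illustrative assumptions, not measured incentives:

```python
from itertools import product

# Hypothetical payoffs for a data-sharing game between two researchers:
# (my move, their move) -> my payoff. The numbers encode a classic
# prisoner's dilemma and are invented for illustration.
PAYOFFS = {
    ("share", "share"): 3,
    ("share", "hoard"): 0,
    ("hoard", "share"): 5,
    ("hoard", "hoard"): 1,
}

def best_response(their_move):
    """The move that maximizes my payoff against a fixed opponent move."""
    return max(("share", "hoard"), key=lambda m: PAYOFFS[(m, their_move)])

def is_nash_equilibrium(move_a, move_b):
    """Neither player can gain by unilaterally deviating."""
    return best_response(move_b) == move_a and best_response(move_a) == move_b

for a, b in product(("share", "hoard"), repeat=2):
    label = "Nash equilibrium" if is_nash_equilibrium(a, b) else ""
    print(f"{a:5s} / {b:5s}  {label}")
```

With these (assumed) payoffs, the only equilibrium is mutual hoarding, even though mutual sharing would make both researchers better off – one possible way to formalize why open practices may need external incentives rather than goodwill alone.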