1. Know this! The agents in the model cannot see what you can see.

In the Resource-Patterns Model of Life (RPM) we work with agents which we call living things (LTs). As we construct an implementation of RPM for some purpose, we give our LTs specific powers of perception and computation. But we give them no more of these “nervous system” capabilities than we have consciously listed.
This point needs emphasis because almost everything that we can gain from RPM requires that the LTs be inadequate as individuals. As individuals the LTs are lucky to survive in the poorest of ways. What we can learn from the model comes afterward, from our exploration of the modes of cooperation which may empower the LTs to thrive as groups in spite of their inadequacy as individuals.
We human modelers place abundant resources into the model along with the LTs — but we are crafty about how we introduce these resources into the environment. We place resources in patterns (RPs) designed so that:
- it will be difficult for our individual (and handicapped!) LTs to make much use of these supplies; but
- it is possible for the LTs to make use of these supplies if they can learn ways to cooperate.
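Such a setup can be sketched in a few dozen lines of code. The sketch below is a hypothetical minimal world of my own devising, not part of RPM itself: every name and number (the grid size, the 4×4 resource patch, the sight radius, the rule that a resource yields only to two or more LTs together, and a “shout” channel as the sole mode of cooperation) is an illustrative assumption.

```python
import random

GRID = 20                                                      # width of a square world
PATCH = {(x, y) for x in range(8, 12) for y in range(8, 12)}   # one dense resource patch

class LivingThing:
    """An agent whose every nervous-system power is listed explicitly:
    it senses resources within `sight` cells of its position, and nothing more."""

    def __init__(self, rng, sight=1):
        self.rng = rng
        self.x, self.y = rng.randrange(GRID), rng.randrange(GRID)
        self.sight = sight

    def senses(self, resources):
        """Return the resource cells this LT can perceive (sorted for determinism)."""
        return sorted((rx, ry) for (rx, ry) in resources
                      if abs(rx - self.x) <= self.sight and abs(ry - self.y) <= self.sight)

    def step(self, resources, tip=None):
        """Move one cell: toward a sensed resource, else toward a shouted
        location (`tip`, the cooperation channel), else at random."""
        seen = self.senses(resources)
        target = seen[0] if seen else tip
        if target is not None:
            self.x += (target[0] > self.x) - (target[0] < self.x)
            self.y += (target[1] > self.y) - (target[1] < self.y)
        else:
            self.x = max(0, min(GRID - 1, self.x + self.rng.choice((-1, 0, 1))))
            self.y = max(0, min(GRID - 1, self.y + self.rng.choice((-1, 0, 1))))

def run(cooperate, steps=200, n=10, seed=0):
    rng = random.Random(seed)
    resources = set(PATCH)
    agents = [LivingThing(rng) for _ in range(n)]
    harvested = 0
    for _ in range(steps):
        # Cooperation: the first LT to sense a resource shouts its location.
        tip = None
        if cooperate:
            for a in agents:
                seen = a.senses(resources)
                if seen:
                    tip = seen[0]
                    break
        for a in agents:
            a.step(resources, tip)
        # The pattern's catch: a resource yields only to two or more LTs
        # standing on it together; a lone LT cannot harvest it.
        for cell in list(resources):
            if sum((a.x, a.y) == cell for a in agents) >= 2:
                resources.discard(cell)
                harvested += 1
    return harvested

print("solo:", run(cooperate=False), "cooperating:", run(cooperate=True))
```

The design choice to demand two LTs per harvest is one simple way to make the patch useless to individuals yet exploitable by a group; comparing `run(cooperate=False)` with `run(cooperate=True)` across seeds lets one explore how much the shout channel is worth.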
With the environment and the abilities of the LTs thus designed, we now face, dear reader, a problem which calls for all our human intelligence. What small increments in ability, given to one or more of the LTs, will enable them in time to exploit a given RP?
Our struggle with this problem brings us face to face with difficulties resembling those we humans face in our social orders. The similarity is this: whether they are critters (primitive LTs) in a thought experiment or humans in the 21st century, the agents exist in an environment which promises greater prosperity for them if and when they can overcome two problems.
- Can they develop a way to perceive a large, unexploited, and as yet unperceived RP?
- How will they go about dividing the tasks and the gains from cooperation?
As our models in RPM become more sophisticated, we will encounter situations in which we begin to imagine correlations between (1) the coordinating activities needed by our critters and (2) our human experiences of language, truth, and consciousness.
Once again, remember this precaution: Keep close watch on your natural anthropomorphism. When we carelessly give unspecified powers to our LTs we rob ourselves of the opportunity to examine how such powers might develop through a mesh of cooperation in a community of LTs.
2. This precaution applies to thought experiments but not to computer modeling.

The precaution just expressed applies when we are working with agents in thought experiments, but not when we are working with agents in computer models. I have written previously about this distinction in agent-based models.
When we work in computer-programmed mode, the exactness required by the code forces us to specify the details of all our assumptions, so we necessarily become aware of those assumptions. This is an advantage of the computer-programming mode. But this advantage has a cost which we may be unwilling to pay.
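To illustrate how code enforces this explicitness, here is a hedged sketch; the field names (`sight_radius`, `memory_slots`, `can_signal`) are my own invention, not a catalog drawn from RPM:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Capabilities:
    """An agent's nervous-system powers. A computer model cannot run
    until every one of these fields is given a definite value, so no
    capability can slip in unstated."""
    sight_radius: int    # how far the agent perceives resources
    memory_slots: int    # how many past observations it retains
    can_signal: bool     # whether it can communicate with neighbors

# Constructing an agent forces every assumption into the open;
# omitting any field is a TypeError, not a silent default.
critter = Capabilities(sight_radius=1, memory_slots=0, can_signal=False)
print(critter)
```

A thought experiment, by contrast, lets an unlisted power ride along unnoticed; the compiler-like strictness above is precisely what natural language lacks.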
When we work in thought-experiment mode, we are taking advantage of our brains’ powers of natural language. These powers give us the ability to perceive and discuss generalities about which we could never be entirely exact. They are especially useful when we are taking our first steps into an unknown realm, into a new science. The ambiguities of natural language enable us to leap over vexing uncertainties which seem irrelevant to our aims in a juvenile science.
The precaution I have expressed above attempts to strike a balance between sloppy thought experiments and impossibly demanding computer models. I suggest that we can make much good progress while working with thought experiments. But we must be vigilant: we should not allow the agents to have nervous-system capacities which we have not listed in our assumptions.