Engineered Objects, Systems, and State Spaces

Some anthropologists are uncomfortable with describing human-environment interactions in the language of systems. The concern seems to be that human-environment interactions don't work, in some fundamental way, like computers, engines, or other prototypically designed things. These interactions are held to be too complex, too non-linear, too open for the systems metaphor to be appropriate, or at least useful. For my part, I think that the concept of a system is entailed by any non-trivial description of interaction. It simply doesn't carry that much baggage. But I am somewhat concerned by the kinds of assumptions about engineered things that seem to pervade discussion of the issue.

I hold that a crucial difference between engineered things and non-engineered things is that engineered things are designed to achieve some limited goal(s), and that design elements are coordinated for the achievement of that goal. I note that the kinds of made objects that people seem to invoke, either positively or negatively, as descriptors of systems (e.g. of human-environment interactions) are objects like engines and computers, but not things like hunting nets, bracelets, flutes, kites, tangled balls of string, analog electronic instruments, shoes, napkins, or hammocks. Yet all are artifacts of human design, and some of them are pretty suggestive.

So on the one hand we have an ethnocentric bias in the things that we use as analogies (artifacts from industrialized techno-culture), and on the other we have obvious preferences for the sorts of things that seem good candidates for the analogy. The former makes me wonder what analogies are employed cross-culturally and through time for such 'systems', and what we might learn from these alternatives. The latter makes me wonder what it is about the particular objects we (here, now) choose to use as the analogy that makes them more apt than other objects of our techno-culture.

We tend to think of computer systems as devices like the one probably sitting on your desk or on the table at the coffee shop. But many computer systems are quite different from this. Many, especially large ones, are distributed systems (it might be better to say highly distributed, since most systems, even flashlights, can be seen as distributed systems). In practical terms this means that a distributed computer system works under a constraint that a desktop or mainframe computer does not: a great deal of uncertainty. Messages between components may arrive out of order, or may not arrive at all, and each component must act and respond to requests cooperatively but with some degree of autonomy. It is up to the designer of the system to make sure that all the parts achieve the goals of the system despite its uncertainties, complexities, and openness. In fact, the openness of computer systems is why we need computer security.
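
To make that concrete, here is a minimal, purely illustrative sketch (in Python, with invented names like deliver and loss_prob) of the kind of uncertainty the designer of a distributed system has to plan around: messages may be dropped in transit, and those that do arrive may come in a different order than they were sent.

```python
import random

def deliver(messages, loss_prob=0.2, seed=None):
    """Toy model of an unreliable network: each message may be dropped,
    and the survivors may arrive in a different order than sent."""
    rng = random.Random(seed)
    survivors = [m for m in messages if rng.random() > loss_prob]
    rng.shuffle(survivors)
    return survivors

sent = ["msg-1", "msg-2", "msg-3", "msg-4", "msg-5"]
print(deliver(sent))  # e.g. ['msg-4', 'msg-1', 'msg-5']: some lost, order scrambled
```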

An engineered thing is usually designed by instantiating a model, and then checking whether the instantiation conforms to the model under some set of operating environments. So, for example, when I designed and built (in a simulator) my CPU, I used boolean logic expressions to design my logic functions, and then chose the parts necessary to instantiate those logic functions in circuits. If I were building physical circuits, I would have to worry about other things, like electromagnetic fields that might interfere with intended circuit function.
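
As a generic illustration of that workflow (a textbook one-bit full adder, not my actual CPU design): write the boolean expressions first, then check that the instantiation conforms to the model, here by exhaustive comparison against the truth table.

```python
def full_adder(a, b, carry_in):
    """One-bit full adder specified directly as boolean expressions."""
    s = a ^ b ^ carry_in                          # sum = a XOR b XOR cin
    carry_out = (a & b) | (carry_in & (a ^ b))    # carry = ab OR cin(a XOR b)
    return s, carry_out

# Check the instantiation against the model over its whole operating range.
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            s, cout = full_adder(a, b, cin)
            assert 2 * cout + s == a + b + cin
```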

But not all systems are designed with an intended model in mind, only a goal. The most obvious case is when things (computers, programs, antennae, machines, etc.) are designed using an evolutionary paradigm that searches the design space for solutions to specified goals. In some cases this produces results that particular models fail to capture. For example, in evolutionary electronics, where the connections in the electronic circuit itself evolve, there is a famous case in which a tone-recognizer was evolved that, when analyzed in terms of boolean logic, should not have worked. The circuit was not even completely connected! It worked, it is hypothesized, because of electromagnetic 'leakage' in the circuits, interference between circuits that traditional engineering paradigms would have eliminated beforehand. Interestingly enough, reproductions of the circuit do not work, so it is probably something about the material matrix itself that is responsible. I would suggest that this is very much like a living system: in a living system, all the rough edges are fodder for adaptation. (A minimal sketch of this kind of model-free, goal-directed search appears after the quotation below.) Daniel Dennett once wrote:

"The traditional engineering perspective on all the supposed subsystems of the mind- the modules and other boxes- has been to suppose that their intercommunications...were not noisy. That is, although there was plenty of designed intercommunication, there was no leakage. The models never supposed that one box might have imposed on it the ruckus caused by a nearby activity in another box. By this tidy assumption, all such models forgo a tremendously important source of raw material for both learning and development. Or to put it in a slogan, such over-designed systems sweep away all opportunities for opportunism...A good design principle to pursue, then, if you are trying to design a system that can improve itself indefinitely, is to equip all processes, at all levels, with "extraneous" byproducts. Let them make noises, cast shadows, or exude strange odors into the neighborhood." p. 141 Dennett, D. (2001). Things About Things. In Joao Branquinho (Ed.), The Foundations of Cognitive Science (Volume , pp. 133-143). Clarendon Press, Oxford.

If the term system has too many unwelcome connotations for people, perhaps instead of systems one should think in terms of state spaces. I think of a thing that can be represented in a state space model as having a set of systems associated with it, if by a system I mean a way of classifying the state space. To take an elementary example, the result of a roll of two dice has a two-dimensional state space of pairs <d1, d2>, where d1 and d2 are each in {1,2,3,4,5,6}. Every possible state in the state space may be given its own type in the classification. But we might classify states differently, for example by their sum, so that FOUR = {<1,3>, <3,1>, <2,2>} would be one type. State spaces used in population studies might be classified according to whether a state represents a reproduction event. But clearly these are not the only ways we can classify state spaces. So, as I am describing it here, there is a power-set's worth of possible "systems" associated with a state space. Some of these are more useful or more informative than others.
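
A throwaway sketch of that dice example, with a hypothetical classify_by_sum function standing in for just one of the many possible "systems" that could be imposed on the state space:

```python
from itertools import product

# The full state space of two dice: 36 states <d1, d2>.
state_space = list(product(range(1, 7), repeat=2))

def classify_by_sum(states):
    """One possible classification of the state space: group states by their sum."""
    classes = {}
    for d1, d2 in states:
        classes.setdefault(d1 + d2, set()).add((d1, d2))
    return classes

by_sum = classify_by_sum(state_space)
print(by_sum[4])   # {(1, 3), (3, 1), (2, 2)}: the type called FOUR above
```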

State space models also typically include transition 'rules' between states and a set of parameters or background assumptions which can be changed. If the change of these parameter values and background assumptions can be modeled in another state space, one might join the two state space models.
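
A minimal, hypothetical sketch of what a transition rule with an adjustable background parameter might look like; the parameter p sits outside the state space proper, and its own changes could in principle be modeled in a second state space joined to the first.

```python
import random

def step(state, p=0.5, rng=random):
    """Toy transition rule: from state n, move up with probability p, else down."""
    return state + 1 if rng.random() < p else max(0, state - 1)

# Run the system under one fixed setting of the background parameter.
state, trajectory = 10, []
for _ in range(20):
    state = step(state, p=0.7)
    trajectory.append(state)
print(trajectory)
```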


Comment by Jacob Lee on November 19, 2009 at 7:18am
Huon Wardle writes: Bateson has a useful analogy that he applies in various places - if you kick a stone, it moves with the momentum derived from the action of your foot; but if you kick a dog it moves with momentum derived from its own organism. We can call the foot-stone relationship a system if we like, but we shouldn't confuse it with the human-dog 'system' or a human-human 'system'. The same kinds of problems arise when we say that we 'have a picture in our mind' and then we perhaps develop a view of the mind as a big screen on which pictures are projected - the usefulness of the initial analogy leads off into other partially convincing analogies that fall apart as soon as we look at them carefully.

I need to read Bateson. He has come up in a number of places. In particular I see him referenced by environmental anthropologists and by semantic information theorists. I'm fascinated that Bateson was both an engaged anthropologist and a seminal figure in cybernetics and semantic information theory.

I do not know the context in which Bateson used the analogy you describe. I am comfortable describing complex agents like humans and dogs as constituents of systems, or as systems in their own right. Such terminology is frequently employed in engineering contexts (e.g. artificial intelligence, multi-agent systems, robotics) with which I am familiar. The difference between a stone and a dog is essentially the way the state of each responds to its environment. A dog's cognitive state is (in an actionable way) about its environment (we can say this without committing ourselves to any representationalist view of the mind). An organism displays autonomy with respect to its environment, while a stone is (to be simplistic) merely indifferent. Different organisms have different degrees of autonomy relative to their environments, depending on the complexity of their information processing capabilities. It seems to me that the more autonomy the subsystems of a system have, the less easy it is to characterize the regularities of the larger system. The regularities may involve a greater number of conditions. Rocks generally respond to being kicked in only one way (perhaps as a function of the force of the kick, the mass of the stone, and various facts of the embedding environment). But dogs can respond in many different ways.
Comment by Jacob Lee on November 19, 2009 at 6:56am
John McCreery writes: Perhaps the greatest weakness in the systems version of reality is the notion that, like a traditional engine or standalone computer, systems can be examined in isolation. Consider a locomotive. Reduce the ambient temperature to absolute zero or sink it to the bottom of the Marianas trench. Will it work as expected? Or drop it into the Sun. Watch it disappear.

Yes. Systems are not closed. Generally every system, except possibly a total world system, if there is such a monstrous entity, will be embedded in, or overlap with, some larger system.

To be precise, when we examine a system as if it were in isolation, what we are in fact doing is examining that system under some set of background conditions (in particular, conditions holding for some embedding system). Sometimes some of these background conditions are explicitly acknowledged, or can be elicited when desired, but in any situation of incomplete knowledge (and that is obviously almost always the case when we are looking at the real world) many of the background conditions necessary for the validity of the model (formal or informal) being used will be unknown. Certainly a Newton could have formulated a law governing falling objects by observing them on Earth without realizing that the rate of fall he observed depended on facts specific to Earth's mass and radius. Apples dropped on the Moon do not fall at the same rate. Nor do apples dropped into bodies of water on Earth, for that matter.
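
A back-of-the-envelope illustration, using standard approximate values rather than anything from the discussion above: the acceleration of a dropped apple follows from g = G*M/r^2, so it is a background condition set by the embedding body rather than something visible in the fall itself.

```python
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2

bodies = {
    "Earth": (5.972e24, 6.371e6),   # mass in kg, mean radius in m (approximate)
    "Moon":  (7.342e22, 1.737e6),
}

for name, (mass, radius) in bodies.items():
    g = G * mass / radius ** 2      # surface gravity implied by the universal law
    print(f"{name}: g = {g:.2f} m/s^2")
# Earth: g = 9.82 m/s^2
# Moon:  g = 1.62 m/s^2
```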

We need to be careful to avoid naive assumptions about the object-nature of systems, as you say. I think that contemplation of Conway's Game of Life (see: http://en.wikipedia.org/wiki/Conway's_Game_of_Life) is a good antidote to such a naive view of systems. One thing that I am particularly fond of about Conway's Game of Life (and other cellular automata) is that dynamic structures like gliders and spaceships that appear in the game are not separated from their environments. A glider or spaceship consists of particular patterns in the environment itself. They are one with the world around them, because they are fundamentally made from the same stuff and operate by the same rules. The orderliness and stability of particular environmental patterns earn them designation as objects or systems. The regularity of those patterns permits information flow. But these regularities may ultimately be ephemeral. Two patterns may disintegrate when they collide and be transformed into something new (perhaps even a copy of one of them!). Conway's Game of Life is too unstable to be a realistic model of the universe, but the Game of Life is really only a special case of a cellular automaton. Some (Wolfram, for example) even speculate that the universe is a vast cellular automaton.
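
For anyone curious how little machinery is involved, here is a minimal sketch of the Game of Life rules and a glider; the representation (a set of live-cell coordinates) and the names are my own choices, nothing canonical.

```python
from collections import Counter

def life_step(live_cells):
    """One Game of Life generation; live_cells is a set of (x, y) coordinates."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell for cell, n in neighbour_counts.items()
        if n == 3 or (n == 2 and cell in live_cells)
    }

# A glider is nothing but five live cells in the same grid everything else inhabits.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = life_step(glider)
print(sorted(glider))   # the same shape, shifted one cell diagonally
```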
Comment by John McCreery on November 18, 2009 at 7:00am
Keith mentions, "the synthesis of cybernetics, game theory and economics that dominated post-war thought in America and elsewhere." Its failings, alluded to be Huon in his reference to Bateson's kicking a dog, led to the development of soft systems methodologies pioneered and described by Peter Checkland and Jim Scholes and the learning organization movement. I learned about the latter by stumbling on Peter Senge's The Fifth Discipline.
Comment by Huon Wardle on November 16, 2009 at 1:38pm
Bateson has a useful analogy that he applies in various places - if you kick a stone, it moves with the momentum derived from the action of your foot; but if you kick a dog it moves with momentum derived from its own organism. We can call the foot-stone relationship a system if we like, but we shouldn't confuse it with the human-dog 'system' or a human-human 'system'. The same kinds of problems arise when we say that we 'have a picture in our mind' and then we perhaps develop a view of the mind as a big screen on which pictures are projected - the usefulness of the initial analogy leads off into other partially convincing analogies that fall apart as soon as we look at them carefully.
Comment by Keith Hart on November 16, 2009 at 11:49am
This is probably not directly relevant, but I hope it opens up a perspective on the idea of 'system' from intellectual history. Here's the etymology:

1619, "the whole creation, the universe," from L.L. systema "an arrangement, system," from Gk. systema "organized whole, body," from syn- "together" + root of histanai "cause to stand" from PIE base *sta- "to stand" (see stet). Meaning "set of correlated principles, facts, ideas, etc." first recorded 1638.

For me the important notion, apart from 'together', is the Indo-European root sta- or 'stand', which is also found in 'state' and 'institution', both of which are places to stand. To my way of thinking, this makes 'system' an inherently conservative concept, being concerned with holding things together.

Systems theory flourished in the US after WW2, especially in the 50s and 60s. This was what Eric Hobsbawm called the "golden age" of strong developmental states on both sides of the Cold War. French intellectuals at this time were keen to dump German dialectical thinking with its subjects and history, and they looked to America for models. The structuralism of Levi-Strauss and Althusser was the result. And systems theory was its inspiration (apart from linguistics, of course). This is another aspect of LS's wartime sojourn in the United States. Philip Mirowski in Machine Dreams shows how operations research during the war spawned the synthesis of cybernetics, game theory and economics that dominated post-war thought in America and elsewhere.

So there is a lot going on when we choose to focus on systems and I will not even mention its centrality to Aristotle, the medieval Schoolmen and contemporary ecological thinking, all of which are explicitly conservative.

All theories are good for some things and not others, but their predominance in certain times and places requires historical investigation. The tension always is between stability and movement. If the locomotive was a symbol of industrial civilization in the 19th century, what can we say about a civilization whose symbol is the mobile phone?
Comment by John McCreery on November 16, 2009 at 6:05am
The observation about things that are and are not taken up as emblematic of systems (engines and computers versus shoes, nets, etc.) is an important one. So is the one about leakage between modules, which could, of course, be extended to systems themselves. Perhaps the greatest weakness in the systems version of reality is the notion that, like a traditional engine or standalone computer, systems can be examined in isolation. Consider a locomotive. Reduce the ambient temperature to absolute zero or sink it to the bottom of the Marianas trench. Will it work as expected? Or drop it into the Sun. Watch it disappear.
