From the Center for Peripheral Studies (OAC Branch). After Lance, the sky's the limit!

The papers discussed in our online seminars are often excellent, but this one is apocalyptic, as well as being an exercise in fine writing. Lee Drummond was once a Chicago anthropology PhD specialising in the Caribbean and later in San Diego's tourist attractions. He taught at McGill University for over a decade before retiring to the wilderness (Palm Springs!). July and August there are his winter, when he withdraws into his air-conditioned study to avoid the heat. The OAC is a principal beneficiary of this aestivation, as witnessed by the paper attached here. All I can say is you gotta read it, whether or not you participate in the seminar.

Lee's point of departure is Lance Armstrong's confession on the Oprah Winfrey show. The man who perhaps deserves to be known as the greatest American athlete ever admitted taking performance enhancing drugs, thereby triggering an intense public outcry. Lee deconstructs what he takes to be a key feature of the American ideology, the opposition of nature to culture, showing that biology and technology have been inextricably woven together throughout human evolution and even before. If it is impossible to identify the unequal influence of technology in sporting performance, what about other areas of cultural achievement, like literature for example? Should Hemingway's Nobel prize be taken away or Coleridge's poetry eliminated from the canon because they wrote under the influence of mind-altering substances?

Not content with this reductio ad absurdum, Lee then launches into a savage critique of American civilization and of the cultural anthropology it has spawned. Drawing on Marx's happy phrasing in the 18th Brumaire, he argues that the American tragedy (New World genocide) now reappears as farce (reality TV shows), one of which actually replayed the former in a grotesque reenactment of the competitive ideal. Anthropology tends to celebrate cultural achievement around the world, whereas in Lee's view, the current state of American society suggests that culture may be a disease killing off its carriers just as their ancestors once killed off the original inhabitants of what passes for the land of the American dream.


Replies to This Discussion


Two things come immediately to mind:

1) The situation in the USA and in Asia is somewhat different. In Asia, people are returning home to a lack of the professional jobs they've trained for. For example, you train to be a software engineer, get back to India, and all you can get is call centre work. However, you're still trained as a software engineer; you can go elsewhere. In the US, the problem is the opposite - jobs you used to need a high school degree for, or maybe none at all, suddenly need college. This keeps escalating, too. I've seen secretarial positions advertised that demanded a Master's. Gradually, the working-class dream around college has become not to ascend into the professional ranks and escape physical labour, but merely to keep swimming - to keep the HVAC or secretarial or nursing job your mom and dad had, and not be tossed into customer service work. This isn't a supply/demand mismatch, it's a paradigm shift. Which leads me to...

2) I've worked for a few of those companies that whinge about lack of people capable of innovation in HBR or Fast Company. They're really only interested in "innovation" as an elite skill, generally. Their junior members, their non-technical staff - these people may also be able to enjoy the free coffee and the pingpong tables (though this is not guaranteed), but they are not granted the privilege of innovative thinking, and they are not assumed to be a source of it. They're also not willing to grow their own talent; they prefer to purchase it off the shelf. To be a junior programmer in a large software company is the slightly better-paid modern equivalent of burger-flipping, and reliability is valued above all else. This is not a paradigm shift, people just think it is because it has e- attached to the front. It's a common problem. And on this front...

3) Do those companies really value innovation as an individual trait? Here's one example. Blue LEDs were a damn hard problem to crack. If you're old enough, you remember a time before blue LEDs - there were red and green and amber, but blue... no. Now, it looks cool and futuristic to use blue. Serious innovation, right? Invention worth billions, possibly. The guy who solved this problem got a $3,000 bonus. Another example: I used to work with a guy who invented, out of whole cloth, a way to automatically identify roads from satellite pictures. I think his bonus was about the same, for something that was literally instrumental in the computer maps you use every day. Of course, this is not universally true - we all know about Jonathan Ive, who made Apples out of aluminium and changed the world with a materials swap. I would argue that there is no evidence that companies value actual innovation nearly as much as they value the appearance of innovation. Therefore, it's not surprising that they have trouble finding actually innovative people to work for them.



Could it be that what you have seen inside large organisations is a fractal reproduction of what goes on in the larger society? Companies still need reliable people for all sorts of things not directly connected with innovation. That doesn't mean that they don't worry about recruiting and keeping the elite innovators. The reliable ones can always be replaced. They are interchangeable parts. The successful innovators are the geese who lay the golden eggs and are treated accordingly.

Kimoto Kazuhiko, the senior creative director who hired me as an English-language copywriter for Hakuhodo in 1983 and taught me most of what I know about advertising, was very clear about it. One day we were talking about rules (the company was going through one of its periodic fits of trying to get everyone to come in on time and work nine to five). Kimoto told me, "In our business there is only one rule. If the client gives the company business because you are here, you can ignore the other rules." The tacit corollary was, of course, that if all you are doing is a job that anyone could do, you have to obey the rules.

We also need to think about what counts as innovation. Doing something differently isn't innovation. Innovation is doing something that makes a difference. After the fact, Johnny Ive's shift to aluminum may look like nothing more than a change of materials. But that change demanded a huge investment in new manufacturing technology and resulted in products that solidified Apple's reputation with people willing to pay a premium price for a genuinely classy look and feel. At the end of the day, it made Apple hundreds of millions of dollars. Not a bad day's work.

P.S. I agree that what we are talking about is a paradigm-shift. I'd call it the death of the middle-class dream of education as a ticket to upward mobility. It is now, at most, a prerequisite. 


You see culture in two ways, as an organism or an abstract subject. In both these conceptions culture becomes what Dan Foss calls a "thingie" and, I would add, an example of the fallacy of misplaced concreteness. I have found it more useful to think of culture as conversation. 

Conversation implies a number of things.

  1. At least two people involved in a social encounter, taking into account what they perceive the other as thinking or feeling.
  2. Some common ground. Only some; complete agreement is not required.
  3. Some common language. Again, only some; perfect mutual comprehension is not required.

An important question, raised by all thingies, is why people take them as given, as what Durkheim calls "social facts." One reasonable answer begins with W. I. Thomas' famous theorem,

If men define situations as real, they are real in their consequences.

But we know that not all situations are equally real to all people. How does that come about? A reasonable, if still rough, explanation is provided by Peter Berger and Thomas Luckmann in The Social Construction of Reality. In the short form in which I remember it, reification, the process by which ideas come to be taken as facts, is a process with three moments:

  1. Externalization: Someone in a group has an idea and offers it to other members of the group.
  2. Objectification: Others in the group accept the idea and act as if it were true.
  3. Socialization: New members of the group are taught that "This is the way things are."

Note that after an idea is offered to the group and accepted by some of its members, its existence no longer depends on the originator. The originator may die or change her mind, but the idea lives on. Note, too, that socialization is normally most effective when everyone in the group shares the idea and no member of the group denies that "This is the way things are." That this is the usual situation for most of what we call culture is, however, far from certain. Children rebel against what their parents tell them. Newcomers to an organisation may encounter others who are critical of the official line and offer alternative views of "This is the way things really are."

This culture as conversation model is, like all models, incomplete. It is, for example, in the language I have used, clearly too biased toward a linguistic/literary/textual view of culture. An "idea" can be a new dance step, a new musical beat, a new way of cooking squid, a preference for light instead of dark liquor. It can be communicated non-verbally. That said, however, the process of externalization, objectification and socialization still applies. 

This simple, three-step model may, however, be too simple. Like Copernicus' model of the solar system with the Earth moving in a perfect circle around the Sun, it may get some big things right but require refinement in detail. The psychology involved in coming up with new ideas and presenting them in ways that maximize the chances of their being accepted is, for example, the defining problem of the whole academic field of consumer psychology and marketing research.

Even so, I like this conversational model. If nothing else it frees us from the pensée sauvage that posits a binary opposition between organism and concept and then goes round in circles forever, diverting attention from the social processes by which ideas are generated, conflicts erupt and may or may not be resolved, and a consensus emerges among the survivors that "This is the way things are." Until, that is, children or other strangers enter their lives.


What is typically "left out" of the various accounts of the "social construction of reality" is *reality* or, if you prefer, the "environment" in which we all live.

In particular, there is always an "environment" in which cultures operate, which is real and, while certainly culturally interpreted, nonetheless is causal in how that culture "behaves."  Formally (or, if you prefer, "structurally") caused.

We seem to be comfortable calling that environment "nature," but we are quite uncomfortable recognizing that the reality we all live in, which is independent of our culture (and its interpretations), is actually "technology," or as some have called it "2nd Nature."  Arguably, the "first" environmental technology was language.

You could say that much of the social theorizing over the past 40+ years is really about the "social construction of fantasy," which, in turn, is a result of a *technological environment* that "causes" such a fantastic approach to social construction.

That technological *reality* is, of course, television -- the man-made environment of much of the world over the past 50+ years.  Since everything that appears on television is a "fantasy" (starting with the tiny little "people"), we are left to try to "construct" our lives in terms of this fantasy-generating environment.

Like the weather, it really doesn't matter if you "like" or "watch" television since it's environmental and, therefore, ubiquitous for our culture.  Our economics depends on it.  Our politics depends on it.  Our "culture" depends on it.  Our "kinship" depends on it.  Nothing has been untouched by our environment.

We began this discussion talking about it (but never saying its name).  Lance Armstrong and Oprah Winfrey are not people that we know.  They aren't people that we care about.  They are "fantasies" generated by television and have become elements of the overall technological environment.  The Tour de France is a television-sponsored event, as is Oprah's show.  "Reality television" is an apt description of our recent "natural" environment.

Until it isn't anymore.  Like right now.  We are living through a shift to a completely new environment!  It isn't as if there is more-or-less rain or snow or sunshine but as if something quite different has started to happen.  Now it's "raining" cats-and-dogs.

That's why there is a "constitutional" crisis nearly everywhere around the world.  That's why the economies everywhere are "beyond" the comprehension of economists.  Television-as-environment is now obsolete -- along with all of the political, economic and cultural effects that were *caused* by television.  No one has a clue what is going on.

A critical part of the "fantasy" that deters us from even trying to understand the effects of our own technological environment on our culture is the distractingly named "environmental movement" (i.e. about something that isn't our environment at all.)  By setting "conscious purpose" (i.e. human understanding, however imperfect) against "nature" -- as Gregory Bateson did in his crucial London speech and Austrian conference in the summer of 1968 -- he deliberately pointed us in the *wrong* direction.

I say deliberately, because Bateson knew what Wiener was doing regarding the impact of technology in the 1950s and what happened to him for his troubles.  When the FBI came knocking with the threat to "destroy" the career of a leading cyberneticist, that was Wiener's door, not Bateson's.  Indeed, one of the early indications that Wiener was getting into deep trouble was his public statement that he would not work with Bateson (or Mead or Kurt Lewin).

As discussed in the new "intellectual" biography of Mead, "Return from the Natives: How Margaret Mead Won the Second World War and Lost the Cold War," Mead (along with her associates) was that "small group" she has been quoted about "changing the world."  They set out to "rig the maze" that then became the "social construction of fantasy" which became our lives -- accomplished by "detaching" culture from *reality* in the interest of "defeating" conscious purpose in favor of "nature."  Their "conscious purpose" was "good," while that of the others was "bad."

The fundamental question now at hand is what will the relationship be between *reality* (i.e. the new digital technological environment) and our "culture" now that all this is changing.  Yes, Lance and Oprah will ride off into the sunset, along with all the other "fantasy" creatures invented by the previous environment.  But, what will happen to us?  That's where *real* innovation enters the story.

@Mark, I tend to agree that it's a question of how not to lose sight of the "reality" in the absolute, somehow.  I'll see if I can express it as well as you did.

@John,   Yes, I distinguish between cultures referred to as different kinds of "thingies", as you quote Dan Foss.  I use that device to anchor the discussion to how nature observably develops forms of organization in the natural world, as "thingies", of which we are one and by which we are quite surrounded.   It's a way of establishing a common reference to independently observable, but still "full bodied", real subjects, so that the "meta-world" of "meta-subjects" that lots of people prefer to discuss (whether anyone else can follow or not) remains somewhat grounded.

If you or others think it a "fallacy" to have a way to refer to the realities of our world we don't define, then it seems to be in preference for grounding discussions on common subjects the discussants define.   Isn't that fair to say?  How they connect with the subjects we don't define is then the question.

I prefer not to leave that unanswered, for the one main reason that I get confused.   It seems to lead me to relying on my own meanings for the words I hear others speaking, and having to ignore the meanings they are or might be trying to associate with their words.   That becomes a "guessing game" of interpolating different realities... one that I'm kind of bad at, and I think everyone else seems bad at too, unless they reduce their conversations to strictly deterministic terms and subjects, like being restricted to thinking with rules as simple as a computer's.   That presents at least three choices,

   1. having a way to refer to the realities ("thingies") of nature that our thoughts don't define

   2. referring to realities in the minds of others you'll never quite know, so social convention becomes the common reality

   3. following rules so narrow you can't say anything not pre-determined, so reductionism becomes the common reality

What I've found is that using the social definitions of reality, expressed in your observation that

"Note that after an idea is offered to the group and accepted by some of its members, its existence no longer depends on the  originator."

leads most social groups into just believing their own theories.  In their hands it generates the kind of self-perpetuating myths that are also self-justifying.   For rare social networks that have BOTH the ability and the interest to engage in free-form discussion while also keeping discussion grounded in realities they don't invent, it can indeed be quite creative.

For the great majority of social networks, believing their own theories seems to just hopelessly confuse their conversations with any other social network, which is almost certain to have picked a different invented reality to live by... It fails to acknowledge what seems to be an independent reality, that "consensus is the enemy if it wouldn't check out," as I pointed out in my Tweet this AM:

"50 years saving the earth, expanding our scale and complexity, saying it's the solution when clearly always the problem, should'a known."

What I prefer is a mix of 1, 2 & 3, kept from spinning out of control in the ways each can by itself, by everyone maintaining their ability to shift from one to another, to keep the thinking free and honest.    In practicing that, my way of identifying common subjects of interest to refer to as "thingies", in the recognizable forms and behaviors of natural systems we don't define, went far beyond interpreting "facts".   My big step was actually to discover that "facts" were most often mistakenly disassociated from their context before being called upon for supporting theories.

Facts tend to lose their meaning when rearranged in theories detached from their context, and I found a partial "fix" for that.   If you start with "continuities" (rather than "facts") it's very much harder to detach one's hard evidence from its context, and that helps assure one's theories produce truthful questions about the context, which is what I end up seeing as the object of "theory".    Like any other new approach to things, one doesn't get far by ruling it out, though that's a common temptation.   It's "kicking it around" to see where it fits that seems to stimulate new thinking and lead to interesting uses and new places.





Mark, in the swirl of discussion, I’d like to respond to your Comment of a couple of days ago:


 Given the fact that Wiener had been for many years a sharp critic of the organization of scientific research -- often on basic moral grounds and leading to multiple attempts to resign from MIT -- and that he refused to cooperate with Margaret Mead, Gregory Bateson and Kurt Lewin (and many others) in their “control” projects, you have to wonder about the “politics” of this deeply flawed book. [Referring to Dark Hero of the Information Age: In Search of Norbert Wiener – Father of Cybernetics]


    You are quite right that Norbert Wiener stands out as a principled individual at a time – the transition from World War II to the Cold War – when many scholars, particularly physicists and mathematicians, were associated with the military.  One has only to contrast Wiener and his fellow prodigy, John von Neumann, to see the moral gulf that opened during those times.  [von Neumann was a real-life Doctor Strangelove; the man, far more than Robert Oppenheimer, who was the “father of the atomic bomb.”] 

    I would, though, like you to provide more specific information about the “control projects” involving Margaret Mead and Gregory Bateson.  Are these the ties to the Rockefeller and Ford Foundations and the Macy Conferences mentioned in earlier discussions in the seminar?  What was Bateson’s role in those?  As you know, I would find it surprising that Bateson would have participated substantively in military-related research.  But, then, I’m always ready to be surprised.

    Regrettably, I’m not much of a scholar; I don’t dig through archives in hopes of extracting the nuggets that indeed are there.  Lacking documentation of Bateson’s involvement in nefarious schemes, my difficulty in seeing him as an instrument of The System stems from the nature of his theories.  If early systems theory and cybernetics emphasized the orderly interconnectedness of things, an orderliness that made possible direction or control, Bateson’s most important work focused on the inherent disorderliness and internal conflict of societies and persons.  I can’t imagine how a systems theorist would explain, let alone control, the Iatmul of New Guinea, the subjects of Naven.  The “organizing” principle Bateson extracted from his study of Iatmul society is schismogenesis: the process whereby individuals become increasingly suspicious and violent as a result of feeding on stereotypes of one another.  The book was far ahead of its time in that it does not claim to provide an objective description of a particular society.  Rather, Bateson uses the Iatmul material to conduct an exercise in epistemology, essentially trying on different ways of knowing a society.  A long way from the methodical approach of systems theory.  It would not be inaccurate to suggest that Bateson was our discipline’s first “interpretive anthropologist,” pre-dating Geertz by decades.  When Bateson later turned his attention to psychology, he developed the concept of a “double bind” to understand schizophrenia.  The basis of that concept is that an individual is driven mad by attempting the impossible task of coping with contradictory messages / instructions.  The heart of the matter, as with schismogenesis, is disorder and incomprehension – not a fertile ground for systems theory. 

    I do think, however, that there is a great mystery surrounding Bateson and the place his theories found in early American anthropology.  In my view Naven is one of very few books at the pinnacle of anthropological theory.  Yet Naven appeared on the heels of another book, an inferior and deeply flawed work that took the anthropological – and much of the public – world by storm: Ruth Benedict’s Patterns of Culture.  I’ve stated my criticisms of that work earlier in the seminar; here I simply want to pose the question of how it was possible, how it came to pass, that Benedict’s impoverished book somehow set the stage for decades of subsequent anthropological thought in the United States (as Keith and Huon have emphasized, anthropologists on their side of the Atlantic were largely immune to this indoctrination).  There may be a seminal book or article on anthropology’s intellectual history that delves into this question, one I’ve missed through my hopelessly slap-dash approach to scholarship, and if so I would be very interested in learning about it.  I think the matter is of first-rate importance, particularly in our time when “societies” are coming apart before our eyes (“personalities writ large”?) and epistemologies of social thought are ricocheting off the walls of departmental seminar rooms.


Mark, commenting on my discussion of the role of genius in society and its place in social theory, you mention in reference to Norbert Wiener: 


Wiener shifted into what I call his “genius project” -- which is completely misrepresented (really missed altogether) by [the authors of Dark Hero of the Information Age: In Search of Norbert Wiener – Father of Cybernetics]. He figured that whatever was happening then was only temporary and that eventually humanity would “re-appear.”  Maybe it would take a few generations. Maybe longer. But the “genius” -- someone who somehow knows without having any teachers -- would inevitably resurface humanity's storehouse of reconnaissance.


Intriguing.  Can you tell us more about Wiener’s “genius project”?  With our rapidly evolving sort-of-species, is it possible to step into the same river twice? 


Lee, Mark & all,

In reference to the comments on "control experiments", it's been an increasingly useful observation of mine that systems theories and cybernetics models generally represent relationships as being deterministically controlled, defined as networks of abstractly defined subjects treated as variables with associated rules.   It appears to be a mental necessity, I think, that models can't have undefined relationships like life does and still hold together at all.   So our conceptualizations naturally represent systems of relationships in terms of abstractly defined relationships, unlike the way natural environmental systems are built.

It's been one of the important natural differences between mental models and natural systems for me.  I find it very helpful for telling one from the other, to tell in conversation whether the subject is some closed network of mental definitions or an open network of self-defining relationships.   It's also rather confusing to almost everyone, of course, that mental models are so limited in representing what natural language gives us the ability to refer to in nature, and we add to the confusion by using most of the very same words for both.  

Another distinction is that natural environmental systems generally occur through local interactions between independently organized and animated individual parts, like economies being animated by the personal choices of people, but we absolutely never discuss them that way... We overlook how changes are occurring everywhere throughout the subject natural system at the same time, and at numerous nested scales of organization in it at once, an arrangement quite impossible to define, or diagram, let alone model.   Our minds seem to keep suggesting deterministic models for every subject raised, though, and to not be interested in the inherent deep flaws... which also complicates the subject as I see it.

Lee et al:

Alas, I am in the middle of moving, so I hope that I haven't started something that I can't (at the moment) finish! My reference material is already packed and my time is unfortunately limited.  That said, these issues are very important to me (as I suspect they are to some others in this conversation), so I hope that we can make some progress together on thinking this through.

I would not call myself a Bateson scholar, even though I have read most of what he wrote and been in contact with many people who believe they are following his overall plan.  I am, however, a Wiener scholar (of the sort who has gone through his archives), a position I found myself in by virtue of my father being one of his closest associates, from whom I heard many stories, and the fact, such as it is, that Wiener brought roses to my mother the day after I was born, so I grew up thinking of him as my "godfather."  Perhaps this overly colors what I'm about to say and I welcome any corrections and/or criticism.

Bateson was, as best I can tell, out to "engineer" a *new* form of human, based on his deep antipathy to the sort of human we all know (and, indeed, all are ourselves).  As anyone who sets out to do something like this knows, there is no chance of accomplishing anything through "top-down" control (i.e. telling people what to think and do).  The only option would be to get the humans to "engineer" themselves.  So, my sense is that for Bateson (and many who worked with him)  the goal was "self-control" in the context of a carefully crafted and compelling description of a "man-made crisis" which would "force" people to voluntarily change themselves.

That crisis was the presumed destruction of the "environment."  From what I can tell, the "manifesto" of that movement to engineer a new type of human is Bateson's 1968 "Conscious Purpose vs. Nature" essay, delivered publicly at the Tavistock Institute's "Dialectics of Liberation" conference in London (and reprinted in "Steps to an Ecology of Mind") and then forming the basis for the Austrian conference (paid for by Wenner-Gren and written up by his daughter in "Our Own Metaphor").

My sense is that many activities that we associate with the early "environmentalist movement" flowed from this essay.  Stewart Brand's Whole Earth, for instance, as well as Earth Day etc.  Today's "climate change" concerns are the remnant of Barry Commoner's obsession with the "nitrogen-cycle" at Bateson's conference.  I *really* hope that this discussion doesn't go off into the weeds on the participants' own relationships to this movement or their personal feelings about global warming.  For myself, I believe that green-house gases *are* causing warming but, like Freeman Dyson, I also believe that we don't know how to forecast the ultimate results, since the capacity of the atmosphere and oceans is beyond our capability to accurately model.  If possible, can we try to focus on the cybernetics/anthropology/sociology that stands behind all this activity and not the concerns themselves? <g>

Here is where the contrast between Wiener and Bateson (as well as Warren McCulloch, who was a key participant in the Austrian conference, as well as a "rival" to Wiener at MIT) comes into focus for me.  Whereas Wiener was deeply concerned about what cybernetics would do *TO* humanity (as reflected in his 1950 "The Human Use of Human Beings" and its aftermath), Bateson was concerned about how to use cybernetics to stop humans from literally *BEING* humans.  Opposite goals, or so it seems to me.

This "change the humans" effort is sometimes called "second order" cybernetics and is closely associated with Heinz Von Foerster and his military-funded Biological Computer Laboratory at Univ. of Illinois. It is also called "constructivism," which seems to mean the effort to understand how people "construct" their own versions of reality.  My research, which, once again, could well be misdirected and in need of correction, traces this notion back to WW II "psychological warfare,"  As best I can tell, the idea is to set up a situation so that the "morale" parameters are "controlled" so that the humans "construct" a reality that drives their behaviors in the desired direction.  Like firebombing Tokyo and then dropping "Big Boy" on Hiroshima.

Foerster called this "magic" and discussed in an interview how he discovered, as a young boy, how this worked -- "The experience occurred a very long time ago.  At twelve or thirteen years of age, my cousin Martin and I . . . began to practice magic . . . What sort of story must I tell, *how* must I tell it in order to make people accept it and make them work the miracles of the floating elephant and sawn girl in their own individual ways? . . . And, this is what you later -- when you are fifty, perhaps -- describe as the *observer problem*" [emphasis added]

The goal was to "manipulate" the *observer* into "observing" what you wanted them to see.  Such as a crisis in which humans were destroying "nature" and would therefore have to behave differently, so as to conform to this "constructed" reality.  Thus it was magic.  In another Bateson essay, it's called "rigging the maze" so that the "rats have the illusion of free will."  Wiener would never have imagined saying something like this -- which, I suspect, is why he was "busted" and forced into retirement.

Earlier, Lee contrasted Wiener with von Neumann for their differing views on working with the military.  There is actually a double biography of the two of them by Steve Heims, who also wrote "The Cybernetics Group," which does just that.  I spent a fair amount of time with Heims discussing Wiener and, on his advice, put together a plan to write a biography of Wiener (based in part on my father's stories plus a lot of additional research), which I then submitted to Basic Books, initially with strong interest from the publisher.

What then happened is, well, odd to say the least.  Basic came back in a couple of weeks and told me that they had checked around and determined that there was "no interest" in such a biography.  As it turns out, however, they apparently hired a couple with no background in the area (previously having written about cult "deprogramming") to write what became the "Dark Hero" biography!  Since these authors have "declined" to talk with me, I don't know if my proposal was also passed along.

What I do know is that the crucial events -- in which Wiener's concerns about the impact of cybernetics on humans led to an FBI "knock on the door" threatening him with a HUAC investigation of himself and all his associates as "communists" -- turned into a "mental breakdown" in the published biography.  I'm quite sure that breakdown never occurred.  As told in the "Dark Hero" biography, Wiener supposedly "cracked up" because his daughter "lost her virginity" to someone in the group around McCulloch.  According to the acknowledgements, as well as the dust-jacket, one of the key "contributors" to the biography was Mary Catherine Bateson.

There's more involved (like Neuro-Linguistic Programming and Korzybski's "General Semantics") but I'll leave it at that to see if I have completely freaked everyone out or said something that is at least somewhat resonant.  Thanks for your patience with my not-quite-scholarly account of these likely-to-be-important events.

Mark, for reference, I found the Bateson article "Conscious Purpose versus Nature," which I'll read with interest in my travels today.  I suspect Bateson's "misdirecting" of the environmental movement would be put in context if we looked at what seem to be numerous other "misdirections" of movements intent on guiding mankind toward a comfortable "home on earth."  I've identified a variety of reversals of purpose like that, coming from people clinging to one or another mental model (an extreme reductionist construct) of nature to use as their definition of life and nature.

Some I find most influential are the "revisionist histories" that appear and then gain acceptance in various places.  One is how science was redefined for Quantum Mechanics, reinterpreted as the study of data rather than the study of the systems of nature that generate the data, which we are trying to explain...  Another was the redefinition of "sustainability" by the OECD: the literal "to be self-sustaining without doing harm" posture of the original Brundtland Commission wording was reinterpreted by the leading global growth organization to mean "maximizing productivity with minimal costs," the meaning we see clearly in action in the worldwide sustainable development culture defining the term today.

Mark, you write,

The fundamental question now at hand is what will the relationship be between *reality* (i.e. the new digital technological environment) and our "culture" now that all this is changing.  Yes, Lance and Oprah will ride off into the sunset, along with all the other "fantasy" creatures invented by the previous environment.  But, what will happen to us?  That's where *real* innovation enters the story.

I agree.

My questions about your question have to do with things like what you mean by "the new digital technological environment." Given our starting point, Lance Armstrong on Oprah, and subsequent references to Marshall McLuhan, I had imagined that we were talking about what we might call the infotainment industries. Now I find myself wondering if you include things like weaponized drone aircraft. One way of seeing human evolution is, of course, to trace the history of what the military calls, borrowing language from the old AT&T, the ability "to reach out and touch someone": with a fist, a hand axe, a sword, a spear, a bow, a rifle, a cannon... all the way up to the big fist/phallus, the hydrogen-bomb-tipped ICBM. That was, in part, an evolution from the very personal encounter many, if not most, children experience in schoolyards to the nightmare of world-ending mass destruction that still has a collective, communitarian ring to it since, as that great social thinker Tom Lehrer sings, "We will all fry together when we fry..." The drone makes things personal again. Somebody, somewhere, could be targeting ME. Which is, of course, also true in the realm of infotainment, where, as I have previously described, the big-data and microtargeting people are taking an interest in individuals that seems almost obscene in its intimacy.

Also, I am very much enjoying your anecdotes about Wiener, Bateson, von Neumann, Mead, et al. But when you write,

For myself, I believe that green-house gases *are* causing warming but, like Freeman Dyson, I also believe that we don't know how to forecast the ultimate results, since the capacity of the atmosphere and oceans is beyond our capability to accurately model.

I can't help wondering if both you and Freeman Dyson are extrapolating from a current reality to a past inability and not paying enough attention to already emerging near-future possibilities. By an odd coincidence, one of The Word Works' clients is the Japan Agency for Marine-Earth Science and Technology (JAMSTEC), which works with similar organizations around the world to map and model global environmental change. We are no longer talking about handfuls of scientists in scattered locations with models confined to the capabilities of early IBM or DEC computers. We are talking about thousands of automated buoys seeded in oceans around the world to monitor changes in temperature, salinity, and currents, plus input from satellites tracking climate change in real time, feeding data into constantly improving models running on new generations of supercomputers. It's another one of those big-data projects that have only become possible within the last decade. What they will be able to achieve remains an open question. But the heirs of "I will go off and build a better machine" McCulloch can still surprise us in ways that backward-looking naysayers rarely if ever do.


So far as I understand what you are talking about, I hear echoes of that famous scene in Hamlet,

O day and night, but this is wondrous strange!

And therefore as a stranger give it welcome.
There are more things in heaven and earth, Horatio,
Than are dreamt of in your philosophy.

And I certainly agree that neglecting the natural and other contexts in which cultures form is a very bad idea. Do remember, though, that this side of our conversation is occupied by someone who has been reading science fiction for more than sixty years and has made his living in advertising, where our aim is always to get people to see something in a different light — usually positive, but could be negative, too, for example when writing about body odor, ring-around-the-collar,  aging skin, or political opponents of our clients. For better or worse, we do abduction all the time.

But enough snark. Over on Dead Voles, Carl Dyke the elder has recommended William E. Connolly’s new book The Fragility of Things (Duke UP). My copy arrived last week, and I do believe that you and others here should be interested in it.

From the early chapters, I would imagine that Connolly's response to Bateson's lecture (thank you for providing the link to that) would be that the world is composed of numerous interacting systems, all striving for some kind of equilibrium, but colliding in ways that have unexpected, chaotic and sometimes creative consequences. It begins with a meditation on the great Lisbon earthquake of 1755, one of whose consequences was Voltaire's writing Candide. That opening appeals powerfully to someone who lives in Japan, where pursuit of energy independence and social stability led to the building of nuclear reactors in Fukushima without considering the system of tectonic plates, seeking its own equilibrium, that produced the earthquake and tsunami of March 11, 2011.


John,   I'm more focused on the "mismatch in variety" between "models" and "natural environmental systems".  Models are "missing things" of great importance, but we are running our entire world on fixed models, like the economic rule to always make a profit so you can multiply what you are doing.  Partly because of following that rule, people who live by rules, thinking of nature in terms of models, have become dominant in our culture.  

They act on nature as if it were a system of controlled parts, as if it came with a set of instructions for people, the "animators" of an otherwise "lifeless" world (in their models), to use in controlling it.  Nature, though, is observably host to a great variety of independently organized and behaving individual systems, giving nature a complex "cellular" design: a network of separate worlds that develops patterns of interaction reflecting how all these independent worlds respond to each other, everywhere at once.  As a system, nature and its parts demonstrate a great capacity for creating innovations of many kinds that grow rapidly, spread virally, and then integrate in new stable states, making nature very much a "living thing".

Another thing models greatly misrepresent is cause and effect in natural systems, which generally have no elements of "logic" in them, only complex "developmental processes," literally everywhere.   Some seem to be deterministic cascades of repercussions of other things, but quite a number seem to be explosions of organization, such as are seen in the emergence of any new system, like how storms or organisms form by a growth process from an imperceptibly small beginning.  Models have only logic, and no developmental processes...  so a second very major "mismatch in variety".

So, it's another way to point out what Bateson did, in the essay Mark referenced: that human intentionality is based on almost no awareness of what kind of living system we are part of.  I agree with him that perhaps that's why our actions appear more disruptive than collaborative in our relations with nature.  I also note he wasn't taking into account the two systemic problems with models described above.   It still indicates a need for a better way to use models, since that's how we think, just not how nature works...

So I've proposed some ways to use them as learning tools rather than for representation, understanding equations as "boundary conditions" for living systems rather than equating them with models.   After 35 years of trying, however, I can't seem to get any scientist to discuss the methods I've found useful.  It really seems that scientists are well trained to discuss and think of nature as their abstract theories, so they experience great discomfort when asked to compare the differences between their models and the systems of nature they are modeling.

They have no scientific words for referring to natural systems in their natural state.   So, even if I point out that we all use such terms all the time in our natural language, even when discussing the use of our models and theories, they still refuse to acknowledge that the two forms of language could be different ways of describing the same subjects.   It's a "funny problem"...
