The papers discussed in our online seminars are often excellent, but this one is apocalyptic, as well as being an exercise in fine writing. Lee Drummond is a Chicago anthropology PhD who specialised first in the Caribbean and later in San Diego's tourist attractions. He taught at McGill University for over a decade before retiring to the wilderness (Palm Springs!). July and August there are his winter, when he withdraws into his air-conditioned study to avoid the heat. The OAC is a principal beneficiary of this aestivation, as witnessed by the paper attached here. All I can say is you gotta read it, whether or not you participate in the seminar.
Lee's point of departure is Lance Armstrong's confession on the Oprah Winfrey show. The man who perhaps deserves to be known as the greatest American athlete ever admitted taking performance-enhancing drugs, thereby triggering an intense public outcry. Lee deconstructs what he takes to be a key feature of the American ideology, the opposition of nature to culture, showing that biology and technology have been inextricably woven together throughout human evolution and even before. If it is impossible to isolate the unequal influence of technology on sporting performance, what about other areas of cultural achievement, such as literature? Should Hemingway's Nobel prize be taken away, or Coleridge's poetry eliminated from the canon, because they wrote under the influence of mind-altering substances?
Not content with this reductio ad absurdum, Lee then launches into a savage critique of American civilization and of the cultural anthropology it has spawned. Drawing on Marx's happy phrasing in the 18th Brumaire, he argues that the American tragedy (New World genocide) now reappears as farce (reality TV shows), one of which actually replayed the former in a grotesque reenactment of the competitive ideal. Anthropology tends to celebrate cultural achievement around the world, whereas in Lee's view, the current state of American society suggests that culture may be a disease killing off its carriers just as their ancestors once killed off the original inhabitants of what passes for the land of the American dream.
My guess is that many reading this are familiar with the 1992 Collins/Yearley essay "Epistemological Chicken," which deals with some aspects of what I've been calling our *metaphysical* CRISIS (and is prominently footnoted in the Wikipedia entry on the "sociology of scientific knowledge") -- http://books.google.com/books?id=U_mL2pPGY7YC&pg=PA301&lpg=...
Mark, this is the first time I've heard of "Epistemological Chicken." Thanks for the introduction. May I guess that you or others here are familiar with Richard Rorty's Philosophy and the Mirror of Nature (1979)? From the Wikipedia entry to which the link points,
Rorty's central thesis is that philosophy has unduly relied on a representational theory of perception and a correspondence theory of truth, hoping our experience or language might mirror the way reality actually is. In this he continues a certain controversial Anglophone tradition, which builds upon the work of philosophers such as Quine, Sellars, and Davidson. Rorty opts out of the traditional objective/subjective dialogue in favor of a communal version of truth. For him, "true" is simply an honorific knowers bestow on claims, asserting them as what "we" want to say about a particular matter.
Rorty spends much of the book explaining how philosophical paradigm shifts and their associated philosophical "problems" can be considered the result of the new metaphors, vocabularies, and mistaken linguistic associations which are necessarily a part of those new paradigms.
Rorty's pragmatism is one of the origins of the suggestion I made in my last post: to approach the relationship between model and reality as one of map and territory, where the map is part of the territory, a public, material thing, NOT an invisible bit of software hidden in a separate reality called "Mind." This is, by the way, also a fundamental part of Clifford Geertz's concept of culture. Culture may be a network of symbols, but the symbols are part of the visible, tactile, audible, tastable world. One virtue of this approach is that it is perfectly compatible with your project of getting people to look at the world with open eyes, or at least to behave like the sociologists described in "Epistemological Chicken," freely shifting from one point of view to another. At the same time, it removes the compulsion to arbitrarily separate, to use your recent example, mental from physical energy, or, as Terence Deacon might put it, to slip in a new homunculus in the guise of some mysterious élan vital.
Mark, this question leads me back to a question raised by your phrase, "hunting for 'metaphysics.'" Are the metaphysics in question new, but historically contingent ontological assumptions influenced by new technologies or are we hunting a snark, a final, metaphysical answer to What is Being and What's it About?
Mark & John,
I'm actually not clear why "Epistemological Chicken" is understood as a way to open one's mind to multiple worldviews. I don't think of "opening one's mind" as a high-risk, type-A game of superiority, as "chicken" is. But be that as it may, I find all these references fascinating, and very much to the point.
I've also been reading a bit on Parmenides from de Santillana's "Reflections on Men and Ideas", which seems embroiled in the very same general subject. My approach to it seems a bit different, still. It seems we're bringing up multiple sides of a persistent struggle over many centuries to find the proper place for abstract thinking in the natural world, with the answer generally remaining elusive.
From what I know of the discussion, it seems to have revolved around comparing many options in search of the best way to conceptually represent reality. To me it's notable that people who are happy to let reality define itself, who use words defined by usage and by what those words do in caring for very complex things, don't seem to be as bothered by that question. So, though they seem to have found a way to be clear about what reality is, they don't enter into the debates or understand their issues, and so don't seem to offer much toward solving them. In the debates over quantum mechanics, for example, there's a famous phrase that resolves a great many disputes, "just shut up and calculate," expressing consensus on the math but bewilderment about what it means.
I still think that somehow learning from the people who don't have the problem will be the solution: teaching abstract thinking to look at the world experientially. Suggesting that "reality" is defined in the end more by experience than by conceptual definitions does treat the long effort to represent reality in our minds as a sort of "mistake", a mistake of circular thinking in using thought to define what isn't thought.
If reality is actually not in the mind, the best way to represent with our minds would seem to involve persuading the mind to be so fascinated with what it does NOT define. Being fascinated and absorbed by what we can't define could result in our giving undivided attention to how to stretch the shapes in our minds to nicely fit the shapes of the world, and in getting good at it, finding ways of thinking that are deeply faithful to reality.
I think I'm basically suggesting a change in the "test" one would use for perceiving and defining reality. I'm not sure if these references touch on this other way to address the issue, but it seems to me the usual way is to test one's mental representation of reality for having the ability to be defended against others. That introduces a lot of complicated social competition and pecking order kinds of issues into the selection process, often seeming to overshadow the questions of learning involved. It would seem better to instead test for whether the shapes one creates in the mind succeed in closely fitting the world "like a glove" for letting it move freely, rather than "like a rope" for tying it down.
So I'm suggesting that if thinkers thought it over, the lack of progress in the 25 or more centuries of struggling with the question might just be resolved by acknowledging that only "reality defines reality", so our contest is actually for something different than we thought. The meaning of being able to mentally "define reality" then changes to the artist's or sculptor's quest for "defining reality": perfecting a way of seeing the real thing, not deciding who is right.
For abstract thinking it would become a test of how faithfully one can make one's thinking fit the subjects that nature has defined. Since in our complex world those subjects have context-dependent features, one must also learn from and include the reality that many others see from different sides.
Let's get REAL . . . !!
What happens to our world when 80% of people are "dis-employed" as a result of being replaced by machines -- in reality? Does anything we've been discussing hereabouts help us to address what appears to be our *inevitable* fate?
If, as Lee suggests (following the remarks of Eleusinian/LSD initiate Ezra Pound), we -- the humans -- are the "rich effluvium . . . the waste and the manure and the soil, and from it grows the tree of the arts [and the tree of the sciences]," then what are we to do with ourselves when the ROBOTS "run the world"? Fish in the morning and paint in the afternoon?
Our dominant-but-now-declining communications environment (i.e. mass-media/television) promotes FANTASY (i.e. everyone gets to make up their own version of "reality," subject to the "rules of the maze"), which it needs to support an economy based on people buying things they do *not* "need." Yes, I've read Danny Miller. Yes, he does a masterful job of outlining the complex "kinship rituals" associated with shopping. No, he never seriously considers the larger context (except as a "cartoon") and, most importantly, he never seems to ponder how *digital* technology changes that context.
Is an even more extreme version of self-absorbed *fantasy* bound to be our future? Are we just technology "users" and will the Internet plug our brain-stimmed fantasies directly into the "dealers"? Are we all to become drugged-out "batteries" *fantasizing* about REALITY which has become a "simulacrum" as detailed in the movie series THE MATRIX (which, btw, was apparently based on Baudrillard, who, in turn, many think based much of his work on McLuhan)?
Nietzsche was likely initiated into his secret "Brotherhood" in the 1850s, based on a re-enactment of the Eleusinian Mysteries, and took "LSD" (actually an "alchemical" pre-synthetic), which fundamentally shaped his views of "the herd" -- since he believed that he wasn't a part of it. Indeed, he thought himself so "special" that he later distanced himself from his "brothers" and struck off on his own -- as recounted in his Nazi editor Alfred Baeumler's 1929 "Bachofen und Nietzsche." http://en.wikipedia.org/wiki/Alfred_Baeumler. Lee was correct, Nietzsche wasn't an "organization man."
Pound was likely initiated into his secret "Brotherhood" during his sojourns in Provence in the 1910s, richly feeding his views of history and humanity. When Pound was up on charges of treason (a capital offense in wartime), he was "saved" and brought back to St. Elizabeths Hospital, where he enjoyed a "suite" and plenty of visitors (including McLuhan, to whom he grew quite attached and with whom he had a fascinating correspondence). St. Elizabeths was a headquarters for what is widely known as MKULTRA -- the CIA program around LSD etc. Yes, his "jailers" were also Eleusinian *initiates*!
Bateson was likely initiated into his secret "Brotherhood" in the 1950s in Palo Alto -- if not earlier in connection with his own "intelligence" work during WW II. LSD was *deliberately* synthesized in the late 1930s in Basel (not accidentally, as the public "cover-story" has it) by a group of chemists linked to Rudolf Steiner's Anthroposophy (which is HQ'ed in the suburb Dornach.) Allen Dulles, the first civilian head of the CIA (and the champion of MKULTRA), spent his WW II years in Bern, where he was also likely initiated (as, indeed, many in the Euro/US elites were in those days). Maybe Bateson, who was a longtime insider in these intelligence networks, got in on the action early?
Neither Nietzsche, Pound, nor Bateson had a very high opinion of (the rest of) humanity. They all apparently considered themselves "above" the rest of the humans. Their "secret society" initiations and their LSD probably had something to do with these attitudes. So, my guess is that if we could ask any of them what they thought about "replacing" humans with *programmable* (and, therefore, *perfectible*) machines, they would approve (each with his own qualifications, of course). Is there anything in their work that would argue otherwise?
Perhaps the closest we have to someone who has come through this whole process today is Kevin Kelly -- Stewart Brand's editorial "sidekick" (recalling that Brand is a Bateson protégé and the original "publicist" for LSD) -- who is quite unabashed about his desire for the machines to "take over" (and, ironically, he was also responsible for the "McLuhan Revival," making McLuhan the "patron saint" of WIRED magazine). For Kelly, the only issue is "What Technology Wants" http://www.amazon.com/What-Technology-Wants-Kevin-Kelly/dp/B004Y6MT6O
Take me to your leader (which reminds us that all those "aliens" in the movies and books are really just the machines) . . . <g>
Here is where *metaphysics* comes in. If "reality" is whatever we want it to be (i.e. just a "fantasy"), then the machines will DESTROY us! Humanity's only chance is to "re-discover" that REALITY actually *exists* regardless of what any of us might think about it (which, btw, was at the heart of Norbert Wiener's "Genius Project" and the basis for cultures based on "manuscripts" in both the East and West). . . !!
The 1992 "Epistemological Chicken" essay, with its central metaphor of "playing in traffic," is a good start at describing exactly our dilemma. Yes, the authors argue against two versions of *nihilism*, but they are also arguing for a human-centered world. Anything else is asking to get run over. Their opponents were not equally concerned.
The key sentence (in the part of the book I can read for free) came from Bruno Latour (and his co-author, on page 360), when he says (replying to the "Chicken" essay), "Our empirical program [ANT -- Actor Network Theory] does not claim either that humans and artifacts are exactly the same or that they are radically different."
This is exactly the disconnect from *reality* that points to the triumph of the ROBOTS! If the *humans* can't be distinguished from the MACHINES, and if we are unable to articulate those differences in fundamental terms, then why not "replace" the humans with something that can be "perfected" (i.e. the PURITAN/millenarian attitude I discussed earlier in this conversation)?
Near the end of his life in 1963 (yes, 50 years ago), Norbert Wiener (i.e. Bateson's "rival" in cybernetics) was interviewed by US News and World Report about the "danger that the machines will take over." His answer was that this danger was REAL (his term) and that the outcome would depend on whether we can figure out "what the machines can do and what the humans can do." What's the difference?
Understanding that difference is what it means to "get real" in today's world. Otherwise, we are playing "epistemological chicken" and the outcome will not be good for the "effluvium." Do we (should we) even care what happens to HUMANITY?
"If reality is actually not in the mind, the best way to represent with our minds would seem to involve persuading the mind to be so fascinated with what it does NOT define."
Reality *exists* and NOT in our minds. Really! And, if you run into someone who thinks otherwise, watch out!
This has been understood by *every* culture before electric media "tricked" us into believing otherwise (aka "constructionism" etc). Yes, there have always been "radical skeptics" but no human culture could endure (for long) without a firm grip on this very basic fact.
Every culture has also understood that "definitions" are mostly useless. Language is inherently EQUIVOCAL and, to the extent that words have singular meanings, language is also pretty useless.
We operate via analogy and metaphor and have (apparently) always done so. That is how we apprehend reality experientially and how we communicate with each other. This is why communications technologies -- starting with language -- are so important.
It is also why phrases like "In the beginning was the Word" are so important, as are all the various "origin myths" that talk about how the world (i.e. *reality*) was itself "spoken."
While there is always room for "improvement," these are TRUTHS that have been *long* understood. Trying to fundamentally make "progress" over what has been thought through over the past 25 centuries is likely to be very frustrating (and, shall we say, also a "fantasy").
You might first want to devote yourself to studying what our forebears thought. Some of them were really smart! <g>
UN population projections (2012)

Year    Asia %    Africa %    Rest %
1900      58         7          35
2015      60        16          24
2050      54        25          21
2100      43        39          18
You may have noticed that the world population is changing and the world economy with it. It is obvious enough from these stats that what really matters for our world in the 21st century is the relationship between Asia and Africa. The kind of scenarios you are spinning have almost no relevance to the historical experience of African people in their own continent. They missed out on the first industrial revolution almost entirely, the electricity grids largely passed them by. They have joined in the urban revolution with alacrity of late and are adopting the present wave of the digital revolution (mobile phones) with amazing speed and some ingenuity (pioneering mobile banking for example).
I doubt if you will find any takers there for the decadent fantasy that human beings and machines are in effect the same. What you would find is a mass of people hungry for some of the development that they can see is on offer elsewhere and a lot of hope for the future, since they know that their societies are young and growing fast in all sorts of ways. The Asian manufacturers understand that Africa is the key to the world market in the century to come. But the Europeans and North Americans only see there confirmation of their own dwindling superiority and project miserabilist versions of the future onto what they still think of as the dark continent.
At the very least, if we wish to address humanity's predicament, we should take some notice of where the human beings are and what is going on there. And no, claiming to know something about China will not do. As for Bruno Latour, he is a clever man playing to a constituency of western students who know they have no future. In collapsing the distinction between human and non-human agents, he just reproduces the ideology of a corporate takeover which abolished the distinction in law between real and artificial persons over a century ago. If humanity is to have a future, we should pay attention more systematically to the historical trajectory of Asians and Africans and we need to oppose firmly any ideology that collapses the difference between Walmart and you or me. The dystopian fantasies of literary heroes like D H Lawrence and H G Wells after the demoralising Great War found a number of ways of discounting human life, including dreams of genocide and the world historic defeat of the human race by machines. Count me out of the replay.
Fortunately, few of the people who will have the main say in figuring out the future of the planet will be distracted by these ideas, which are little more than the retrograde nightmares of a civilization in decline. I have enjoyed your witty and sometimes erudite commentaries in this thread. Maybe I could have just smiled and let this one go too, but being told to GET REAL tipped me over.
Excellent! I was hoping that you would be "tipped over"! <g>
The primary reason *why* these population trends (and, indeed, the trends regarding CO2 going into the atmosphere) make sense is that most of the world has NOT YET completed its *industrialization* curves -- which, when "complete," will both bring down birth rates (while increasing life expectancy) and shift energy demands, albeit on a planet with a different climate.
These are all long-term "S-curves", and the West (including Japan) reached its industrial plateau in the late 20th century, with Asia likely coming next and Africa (overall) completing the picture perhaps by the end of the 21st century. I have no doubt that you are right that Western intellectuals are largely "degenerates" and so is their "metaphysics" -- lost in a world they don't understand and hoping *fantastically* to be taken away in a spaceship.
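For what it's worth, the "S-curve" language above is, in plain mathematical form, the logistic curve: slow start, steep middle, plateau. A minimal sketch follows; the function and all its parameters are illustrative assumptions of mine, not fitted to any real demographic or industrial data.

```python
import math

def logistic(t, ceiling=1.0, midpoint=0.0, steepness=1.0):
    """Fraction of the eventual plateau reached at time t (an S-curve)."""
    return ceiling / (1.0 + math.exp(-steepness * (t - midpoint)))

# Far below the midpoint the curve is near zero; at the midpoint it is
# halfway to the ceiling; far above, it has effectively plateaued.
print(round(logistic(-6), 3), round(logistic(0), 3), round(logistic(6), 3))
# → 0.002 0.5 0.998
```

The point of the plateau is exactly the one made above: once the steep middle of the curve is behind a region, further growth in that dimension largely stops.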
However, the technologies that are rapidly marching to displace workers aren't limited to the West, and the effects of "robots" on human society are already an important factor in Asia; it is only a matter of time (decades?) before the same will be true in Africa. Regardless of the failure of "globalization" as a political/cultural project, it is still a global economy.
As I said in a post to another list this AM (which I will repost here), Moore's Law is fundamentally *deflationary* against human LABOR. It doesn't matter whether that labor is here, in China or in Kenya. So, what the "degenerate" West does about confronting this fundamental challenge to the structure of its post-industrial society also matters elsewhere. The conditions under which Africa industrializes will not be steam-power and railroads (or, for that matter, radio or television) -- it will be under DIGITAL conditions. How the West handles those conditions will also impact how everyone else handles them -- as they also move towards being post-industrial.
The strength of Chinese culture relative to the West is very important in this regard, which, of course, means that China will likely have more impact on industrialization in Africa than the US does. How Africa deals with all this is also a big unknown. Has anyone in Africa even begun to think any of this through in cultural terms?
Mark, We should talk some of these things over. We seem to be referring to many of the very same things, but you seem to be hearing what I'm saying as if forced to make assumptions I'm not making at all.
Where I'm coming from is having carefully studied and documented a variety of ways that social and professional communities "create their own realities" while being quite unaware of how they do it. The basic opening comes from our normal habit of accepting consciousness as "reality". The misunderstanding starts with everyone having different information, and assuming that whatever sense they have made of whatever information they have is the world they are part of. It is then 'natural' to treat agreements on how to interpret partial information as the world we live in, and those "disconnects" can sometimes be solidly documented.
As a result, though, most people are quite unaware of how thoroughly subjective their idea of reality is, and remain fooled by the *seeming* authenticity of our own consciousness, often quite unaware that it presents us with meanings of our senses that we ourselves made up entirely. Being unaware of that, it generally appears to us that our conscious image of the world is where everyone else lives, when it's actually just a virtual world of our own design, in which no one else lives!! Isn't that cool! :-) It means that everything in consciousness is a reflection of ourselves...
I've been documenting a variety of *highly* consequential errors that result from this in interpreting the systems we are part of. The ones you can solidly prove basically center on situations where very specific kinds of information are missing from what people can find. For example, when economists and sustainability scientists tell people how to calculate the energy it takes to deliver a consumer product, they generally get the answer wrong by an order of magnitude or more. That's because they only count the energy uses they can trace, but you can prove that the economy's untraceable energy uses will be much larger than the traceable ones. I was able to show that the untraceable energy uses are most often many times as big.
That led to my "reality math" idea, for a whole "Systems Energy Assessment". The principle (not being followed) is that when making physical measurements of environmental systems you need to count *both* what you can collect hard information on *and* a proper estimate of the share of what you can't directly trace. That is a scientific necessity for constructing well-defined units of measure, so that the units add up to the total. The impact accounts people are making at present for sustainability metrics just add up what they "see", interpreting the information as reality, and don't add up to the total by a very long shot!
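The "count both" principle can be made concrete with a minimal sketch. The function, the idea of pricing untraced spending at an economy-wide average energy intensity, and every number below are my own illustrative assumptions, not the actual Systems Energy Assessment procedure.

```python
# Sketch of the accounting principle: a total energy figure must include
# the directly traceable uses AND an estimate for the untraceable share,
# rather than implicitly counting the untraceable share as zero.

def total_energy(traced_mj, untraced_spending_usd, intensity_mj_per_usd):
    """Return (traced, estimated-untraced, total) energy in MJ."""
    untraced_mj = untraced_spending_usd * intensity_mj_per_usd
    return traced_mj, untraced_mj, traced_mj + untraced_mj

# Hypothetical product: 500 MJ of traceable fuel and electricity, plus
# $400 of supply-chain spending with no energy records, at an assumed
# economy-wide average intensity of 8 MJ per dollar.
traced, untraced, total = total_energy(500.0, 400.0, 8.0)
print(traced, untraced, total)  # → 500.0 3200.0 3700.0
```

Even with these made-up numbers, the estimated untraceable share dwarfs the traced part, which is the shape of the error being described: adding up only what you can "see" misses most of the total.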
Sent: 10/19/2013 10:43:23 A.M. Eastern Daylight Time
Subj: Re: [IP] Fwd: The Century Question
> What are we to do in government, business, scientific,
> and social terms until the foreseeable long-term unemployment
> can be compensated, and humanely redirected, so that a new,
> fairer and more productive equilibrium is reached?
Excellent question! Who do you think is working on even understanding the various dimensions of the problem -- let alone some "solutions"?
Moore's Law (and every similar exponential technological curve) is indeed *deflationary* and, for all our fascination with the price of gadgets, the only economic variable that really matters for the overall *structure* of the economy is the price of human labor (which, then, in econ-speak, becomes the driver for "aggregate demand" in a consumer-based post-industrial economy like ours). [Or, for that matter, in still-industrializing economies in Asia and Africa that depend on consumer demand from the post-industrial economies.]
If, in the "deflationary" limit, technology were "free," then, presuming it was sufficiently advanced to perform every task, it would displace ALL human labor -- if "standard" economic principles applied. Too many "ifs"?
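A toy calculation shows why steady cost deflation against a roughly flat wage turns displacement into a question of "when", not "if". The function, the halving period, and both dollar figures below are my own illustrative assumptions, not data from this thread.

```python
# Toy model: machine cost falls on a Moore's-Law-style curve while the
# competing wage stays flat; find the year the machine undercuts the wage.

def crossover_year(start_year, machine_cost, wage, halving_years=2.0):
    """Year when the steadily cheapening machine becomes cheaper than the wage."""
    year = start_year
    while machine_cost > wage:
        machine_cost /= 2 ** (1.0 / halving_years)  # cost halves every 2 years
        year += 1
    return year

# A hypothetical $200k machine vs. a $24k/yr worker:
print(crossover_year(2013, 200_000.0, 24_000.0))  # → 2020
```

The exact numbers don't matter much: under any sustained halving schedule, an 8x cost gap closes in a handful of years, which is the sense in which the deflation is "against" labor.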
Where are we currently in the process of displacing human labor with machines? As many have noted (and typically applauded), manufacturing is "coming back" to the USA -- but that's only happening because the cost of the "robots" is now below the cost of foreign labor (plus transportation, etc.). [Another effect that will hurt in places like Africa.]
Publicly Oxford seems to think that 45% of today's jobs can be "automated" and privately MIT seems to think the number is more like 80%. Clearly the "analysts" are trying to WARN us . . . !!
But what would that economy look like? No workers. No wages. No demand/consumption? Karl Marx seems to have imagined something along these lines but like so many others who have been able to imagine such a "limit," he refused to speculate about what life in such a society would be like. Maybe that's why he published so little in the last 15 years of his life.
In 1966, a "blue-ribbon" Congressional Commission delivered its final report (along with six volumes of supporting material) on "Technology, Automation and Economic Progress." As best I can tell, very few today are even aware of its existence and even fewer have read it. It was driven by the then-current concerns about "cybernation" that were, in turn, driven largely by the work of my "godfather" Norbert Wiener, particularly with labor movement leaders like Walter Reuther, who was on the Commission.
Alas, that movement (and, in particular, the UAW) made its "peace" with the robots a *long* time ago. So, *who* is going to take up these very urgent issues today?
Mark & all, I'm wondering if this is a good place to bring the subject back to where the discussion started.
Aren't we perhaps still swayed by the "reality show" aspects of these great story lines, losing sight of the fact that the stories we attach to our world's dynamics are not "S" curves we can project into the future, as if pre-defined? If you remove human interpretation and storylines, the last few hundred years of human development has been an ecological growth process (not a curve) of ever more invasive cultures that have now begun to collide with each other.
We almost never hear about that kind of naive "phenomenological" view of "what's doing what", not even from scientists trying to think objectively about it. Almost anyone can see that our professions, social networks and media society all seem endlessly preoccupied with the excited "reality shows" of their ever-shifting cultural theaters. That our views seem handicapped by a tendency to see "objective reality" as "what we think", our "consciousness", means that for us reality is mostly composed of our own meanings for the selective information we have about the world around us. That could help explain the disconnect, couldn't it?
I think we see it in the sweep of history too, in that history is a long list of "big surprises", right? Some people aptly express that as the experience of their own lives, saying "life is what happens while you were planning something else". In part, life is apparently so unexpected at every scale because the systems of nature don't follow our own narratives and cultural expectations.
That seems to be for good reason, in that our expectations are for a world we invent in our minds by projecting our values onto patterns of "dead" events of the past. Couldn't nature's reason for being so unexpected, then, be that nature's systems are following their own quite independent animating processes, which our preoccupation with our own expectations has very broadly kept us from looking for?
Mark, Keith, et al,
New Portugal, Science, Reality (the really real or REAL), Communications Technology, and Rivers of Acid Flowing down through History
Things are getting really (REAL?) interesting with the latest Comments from Mark and Keith. These have got me up and going even before my morning ergot smoothie and my sausage-and-biscuits (with gravy) breakfast down at the local Sidewinder Bar and Grill. Paradigms are now emerging from our deep cavern of consciousness like lumbering cave bears, awakened from their dogmatic slumbers and hungering for a taste of human meat. Forgive a rough, probably distorted sketch:
Keith-on-Mark: Western decadence with its coddled techno-fantasies of a human-machine interface in which a rapidly evolving communications technology constitutes the REAL ignores the really-real world of an emergent global civilization anchored by African and Asian nations, few of which experienced anything like the technological revolutions in the West that have installed Apple, Facebook, and Twitter at the pinnacle of culture. To wit,
At the very least, if we wish to address humanity's predicament, we should take some notice of where the human beings are and what is going on there. . . If humanity is to have a future, we should pay attention more systematically to the historical trajectory of Asians and Africans and we need to oppose firmly any ideology that collapses the difference between Walmart and you or me. The dystopian fantasies of literary heroes like D H Lawrence and H G Wells after the demoralising Great War found a number of ways of discounting human life, including dreams of genocide and the world historic defeat of the human race by machines. Count me out of the replay.
Uh oh, the “g” word. When I started out in this Web 2.0 business in August, I was puzzled by several items in the WebSpeak lexicon, one being the use of the term “intervention” for a seminar contribution. Here I’d always thought an “intervention” was what me and the folks had when my brother-in-law Billy Bob was gettin’ likkered up and taking it out on my sister. But I’m learning.
And then there is,
Mark-on-Keith: Regardless of the historical paths taken on the several continents, human populations are experiencing or will soon experience the DIGITAL revolution, which re-defines communication and transforms the very meaning (or lack thereof) of what it is to be “human.”
Again, to wit,
However, the technologies that are rapidly marching to displace workers aren't limited to the West and the effects of "robots" on human society is already an important factor in Asia and it is only a matter of time (decades?) before the same will be true in Africa. Regardless of the failure of "globalization" as a political/cultural project, it is still a global economy. . .
Moore's Law is fundamentally *deflationary* against human LABOR. It doesn't matter whether that labor is here, in China or in Kenya. So, what the "degenerate" West does about confronting this fundamental challenge to the structure of its post-industrial society also matters elsewhere. The conditions under which Africa industrializes will not be steam-power and railroads (or, for that matter, radio or television) -- it will be under DIGITAL conditions. How the West handles those conditions will also impact how everyone else handles them -- as they also move towards being post-industrial.
The New Portugal.
These are heavy, world-historical ideas. They bring to mind something I, and doubtlessly countless others, have considered in these days of a “decadent” or “degenerate” America which seems utterly incapable of getting its act together. [Can’t anybody just sit back and enjoy the real /REAL life and adventures of Here Comes Honey Boo-Boo without fretting about Congressional hijinks and the global economy?] That is the realization that the times they are a’changing, that the contemporary United States is the New Portugal, a half-millennium after Portugal, that puny and forgotten land perched at the toe of Europe, was the military and political superpower that called the shots during the important and exciting Age of Discovery:
The Treaty of Tordesillas was intended to solve the dispute that had been created following the return of Christopher Columbus and his crew. On his way back to Spain he first reached Lisbon, in Portugal. There he asked for another meeting with King John II to show him the newly discovered lands. After knowing of this situation the King sent a threatening letter to the Catholic Monarchs stating that by the previous Alcaçovas Treaty signed in 1479 (confirmed in 1481, with the papal bull Æterni regis that granted all lands south of the Canary Islands to Portugal) all of the lands discovered by Columbus belonged, in fact, to Portugal. Also, the Portuguese King stated that he was already making arrangements for a fleet to depart shortly and take possession of the new lands. After reading the letter the Catholic Monarchs knew they didn't have any military power to match with the Portuguese, so they pursued a diplomatic way out. So, on 4 May 1493 the Spanish-born Pope Alexander VI decreed in the bull Inter caetera that all lands west and south of a pole-to-pole line 100 leagues west and south of any of the islands of the Azores or the Cape Verde Islands should belong to Spain, although territory under Catholic rule as of Christmas 1492 would remain untouched.
It’s mid-October, and my local Walmart has already rolled out the plastic Santas and elves and put Christmas music on its battery of speakers. As I push my cart through the aisles, stocking up on Ensure nutritional supplement (lots and lots of new flavors!) and adult diapers, the classic, “It’s Beginning to Look a Lot Like Christmas,” is playing. I sing along to myself, substituting the word “Portugal” for “Christmas.” But don’t worry folks, China, India, and Kenya may be the movers and shakers as the centuries tick by, but America will still be here, and we may not even have to trade our sausage-and-biscuits for that rather nasty bacalhau.
(How’s that for a heading?)
Idealism vs. materialism, symbolism vs. realism, postmodernism vs. positivism? The Titans of Thought have been duking these out for centuries. Considering the highly intelligent contributions to our little seminar, it was inevitable that these fundamental arguments should make an appearance (and head for center stage). But Mark and Keith add some intriguing touches to this schism in Western thought. Again, apologies for misstating their views:
Mark: Reality (his REAL) is a powerful blend of communications technologies (speech, writing, printing, radio, TV, stand-alone computers, and, now, the massively interactive digital world of the Web) infused down through the centuries with hallucinogen-inspired cults of priests, adepts, robber barons, and corporate chieftains which shaped social thought and determined the form assumed by entire societies.
You have to give this theory high marks for originality and vibrancy; thinking about it is way more fun than slogging through Locke / Hume vs. Kant for example.
Keith: Marx’s materialism pointed the way to a sound understanding of global society, which can be improved by the addition of an ethnographic method and anthropological theory which map out a truly human economy, one featuring flesh-and-blood individuals in all their diversity rather than one or another economic model of what “rational actors” do.
The strength and appeal of this approach lie in its embracing – and amplifying – a compassionate humanist tradition that has come under fire from a variety of intellectual movements over the years. Keith accomplishes this by the admirable feat of drawing the marginal discipline of anthropology into the mainstream of social theory.
My own modest view of this mega-issue (my “tweet” if you will) is that a fundamental principle of semiosis is inherent in all experience, whether human, proto-human (australopithecines and early Homo lineages), post-human (Mark’s robots / cyborgs), alien intelligences (E. T.), or animalian (and, just maybe, I should include plant life here as well). Any action involving a sentient being (a movement, gesture, utterance, spoken or written word, constructed image, computer code) is very likely semiotic in nature, an element in an exchange (of actions / messages) or itself the subject of another piece of semiosis (an interpretation). Obviously reams can and have been written about this thorny topic, but I would claim with Peirce – without subscribing to his convoluted development of the idea – that the universe is a perfusion of signs. That is true whether one is considering literary texts, bicycle races, chess games, economic transactions, or (what shall we call them?) digital entities. Elsewhere, in and outside the seminar, I’ve considered how these vast, free-floating semiotic processes might be organized within a semiotic universe of the kind to which Peirce alluded.
Reality, “Reality” and Science: The Bishop and the Blowhard
Rather than plunge into the deep, murky waters of Locke / Hume vs. Kant and other forbidding topics, I’d like to consider briefly a related, more accessible, and even more extreme contrast of philosophical perspectives on the world.
The idealist in my little sketch is George Berkeley, Bishop of Cloyne, whose early 18th-century work, A Treatise Concerning the Principles of Human Knowledge, is often cited as an extreme, even solipsistic argument that the material world has no independent “reality,” but is instead a set of ideas in the human mind. My personal view, not shared by many, is that Berkeley’s work has gotten an undeservedly bad rap. His prose is dense and tedious, and, being a Bishop, he argued that our ideas have an organization that comes from God. Those two factors explain why contemporary secular scholars generally ignore him; I would even bet that many professional philosophers have not read the Treatise. When one does consider the substance of Berkeley’s ideas, however, I think they place him at the origin of what has developed into the loosely defined area of study called “semiotics.” His ideas are deeper, and have more far-reaching implications, than those of Locke (Berkeley’s nemesis, who motivated him to write the Treatise). To interpret his work for our time, it is simply a matter of performing the impious act of replacing “God” in his text, the grand puppeteer of human thought, with something like “semiotic system” or “semiotic domain” or even “semiotic armature.”
I introduce Berkeley here because he is the subject of one of the great narratives in philosophy (actually one of the few narratives in philosophy). Long after the Treatise was published, Samuel Johnson and his toady, James Boswell, were strolling along a London street when Boswell asked the old blowhard what he thought of Berkeley’s idea that matter only seemed to exist. Johnson immediately proclaimed, “I refute it thus!” and kicked a rock (thereby proving the reality of the material world). We can only hope the pompous old doctor broke a toe and had to hobble to the nearest coffeehouse, where he would hold court for hours.
NOTE: I tried to include a handsome portrait of Samuel Johnson, here, but don't know the technique. I'll try to include it in the OAC list of photographs. Someone with the requisite skill might transfer that photo over to this Comment. Thanks, L. D.
Samuel Johnson: Not Our Hero, also:
Not Tripping on Acid
The world is real; the things in the world are real. Ideas are about real things in a real world. What else is there? What else could there be? How many people hold those very ideas today? Not bothering to kick rocks, they instead spend most of their waking hours staring into computer monitors, TV screens, smart phone displays, soaking up images while comfortably proclaiming the primacy of a real, tangible world.
The issue goes deeper, much deeper, than this. Here Johnson’s little stunt affirming the material reality of the world is a marvelous foil. Bear with me; another story is required here.
In 1994 Brian Schmidt, a Harvard post-doc in astrophysics, and a colleague began a Hubble Telescope project to study deep-space supernovae. They wanted to test the three existing theories of the universe:
1) From the time of the Big Bang the universe has been expanding and will go on expanding at a slower rate until the end of time (or until protons disintegrate).
2) The universe will stop expanding and eventually collapse into another Big Bang.
3) The expansion rate of the universe will gradually slow until, at some indescribably remote time, it will come to a virtual halt (the so-called “flat universe”).
Over the next few years Schmidt’s team collected data on a small population of supernovae near the very limit of the observable universe. They made a stunning discovery, which they published in 1998. None of the three established theories of the universe was correct; instead, the universe is expanding at an ever-increasing rate. They had found the first evidence for “dark energy,” an invisible force that makes up about seventy per cent of the constituents of the universe and that somehow propels its expansion. For this remarkable work Schmidt and two colleagues received the 2011 Nobel Prize in Physics (a fairly quick turnaround for those conservative old Swedes).
The discovery of dark energy came on the heels of another remarkable finding in cosmology. For some time evidence had been accumulating that observable matter in the universe (that is, all the stars, nebulae, planets, tables and chairs, flowers, our bodies – all baryonic matter, composed of protons and neutrons) was not nearly enough to account for the behavior of stars in galaxies or of galaxies in galaxy clusters. To do that there had to exist a great deal of “dark matter” (not anti-matter) that was completely invisible apart from its effect on ordinary, visible matter. No one has much of a clue what it is or how to find it.
Now that the smoke has begun to clear around these incredible findings, it appears that dark energy makes up a bit less than seventy per cent of the universe (68.3%); dark matter about twenty-seven per cent (26.8%); and ordinary matter (which now seems anything but “ordinary”) just under five per cent (4.9%).
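For readers who like to see the arithmetic laid out, here is a trivial sketch (using only the rounded percentages quoted above, which are my own restatement of the published estimates) confirming that the three components account for the whole cosmic budget, and just how little of it is “ordinary”:

```python
# Back-of-the-envelope check of the cosmic energy budget quoted in the text.
# The three rounded figures come from the paragraph above.
dark_energy = 68.3      # per cent of the universe
dark_matter = 26.8      # per cent
ordinary_matter = 4.9   # per cent (all baryonic matter)

total = dark_energy + dark_matter + ordinary_matter
invisible = dark_energy + dark_matter

print(f"Total: {total:.1f}%")          # prints "Total: 100.0%"
print(f"Invisible: {invisible:.1f}%")  # prints "Invisible: 95.1%"
```

In other words, roughly nineteen parts in twenty of what exists is invisible to us, which is the point of Johnson’s rock being such a poor argument.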
J. B. S. Haldane nailed it, way back when:
I have no doubt that in reality the future will be vastly more surprising than anything I can imagine. Now my own suspicion is that the Universe is not only queerer than we suppose, but queerer than we can suppose.
-- Possible Worlds and Other Papers (1927)
You and I, all our friends and neighbors, our world, including Johnson’s rock, are a sliver of what exists, of the real world.
All this ties directly into my remarks in my last Comment about the great importance of a sense of wonder that fuels scientific discovery and, I would add, artistic creation. In an astonishing turn of events, our centuries-old wrangling about privileging the “real” over the “ideal” or the other way around turns out to have been a pretty meaningless skirmish, a tilting at (imaginary) windmills, because we had only the barest inkling of what constitutes “reality.”
I would return, then, to my point that it is the accomplishments of genius (considered in the broadest terms) which stand apart from this-or-that critique of our species’ historical / symbolic processes. Not that those are unimportant or lack interest; I think all of us spend our time engaging in those activities (unless someone has an immortal masterpiece on canvas or paper hidden away).
To end on a more or less practical or procedural note, let me ask where Mark would fit, say, Einstein’s theory of general relativity, Bobby Fischer’s Game of the Century, or even Lance Armstrong’s feats on the bicycle, into his intriguing commentary on communications technology and its hallucinogenic aura? Ditto for Keith and his theory of the human economy, with its Kantian pedigree? Please note that this is not a challenge or criticism, merely a request born, just perhaps, of wonder.
Couldn't master whatever technique is required to add the Samuel Johnson photo. Here's the link. (Sam's a hottie!)