Phoebe Sengers on Thu, 4 Feb 1999 03:35:37 +0100 (CET)


<nettime> AI as cultural phenomenon

Dear Nettimers,

My rather voluminous contribution to the internet gift economy can be
found at  For the last 8
years, I have done interdisciplinary work at CMU in cultural theory and
artificial intelligence.  My goal is to integrate cultural theoretical
and political considerations into the practice of agent technology
development, i.e. to be a computer scientist and a cultural theorist at
the same time.  The result of this strange hybrid is described in my
thesis, "Anti-Boxology: Agent Design in Cultural Context", which can be
downloaded from the page above in postscript format.  A small excerpt
follows.  Feel free to download the rest (but be forewarned that it is
quite long).

Phoebe Sengers
ZKM Karlsruhe

Methodology: Subjective Technologies

The approach taken in this thesis follows Varela, Thompson, and Rosch in
asserting that subjective experience, which goes to the heart of what it
means to humans to be alive in the world, should be an important
component of AI research.  I believe that one of the major limitations
of current AI research --- the generation of agents that are smart,
useful, profitable, but not convincingly alive --- stems from the
traditions AI inherits from science and engineering.  These traditions
tend to discount subjective experience as unreliable; the experience of
consciousness, in this tradition, is an illusion overlaying the actual,
purely mechanistic workings of our biological silicon.  It seems to me
no wonder that, if consciousness and the experience of being alive are
left out of the methods of AI, the agents we build based on these
methods come across as shallow, stimulus-response automatons.

In the reduction of subjective experience to mechanistic explanations,
AI is by no means alone.  AI is part of a broader set of Western
cultural traditions, such as positivist psychiatry and scientific
management, which tend to devalue deep, psychological, individual, and
subjective explanations in favor of broad, shallow, general, and
empirically verifiable models of the human.  I do not deny that these
theories have their use; but I fear that, if taken as the *only* model
for truth, they leave out important parts of human experience that
should not be neglected.  I take this as a moral stance, but you do not
need to accept this position to see and worry about the symptom of their
neglect in AI: the development of agents that are debilitatingly
handicapped by what could, reasonably accurately if metaphorically, be
termed autism.

This belief that science should be understood as one knowledge tradition
among others does not imply the rejection of science; it merely places
science in the context of other, potentially --- but not always actually
--- equally valid ways of knowing.  In fact, many if not most scientists
themselves understand that science cannot provide all the answers to
questions that are important to human beings.  This means that, as long
as AI attempts to remain purely scientific, it may be leaving out things
that are essential to being human.

In _Ways of Thinking: The Limits of Rational Thought and Artificial
Intelligence_, for example, cognitive scientist Mero, while affirming
his own scientific stance, comes to the disappointing conclusion that a
scientific AI will inevitably fall short of true intelligence.
	In his book _Mental Models_ Johnson-Laird says, `Of course there
	may be aspects of spirituality, morality, and imagination, that
	cannot be modeled in computer programs.  But these faculties will
	remain forever inexplicable.  Any scientific theory of the mind
	has to treat it as an automaton.'  By that attitude science may
	turn a deaf ear to learning about a lot of interesting and
	exciting things forever, but it cannot do otherwise: radically
	different reference systems cannot be mixed.  (228-229)

But while the integration of science and the humanities (or art or
theology) is by no means a straightforward affair, the work already
undertaken in this direction by researchers in AI and other
traditionally scientific disciplines suggests that Mero's pessimism need
not be warranted.  We *do* have hope of creating a kind of AI that can mix
these `radically different reference systems' to create something like a
`subjectivist' craft tradition for technology.  Such a practice can
address subjective experience while simultaneously respecting its
inheritances from scientific traditions.  I term these perhaps
heterogeneous ways of building technology that include and respect
subjective experience `subjective technologies.'  This thesis is one
example of a path to subjective technology, achieved through the
synthesis of AI and cultural studies, but it is by no means the only
possible one.

Because of the great differences between AI and cultural studies, it is
inevitable that a synthesis of them will include things unfamiliar to
each discipline, and leave out things that each discipline values.  In
my approach to this synthesis, I have tried to select what is to be
removed and what is to be retained by maintaining two basic principles,
one from AI and one from cultural studies: (1) faith in the basic value
of concrete technical implementation in complementing more philosophical
work, including the belief that the constraints of implementation can
reveal knowledge that is difficult to derive from abstract thought; (2)
respect for the complexity and richness of human and animal existence in
the world, which all of our limited, human ways of knowing, both
rational and nonrational, both technical and intuitive, cannot exhaust.

The Anti-Boxological Manifesto

The methodologies I use here inherit many aspects from the previous work
described above [in the 1st section of the chapter from which this
excerpt comes].  Following Winograd and Flores, I analyze the
constraints that AI imposes upon itself through its use of analytic
methodologies.  Following Suchman, I uncover metaphors that inform
current technology, and search for new metaphors that can fundamentally
alter that technology.  Following Chapman, I provide not just a
particular technology of AI but a way of thinking about how AI can be
done.  Following Agre, I pursue technical and philosophical arguments as
two sides of a single coin, finding that each side can inform and
improve the other.
The additions I make to these approaches are based on a broad analysis
of attempts to limit or circumscribe human experience.  I believe that
the major way in which AI and similar sciences unintentionally drain the
human life out of their objects of study is through what agent
researchers Petta and Trappl satirize as `boxology': the desire to
understand phenomena in the world as tidy black boxes with limited
interaction.  In order to maintain the comfortable illusion that these
black boxes sum up all that is important in experience, boxologists are
forced to ignore or devalue whatever does not fall into the neat
categories that are set up in their system.  The result is a view of life
that is attractively simple, but with glaring gaps, particularly in
places where individual human experience contradicts the established
wisdom the categories represent. 

The major contribution this thesis tries to make to this tradition of
humanistic AI is the development of an approach to AI that is, at all
levels, fundamentally anti-boxological.  At each level, this
is done through a contextualizing approach:
- At the disciplinary level, rather than observing a strict division of
technical work and culture, I synthesize engineering approaches with
cultural insights.
- At the methodological level, rather than designing an agent as an
independent, autonomous being, I place it in the sociocultural context
of its creators and the people who interact with it.
- At the technical level, rather than dividing agents up into more or
less independent parts, I explicitly place the parts of the agent in
relation to each other through the use of mediating transitions. 

At all levels, my approach is based on this heuristic: ``that there is
no such thing as relatively independent spheres or circuits'' (Deleuze
and Guattari, _Anti-Oedipus_, 4).  My approach may feel unusual to
workers because it is heavily metaphorical; I find metaphorical
connections immensely helpful in casting unexpected light on technical
problems.  I therefore include in the mix anything that is helpful,
integrating deep technical knowledge with metaphorical analysis, the
reading of machines, hermeneutics, theory of narrative, philosophy of
science, psychology, animation, medicine, critiques of
industrialization, and, in the happy phrasing of Hayes and friends,
``God knows what else.'' The goal is not to observe disciplinary
boundaries --- or to transgress them for the sake of it --- but to bring
together multiple perspectives that are pertinent to answering the
question, ``What are the limitations in the way AI currently understands
human experience, and how can those limitations be addressed in new
technologies?''
#  distributed via nettime-l : no commercial use without permission
#  <nettime> is a closed moderated mailinglist for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: and "info nettime-l" in the msg body
#  URL:  contact: