Joseph Rabie on Sun, 21 Jan 2018 15:21:02 +0100 (CET)



<nettime> They know not what they do


Dear all,

What has for a long time been labeled “artificial intelligence” has now become a mainstream catchword, with the popularization of (among other things) self-driving cars, facial recognition and computers capable of churning out Rembrandts. Terms like “deep learning” and “neural networks” are the vanguard of the coming so-called “singularity”, when computers will allegedly become more intelligent than we humans are. A competing artificial species, equipped with an artificial brain analogous to our own but far more efficient and capable, and likely more deadly (given that it was created in our image). That, according to Stephen Hawking and Elon Musk, is when the shit really hits the fan.

The question which troubles me is whether AI is indeed a transformative concept as claimed, or whether it is vastly overrated, a misnomer or a sham. Undeniably, computers can perform tasks conceived by humans with far greater ability than we ourselves can, which is why we made them. But is it appropriate to speak of intelligence as such, even if the task performed resembles processes of human reasoning generally associated with intelligence? Perhaps I am simply ignorant of the progress being made - or is there a sort of misunderstanding because artificial intelligence is espoused by positivist computer scientists, for whom the notion of intelligence itself is defined by processes of rational thought or (to be more abrupt) problem solving of a purely technical order? This hypothesis does appear to be confirmed by the position expressed by Alan Turing in the radio interview cited by Morlock.

Implicit in the creation of something that we choose to label “artificial” are the particular terms of its asymmetry with the natural artefact it intends to replicate. In the case of artificial flowers, this is purely a question of appearance and convenience (no watering, they last forever, more or less, and must just be dusted off periodically) - though they are generally frowned upon, as their patent aesthetic artificiality is perceived as crass. Artificial grass for a playing field must look natural on TV and conform to specifications of ruggedness capable of enduring football boots. An artificial heart must be able to pump blood in a manner identical to the real thing and fool the body into believing that it is a natural part of it.

As for artificial intelligence, its intent is to replicate human reasoning by reacting to exterior stimuli, making choices and setting things into operation according to the clearly though narrowly defined objectives of the task at hand (drive a car from A to B, for instance). Human reasoning is inseparable from thought, however, and this is where things get complicated: when a computer does such things, might one speak of machine thinking? One would say not: however sophisticated, the computer does not itself transcend the algorithms it is programmed to execute, and this includes those self-development algorithms that it may be programmed to generate of its “own” volition. So my tentative position would be that the term artificial intelligence is inappropriate for what is after all a thoughtless technical process. Rather call it simulated intelligence.

Computers “write” texts and “paint” pictures: proponents of AI claim that art is no longer a uniquely human preserve. When this happens, one might be tempted to presume that the machine has somehow developed an aesthetic sense, and this is truly discomforting. In reality, a Rembrandt-painting computer is no more than an algorithm, devised by talented programmers who have enabled it to “teach itself” the rules allowing it to mimic the painter. This is not art, but the empirical science of perception being modelled and applied at a high level.

One has the impression that inherent to the question of artificial intelligence, insofar as those who champion the so-called singularity are concerned, is the question of artificial life. While it seems inconceivable to me that the bright people behind AI might in any remote way confuse the two, this appears, at least to a certain degree, to be so. This confusion is implicit in the way they fantasize how some vastly self-developed machine intelligence might one day supplant us. Perhaps this is all the fault of Descartes equating thinking and being, lumping together two phenomena which are, after all, incommensurably separate. Modernity is characterised by the supremacy of reason over all other human faculties, and the domination of technical science minimises all else, art notably. Since reason is seen as being life's most evolved attribute, it stands in for, or even operates as, life's metonymic placeholder.

How does one distinguish between computers and men? The question is as silly as it is deadly pertinent. A computer “thinks” through the execution of humanly derived algorithms manipulating data that make sense to humans, its “reasoning” being no more than a flux of electronic calculations. Human thinking (indeed that of all living creatures) manipulates meanings. Reason exists as the support for significations, and by extension computer reasoning is the blind algorithmic crunching of human meanings. Utilitarian meanings, but also spiritual meanings, insofar as it might be programmed to lever our desires, intentions and quest for transcendence. And also lever our fears and natural aggression.

It appears to me that a fundamental attribute of intelligence is that any entity which exercises it exists in a state of knowingness, knowing what it does, knowing who it is. As conscious beings, we are self-aware, we have will, we have discernment. Intelligence is the stuff of existence, the active ingredient of our state of livingness as sentient, knowing beings. This appears not to be so with computers. Even if we wanted to incorporate life into computers, we would have no idea how to go about it, as the nature of life itself eludes science. Maybe computers have developed a spontaneous existential state, but this seems unlikely and there is no evidence of it.

In conclusion, I think that it is misleading to found the essence of the human condition on the power to reason, as proponents of AI are wont to have us believe. In this respect, the notion of neural networks is particularly potent, since more than a technique “copying” the biological functioning of the brain, it serves as an active metaphor making us believe in the mind-likeness of the computer - making it somehow lifelike. The idea of existence, of self-awareness, of life itself escapes science. If there has been progress, it has been to extract these notions from the religious or the metaphysical, and resituate them in the phenomenological domain. There is apparently a science of consciousness speculating about these questions. And this is what I wish to place before the collective intelligence that is Nettime.

Joseph Rabie.



#  distributed via <nettime>: no commercial use without permission
#  <nettime>  is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: http://mx.kein.org/mailman/listinfo/nettime-l
#  archive: http://www.nettime.org contact: nettime@kein.org
#  @nettime_bot tweets mail w/ sender unless #ANON is in Subject: