Ted Byfield via nettime-l on Mon, 2 Oct 2023 16:49:34 +0200 (CEST)


Re: <nettime> FWD: The Copy Far "AI" license (fwd)


On 2 Oct 2023, at 5:56, Felix Stalder via nettime-l wrote:

> As Joanna Bryson recently put it: "An AI system independent of humans is the ultimate shell company, purely an available hiding place for corruption by the human agencies that set it in place."
>
> https://joanna-bryson.blogspot.com/2023/09/a-very-short-primer-on-ai-ip-including.html?m=1
>
> One might put it even stronger: To claim that an AI system is independent of humans is to create the ultimate shell company. And we definitely do not need more rights for shell companies.

Maciej Cegłowski came close to saying as much in 2016: "Machine learning is like money laundering for bias."

	https://idlewords.com/talks/sase_panel.htm

If you haven't read his essays, I really recommend them.

Something's been bothering me more and more over the last few months, and I finally made a smidgen of progress on it a week or so ago. The idea of dialectics has played an immense role in left-leaning thought over the last few centuries, mainly in the form of dialectical materialism. AFAIK very little attention has been paid to thinking concretely about a 'science of' how quickly or slowly dialectics unfold. Basically, it — or at least the attention paid to it — has been relegated more to the ponderous stasis of metaphysics than to the dynamic field of (for lack of a better word) physics.

"The arc of the moral universe may bend toward justice," as MLK said, but he was a religious thinker, so its trajectory as he saw it had a dark star whose gravity bent it in that direction — God. Pascal won't help us here, though. The problem we face now is an increasingly aggressive and globally coordinated movement on the right dedicated deliberately and pragmatically to bending that arc in any, maybe *every* other direction (the phrase "everything everywhere all at once" comes to mind).

It often feels these days as if the right — which could be understood as anything from a disaggregated group of individuals to an all-inclusive zeitgeist that incorporates 'technology' — is quickly mastering that ~science, if only on a heuristic basis.

In theory, speculating about whether AIs could be "free" or should have "rights" seems fine, I guess; but where and when one asks that kind of question matters. On nettime, whatever, debate ends up in an archive that, as Felix noted, is overflowing with fascinating discussions; but in a case before the US Supreme Court in the coming months, the consequences could be very different, and not in good ways.

Debating whether here and now is the right time and place to ask some thorny question or propose some provocative idea is rarely useful, since it tends to end up in spats over tactics, recriminations about allegiances, dick-swinging erudition, etc, etc. Those debates may mobilize philosophies to justify this or that position within the debate. But the larger, more properly philosophical question is (again, and not a coincidence) whether there's a sort of 'science of' understanding when one should or shouldn't ask or suggest something. In practice, though, suggesting *this isn't the time or place* to ask or say something invariably generates noisy conflicts whose register is more social than philosophical. A cynic might say that while the right is mastering the science of *material* dialectics, the left is developing something very different, the science (again heuristic) of dysfunction.

So, having said that, I'd say: now *really* isn't the time to float speculations about whether "AI," specific or general, should be free or have rights. Should nukes have "rights"? Should genetically engineered infectious bioweapons be "free"? Those shibboleths may sound philosophical, but in practical terms the questions themselves have more in common with Second Amendment fanaticism — "gun rights" — in the US. That may seem like an analogy now, but at a time when Manuel DeLanda's memorable phrase "war in the age of intelligent machines" is becoming a concrete reality, that relation is morphing into a *genealogy*.

I also think these questions are, to be blunt, wrong-headed. Investing an *environment* with 'rights' or 'freedom' — that is, a system-setting understood as a sort of commons incorporating the human and the nonhuman — is profoundly different from investing every supposedly discrete element within that environment with its own rights and freedoms. Indeed, if it hasn't become clear yet, it should be: the entire proposition of the (or an) "environment" is precisely antithetical to that kind of discreteness. And that, in my view, is one of the central problems of our times: technocratic civilization operates in large part through making ever more finely grained distinctions, whereas "environmentalism" — which I mean in the woolliest way — challenges that by arguing that many of those distinctions are illusory. Saying they're *all* illusory would be silly; so the question (AGAIN, and AGAIN not a coincidence) would involve a sort of ~science of distinguishing which distinctions are valid and which are tendentious.

Cheers,
Ted
-- 
# distributed via <nettime>: no commercial use without permission
# <nettime> is a moderated mailing list for net criticism,
# collaborative text filtering and cultural politics of the nets
# more info: https://www.nettime.org
# contact: nettime-l-owner@lists.nettime.org