Felix Stalder on Thu, 22 Dec 2022 18:01:25 +0100 (CET)



Re: <nettime> Spamming the Data Space – CLIP, GPT and synthetic data


I couldn't agree more. There is no such thing as authentic culture, particularly not in a world where desires have been manufactured by consumer capitalism for generations.

This reminds me of a work by the Mediengruppe Bitnik, State of Reference (2017)

https://wwwwwwwwwwwwwwwwwwwwww.bitnik.org/sor

It's a simple work, producing a chain of images in which the last image serves as the input for a search for a visually similar image, whose result then becomes the input for the next search, and so on.

It's a way of navigating through the densest nodes of Google's knowledge about the visual world. And it's thoroughly depressing: it starts from Man Ray's The Poet (1938) only to jump immediately to stock images of people and products, celebrities, beauty clinics, real estate, some geometric figures and a few uplifting quotes. After 1321 iterations it arrives at the image of a Nespresso machine.
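
Schematically, it is just a small loop. A minimal sketch in Python, where find_visually_similar() is a hypothetical stand-in for whatever reverse-image-search backend Bitnik actually query (there is no official API for this):

def find_visually_similar(image_path):
    # hypothetical stand-in: submit the image to a reverse image search
    # and return the top "visually similar" hit
    raise NotImplementedError("plug in a reverse-image-search backend")

def image_chain(seed_image, steps):
    # each result becomes the query for the next search
    chain = [seed_image]
    for _ in range(steps):
        chain.append(find_visually_similar(chain[-1]))
    return chain

# e.g. image_chain("man_ray_the_poet_1938.jpg", 1321)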

None of these images is AI-generated, but commercial pollution had already turned the image pool on which Google trained its image recognition software toxic.

I'm not suggesting that it's all just the same old story, or that things cannot get worse, but rather that however bad we think the current situation is, nostalgia is a bad form of critique.


all the best. Felix




On 21.12.22 01:26, Luke Munn wrote:
Interesting essay, Francis, and I always appreciate Brian's thoughtful comments. I think the historical angle Brian is pointing towards is important as a way to push back against claims that AI models are somehow entirely new or revolutionary.

In particular, I want to push back against this idea that this is the last 'pure' cultural snapshot available to AI models, that future harvesting will be 'tainted' by automated content.

Francis' examples of hip hop and dnb culture, with sampling at their heart, already start to point to the problems with this statement. Culture has always been a project of cutting and splicing, appropriating, transforming, and remaking existing material. It's funny that AI commentators like Gary Marcus talk about GPT-3 as the 'king of pastiche'. Pastiche is what culture does. Indeed, we have whole genres (the romance novel, the murder mystery, etc.) that are about reproducing certain elements in slightly different permutations, over and over again.

This is not a recent or purely digital phenomenon. I remember going to a show at the Neue Nationalgalerie, where oil paintings repeatedly reproduced the identical bird in different positions. "A variety of painting styles suggests the involvement of a number of assistants and several motifs can be repeatedly found in an unaltered form in many of his paintings. D’Hondecoeter’s oeuvre consequently appears as a conglomeration of decorative collages, produced in an almost mechanical seriality on the basis of successful formulas." Copy, paste, repeat.

Unspoken in this claim of machines 'tainting' or 'corrupting' culture is the idea of authenticity. It really reminds me of the moral panic surrounding algorithmic news and platform-driven disinformation, where pundits lamented the shift from truth to 'post-truth.' This is not to suggest that misinformation is not an issue, nor that veracity doesn't matter (e.g. the Rohingya and Facebook). But the premise of some halcyon age of truth prior to the digital needs to get wrecked. Yes, large language models and other AI technologies do introduce new conditions, generating truth claims rapidly and at scale. But rather than hand-wringing about 'fake news,' it's more productive to see how they splice together several truth theories (coherence, consensus, social construction, etc.) into new formations. I'm currently writing a paper precisely on this issue with a couple of colleagues.

nga mihi / best,
Luke


On Tue, 20 Dec 2022 at 22:20, Francis Hunger <francis.hunger@irmielin.org> wrote:

    Hi Brian,
    On Mon, Dec 19, 2022 at 3:55 AM Francis Hunger
    <francis.hunger@irmielin.org> wrote:

        While some may argue that generated text and images will save
        time and money for businesses, a data ecological view
        immediately recognizes a major problem: AI feeds into AI. To
        rephrase it: statistical computing feeds into statistical
        computing. In using these models and publishing the results
        online we are beginning to create a loop of prompts and
        results, with the results being fed into the next iteration of
        the cultural snapshots. That’s why I call the early cultural
        snapshots still uncontaminated, and I expect the next
        iterations of cultural snapshots will be contaminated.


    Francis, thanks for your work, it's always totally interesting.

    Your argumentation is impeccable and one can easily see how
    positive feedback loops will form around elements of AI-generated
    (or perhaps "recombined") images. I agree, this will become
    untenable, though I'd be interested in your ideas as to why. What
    kind of effects do you foresee, both on the level of the images
    themselves and their reception?

    Foresight is a difficult field, as most estimates can extrapolate at
    most seven years into the future, and there are a lot of independent
    factors (OpenAI, the producer of CLIP, could go bankrupt, for
    instance).

    It's worth considering that similar loops have been in place for
    decades, in the area of market research, product design and
    advertising. Now, all of neoclassical economics is based on the
    concept of "consumer preferences," and discovering what consumers
    prefer is the official justification for market research; but it's
    clear that advertising has attempted, and in many cases succeeded,
    in shaping those preferences over generations. The preferences
    that people express today are, at least in part, artifacts of past
    advertising campaigns. Product design in the present reflects the
    influence of earlier products and associated advertising.

    That's a great and interesting argument, because it plays into the
    cultural snapshot idea.

    Language-wise, people already use translation tools such as Deepl to
    translate text from German to English and back to German in order to
    profit from the "clarity" and "orthographic correction" brought by
    the statistical analysis that feeds into the translator and seems to
    straighten out the German text. We see the same features appearing
    in products like text editors, and thus widely employed in cultural
    production. That's one example. Automated forum posts using GPT-3,
    for instance on Reddit, are another, because we know that the CLIP
    model is also partly built on Reddit posts.
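
    The round trip in the first example is trivial to script. A sketch
    using the deepl Python package (assuming its current
    Translator.translate_text interface and a placeholder API key;
    purely illustrative):

    import deepl

    translator = deepl.Translator("YOUR_AUTH_KEY")  # placeholder key

    def straighten(german_text: str) -> str:
        # German -> English -> German: the statistical model's
        # preferences get imprinted on the "corrected" German text
        english = translator.translate_text(german_text, target_lang="EN-US")
        back = translator.translate_text(english.text, target_lang="DE")
        return back.text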

    Another example is images generated using diffusion models and
    prompts, building on cultural snapshots and being used as _cheap_
    illustrations for editorial products, feeding off stock photography
    and to a certain extent replacing it. This is more or less an
    economic motivation with cultural consequences. The question is what
    changes when there is no longer sufficient 'original' stock
    photography circulating and the majority is synthetically generated.
    Maybe others want to join in and speculate about it.

    We could further look into 1980s HipHop or 1990s Drum'n'Bass sample
    culture, which for instance took (and some argue: stole) one
    particular drum break, the Amen Break, from an obscure 1969 soul
    record by The Winstons and built a whole cultural genre from it.
    Cf. https://en.wikipedia.org/wiki/Amen_break Here the sample was
    refined over time, with generations of musicians cleaning it up
    (compression, frequencies, de-reverbing, etc.) and providing many
    variations of it, then reusing those, because later generations did
    not build on the original sample, but on the published versions of
    it.

    We can maybe distinguish two modi operandi, where a) "the cultural
    snapshot" is understood as an automated feedback loop, operating on
    a large scale, mainly through automated scraping and publication of
    the derivatives of data, amplifying the already most visible
    representations of culture, and b) "the cultural snapshot" is a
    feedback loop with many creative human interventions, be it through
    curatorial selection, prompt engineering or intentional data
    manipulation.
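
    To make modus a) concrete, here is a toy simulation in Python
    (illustrative only, with made-up numbers and invented names, not
    anyone's actual pipeline): every scrape mixes the published
    derivatives back into the pool, so the synthetic share of each new
    snapshot grows.

    import random

    def next_snapshot(corpus, n_synthetic=500, n_new_human=100):
        # derivatives recombine items already in the pool and get
        # published, so the next scrape picks them up alongside a
        # trickle of newly made human material
        synthetic = [("synthetic", random.choice(corpus)[1])
                     for _ in range(n_synthetic)]
        new_human = [("human", f"new-{random.random():.6f}")
                     for _ in range(n_new_human)]
        return corpus + synthetic + new_human

    corpus = [("human", f"doc-{i}") for i in range(1000)]  # snapshot 0
    for generation in range(1, 6):
        corpus = next_snapshot(corpus)
        share = sum(1 for kind, _ in corpus if kind == "synthetic") / len(corpus)
        print(f"snapshot {generation}: {share:.0%} synthetic")

    With these made-up numbers the derivative share passes fifty percent
    within a few snapshots; the point is only the direction of the trend,
    not the rate.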

    Blade Runner vividly demonstrated this cultural condition in the
    early 1980s, through the figure of the replicants with their
    implanted memories.

    I don't know if I get your point. I'd always say that Blade Runner
    is a cultural imaginary, one of the many phantasms about the
    machinisation of humans since at least 1900, if not earlier, and
    that's an entirely different discussion. I would avoid it as a
    metaphor.

    The intensely targeted production of postmodern culture ensued,
    and has been carried on since then with the increasingly granular
    market research of surveillance capitalism, where the calculation
    of statistically probable behavior becomes a good deal more
    precise. The effect across the neoliberal period has been, not
    increasing standardization or authoritarian control, but instead,
    the rationalized proliferation of customizable products, whose
    patterns of use and modification, however divergent or "deviant"
    they may be, are then fed back into the design process. Not only
    does the "quality of the image" seem to degrade in this process;
    culture in general seems to degrade, even though it also becomes
    more inclusive and more diverse at the same time.

    When looking for a plausible scenario regarding synthetic text and
    synthetic images, Steve Bannon's “The real opposition is the media.
    And the way to deal with them is to flood the zone with shit.” is
    sadly a good candidate. This ties in with what Ganaele Langlois posits:

        „Therefore: communicative fascism posits that what is real is the
        opposite of social justice, and we now see the armies of ‚Social
        Injustice Warriors‘ as Sarah Sharma (2019) calls them, busy
        typing away at their keyboards to defend the rights to keep
        their fear of Others unchallenged and to protect their bigotry,
        misogyny, and racism from being debunked as inept constructions
        of themselves“ Langlois 2021:3

        „The first aspect of this new communicative fascism is related
        to what can be called ‚real fakes‘, that is to say, the
        construction of a fictional and alternative reality where the
        paranoid position of fear and rage can find some validation …
        Real fakes are about what reality ought to be: they are virtual
        backgrounds on which fascists can find their validity and
        raison d'être.“ Langlois 2021:3f

    So this is to be expected for both political and consumer marketing
    purposes.

    AI is poised to do a lot of things - but one of them is to further
    accelerate the continual remaking of generational preferences for
    the needs of capitalist marketing. Do you think that's right,
    Francis?

    That's one possible reading. I would insist, however, on not using
    an active verb with AI, rephrasing your point as "AI may be used
    for a lot of things". Better still, replace 'AI' with the term
    'statistical computation'.

    Currently I would read 'AI' as a mixture of imaginations and
    phantasms about automation, some of which may become true – just in
    a different way from what was expected or promoted. For certain, the
    inner logic of capital circulation demands deploying statistical
    computation to replace living, human labor. We already see how the
    job description of translators is changing towards a
    human–statistical-computation entanglement, how the repetitive
    parts of the illustrator's job, like coloring, get automated away,
    putting people out of jobs, and it is plausible to expect the
    consolidation of jobs like photo editor, news editor and author
    around prompt engineering. Since we are concentrating on the
    cultural sphere here, I'll limit the examples to this field. Human
    labor in production, logistics and care would need its own
    discussion.

    What other consequences do you see? And above all, what to do in
    the face of a seemingly inevitable trend?

    We are going to create separate data ecologies, which prohibit
    spamming the data space. These would be spaces with a no-synthetics
    policy, comparable to the no-photo policy in clubs like Berghain or
    IFZ. While vast areas of the information space may indeed be
    flooded, these would remain valuable zones of cultural exchange.
    (The answer would be much longer indeed, but we're not writing a
    book here.)



    best, Brian

    --
    Researcher at Training The Archive, HMKV Dortmund
    Artistic Practice: http://www.irmielin.org
    Ph.D. at Bauhaus University Weimar: http://databasecultures.irmielin.org
    Daily Tweets: https://twitter.com/databaseculture

    Peter and Irene Ludwig guest professorship at the Hungarian University of Fine Arts in Budapest 2022/23


--
| |||||||||||||||| http://felix.openflows.com |
| for secure communication, please use signal |
#  distributed via <nettime>: no commercial use without permission
#  <nettime>  is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: http://mx.kein.org/mailman/listinfo/nettime-l
#  archive: http://www.nettime.org contact: nettime@kein.org
#  @nettime_bot tweets mail w/ sender unless #ANON is in Subject: