Luke Munn on Tue, 21 Jul 2020 09:51:41 +0200 (CEST)


<nettime> "Divisive Data" CfP


Hey nettimers,

Some on the list might be interested in contributing to this issue I'll be
proposing to Big Data & Society.

Description below, or as a PDF here if preferred
(https://pdfhost.io/v/NrRIuypHM_Divisive_Datapdf.pdf).

Flick me a title and abstract by August 17 if you're keen.

best,
Luke


Divisive Data

[ Context: Big Data & Society has issued a call for Special Theme
Proposals, due September 15, 2020. I will be proposing “Divisive Data” as a
special theme. This initial call is to gather a list of interested
contributors, along with titles and abstracts of articles, which can be
submitted to BD&S. ]

The promise of the internet was a promise of connection. Networked
technologies would erase the physical and cultural space that separated us.
Digital communications would unite us like never before. Online platforms
would “bring the world closer together” (Zuckerberg 2017). Communication
technologies would collapse boundaries, encourage dialogue, and facilitate
mutual understanding.

Yet today data has become divisive. Data is used to separate certain
groups, to sharpen their differences, and to weaponize their harassment. On
social media, personalized data forms filter bubbles (Pariser 2012; Geschke
et al. 2018), confirming our views while condemning those we disagree with.
Rather than fostering consensus or public discourse, data-driven
algorithms fragment society into niche groups and atomized individuals.
When these publics do interact, it is often in highly antagonistic ways.
Predicated on the metrics of “engagement”, platforms incentivize content
that is emotive and controversial (Munn 2020 forthcoming). On the web,
outrage and lies win, spreading faster and further than other content
(Vosoughi et al. 2018). These polarizing posts trigger anger in users,
driving views, shares, and comments. Communication platforms remove
barriers to expressing this anger, allowing users to lash out to a large
audience through a few mouse clicks (Crockett 2017).

Divisive data can again be witnessed in the recent rise of the radical
right. In the last ten years, the far-right has reinvented itself,
recasting racist, sexist, and xenophobic ideologies into novel forms.
Information technologies have been key to this reinvention, enabling forms
of digital hate to be carefully calibrated and widely distributed. The
sociotechnical affordances of spaces like 4chan or Discord allow manifestos
to spread and memes to be reworked (Wagner and Schwarzenegger 2020; Schmitt
et al. 2020). The thousands of posts swirling in these spaces are an
ideologically influential form of “big data”, but one that challenges
typical associations with Big Tech (e.g. Google, Amazon, Apple) or Big
Government (e.g. the NSA, 5 Eyes, Palantir). On mainstream platforms like
YouTube, data-driven recommendations have come under fire. Scholars,
journalists, and ex-radicals have noted how users are gradually recommended
more extremist, divisive content (Naughton 2018; Nicas 2018; Tufekci 2018).
Personalized data forms a pathway for radicalisation (Ribeiro et al. 2019),
or a pipeline for the alt-right (Munn 2019). These technical affordances
piggyback on the strong social ecosystems of the reactionary right (Lewis
2018).

These dynamics bring into focus the stakes of data broadly understood. As
our everyday life becomes increasingly mediated through digital
technologies, data forms a powerful and pervasive environment that shapes
individuals on an ideological and psychological level. These environments
enable communities to target the racial or sexual “other”, to amplify hate
against them, and to direct this hate into forms of verbal and physical
aggression. This is not an abstract issue, but a painfully present one.
Indeed, violent attacks such as synagogue shootings (Pittsburgh and
Halle), pipe bombs (the MAGA bomber), and a mosque shooting (Christchurch)
have demonstrated what could be understood as the natural “endpoint” of
these data-amplified processes. Hate-filled data contributes toward
hateful individuals.

How do data-driven processes and environments contribute to the recent rise
of hate? How are racist, sexist, and xenophobic ideologies reworked and
amplified by the unique affordances of digital technologies? And how might
individuals and organisations critique and effectively counteract these
growing threats? These are the key questions this issue centers on. The
issue will aim to present a diverse mix of articles that roughly correspond
to the following themes:

Spreading Hate

- The role of online platforms, social media, and other technical
  environments in fostering group-based hate, with a focus on data
  features, structures, and processes

- Data-driven (but theoretically aware) analyses of newer radical right
  spaces (e.g. Gab, Voat, BitChute) or appropriated spaces (e.g. Twitch,
  Discord)

- Contemporary examples of data-driven bubbles and their social fallout

- Tracing the data-driven circulation of a particular meme or ideology


Theorizing Hate

- Broader theorisations of how data architectures and affordances
  amplify hate

- How data’s ability to “make a difference” (Bateson 1972) amplifies
  homophily (Chun 2018a; 2018b) and fosters division and discord

- Situating today’s divisive data in the “data” (broadly understood) of
  the past


Countering Hate

- Examples of communities adapting existing functionality to foster
  more inclusive spaces

- Redesigning data environments/architectures/logics to counteract hate
  and extremism

The special theme will feature a maximum of 6 original research articles
(max 10,000 words) and a maximum of 4 commentaries (max 3,000 words). The
commentaries would be ideal places to point to emergent dynamics in this
space, to introduce new research concepts, or to stage a broader
intervention that draws together diverse themes.

To register your interest, email Dr. Luke Munn
(l.munn@westernsydney.edu.au) with an article title, a short abstract
(<250 words), and an indication of whether this would be an original
research article or a commentary. The deadline for submissions is
August 17.



#  distributed via <nettime>: no commercial use without permission
#  <nettime>  is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: http://mx.kein.org/mailman/listinfo/nettime-l
#  archive: http://www.nettime.org contact: nettime@kein.org
#  @nettime_bot tweets mail w/ sender unless #ANON is in Subject: