Recently I was invited to contribute to the debate on emerging forms of surveillance society:
Surveillance capitalism technologies are “polyvalent” and can be used for different purposes: they can facilitate an intensification of (state) surveillance, or they can protect privacy and anonymity (for example, facial recognition technology is a surveillance technology, but it can also be used to protect iPhone owners, as the New York Times reported recently in the case of the Hong Kong pro-democracy protests).
On my way home, I often pass a café that displays an anarchist slogan in its shop window: “Even more dangerous than the virus is blind obedience”. There is much about this saying that is correct and important. Much has been written and researched about “blind obedience” and its dangers. “I was only doing my duty” – many “obedient” perpetrators have used this formula of justification in an attempt to evade responsibility or to excuse their own moral failure. But “blind disobedience” is just as dangerous as “blind obedience”. This becomes very clear when one thinks of the various so-called “Querdenker” who today protest and defend themselves against the “restrictions” and “coercive measures” of the government in the context of managing the Corona crisis. “Blind disobedience” is to be feared at least as much as “blind obedience”.
So perhaps the core of the problem is not the distinction between obedience and disobedience, but rather the blindness associated with both. Blindness – as a metaphor for an unreflective reaction to some impulse – is the problem.
Algorithmic profiling is a technology and practice that is increasingly used to make decisions, sometimes even without human intervention. Profiles can be traced back to their use in police work and behaviorist psychology of the early 20th century. Thus, long before the emergence of Big Data, profiles were used as a knowledge tool in a wide range of human sciences. Today, profiles and profiling are used in multiple contexts: customer profiling, profiling for employment screening, credit scoring, criminal investigations, immigration policy, healthcare management, forensic biometrics, etc.
Together with Bernadette Loacker (Lancaster University) and Randi Heinrichs (Lüneburg), I co-edited an ephemera special issue (PDF) on truth-telling and whistleblowing in digital cultures. The issue opens a space for discussing the specific ‘conditions of possibility’ of truth-telling and the multiple technologies that mediate it in contemporary digital cultures.
The notion of the ethico-politics of whistleblowing is introduced to address the irreducible entanglement of questions of ethics, politics and truth in the practice of ‘speaking out’. The special issue brings together a set of papers, acknowledging that forms and mediations of truth-telling are complex and contested. The contributions discuss questions such as: Who is, in digital cultures, considered to be qualified to speak out, and about what? Under which conditions, and with what consequences can ‘the truth’ be told? How do digital infrastructures regulate the truth, and the process of making it heard? How is the figure of the whistleblower constructed, and how do whistleblowers constitute themselves as political and ethical subjects, willing to take risks and pose a challenge, to others and themselves?
On 26 October 2015, BBC News published an article entitled China ‘social credit’: Beijing sets up huge system. It describes how the Chinese government is building an ‘omnipotent “social credit” system that is meant to rate each citizen’s trustworthiness’. Warnings about the advent of ‘digital dictatorship’ and phrases like ‘Big Data meets Big Brother’ have proliferated in research and Western public media ever since, and they reflect a rapidly growing focus on the contemporary global process whereby power and control become entwined with digitalization and result in new and often concerning forms of transparency.
‘Profiles’ are important technologies of organizing that are used in a multiplicity of contexts: customer profiling, profiling for employment screening, credit scoring, criminal investigations, immigration policy, healthcare management, forensic biometrics, etc. Profiles organize perception and seeing, and they are important media in (algorithmic) decision-making. They are ‘used to make decisions, sometimes even without human intervention’ (Hildebrandt, 2008: 18). All profiles are abstractions. In the process of profiling, images of the person are created for the purpose of diagnosis or prediction, and ‘complex personhood’ (Gordon, 1997) is reduced to a finite number of traits, indicators, etc. The models or figures thus created may be fictions, but these fictions are operationally effective, as they shape and intervene in the world. In the paper, profiles are theorized as ‘ghosts’ that are produced in a ‘spectrogenic process’ (Derrida, 1994). The spectrogenic process describes the process of abstraction in which (a) thoughts, ideas, data, etc. are ‘torn loose’ from the ‘living body’ and integrated into a more abstract or ‘artifactual body’, and (b) the abstraction (ghost) returns to the world of real-life events in the process of ‘application’, where it ‘haunts’ those with whom profiles are associated. Continue reading “SCOS Talk on ‘Secret organizers: The “spectrogenic” process of profiling and the effects of “ghostly demarcations”’” →
This text is the German version of a review that has appeared, in abridged form and in English, in the journal Organization. By Richard Weiskopf.
Do you remember the scandal surrounding the use and misuse of Facebook profiles by the firm Cambridge Analytica? Or how, especially around 2016, hordes of people, most of them young, roamed about – enthusiastic and as if remote-controlled – hunting “Pokémon”? Were you surprised to hear that the (Austrian) postal service does a brisk trade in its customers’ data? Have you ever been astonished at how precisely Amazon knows your wishes? Have you been tempted to let the smart fridge decide on the shopping list, or the smartphone on the ideal way home? Do such phenomena sometimes strike you as uncanny, or even threatening?
Shoshana Zuboff, professor emerita of Business Administration at Harvard, pursues this experience in her new book and asks about the forces that produce it. The book is a continuation of In the Age of the Smart Machine, her seminal work of 1988, in which she studied the changes emerging in the world of work through automation and informatization. Even then, she was among the first authors to recognize the panoptic potential of modern information technology. More than three decades have passed since. Not only has the internet been invented (and increasingly developed as a field of business); the possibilities and potentials of capturing, storing, and processing data have also expanded exponentially. “Big Data” promises nothing less than a “revolution” that fundamentally transforms “the way we live, work, and think” (Mayer-Schönberger & Cukier, 2013). While optimistic scenarios emphasize the great possibilities of “data-rich markets” (Mayer-Schönberger & Ramge, 2018), critics warn of the dangers that accompany “datafication”. With the new digital possibilities, surveillance has become manifold and “liquid” (Bauman & Lyon, 2013); it penetrates all areas of everyday life and shapes the contemporary “surveillance culture(s)” (Lyon, 2018; Harding, 2018).
Although the book is almost encyclopaedic, its main aim is not simply to represent the freedom of speech debate and its various positions, but to intervene in the world and to contribute to its transformation. It “invites a conversation about free speech” (p. 142), develops, promotes and defends a liberal position, and proposes principles of how to organize our relations in the “mixed up, connected world as city” (p. 19), that Ash calls “cosmopolis”. In this “crowded world”, he argues, “we must learn to navigate by speech” (p. 4). For students of organization, the book offers many points of entry for reflecting on the current conditions of organizing and its relation to free speech. How do organizations influence, control, and limit what we can say? What are the forms of power that shape and organize what we can say and see in and through the internet? What are the transformative potentials of organizing created by the affordances of the internet? (How) can “free speech” be organized, given that any form of organizing implies closure and exclusions? What are the organizing principles for a self-governing community, in which the principles of “free speech” can be actualized?
The internet has fundamentally transformed life in general and the conditions of communication in particular. It not only offers new possibilities of “free speech”, but also reveals its fundamental ambivalence. “[N]ever in human history was there such a chance for freedom of expression as this. And never have the evils of unlimited free expression – death threats, paedophile images, sewage-tides of abuse – flowed so easily across frontiers” (p. 127). Unknown freedoms of expression and unprecedented forms of control and surveillance coexist and create an ambivalent space of experience.
If you are interested and your institution does not have access to the full text, I would be happy to send it to you by e-mail.