The success of the “measures” proposed by the government to contain and control the coronavirus depends to a large extent on the willingness of the population to go along with these “measures.” This willingness is contingent on a variety of factors. In this post, I single out one factor that has a significant influence: the communication behavior of the government, or the communicative relations between the governing and the governed. I would like to briefly introduce two different models and put them up for discussion: that of strategic communication and that of frank speech.
Strategic communication and message control
In political and organizational communication, “strategic communication” is often offered as the means of choice when it comes to implementing “measures” efficiently. This model recommends that organizations and governments communicate strategically to various stakeholders. Messages and news that the organization/government sends out should be clearly structured, formulated uniformly and without contradiction, and sent out with the aid of suitable media.
In terms of communication theory, this idea is based on the classic sender-receiver model developed by the mathematicians Shannon and Weaver in the USA in the 1940s. The aim there was to explore how a message defined by a sender can be transmitted to a receiver efficiently.
This paper situates organisational transparency in an agonistic space that is shaped by the interplay of ‘mechanisms of power that adhere to a truth’ and critical practices that come from below in a movement of ‘not being governed like that and at that cost’ (Foucault, 2003: 265). This positioning involves an understanding of transparency as a practice that is historically contingent and multiple, and thus negotiable and contested. By illustrating the entanglement of ‘power through transparency’ and ‘counter-transparency’ with reference to the example of Edward Snowden’s whistleblowing, the paper contributes to the critique of transparency and to debates on the use of Foucauldian concepts in post-panoptic contexts of organising. By introducing the notion of ‘counter-transparency’, the paper expands the conceptual vocabulary for understanding the politics and ethics of managing and organising visibility.
Recently I was invited to contribute to the debate on emerging forms of surveillance society:
Surveillance capitalism technologies are “polyvalent” and can be used for different purposes: they can facilitate an intensification of (state) surveillance, or they can protect privacy and anonymity (for example, facial recognition technology is a surveillance technology, but it can also be used to protect iPhone owners, as the New York Times reported recently in the case of the Hong Kong pro-democracy protests).
On my way home, I often pass a café, which displays an anarchist slogan in its shop window: “Even more dangerous than the virus is blind obedience”. There is much about this saying that is correct and important. Much has been written and researched about “blind obedience” and its dangers. “I have only done my duty” – many “obedient” perpetrators have used this justification formula in an attempt to evade responsibility or to justify their own moral failure. But just as dangerous as “blind obedience” is “blind disobedience”. This becomes very clear when one thinks of the various so-called “Querdenker” who today protest and defend themselves against the “restrictions” and “coercive measures” of the government in the context of managing the Corona crisis. “Blind disobedience” is to be feared at least as much as “blind obedience”.
So perhaps the distinction between obedience and disobedience is not the core of the problem, but rather the blindness that is associated with them. Blindness – as a metaphor for the unreflective reaction to some impulse – is the problem.
Algorithmic profiling is a technology and practice that is increasingly used to make decisions, sometimes even without human intervention. Profiles can be traced back to their use in police work and behaviorist psychology of the early 20th century. Thus, long before the emergence of Big Data, profiles were used as a knowledge tool in a wide range of human sciences. Today, profiles and profiling are used in multiple contexts: customer profiling, profiling for employment screening, credit scoring, criminal investigations, immigration policy, healthcare management, forensic biometrics, etc.
Together with Bernadette Loacker (Lancaster University) and Randi Heinrichs (Lüneburg) I co-edited an ephemera special issue (PDF) on truth-telling and whistleblowing in digital cultures. The issue opens a space for discussing the specific ‘conditions of possibility’ of truth-telling and the multiple technologies that mediate it in contemporary digital cultures.
The notion of the ethico-politics of whistleblowing is introduced to address the irreducible entanglement of questions of ethics, politics and truth in the practice of ‘speaking out’. The special issue brings together a set of papers, acknowledging that forms and mediations of truth-telling are complex and contested. The contributions discuss questions such as: Who is, in digital cultures, considered to be qualified to speak out, and about what? Under which conditions, and with what consequences can ‘the truth’ be told? How do digital infrastructures regulate the truth, and the process of making it heard? How is the figure of the whistleblower constructed, and how do whistleblowers constitute themselves as political and ethical subjects, willing to take risks and pose a challenge, to others and themselves?
On 26 October 2015, BBC News published an article entitled China ‘social credit’: Beijing sets up huge system. It describes how the Chinese government is building an ‘omnipotent “social credit” system that is meant to rate each citizen’s trustworthiness’. Warnings about the advent of ‘digital dictatorship’ and phrases like ‘Big Data meets Big Brother’ have proliferated in research and Western public media ever since, and they reflect a rapidly growing focus on the contemporary global process whereby power and control become entwined with digitalization and result in new and often concerning forms of transparency.
‘Profiles’ are important technologies of organizing that are used in a multiplicity of contexts: customer profiling, profiling for employment screening, credit scoring, criminal investigations, immigration policy, health-care management, forensic biometrics, etc. Profiles organize perception and seeing, and they are important media in (algorithmic) decision-making. They are ‘used to make decisions, sometimes even without human intervention’ (Hildebrandt, 2008: 18). All profiles are abstractions. In the process of profiling, images of the person are created for the purpose of diagnosis or prediction, and ‘complex personhood’ (Gordon, 1997) is reduced to a finite number of traits, indicators, etc. The models or figures so created may be fictions, but these fictions are operationally effective, as they shape and intervene in the world. In the paper, profiles are theorized as ‘ghosts’ that are produced in a ‘spectrogenic process’ (Derrida, 1994). The spectrogenic process describes (a) the process of abstraction, in which thoughts, ideas, data, etc. are ‘torn loose’ from the ‘living body’ and integrated into a more abstract or ‘artifactual body’, and (b) the return of the abstraction (ghost) to the world of real-life events in the process of ‘application’, where it ‘haunts’ those with whom profiles are associated.
This text is the German version of a review that has appeared, in abridged form and in English, in the journal Organization. By Richard Weiskopf.
Do you remember the scandal around the use and misuse of Facebook profiles by the firm Cambridge Analytica? Or how, especially around 2016, hordes of people, most of them young, roamed about, enthusiastic and as if remote-controlled, hunting for “Pokémon”? Were you, too, surprised to hear that the (Austrian) postal service does a brisk trade in its customers’ data? Have you, too, been astonished at how precisely Amazon knows your wishes? Have you, too, been tempted to let the smart fridge decide on the shopping list, or the smartphone on the ideal way home? Do such phenomena sometimes strike you as uncanny or even threatening?
Shoshana Zuboff, emerita professor of business administration at Harvard, pursues this experience in her new book and asks about the forces that produce it. The book is a continuation of In the Age of the Smart Machine, her seminal work from 1988. There she studied the changes emerging in the world of work through automation and informatization. She was one of the first authors to recognize, even then, the panoptic potential of modern information technology. More than three decades have passed since. Not only was the internet invented (and increasingly developed as a field of business); the possibilities and potentials for capturing, storing, and processing data have also expanded exponentially. “Big Data” promises nothing less than a “revolution” that fundamentally transforms “the way we live, work and think” (Mayer-Schönberger & Cukier, 2013). While optimistic scenarios highlight the great opportunities of “data-rich markets” (Mayer-Schönberger & Ramge, 2018), critics warn of the dangers that accompany “datafication”. With the new digital possibilities, surveillance has become manifold and “liquid” (Bauman & Lyon, 2013); it penetrates all areas of everyday life and shapes the “surveillance culture(s)” (Lyon, 2018; Harding, 2018) of the present.