Does human reflexivity disappear as datafication and automation expand and machines take over decision-making? In trying to answer this question, we take our lead from recent debates about People Analytics and analyze how the use of algorithmically driven digital technologies, such as facial recognition and drones, in work organizations and societies at large shapes the conditions of ethical conduct. Linking the concepts of algorithmic governmentality and space of ethics, we analyze how such technologies come to form part of governing practices in specific contexts. We conclude that datafication and automation have far-reaching implications for human reflexivity and the capacity to enact responsibility in decision-making. Yet this does not in itself mean that the space for ethical conduct disappears, which is the impression left in some literatures, but rather that it is modified and (re)constituted in the interplay of mechanisms of closure (such as automating decision-making, black-boxing and circumventing reflexivity) and opening (such as dis-closing contingent values and interests in processes of problematization, contestation and resistance). We suggest that future research investigate in more detail the dynamics of closure and opening in empirical studies of the use and effects of algorithmically driven digital technologies in organizations and societies.
The online-first version is now available. Feel free to contact me in case you or your institution does not have access to the journal.
In the context of “surveillance capitalism”, digital traces are transformed into profiles or “prediction products”. Since every piece of data is of potential value to someone, every fragment can end up in some profile that is useful to somebody and can therefore be sold. Various data brokers, direct marketers and other organizations and companies with expertise in the processing, analysis and exploitation of data have specialized in monetizing these data. On the basis of seemingly rational and neutral calculations, the “new profiling” transforms open becoming into a calculated future, which in turn becomes a profitable source of income and capital.
We are often only sporadically aware that we are in the sights of certain organizations and are targets of profiling and prediction machines: for example, when we receive an individualized or personalized message from an unknown source, when we are offered a tailor-made product, when we are stopped at the airport or at the border and “recognized” as a risk or potential terrorist; or when we are denied access to public spaces, insurance benefits, desirable jobs or positions, credit, services or social benefits, etc., because we (or rather our profile) do not fit the criteria defined in advance by anonymous authorities. In this way, a possible future is blocked off on the basis of predictions. As a form of instrumentarian power, prediction products make the ethico-political space of imagination and potential transformation disappear. Critical research seeks to make this hidden space visible and thereby set in motion a problematization of the workings and effects of these procedures.
The success of the “measures” proposed by the government to contain and control the coronavirus depends to a large extent on the willingness of the population to go along with these “measures.” This willingness is contingent on a variety of factors. In this post, I pick out one factor that has a significant influence: the communication behavior of the government, or the communicative relations between the governing and the governed. I would like to briefly introduce two different models and put them up for discussion: that of strategic communication and that of frank speech.
Strategic communication and message control
In political and organizational communication, “strategic communication” is often presented as the method of choice when it comes to implementing “measures” efficiently. This model recommends that organizations and governments communicate strategically to various stakeholders. Messages and news that the organization/government sends out should be clearly structured, formulated uniformly and without contradiction, and distributed with the aid of suitable media.
In terms of communication theory, this idea is based on the classic sender-receiver model developed by the mathematicians Shannon and Weaver in the USA in the 1940s. The aim was to explore how a message defined by a sender can be transmitted to a receiver efficiently.
This paper situates organisational transparency in an agonistic space that is shaped by the interplay of ‘mechanisms of power that adhere to a truth’ and critical practices that come from below in a movement of ‘not being governed like that and at that cost’ (Foucault, 2003: 265). This positioning involves an understanding of transparency as a practice that is historically contingent and multiple, and thus negotiable and contested. By illustrating the entanglement of ‘power through transparency’ and ‘counter-transparency’ with reference to the example of Edward Snowden’s whistleblowing, the paper contributes to the critique of transparency and to debates on the use of Foucauldian concepts in post-panoptic contexts of organising. By introducing the notion of ‘counter-transparency’, the paper expands the conceptual vocabulary for understanding the politics and ethics of managing and organising visibility.
Recently I was invited to contribute to the debate on emerging forms of surveillance society:
Surveillance capitalism technologies are “polyvalent” and can be used for different purposes: they can facilitate an intensification of (state) surveillance, or they can protect privacy and anonymity (for example, facial recognition technology is a surveillance technology, but it can also be used to protect iPhone owners, as the New York Times reported recently in the case of the Hong Kong pro-democracy protests).
On my way home, I often pass a café, which displays an anarchistic saying in its show window: “Even more dangerous than the virus is blind obedience”. There is much about this saying that is correct and important. Much has been written and researched about “blind obedience” and its dangers. “I have only done my duty” – many “obedient” perpetrators have used this justification formula in an attempt to evade responsibility or to justify their own moral failure. But just as dangerous as “blind obedience” is “blind disobedience”. This becomes very clear when one thinks of the various so-called “Querdenker” who today protest and defend themselves against the “restrictions” and “coercive measures” imposed by the government in the course of managing the Corona crisis. “Blind disobedience” is to be feared at least as much as “blind obedience”.
So perhaps the distinction between obedience and disobedience is not the core of the problem, but rather the blindness that is associated with them. Blindness – as a metaphor for the unreflective reaction to some impulse – is the problem.
Algorithmic profiling is a technology and practice that is increasingly used to make decisions, sometimes even without human intervention. Profiles can be traced back to their use in police work and behaviorist psychology of the early 20th century. Thus, long before the emergence of Big Data, profiles were used as a knowledge tool in a wide range of human sciences. Today, profiles and profiling are used in multiple contexts: customer profiling, profiling for employment screening, credit scoring, criminal investigations, immigration policy, healthcare management, forensic biometrics, etc.
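To make the abstract idea concrete, here is a minimal, purely illustrative sketch (not taken from any real scoring system) of what such automated decision-making can look like: a person is reduced to a handful of numeric indicators, and a fixed rule turns the resulting profile into an accept/reject decision without any human intervention. All names, weights and thresholds are invented for illustration.

```python
# Illustrative toy example of algorithmic profiling in credit scoring.
# A person is reduced to a finite set of indicators; an automated rule
# decides, with no human in the loop. Weights/threshold are arbitrary.

from dataclasses import dataclass

@dataclass
class Profile:
    # "Complex personhood" reduced to three numeric traits
    income: float
    years_employed: float
    prior_defaults: int

def score(p: Profile) -> float:
    # Arbitrary illustrative weights, not a real scoring model
    return (0.5 * min(p.income / 50_000, 1.0)
            + 0.3 * min(p.years_employed / 10, 1.0)
            - 0.4 * p.prior_defaults)

def decide(p: Profile, threshold: float = 0.5) -> str:
    # The decision is fully automated: no human reviews the case
    return "accept" if score(p) >= threshold else "reject"

print(decide(Profile(income=60_000, years_employed=8, prior_defaults=0)))  # accept
print(decide(Profile(income=60_000, years_employed=8, prior_defaults=1)))  # reject
```

The point of the sketch is that a single recorded trait (here, one prior default) flips the outcome: the profile, not the person, is what the system "sees" and acts upon.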
Together with Bernadette Loacker (Lancaster University) and Randi Heinrichs (Lüneburg) I co-edited an ephemera special issue (PDF) on truth-telling and whistleblowing in digital cultures. The issue opens a space for discussing the specific ‘conditions of possibility’ of truth-telling and the multiple technologies which mediate it in contemporary digital cultures.
The notion of the ethico-politics of whistleblowing is introduced to address the irreducible entanglement of questions of ethics, politics and truth in the practice of ‘speaking out’. The special issue brings together a set of papers, acknowledging that forms and mediations of truth-telling are complex and contested. The contributions discuss questions such as: Who is, in digital cultures, considered to be qualified to speak out, and about what? Under which conditions, and with what consequences can ‘the truth’ be told? How do digital infrastructures regulate the truth, and the process of making it heard? How is the figure of the whistleblower constructed, and how do whistleblowers constitute themselves as political and ethical subjects, willing to take risks and pose a challenge, to others and themselves?
On 26 October 2015, BBC News published an article entitled China ‘social credit’: Beijing sets up huge system. It describes how the Chinese government is building an ‘omnipotent “social credit” system that is meant to rate each citizen’s trustworthiness’. Warnings about the advent of ‘digital dictatorship’ and phrases like ‘Big Data meets Big Brother’ have proliferated in research and Western public media ever since, and they reflect a rapidly growing focus on the contemporary global process whereby power and control become entwined with digitalization and result in new and often concerning forms of transparency.
‘Profiles’ are important technologies of organizing that are used in a multiplicity of contexts: customer profiling, profiling for employment screening, credit scoring, criminal investigations, immigration policy, healthcare management, forensic biometrics, etc. Profiles organize perception and seeing, and they are important media in (algorithmic) decision-making. They are ‘used to make decisions, sometimes even without human intervention’ (Hildebrandt, 2008: 18). All profiles are abstractions. In the process of profiling, images of the person are created for the purpose of diagnosis or prediction, and ‘complex personhood’ (Gordon, 1997) is reduced to a finite number of traits, indicators, etc. The models or figures created may be fictions, but these fictions are operationally effective, as they shape and intervene in the world. In the paper, profiles are theorized as ‘ghosts’ that are produced in a ‘spectrogenic process’ (Derrida, 1994). The spectrogenic process describes (a) the process of abstraction, in which thoughts, ideas, data, etc. are ‘torn loose’ from the ‘living body’ and integrated into a more abstract or ‘artifactual body’, and (b) the return of the abstraction (ghost) to the world of real-life events in the process of ‘application’, where it ‘haunts’ those with whom profiles are associated. Continue reading “SCOS Talk on ‘Secret organizers: The “spectrogenic” process of profiling and the effects of “ghostly demarcations”’”