
The Philosophical and Technical Legacy of Bernard Stiegler


Session: alt.chi: Framing Dissent / Aesthetics

Harry Halpin, Center Leo Apostel, Vrije Universiteit Brussel, Belgium, [email protected]

DOI: https://doi.org/10.1145/3411763.3450385

CHI '21 Extended Abstracts: CHI Conference on Human Factors in Computing Systems Extended Abstracts, Yokohama, Japan, May 2021

Abstract

Although technical systems may not seem on the surface to be philosophical in nature, there is a historical influence in Human-Computer Interaction (HCI) of Heidegger via the work of Winograd and Dreyfus. However, the late philosopher Bernard Stiegler critiqued this positivist reading of Heidegger, noting how Heidegger himself ultimately did not understand the political stakes of technology. Rather than abandon technology, Stiegler argued that we must repurpose technology to create a new form of society in the wake of the digital disruption. We review Stiegler's often difficult philosophical vocabulary, his political stance, and his nearly unknown role in motivating a number of innovative software projects at Institut de recherche et d'innovation du Centre Pompidou. We believe it is precisely a fundamental philosophical reorientation that will allow researchers to create the new kinds of programs that can meet the challenge posed by our digital epoch.

Keywords: Philosophy, Human-Computer Interaction, Digital studies

ACM Reference Format: Harry Halpin. 2021. The Philosophical and Technical Legacy of Bernard Stiegler. In CHI Conference on Human Factors in Computing Systems Extended Abstracts (CHI '21 Extended Abstracts), May 8–13, 2021, Yokohama, Japan. ACM, New York, NY, USA, 8 pages. https://doi.org/10.1145/3411763.3450385


1 INTRODUCTION

It is not every day that the world loses a philosopher at precisely the moment he is needed the most. On August 5th, 2020, the famed French philosopher of technology Bernard Stiegler died. Our historical moment is characterized by what Bernard Stiegler termed the “digital disruption” in one of his final books, where technology – long a lacuna of philosophy – has seemingly eclipsed our ability to even conceive of a collective future [23]. While much of academic philosophy fell into scholasticism and nihilism, Bernard was a philosopher dedicated above all to combat against impossible odds. He took upon himself nothing less than the task of rethinking all of Western philosophy in light of technology, in order to create – as he put it in his yearly conferences at Centre Pompidou – “a new industrial world” based upon care rather than short-term profit.1 While there have already been many poignant elegies for the profound influence of Bernard, what is lacking is a guide through his often difficult and idiosyncratic vocabulary, in order to bring forth the profoundly political and technical aspects of his life's work.

The fields of human-computer interaction and design do not at first glance seem indebted to Heideggerian philosophy. Yet it should be noted that the philosophical framing of much of current human-computer interaction emerges from a fundamentally Heideggerian critique of artificial intelligence, propagated in the United States originally by Hubert Dreyfus in his seminal What Computers Can't Do [4], first published in 1972 at the behest of the RAND Corporation to understand the lack of progress in AI despite the massive funding by the United States government at the time. Dreyfus began a reading group at Stanford, which artificial intelligence researchers such as Terry Winograd attended. The notes for Dreyfus' classes on Heidegger were later published and became the dominant reading of Heidegger within American academia, despite the fact that they covered only the first division of Heidegger's Being and Time [3]. This was certainly the case within artificial intelligence, as witnessed by the later work of Clark and Wheeler in this vein on technologies of embedded and embodied cognition [6]. However, Hubert Dreyfus did not pose a positive path forward for computer technology, instead relying on Heidegger in order to critique the individualist and rationalist vision of artificial intelligence put forward by its early luminaries like McCarthy and Minsky [13]. This “classical” approach to artificial intelligence continued, in a subterranean manner, the legacy of Rudolf Carnap's attempt to capture all human knowledge in formal systems, and so a Heideggerian critique could naturally be applied in the same manner as Heidegger himself led the critique of formal logic [7]. Terry Winograd and Fernando Flores, both of whom attended Dreyfus' seminal lectures on Heidegger and AI at Berkeley, put forward a positive vision of computers as tools for communication between humans, not as replacements for humans, in their influential Understanding Computers and Cognition: A New Foundation for Design [26]. Their work crucially built on Heidegger's notion of Zuhandenheit (“ready-to-hand”), where fluent use of technology makes it invisible. Thus, in the context of designing new computer technology, the goal of design became to create technology that would seamlessly integrate into the cognition of the individual.
In particular, this explicitly inspired research visions such as Weiser's ubiquitous and pervasive computing [25] and the development of “human-computer interaction” itself [12]. Indeed, a reading of Heidegger that promotes the invisibility and seamless integration of technology into an embodied world has become, in the decades since, almost a mantra in design and HCI, and is no longer controversial [5]. Yet this does not mean that HCI is somehow critically attuned per se, as witnessed by Google's mission, which is seemingly to reduce the time between user intention and interaction with an application to a cognitively imperceptible duration [16]. The intellectual archaeology is quite clear: Winograd was the Ph.D. advisor of Brin and Page, the founders of Google [14].

Indeed, it has been argued that Dreyfus, and so his successors, fundamentally misunderstood Heidegger, as they built on his early work before Die Kehre (“The Turning”) that followed the Second World War. While the reading of Heidegger from Dreyfus through Winograd to Google and modern HCI is consistent, it should not be forgotten that Heidegger was a National Socialist (at least briefly) before the Second World War. There was only relatively minor dissent in HCI from this philosophical background, such as the work of Agre, which attempted to build on a more Derrida-inspired framework, but to a large extent it was not followed up in the creation of actual computer programs [1].

At first, this may seem to be an immensely liberatory project, allowing the extension of human intelligence via distributed cognition and the possibility of the emergence of collective intelligence. In a thoroughly apolitical world of angels, it is even likely that this would be the case. However, as the “late” Heidegger noted, the terms of “Being” and “Time” must be inverted: Being is always historically conditioned, and thus Time and Being explicitly reverses the metaphysics of his early work in Being and Time [9]. Yet Heidegger's later work took a turn away from technology, considering it a tool of calculation and domination rather than of communication and liberation [8]. Although these new forms of human and computer interaction may be created with the best of intentions, Google and the rest are ultimately tools of capitalist profit and so of short-term thinking. It can even be argued that there is a new form of computational fascism that rises, perhaps (or perhaps not) unexpectedly, from Heidegger's influence on the trajectory of technology. In Section 2, I would like to posit that Bernard Stiegler makes a fundamental correction to Heidegger, by positing that technology exists to connect our memories across generations, and so technology in the digital age – where these connections seem severed – is ultimately political in nature. As Bernard Stiegler was my postdoctoral advisor, this section contains personal anecdotes. Then, in Section 3, I would like to show via examples how this led Bernard Stiegler to create new kinds of computer programs that he hoped would exemplify his philosophical position. Finally, we will dwell upon the lessons that future generations of technologists should take from this rendezvous with philosophy.

2 STIEGLER'S PHILOSOPHY

As inscrutable as they may appear at first glance, it is precisely in Bernard's dense philosophical writings that we find the most ambitious political treatment of our highly technological age.
Bernard Stiegler was one of the rare philosophers in the tradition of Marx, Husserl, Heidegger, Deleuze, and Derrida for whom the entire world at large is their subject matter. Although it is impossible to do justice to his life in a few paragraphs, I would like to pay tribute to what is, upon consideration, the hidden consistency of his philosophical vocabulary. This vocabulary outlined a new kind of politics that goes beyond the current paralysis of academia and the Left when confronted by technology. Nonetheless, more than the disappearance from the world-historical stage of a certain kind of grand philosopher, I mourn the passing of a certain kind of man, capable of globe-spanning ambitions and small kindnesses.

“You may not be aware of this, but as we speak I am writing your mind like an algorithm,” said Bernard to me in his light-filled office overlooking Centre Pompidou at our first meeting in Paris. It was Bernard's belief in the fundamental continuity between language and technology that brought me to Paris to work with him, as we both shared the marginalized philosophical position that the digital was ultimately an exteriorization of our all-too-human language. In contrast to Heidegger, Bernard put technology at the heart of being human in his first book – itself a critique of Heidegger – as he considered technology an extension of being in the world [18]. Bernard Stiegler's general point was that by ignoring technology, philosophy ignored the reality of the very constitution of our being and the epoch we inhabit. Following the biologist Alfred J. Lotka, Bernard Stiegler called this technological extension of our capabilities beyond our bodies, and of our memory outside of the brain, exosomatization [19]. This seemingly implausible thesis of the “extended mind” was promoted by my Ph.D. advisor Andy Clark, the most neuroscientific of analytic philosophers, and Bernard Stiegler came to the selfsame conclusion via a diametrically opposed philosophical tradition. Bernard goes even further than Clark, positing that this process is not just an exteriorization but also an interiorization, as digital technology reshapes our very thoughts, and so the kinds of individuals we can become. Following Husserl, this spiral of exteriorization and interiorization is almost trivially self-evident to all writers in how the process of encountering, recollecting, and then writing (and rewriting) thoughts leads to the creation of altogether new thoughts. Anyone who knew Bernard would remark on how he was incessantly writing and reinterpreting his notes on his laptop, underlining them in various colors and adding annotations. Taking Bernard's personal practice of digital note-taking seriously is key to his philosophy, as this practice comes from his radical reinterpretation of Husserl [11].

To summarize far too briefly the outline of his philosophy given in his magnum opus Technics and Time, our original and immediate impression of a phenomenon is the primary retention, in other words, what is retained over time [18]. Our neural memory is the secondary retention, as it is what is retained even after the phenomenon itself has passed. Bernard Stiegler went beyond Husserl by emphasizing that it is the tertiary retention that matters in the long run, the externalized memory recorded on paper and hard disks that records what is past but can be again present: The person will die, but their exteriorized works will live on as an eternally present trace for future generations.
Due to his profusion of writing, there are dozens of half-finished books, often enigmatically referred to as future volumes of his extant work, that are on his laptop. Massive is the work for whoever must play Engels to Bernard's Marx. For Bernard, the relationship between generations was at the heart of technology and politics. We are not born fully fledged individuals but become individuals via a process of interiorization of externalized memories passed on from generation to generation. Collectively, these tertiary retentions serve as the very basis of culture, through which humans go through a process of individuation, per Simondon, that transforms us from bare biological infants into full-fledged individuals as a part of society [2]. Originally this relationship between generations was organized by religion, nation-states, and even classical industrial capitalism, but at the present moment there is a “short circuit” of this transmission due to the rise of the hypercapitalist digital technology of platform capitalism, which has produced a “state of shock” [20]. This break between generations leaves us isolated and stunted by disindividuation, and so leads to the widespread psychological malaise and political discredit we see today, as well as to hyper-specialization – the fragmentation of knowledge – and the failure of the traditional university system in the digital age. Trump was merely a sign of this underlying process, not the cause.

What can be done? To abandon technology would be impossible, but to embrace acceleration was considered equally stupid by Bernard. The key to understanding Bernard's work is the infamously untranslatable pharmakon, a term he inherits from Socrates via his own mentor Derrida. The pharmakon is a development of such power that it is simultaneously capable of poisoning our entire existence, yet also capable of curing us from its very own toxicity and so making us stronger [23]. Bernard was able to confront technology neither as inherently positive nor negative, but as simultaneously both positive and negative, similar to how Fredric Jameson once noted that capitalism was both the worst and best thing to have ever happened to humanity. What makes the difference is how we use technics in the course of our everyday life and struggles, not technics themselves. In this vein, Bernard told me that the difference between a dialectical and a pharmacological approach is that the pharmakon is always open-ended. To think pharmacologically is not to look back upon the past dialectically to discover negation and reconciliation (or to have faith in the unfolding of all-too-certain laws in the future), but to take action in the present moment and so to transform our weakness into a strength. This is perhaps what led to Bernard's zealous overproduction of texts and political organizations, each attempting to rise to be sufficient to the latest turn in technology and politics.

As the fate of the pharmakon was decided by our concrete actions in the here and now, Bernard's later politics – and, as some have criticized, prophetic turn – naturally flow from his earlier theoretical works. Bernard's personal office across from Centre Pompidou was always abuzz with an endless stream of artists, philosophers, and programmers. There I met Vincent Puig, Yuk Hui, Alexandre Monnin, and so many others who would go on to be great philosophers in their own right.
After more than a year studying under Bernard, I finally gained enough courage to tell him that the French government had placed me and my friends on a terrorist blacklist, the infamous Fiche S, for our organization of climate change protests and unrepentant anarchism. As a man who had taught himself philosophy in prison, without even the blink of an eye, Bernard rejoined: “I don't care about the police, but the problem is you have misinterpreted Heidegger.” It is his rejoinder to Heidegger's treatment of technology that forms the opening to his first book, Technics and Time. Humans, due to our défaut – our originary lack of quality – are forced to create technics with our hands that have been made free by the forces of evolution, from the shaping of flint stones to typing upon keyboards with our fingers. Language and speech are just examples of the wider development of technics itself that defines our being as humanity, a being that has never become posthuman or transhuman because humanity has always been technical. This is why Bernard Stiegler always insisted, in sharp contrast to the rest of the French-speaking world, on using the term digital in French rather than numérique, as digital acknowledges the continuity of this world with the digits – the ten fingers – of our very hands rather than some Platonic world of zeroes and ones.

He sought to develop a new kind of transdisciplinary approach called digital studies that would take into account the latest digital turn of technology as central to knowledge and society [20]. Warren Sack, David Bates, and others joined in a Digital Studies network to deepen this analysis. He was even invited to join a seminar on the topic that Alexandre Monnin and I organized with Hubert Dreyfus and Google researchers at the Googleplex itself, although he did not attend.

As he became older, Bernard became more, rather than less, political. A former Communist militant and veteran of May 1968, he believed the Left had forgotten that the real question of Marx is technics, as given by the appearance of the industrial factory in the 19th century. For Bernard, the original formulation of revolutionary concepts from the 19th century is not sufficient for our highly technologized world. Bernard pointed out that, due to the rise of consumerism after WWII, the economic misery of the proletariat had been replaced by a certain symbolic misery. For the proletariat is defined by its loss of knowledge, as the knowledge of craftsmanship and “how to live” is replaced by automatized industrial production that strips the proletariat of knowledge of how their own labor works, and even how their world works. Today, due to computation, our exteriorization of knowledge strips the most skilled of “white collar” workers of their knowledge, producing a situation of generalized proletarianization. Bernard was fond of pointing out that after the 2008 crisis, Alan Greenspan himself could not describe how the financial system worked. Highly skilled scientists, programmers, lawyers, doctors, and more could no longer understand their own work (savoir-faire) or “how to live” (savoir-vivre), all reduced to being mere appendages of opaque algorithms harnessed in the name of short-term profits [21]. In the 21st century, by ignoring digital technology, the Left had abandoned the young generation. On this point, Bernard Stiegler looked for possible allies. One of my memories of Bernard is when he appeared in London at the Ecuadorian Embassy to discuss the future of the internet with Julian Assange.
He brought with him not only wine and cheese, but also a veritable stack of books for Julian to read. At the core of his philosophy was that these exteriorized processes of memory and attention themselves could be thought of as external organs, an extended mind, that could then be automated via computation due to the advent of digital computers. In this manner, human experience could be reduced to mindless calculation and automation. The grave threat that Bernard saw first and all too clearly was that this calculation could be interiorized. As our memories were increasingly automatized by the new culture industries of Google and Facebook, our unconscious desires were also being automatized via digitally-driven behavior modification and the toxic addiction to devices such as smartphones. This prevented the necessary investment of desire and care in the future needed by civilization, as a frenetic planetary consumerism harnessed our libidinal drives at ever greater speeds. This would lead to what Bernard, in one of his later books, called the Automatic Society: a society resembling a hive of insects more than a society of humans with memory and imagination [21]. There was no greater insult for Bernard than to call someone a functionalist.

Bernard enjoined us that to rethink society, we must rethink technology. Increased automatization would lead to the replacement of ever larger parts of the workforce by robots and machine learning, and so to the collapse of employment and even of the concept of labor itself. What Bernard then sought was to separate work from labor, yet in a way quite distinct from the calls for universal income from Negri and Silicon Valley. Inspired by how France pays les intermittents du spectacle (workers in theatre and cinema), who are paid for the full year although their shows run only for a few months, Bernard envisioned a contributory economy where people would be sustained after a certain amount of work had been contributed to society. Not content to theorize, Bernard took up a project to prevent the spread of toxic ideologies and revive the northern suburbs of Paris, called Plaine Commune, by implementing a contributory economy via experimentation in design and technology.

In his later works, Bernard's politics turned to the fate of the biosphere in the age of the Anthropocene, an extinction event caused by the relentless profit-seeking nature of capitalist technology. For Bernard, the threat of the Anthropocene went beyond the extinction of biological life to the extinction of knowledge due to the closure of the future caused by technology. In closed systems, the second law of thermodynamics leads to the inevitable growth of entropy – the destruction of difference, a night in which all information is random. Following Schrödinger's What is Life? [17], Bernard held that life was a struggle against entropy. In open-ended systems, the spread of entropy could be reversed and so there could be a rise of negentropy, the inverse of entropy, in which the complexity of life could flourish and be sustained. What Bernard theorized was an exit from the dead-end of the Anthropocene to what he then called the Neganthropocene, a new epoch where technology took care of the biosphere rather than destroying it [22].
Reinventing his proposition that we could rethink society by rethinking technology, Bernard renamed his political organization Ars Industrialis the Association of the Friends of the Thunberg Generation, and in his last (and as yet untranslated) book, The Lesson of Greta Thunberg, noted that Greta Thunberg represented exactly the gap between generations that politics must address [24].

3 STIEGLER'S SOFTWARE PROJECTS

To reshape society, we must reshape technology. In stark contrast to more traditional philosophers, Bernard took programmers and computer programs seriously: He strove not only to write about programs, but took programmers like me under his wing to create new kinds of programs at his Institut de recherche et d'innovation du Centre Pompidou (IRI) to counter the “stupidity” of Silicon Valley. His goal was the creation of new tools for indexation and annotation in order to jumpstart a genuine collective transindividuation. Bernard Stiegler did not limit himself to books: his students and employees also created a number of computer programs at IRI. Many of them did not see the light of day, and he was also interested in using open standards and existing software such as hypothes.is and the W3C Web Annotation standards.2 However, a few of his projects did eventually mature into products that were used, if only to allow his students and friends to annotate and index their own video lectures and readings in order to provoke debate. In philosophy, the term for this kind of discussion that leads to a greater understanding is hermeneutics, and Bernard put forward that we must replace the hypertext web not with a “semantic web” of fixed meaning, but with a “hermeneutic web” that would lead to further individuation processes.
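To give a concrete sense of what such open annotation standards look like, below is a minimal sketch of a single annotation in the JSON-LD format of the W3C Web Annotation Data Model, written as a TypeScript object literal. The target URL, quoted text, and commentary are hypothetical and serve only to illustrate the vocabulary (body, target, selector) on which a “hermeneutic web” of annotations could build.

```typescript
// A minimal sketch of a W3C Web Annotation (JSON-LD), the kind of record
// exchanged by tools such as hypothes.is. The creator, target URL, quoted
// text, and commentary are hypothetical; only the vocabulary (@context,
// motivation, TextualBody, TextQuoteSelector) comes from the standard.
const annotation = {
  "@context": "http://www.w3.org/ns/anno.jsonld",
  type: "Annotation",
  motivation: "commenting",
  creator: "https://example.org/users/student42", // hypothetical annotator
  body: {
    type: "TextualBody",
    format: "text/plain",
    value: "Here Stiegler inverts Heidegger: technics is constitutive of time.",
  },
  target: {
    source: "https://example.org/seminars/technics-and-time.html", // hypothetical transcript
    selector: {
      type: "TextQuoteSelector",
      exact: "tertiary retention", // the annotated passage
    },
  },
};

console.log(JSON.stringify(annotation, null, 2));
```

Because such annotations are plain, addressable data rather than features locked inside a platform, any tool can index, answer, or dispute them, which is what distinguishes a hermeneutic web of interpretations from a semantic web of fixed meanings.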

Figure 1: A Decentralized Group-based Social Web.

His most ambitious attempt to do so was the Social Web project of Yuk Hui and myself, which attempted to rethink social networks as based on groups rather than individual profiles. The history of social networking has been one of control, and we thought the metaphysical assumption that friendship or connection should somehow be a “link” naturally led to a lack of true individuality and to merely spectacular narcissism. Sharing the thinking behind Lovink and Rossiter's “organized networks” [15], we created decentralized alternatives based on XMPP (the Extensible Messaging and Presence Protocol, an IETF standard3) that could connect diverse deployments of the Crabgrass open source software.4 As shown in Figure 1, groups can display their members and what resources are being updated via an interface allowing users to share messages, posts, and files, all within access-controlled groups that could then opt to share them with other groups. The key aspect was that there was no profile page for an individual, only a group wiki with discussion commentary that we believed would encourage greater annotation. Although the software itself was not widely used outside Stiegler's students, it did eventually influence protocol-level work on Messaging Layer Security (MLS)5 at the IETF via the NEXTLEAP project (a European Commission project between Inria de Paris and IRI),6 a standard for secure group messaging that will, perhaps ironically, be deployed by Facebook and Google.
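The following is a minimal sketch, and emphatically not the actual Crabgrass or Social Web schema, of what such a group-centric data model looks like: the group, rather than the individual profile, is the only first-class object, individuals appear solely as members with roles, and access control follows group membership. All names and fields are illustrative assumptions.

```typescript
// A minimal sketch (not the actual Crabgrass or Social Web schema) of a
// group-centric data model: there is no standalone profile object, only
// groups that hold members, a shared wiki, and access-controlled resources.
type Role = "member" | "coordinator";

interface Member {
  jid: string; // XMPP address of the member (hypothetical format)
  role: Role;
}

interface Resource {
  id: string;
  kind: "message" | "post" | "file";
  sharedWithGroups: string[]; // other groups this resource is opted into
}

interface Group {
  name: string;
  members: Member[];
  wiki: string; // the group wiki with discussion commentary
  resources: Resource[];
}

// Access control is group-based: a member can read a group's resources only
// by virtue of belonging to the group (cross-group sharing elided here).
function canRead(group: Group, jid: string): boolean {
  return group.members.some((m) => m.jid === jid);
}

// Hypothetical usage: a seminar group with one coordinator and no profiles.
const seminar: Group = {
  name: "pharmakon-reading-group",
  members: [{ jid: "alice@groups.example.org", role: "coordinator" }],
  wiki: "Notes and commentary on Technics and Time, vol. 1",
  resources: [],
};

console.log(canRead(seminar, "alice@groups.example.org")); // true
```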

Figure 2: Polemic Tweet.

One project that was widely used and published was Polemic Tweet,7 a program that added annotations and tags to tweets. The annotations allowed people to explicitly ask questions, provide responses, and explicitly disagree with another tweet [10]. The vision was that this would eventually allow Twitter to promote discussion and polemic, a sort of controversial debate between different points of view that in France is considered crucial for a healthy society, even if it does not lead to consensus. Without such annotations and explicit actions, Bernard Stiegler believed no rational discussion could happen on Twitter. The tool was, and still is, used at Centre Pompidou's ENMI conference by live audience members while speakers are presenting. It is illustrated in Figure 2, where the bar chart represents the tweets and each square represents one tweet sent during the conference. All tweets are color-coded: Red signals disagreement, green signals agreement, blue signals questions, and yellow signals references to earlier parts of the event.
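As a rough sketch of how such explicit actions could be recognized, the code below maps markers embedded in a tweet to the four color-coded categories of Figure 2. The marker syntax shown here is a hypothetical stand-in; the actual PolemicTweet syntax and visualization are described in [10].

```typescript
// A minimal sketch, with a hypothetical marker syntax, of mapping explicit
// markers embedded in a tweet to the four color-coded polemic categories of
// Figure 2. The real PolemicTweet tagging conventions are given in [10].
type Polemic = "agreement" | "disagreement" | "question" | "reference" | "neutral";

const MARKERS: Array<[string, Polemic]> = [
  ["++", "agreement"],    // rendered in green
  ["--", "disagreement"], // rendered in red
  ["??", "question"],     // rendered in blue
  ["==", "reference"],    // rendered in yellow
];

// Classify a tweet by the first explicit marker it contains; a tweet with no
// marker carries no explicit polemic action and is treated as neutral.
function classify(tweet: string): Polemic {
  for (const [marker, category] of MARKERS) {
    if (tweet.includes(marker)) return category;
  }
  return "neutral";
}

console.log(classify("++ completely agree on tertiary retention")); // "agreement"
```

The point of the explicit markers is precisely Stiegler's: by forcing the audience to commit to a visible stance (question, agreement, disagreement, reference), the tool turns an undifferentiated stream of tweets into material for rational debate.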

Figure 3: Ligne de Temps.

The final project was Ligne de Temps,8 a project to allow annotation and explicit debate over the meaning of speakers' words in video. Bernard Stiegler constantly recorded himself speaking, recorded all seminars and guest speakers at IRI, and then would lead exercises in using annotations to provoke a deeper hermeneutic understanding and further questions. The interface allowed textual contributions and complex metadata to be added to video, as presented in Figure 3. Many students used this tool over hundreds of hours of seminars, which created the “retentions” that Bernard Stiegler felt were not being created in contemporary discourse via video. In the post-COVID-19 world, this line of thought should no doubt be revisited. Whether or not these attempts were ultimately successful, they show that programs can indeed explicitly embody new philosophical paradigms such as those of Bernard Stiegler. The question of how they could become successful we leave for future work.

4 CONCLUSION

This eulogy to Bernard Stiegler is meant to inspire us not to mourn, but to take action as technologists. Our technologies are not mere tools of interaction, but radically constitute our being. This normally takes place over the span of generations, and so goes almost unnoticed, but since the Industrial Revolution we have seen changes that, now sped up by what Bernard liked to call the hyperindustrial revolution of the digital, threaten to tear the connections between generations apart. We can see this in our everyday lives in the vast conflicts in politics between the youth and the previous generations that lead to inaction on climate change, as well as in the rise of a traditionalist populism that posits a need for connection between generations on the level of fantasy, rather than new processes that create the transmission of memory and a shared future. Therefore, it is the task of technologists to build philosophies and technologies capable of meeting these stakes. Ultimately, philosophy often seems frivolous to computer programmers and designers, a mere pastime when the actual work of writing code and user-testing beckons. Yet the very foundations of the field of HCI ultimately come from a superficial misreading of Heidegger, one that we must correct in order to prevent new forms of surveillance, control, and computational fascism from emerging in reaction to the ongoing planetary crisis. While this may seem to be an impossible task for us technologists, it is indeed simpler than it appears. If a mere indexing algorithm such as Google's can change the course of capitalist development, imagine what new kinds of annotation and indexing could do to yet again change our current trajectory. Rather than political catastrophe caused by Twitter and Facebook, could there be new forms of social networking that privilege dialogue, reflection, privacy, and the emergence of genuine individuals knitted together – with the help of digital technology – in a social fabric? Bernard Stiegler has left us a comprehensive, if unfinished, philosophy ... and computer programs as examples to show another philosophical paradigm for computation is possible, one that is not only critical but explicitly for a new form of politics.
We should take up the task that he laid out, a task that was without any doubt too heavy for a single man, even Bernard Stiegler.

COMMENTARY

Samuel Huron, Télécom Paris, CNRS i3, Institut Polytechnique de Paris, Palaiseau, France

As the tone of Harry Halpin's paper is personal, I will also write a personal review in a format far less formal than what I am used to. I worked at the Research Institute of the Pompidou Center at the same time as Harry Halpin and Bernard Stiegler. Bernard Stiegler was the director and the founder of the Institute. As I read Harry's paper I was deeply moved. Harry Halpin's paper offers a “guide” to Bernard Stiegler's concepts for the HCI community. The first part of the paper provides a personal view of philosophical influences in HCI. The second part presents a rapid and accurate overview of Bernard Stiegler's concepts. The third part of the paper exemplifies how Stiegler influenced various software projects. I will not discuss the first part, as I am not familiar with this aspect of the history of HCI. However, I found it instructive and I think it could be discussed at CHI. The second part corresponds to what I remember of Bernard's thinking. It would be helpful to provide a more formal or explicit translation of the interactions between the philosophical vocabulary and HCI concepts. The third part of the paper exemplifies how Stiegler influenced various software projects.

I would like to point out that trying to implement philosophical visions or concepts in technology is challenging and reveals the gap between theory and practice, between general concepts and practical design decisions, or implementation decisions. A question that I always found fascinating is how abstractions such as philosophical concepts may / could / should / would influence implementation or design. It probably depends on many factors... I have often been influenced and inspired by philosophical or political approaches in my design and research. As a PhD student, I rarely acknowledged them directly, as I did not know how to mention that in my HCI work and I feared reviewers' scrutiny. I also found it difficult to have a section on “philosophical inspiration” because of the gap between the philosophical concept and the implemented design. Maybe this should be more accepted. I also had very few discussions with Bernard on how to implement philosophical concepts. When these discussions did occur, I often felt this gap between the world of ideas and the world of designing techniques and coding. I would often reflect: “Yes, this is a really interesting concept, but if I implement it that way, does it respect the main ideas?”

Over the years I have often wondered why, while working in an institution founded and directed by one of the most prestigious philosophers of technique in France, I had not been able to bridge the gap. What are the differences between these two domains that cause this gap? But also, how to account for the generativity (influence, inspiration, exploration) between the two domains? What makes the gap? Many people have probably asked themselves similar questions. Maybe answering these questions could be useful for making “a guide to the philosophical concepts” for HCI researchers. In this review, I will try to do so from my own naive personal experience. There are presumably a zillion differences between philosophy and HCI. I want to focus on four of these differences that probably contribute to this gap: 1) level of abstraction, 2) time scales, 3) social structure, 4) critical approach.
Philosophy (at least Bernard Stiegler's) and HCI do not work on the same level of granularity – or level of abstraction. The first tends to reach for generality, to embrace “the entire world at large” and to make sense of the world in one philosophical system. The second mostly works on the relation between humans and machines (generally computational machines), with a tendency to focus on empirical observations and implementations, and rarely tries to create theories that embrace the world at large. This difference in level of abstraction is probably related to differences in objects of research and methods. However (at least for me), I find it difficult to switch from one level of abstraction to another without losing something in the process.

Maybe another difference is related to the time scales of these domains of knowledge. HCI is a new and young domain and overly focuses on short time scales: seconds, minutes, hours, weeks, months. Philosophy is one of the oldest domains of knowledge and tends to focus on generalizations over long periods of time (years, centuries, millennia). If I caricature, when I am thinking of designing a Fitts's law experiment for a new technique, I do not reflect on how the continuum of digital technology has emerged over the last century and what the conditions of this emergence were. The two time scales are complementary but, as for the level of abstraction, it is difficult to switch and connect one to the other without losing important aspects.

I think the social structures of production of knowledge of these two domains are also very different. Authorship is often collective in HCI, while it is rather an individual pursuit in French philosophy. HCI production happens through publishing papers yearly. French philosophy seems to rely on books and theses produced over multiple years. Types and sizes of contributions are also really different. Also, the corpora of these two domains are different. Lastly, knowledge dissemination goes through different types of events and communication channels that do not often relate to each other.

Finally, philosophy is a highly critical domain. By critical, I mean that, as with most human sciences, philosophy is a domain that reflects on society to reveal power structures, cultural conventions, ideological limitations, etc., and that critiques them. Most of the energy is dedicated to deconstructing what exists in order to analyze and critique hidden assumptions. In HCI the paradigm is often – not always – more focused on generativity than criticality. Researchers and designers in HCI, on average, tend to spend most of their energy on designing and building new technology or techniques, and on evaluating and discussing them. As humans we have to make some decisions about what part of the spectrum we want to spend most of our energy on: creating or criticizing. I guess both could be done, but they clearly require different skills and methods. Not everyone is good at both. At least for me, I find it difficult to adopt both attitudes on the same project, even if I like them equally.

But philosophy and HCI have a lot in common. Both can be seen as design sciences: design of concepts for philosophy, design of interactions for HCI. Both can be seen as performative, producing action by the verb. Both have the power to shape future parts of our world.
If, like Gilles Deleuze, we consider philosophy as the science of constructing concepts, maybe we can consider philosophy as one of the sciences of the artificial (Herbert Simon), a science in which humans partially create the phenomenon they are studying, namely concepts. This is maybe an element in common with HCI and computer science. We can consider HCI as a science of the artificial: contrary to physics and biology, the phenomenon studied by researchers in HCI is somehow generated, designed by researchers. Computers do not pre-exist humans. The object of study is intentionally made by humankind and sometimes by the very same researchers that are studying it. In the case of computer science and HCI, the artificial production could be algorithms, interaction techniques, visual encodings, devices, frameworks, design spaces, visions, etc.

Even if both philosophy and HCI could be considered sciences of the artificial, the outputs are different. In philosophy, as Stiegler said, the concept you produce is “programming” the human mind, while in HCI we are programming devices and interactions. Philosophical ideas have nourished and shaped humankind's history in so many ways (Confucius, Plato, Rousseau, Voltaire, Marx, Popper...), including shaping scientific epistemology, political systems, and family relations, and defining value systems, etc. Philosophical ideas could be seen as performative. I mean by that that philosophical ideas have an impact on social actions and produce change in social structures through their adoption, use, or discussion. An analogy would be that philosophical ideas are “programming” long-term society. Similarly, at a much smaller time scale, HCI, interaction design, and computer science concepts, visions, software, and techniques are also performative. From Vannevar Bush's Memex to Berners-Lee's Web, from Alan Kay's Dynabook to Apple's iPad, from Douglas Engelbart's “mother of all demos” to so many other things, we can see how HCI ideas create new social structures and organizations (Wikipedia, OpenStreetMap, Ushahidi, Apple, Facebook, Google, Autodesk, Adobe, Tableau), new ways of living, for better or for worse. Both have the power to shape the world, but in many different ways... Maybe philosophy could offer HCI some ways to reflect on what futures to shape?

So what makes this gap hard to bridge? And what can we do? To me these bridges seem difficult to build for multiple reasons: 1) unquestioned assumptions in both domains, 2) the level of indirection in the application of concepts of one to the other, 3) differences of contributions and social structures. Before interaction between the two fields, some common ground should be built. However, implicit assumptions made by the field or by the researchers themselves often limit the construction of this common ground. In both domains, implicit assumptions, tacit knowledge, or positions could be part of the gap. For instance, in philosophy, making sense of the world often seems politically positioned. In HCI, the political questions are often – not always – hidden inside implicit assumptions. My hypothesis is that the difference in level of abstraction between philosophy and HCI creates some level of indirection when one inspires the other. And the mapping between a philosophical concept and its technological applications is neither perfect nor direct, and sometimes difficult to explain, maybe subjective.
For instance, if one tried to design a new pointing technique, or a cooperative system, or a visualization based on Bernard's concept of “trans-individuation,” the process of interpretation and transformation of the concept into various intermediate forms of knowledge before it became an artefact or technique would really be indirect and not linear. Maybe those transformations are worth documenting... Maybe this could be an avenue for future research about philosophy-driven design in HCI. The notions of contributions, as well as the social structures, are really different in HCI and philosophy, and it could be really cumbersome for a PhD student to try to make contributions in both domains. Well, at least when I was a PhD student I did not feel like I could be as good in both domains. Maybe it is different for tenured faculty members, who are less stressed by the time pressure of succeeding at the beginning of their career. Also, we could imagine that in the future, HCI papers could have a philosophical section, to allow authors to elaborate a philosophical critique of the research domain. Philosophy and other domains will probably always influence HCI and computer science, and HCI and computer science will probably always influence philosophy and other domains; I just hope that instead of having user-centred design, which is now transformed by IT companies into profit-centred design, maybe we could also have a bit more vision-driven design, or philosophy-driven design. Expanding the HCI research space and design space could happen through inspiration from other fields such as philosophy, including Bernard Stiegler's legacy.

REFERENCES

[1] Philip Agre. 1997. Computation and Human Experience. Cambridge University Press, Cambridge.
[2] Bernard Stiegler. 2012. Uncontrollable Societies of Disaffected Individuals. Polity, Oxford.
[3] Hubert Dreyfus. 1991. Being-in-the-world: A commentary on Heidegger's Being and Time, Division I. MIT Press, Cambridge.
[4] Hubert Dreyfus. 1992. What Computers Still Can't Do: A critique of artificial reason. MIT Press, Cambridge.
[5] Christopher Frauenberger. 2019. Entanglement HCI: The Next Wave? ACM Transactions on Computer-Human Interaction (TOCHI) 27, 1 (2019), 1–27.
[6] Harry Halpin, Andy Clark, and Michael Wheeler. 2013. Philosophy of the Web: Representation, enaction, collective intelligence. In Philosophical Engineering: Toward a philosophy of the Web, Harry Halpin and Alexandre Monnin (Eds.). John Wiley and Sons, New York City, 21–30.
[7] Harry Halpin and Alexandre Monnin. 2016. The decentralization of knowledge: How Carnap and Heidegger influenced the Web. First Monday 21, 12 (2016).
[8] Martin Heidegger. 1954. The question concerning technology. In Vorträge und Aufsätze.
[9] Martin Heidegger. 2002. On Time and Being. University of Chicago Press, Chicago.
[10] Samuel Huron, Petra Isenberg, and Jean-Daniel Fekete. 2013. PolemicTweet: Video annotation and analysis through tagged tweets. In IFIP Conference on Human-Computer Interaction. Springer, Berlin, 135–152.
[11] Edmund Husserl. 1970. The Crisis of European Sciences and Transcendental Phenomenology: An introduction to phenomenological philosophy. Northwestern University Press, Boston.
[12] Petter Karlström. 2006. Existentialist HCI. In Proceedings from CHI 2006, Reflective Design Workshop. ACM, New York City.
[13] John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon. 1955. A proposal for the Dartmouth summer research project on artificial intelligence.
[14] Lawrence Page, Sergey Brin, Rajeev Motwani, and Terry Winograd. 1999. The PageRank citation ranking: Bringing order to the web. Technical Report. Stanford InfoLab.
[15] Ned Rossiter and Geert Lovink. 2006. Organized networks and nonrepresentative democracy. In Reformatting Politics: Information technology and global civil society, Jodi Dean, Jon Anderson, and Geert Lovink (Eds.). Routledge, Milton Park.
[16] Eric Schmidt and Jonathan Rosenberg. 2014. How Google Works. Hachette, London.
[17] Erwin Schrödinger. 1992. What is Life?: With mind and matter and autobiographical sketches. Cambridge University Press, Cambridge.
[18] Bernard Stiegler. 1998. Technics and Time: The Fault of Epimetheus. Stanford University Press, Palo Alto.
[19] Bernard Stiegler. 2013. Die Aufklärung in the age of philosophical engineering. In Digital Enlightenment Forum Yearbook, M. Hildebrandt, K. O'Hara, and M. Waidner (Eds.). IOS Press, Amsterdam, 29–39.
[20] Bernard Stiegler. 2015. States of Shock: Stupidity and knowledge in the 21st century. John Wiley and Sons, New York City.
[21] Bernard Stiegler. 2018. Automatic Society: The Future of Work. John Wiley and Sons, New York City.
[22] Bernard Stiegler. 2018. The Neganthropocene. Open Humanities Press, London.
[23] Bernard Stiegler. 2019. The Age of Disruption. Polity, Oxford.
[24] Bernard Stiegler. 2020. Qu'appelle-t-on panser? La leçon de Greta Thunberg. Les Liens qui Libèrent, Paris.
[25] Leila Takayama. 2017. The motivations of ubiquitous computing: Revisiting the ideas behind and beyond the prototypes. Personal and Ubiquitous Computing 21, 3 (2017), 557–569.
[26] Terry Winograd and Fernando Flores. 1986. Understanding Computers and Cognition: A new foundation for design. Ablex Publishing, Norwood.

FOOTNOTES

1. https://enmi-conf.org
2. https://web.hypothes.is/
3. https://tools.ietf.org/html/rfc6120
4. https://we.riseup.net
5. https://datatracker.ietf.org/wg/mls/about/
6. https://nextleap.eu
7. https://polemictweet.com/
8. https://www.iri.centrepompidou.fr/outils/lignes-de-temps-2/