This talk addresses some of the philosophical implications of computer programs that are no longer constrained by the limits of human knowledge. I will understand this freedom from human knowledge as a form of autonomy from human abstraction. My case study will be contemporary artificial intelligence research in deep learning. Deep learning techniques are highly popular today because of their successful results; these techniques, however, operate in computational ways that are opaque and often illegible. This black-box character of deep learning, I will argue, is a technical condition that asks us to reconsider the abstractive nature of these technologies. In this talk I will do so by entering debates about explainability in artificial intelligence, and thus by considering the ways in which technoscience and technoculture are addressing the possibility of re-presenting algorithmic procedures of generalization and conceptualization to the human mind. I will then mobilize the notion of incommensurability (originally developed within debates in the philosophy of science) in order to engage with the onto-epistemic discrepancy between the abstractive choices of humans and those of computing machines.
Beatrice Fazi is a Lecturer in the School of Media, Film and Music, and a faculty member of the Sussex Humanities Lab, at the University of Sussex (UK). Her work focuses on the ontologies and epistemologies engendered by contemporary technoscience, particularly in relation to issues in artificial intelligence and computing. Her recent monograph, Contingent Computation: Abstraction, Experience, and Indeterminacy in Computational Aesthetics, was published in 2018 by Rowman & Littlefield International.
Sponsored by the MS Program in Data Analysis & Visualization, the MA Program in Digital Humanities, and GC Digital Initiatives.