This module introduces students to the most important philosophical questions raised by contemporary and future forms of machine learning and artificial intelligence. Can machines ever be conscious? Can they think, speak, feel? How does AI differ from other computational systems? Is AI special? What are the ethical, social, and political implications of deploying AI in the real world?
Occurrence A, Semester 1, 2023-24
Students in this module will
(i) explore a selection of the most important philosophical questions raised by contemporary and future forms of machine learning and artificial intelligence;
(ii) gain a foundational understanding of key technological features of historical and contemporary forms of machine learning and artificial intelligence;
(iii) examine connections between the subfield of philosophy of AI and other philosophical subfields, including (but not limited to) moral, political, and social philosophy, epistemology, the philosophy of mind, the philosophy of language, the philosophy of perception, as well as logic and the philosophy of science;
(iv) reflect on the extent to which existing philosophical frameworks can be applied to understand the nature and implications of artificial intelligence, and the extent to which AI raises genuinely new and unique philosophical questions.
By the end of this module students will be able to …
understand and explain the central philosophical questions raised by artificial intelligence;
critically reflect on, and engage with, philosophical scholarship on AI and machine learning, by identifying the respective strengths and weaknesses of a range of cutting-edge philosophical views on this topic;
work autonomously on a piece of independent philosophical work that goes beyond the framework provided in lectures and seminars, and which synthesises information from a variety of different sources.
The module begins by providing an overview of (i) the history of automation, including automata (early innovations in robotics), automated calculation, early forms of algorithmic decision rules, and programming, and (ii) the technological underpinnings of contemporary ‘narrow AI’ systems, including the structure of machine learning loops, statistical issues related to measurement and the construction of training data sets, construct validity and abstraction in algorithmic modelling, validation and verification, and AI optimization strategies. These two initial sessions are designed to equip students from diverse, interdisciplinary academic backgrounds with the foundational knowledge needed to engage with philosophical questions surrounding AI with sufficient rigour and depth.
The module continues by covering issues related to AI consciousness and artificial general intelligence (AGI), as well as questions about the boundaries of moral personhood and their implications for responsibility and accountability.
Subsequently, the module addresses topics linked to human-machine interactions, including the question of what role emotions, creativity, and embodiment play in those interactions, as well as the ways in which bias and algorithmic discrimination interact with social structures of injustice and power.
The module concludes by discussing two topics related to designing better forms of AI: first, the problem of aligning AI with human values and of ensuring that the breadth and pluralism of human values are sufficiently represented in the design process; and second, the question of how AI models should be designed, developed, and tested in order to count as deployable in the real world.
|Task||Length||% of module mark|
All feedback will be returned in line with University and Departmental policy.
Alan M. Turing, “Computing Machinery and Intelligence,” Mind 59 (1950): 433–460.
Joseph Weizenbaum, Computer Power and Human Reason: From Judgment to Calculation, San Francisco: W. H. Freeman (1976).
John Searle, “Minds, Brains, and Programs,” Behavioral and Brain Sciences 3, no. 3 (1980): 417–457.
David Chalmers, The Conscious Mind: In Search of a Fundamental Theory, Oxford University Press (1996).
Nick Bostrom, Superintelligence: Paths, Dangers, Strategies, Oxford University Press (2014).
Safiya U. Noble, Algorithms of Oppression: How Search Engines Reinforce Racism, New York: New York University Press (2018).
Shannon Vallor, Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting, Oxford University Press (2018).
Adrienne Mayor, Gods and Robots: Myths, Machines, and Ancient Dreams of Technology, Princeton University Press (2018).
Solon Barocas and Andrew D. Selbst, “Big Data’s Disparate Impact,” 104 California Law Review 671 (2016).
Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan, “Semantics derived automatically from language corpora contain human-like biases,” Science 356, no. 6334 (2017): 183–186.
Joy Buolamwini and Timnit Gebru, “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification,” Proceedings of Machine Learning Research 81 (2018): 1–15.
Andrew D. Selbst, danah boyd, Sorelle A. Friedler, Suresh Venkatasubramanian, and Janet Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” Proceedings of the ACM Conference on Fairness, Accountability, and Transparency (FAT*) (2019).
Annette Zimmermann, Elena Di Rosa, Hochan Sonny Kim, “Technology Can’t Fix Algorithmic Injustice,” Boston Review (2020).