Ethics and the Future of Intelligence

COMP90087 The Ethics of Artificial Intelligence is a Master’s subject at the University of Melbourne designed as an introduction to moral philosophy and applied ethics for students with a technical background. It offers students who will go on to contribute to building AI systems of the future a framework for evaluating, critiquing, and designing AI technology in a socially responsible manner.

Today’s AI technologies raise many important social and ethical issues, and these comprise much of the content of this subject. But as we project developments in AI technology into the future, we can also foresee new and different ethical issues arising.

Accordingly, I was invited to give a guest lecture for the final week of COMP90087 in 2024, on ethical questions raised by potential future advances in AI.

Here’s a public version of the lecture recording along with the learning outcomes and the readings I posted for students in advance of the lecture.

Lecture recording

I’m trying something different from YouTube and sharing this recording via the federated peer-to-peer video streaming platform PeerTube. Thanks to the instance Everything Video for welcoming me and helping me share this recording!

Download: Lecture slides PDF (the video itself can be downloaded from the Everything Video website).

Learning outcomes

At the end of this module, you should be able to:

  1. Understand and describe the concepts of narrow AI, general AI, and superintelligence.

  2. Understand and describe potential positive and negative consequences of building a superintelligent AI system and “the alignment problem.”

  3. Apply established ethical theories to reason about the question: “Should society build a superintelligent AI system?”

Readings

Note: Reading (3) is just four pages long!

With the exception of readings (2) and (7), all of these readings are available from the publishers’ websites, though you may require an institutional affiliation to access them.

On the possibility of human-level AI (and beyond):

  1. Alan M. Turing, 1950, “Computing machinery and intelligence”. Mind, LIX(236): 433–460. DOI: 10.1093/mind/LIX.236.433.

    The classic paper in which Turing proposes the “imitation game” (known today as the “Turing test”), a benchmark for AI systems to communicate in a convincingly human manner, and confronts several objections to the possibility that a future AI system could pass the test.

  2. Nick Bostrom, 2014, Superintelligence: Paths, Dangers, Strategies. Oxford University Press.

    A book-length treatment from Bostrom, a philosopher and futurist, on the possibility of implementing a superintelligent AI system (and some possible consequences).

On risks arising from advanced AI systems:

  3. Norbert Wiener, 1960, “Some moral and technical consequences of automation”. Science, 131(3410): 1355–1358. DOI: 10.1126/science.131.3410.1355.

    A short and sweet essay from Norbert Wiener, the pioneer of cybernetics and a deep thinker about the relationship between technology and society, in which he points out that future advanced AI systems may become difficult to control.

  4. Eliezer Yudkowsky, 2008, “Artificial intelligence as a positive and negative factor in global risk”. In Global Catastrophic Risks, Oxford University Press, 308–345. DOI: 10.1093/oso/9780198570509.003.0021.

    Decades later, Eliezer Yudkowsky looked around at modern AI systems and thought: it’s time to take this control problem seriously; the consequences could be severe. This chapter contains some of Yudkowsky’s extensive writings on this topic.

  5. Andrew Critch and David Krueger, 2020, “AI research considerations for existential safety”. Preprint arXiv:2006.04948.

    More recently, Critch and Krueger put together this extensive research agenda that lays out a more robust case for risks from advanced AI systems.

On ethical questions raised by the advent of advanced AI systems:

  6. Nick Bostrom and Eliezer Yudkowsky, 2014, “The ethics of artificial intelligence”. The Cambridge Handbook of Artificial Intelligence, 316–334. Cambridge University Press. DOI: 10.1017/CBO9781139046855.020.

    A complement to the lecture, covering additional ethical questions that arise in a world with AI systems of human-level or super-human intelligence.

On the broader topic of ethics and existential risk (not just from AI systems):

  7. Toby Ord, 2020, The Precipice: Existential Risk and the Future of Humanity. Bloomsbury.

    Toby Ord was once a student of computer science and philosophy at the University of Melbourne. He has since studied philosophy at the University of Oxford and has contributed to analysing ethical questions about the long-term future of humanity. This book gives Ord’s analysis of the science of extinction risk from various sources.