What are the Red Flags for Neural Network Suffering?

Authors

  • Marius Hobbhahn, Jan Kirchner

Full text (open access)

Abstract

  • What kind of evidence would we need to see to believe that artificial neural networks can suffer? We review the neuroscience literature, investigate behavioral arguments, and propose high-level considerations that could shift our beliefs. Of these three approaches, we believe that high-level considerations, i.e. understanding under which circumstances suffering arises as an optimal training strategy, are the most promising. Our main finding, however, is that the understanding of artificial suffering is very limited and should likely get more attention.

Date

  • September 2022

Author Biography

  • Marius is an AI safety researcher with a background in cognitive science and machine learning. He is currently finishing his PhD in Bayesian machine learning and conducts independent research on AI safety with grants from multiple organisations. He writes at his blog https://www.mariushobbhahn.com/ and cares about Effective Altruism.

  • Jan is a researcher of minds, artificial and biological, with a background in cognitive science and computational neuroscience. Having researched the early development of the brain during his PhD, he is now working towards aligning artificial intelligence with human values at OpenAI. He writes the blog “On Brains, Minds, And Their Possible Uses” and cares about doing good, better.

Areas

  • Biology, Scientific Ethics, Technology