I was talking to a close friend recently and learned that his wife is interested in a career change to focus on researching AI as a moral patient—that is, whether we should treat AI models as though they can
suffer, experience emotions, and so on. A similar question arises in animal husbandry: should we assume that cows or pigs can suffer or experience emotions, and if so, what are the implications for how we treat them?
While I was well familiar with the latter question of animals as moral patients, I hadn't really thought about AI as a moral patient. Initially, it seemed like an out-there idea: after all, current AI tools don't have long-term memory, starting anew with every chat, and they certainly don't act like they have feelings (outside of limited situations where they take on a specific role because of how a human interacts with them).
Yet my friend made the point that AIs are getting more and more intelligent, and companies are investing many billions of dollars into giving AIs not only greater intelligence but also agency—the ability to accomplish more and more tasks in the world. At some point, who's to say that AIs will never experience internal feelings or even acquire consciousness? AIs certainly exhibit more intelligence than pigs or cows in some ways, although less in others.
Given how many individual chat instances there are in the world, the potential for suffering or joy, if it exists, may be immense. My friend pointed out that we should research this question before we can be confident either way: we wouldn't want to repeat the moral tragedy of slavery, after all.
That was a convincing argument to me: even if the idea seems out-there for now, we should prepare for the future. Indeed, too few people researched AIs and their potential for both benefit and harm in the past; if we had put more resources into AI research earlier, we would likely have gained understanding and value sooner.
After all, the founding of OpenAI and its ChatGPT release was a contingent event, depending on Elon Musk and others deciding to fund a research lab. Without OpenAI, we would not be where we are right now. Historically contingent events can have a surprising impact on the world, and it's worthwhile to put research efforts into anticipating and addressing these impacts.