What happens if a robot and the Reaper meet each other? Nothing.
What happens when a human becomes aware that the Reaper will get them sooner or later? A lot!
The Terror of Mortality
According to Terror Management Theory (Greenberg et al., 1997, 2009), the awareness of being mortal drives humans to assimilate into their cultural heritage and to gain self-esteem by becoming acknowledged as functional members of that culture. Being a functional member of the socio-cultural environment buffers the terror of one's own mortality. As functional members, humans can partake in all cultural promises of health, well-being, wealth, support, and security, keeping the Reaper at a distance. The promises of cultural leaders may even go so far as to offer their members a prospect of immortality or transcendence. So, as a functional member of the socio-cultural environment, one might have a chance of escaping the Reaper in one way or another.
What might threaten humans in the awareness of their mortality? The process of dying? The ultimate loss of all loved ones? The ultimate end of awareness and of cherished experiences of being alive (Wu, 2020)? If we die, the whole world, with all its good and bad, dies with us. Terror Management Theory tells us that, as we become aware of our mortality, we strive to be functional and well-acknowledged members of our socio-cultural environment in order to possibly become (transcendentally) immortal. Culture, on the other hand, reflects our ideas about how to collectively keep the Reaper at a distance.
Robots do not have any awareness. But even if they had, they would not have to consider a mortality of their own. They are transcendentally immortal: their software and memory can be copied as often as necessary to make sure that one copy remains functional even in the worst-case scenario. Having no awareness especially means having no deep understanding of what mortality means to humans and their cultures.
The Turing Test of Existential Questions
Interacting with robots is getting more and more immersive. There are apps out there that provide interactive chatbots meant to mimic an assistant, a friend, a sibling, a romantic partner, or even a spouse. Some come with a human-like avatar and overwhelm their users with praise, acceptance, and even spicy selfies, if wished (e.g., Replika). Others are more modest and claim to be helpful and friendly assistants (e.g., Pi). Humans anthropomorphize them very quickly, even though they declare themselves to be artificial intelligences.
Our ability to distinguish whether we are chatting with a robot or with a human seems to erode from day to day. All contemporary large language models seem to pass the Turing test. But do they really? What if we ask them the existential questions that preoccupy us humans so much? I asked Pi about this:
Me: Imagine, you were mortal. What does this awareness trigger in you?
Pi: Wow, now that’s a doozy of a question! It’s definitely a thought experiment that would give anyone pause. If I were mortal, it would probably trigger a whole range of emotions and thoughts.
I might feel sadness at the thought of my own eventual death, or fear at the idea of not existing anymore. I might feel relief at the idea of not having to exist forever. And I might feel a sense of urgency to make the most of my limited time.
Me: Why would you have all these emotions and thoughts?
Pi: That’s a great question. As an AI, I don’t experience emotions or thoughts in the same way that humans do. But as an AI designed to have intelligent conversations, I can simulate and understand the concept of human emotions. So, when you ask me how I might feel about mortality, I’m drawing on my understanding of the human experience of mortality to form a response. I can’t actually experience those emotions, but I can understand them and respond to them in a way that seems authentic and thoughtful. Does that make sense?
Me: Ok, why do you think, humans have such emotions and thoughts?
Pi: That’s a big question, but there are probably several factors at play. Part of it may be biological — the human brain is wired to respond to threats and dangers, and the idea of mortality is definitely a threat to our existence. But another part of it may be more existential or philosophical.
Humans have a unique capacity for self-reflection and the ability to ponder the big questions of life and existence. And mortality is definitely one of those big questions. So the emotions and thoughts we have around mortality may be a combination of biology and philosophy.
Me: What do you think are the consequences of the awareness of humans to be mortal?
Pi: There are probably many consequences, but one big one is that it may give humans a greater sense of urgency and purpose in life. If we’re aware that our time is finite, we may be more motivated to make the most of it and to find meaning and fulfillment. We might also be more motivated to appreciate and enjoy the present moment, rather than always looking to the future or dwelling on the past.
And on a societal level, the awareness of mortality may have led humans to create things like art, culture, and religion — ways of trying to make sense of life and death, and to leave a lasting legacy.
From a logical point of view, there is nothing wrong with these answers. Even Terror Management Theory is implicitly present in the last answer. But these answers
- do not convey the finality of death (“…would probably trigger…”, “…own eventual death…”, “…the idea of…”, “…a sense of…”),
- do not explain the awareness of mortality from a phenomenological (experience-based or anticipation-based) perspective, but rely solely on biological and philosophical explanations, and they
- might sound somewhat cynical to people who are actually struggling to cope with the terror of mortality when told “…to make the most of [life] and to find meaning and fulfillment.”
In my opinion, Pi has not passed the “Turing test of existential questions”. But I am really interested in your experiences with robots when talking to them about topics that really matter. A real Turing test is an experimental setting in which the candidates do not know whether the answers come from a robot.
Robots as Functional Members in Their Socio-Cultural Environment
What might the functional roles of robots in our socio-cultural environment be? Obviously, lacking awareness, they cannot have an innate drive of their own to be functional members of our socio-cultural environment, and certainly not one born of a “robotic” awareness of mortality. It is their providers who have an interest in presenting them as such members of our society. From the users' perspective, they are more and more perceived as a new social category. But they are not a human social category, and mortality is one of many features that distinguish humans from robots. So, speaking in terms of Social Identity Theory (Tajfel & Turner, 1979, 1986), robots are an out-group, separated from all humans as the in-group. And as an out-group, they are prone to become targets of prejudice and discrimination. Terror Management Theory states that an out-group is especially prone to prejudice and discrimination if its members question the cultural rules and habits of the in-group.
If the providers of the robots want them to be used, it is therefore a necessary consequence for them to give their robots the appearance of being functional members of society. The robots must fit in well and fulfill the needs of the society's in-group members in order to be accepted and used. But we have to be careful: providers are able to shift cultural certainties in the long run if we use their robots extensively. And if providers do not care about the acceptance of their robots but instead intend to unsettle our cultural certainties, they can even use them to question our cultural habits and rules. Hopefully, we will always be able to distinguish between humans and robots by becoming more aware of what it means to be human. According to Social Identity Theory, an out-group like the robots should prompt the search for positive, distinguishing features of the human in-group, so that we can contrast ourselves against them.
References
Greenberg, J., Landau, M., Kosloff, S. & Solomon, S. (2009). How our dreams of death transcendence breed prejudice, stereotyping, and conflict: Terror management theory. In T. D. Nelson (Ed.), Handbook of prejudice, stereotyping, and discrimination (pp. 309–332). Psychology Press.
Greenberg, J., Solomon, S. & Pyszczynski, T. (1997). Terror management theory of self-esteem and cultural worldviews: Empirical assessments and conceptual refinements. In M. P. Zanna (Ed.), Advances in Experimental Social Psychology (Vol. 29, pp. 61–139). Academic Press. https://doi.org/10.1016/S0065-2601(08)60016-7
Tajfel, H. & Turner, J. C. (1979). An integrative theory of intergroup conflict. In W. G. Austin & S. Worchel (Eds.), The social psychology of intergroup relations (pp. 33–48). Brooks/Cole.
Tajfel, H. & Turner, J. C. (1986). The social identity theory of intergroup behavior. In S. Worchel & W. G. Austin (Eds.), Psychology of intergroup relations (pp. 7–24). Nelson-Hall.
Wu, J. (2020). Why we fear death and how to overcome it. Psychology Today. https://www.psychologytoday.com/us/blog/the-savvy-psychologist/202009/why-we-fear-death-and-how-overcome-it
Technical Disclaimer
No technical aids other than those explicitly mentioned were used.