How to Create a Golem: From Artificial Intelligence to Artificial Human
The following essay utilizes the kabbalistic idea of the sefirot to provide a theory of personhood and to analyze the question of whether it is possible to create a digital person.
__________
The word “golem,” which in Hebrew has the connotation of “raw material,” is used in the Bible (Psalms 139:15-16) to refer to the imperfect and unfinished human being wrought in secret by God from “the lowest part of the earth” on the sixth day of creation. In Jewish tradition the golem came to be thought of as an artificial anthropoid, presumably created from clay or mud by a righteous and spiritually endowed individual and infused with life through such acts as the incantation of divine names or by placing the word “emet” (truth) into the anthropoid’s mouth or onto its forehead.
Image: Judith Joseph, Golem Awakening (https://www.judithjosephart.com/)
While the creation of a golem has for centuries been a subject of folklore and legend, with advances in artificial intelligence the question of an artificial human has become a matter of scientific and philosophical concern. In this paper I propose to explore the question of what it would actually take to create a golem, an artificial person. I will describe what I take to be the “ingredients” needed to produce an artificial humanoid, and I will argue that while such production may be in the realm of logical possibility, it must go far beyond what is required for the production of artificial intelligence. My aims in considering this question are twofold: The first is to expand the conversation around artificial intelligence and to provide an outline of the considerations that I believe must be taken into account prior to granting AI ethical status and the designation of “personhood.” My second aim is to outline a theory of personhood by describing the psychological, axiological, and dynamic characteristics that are the essential constituents of the human person.
The “Golem”
The Jewish interest in the creation of a “golem,” an artificial anthropos, dates back to the Talmud, where in Tractate Sanhedrin it is said that the righteous, if they so desired, could create both a world and a man, and that the rabbinic sage Rava had indeed created a human being. However, according to the Talmud, Rava’s artificial human lacked the power of speech, an imperfection that resulted from Rava’s, and presumably all men’s, iniquities.[1] What is suggested by this Talmudic passage is that only a being of perfect goodness and righteousness, i.e. a God, could be empowered to create a full human being, one who exhibited the defining human capacity for speech and who would thereby be endowed with a soul.
Later, as Moshe Idel has pointed out, the kabbalists of the school of Isaac the Blind discussed what would be needed to infuse a soul into the otherwise lifeless body of the artificial anthropoid. Their conclusion was that only through the powers of the ten sefirot, which, for the kabbalists, are the archetypes of mind and value that comprise the “middot” or traits of God, the soul of humanity, and the elements of the world, could an anthropos become fully animated with speech and soul.[2] In this way the golem, like Adam, would be made in Tzelem Elokim, the image of God, as it would embody the same archetypes that are intrinsic to divinity. Indeed, Idel points out that there is a Jewish tradition within which the term “tzelem” (referring to the divine “image”) is identified as equivalent to the term “golem.”[3] Idel suggests that it was via the notion of the golem that Jewish philosophers of a mystical bent were able to unite the Aristotelian notion of hyle, the original formless and soulless matter of the cosmos, with the Platonic realm of forms or ideas, which, in the Kabbalah, were recast as the sefirot.[4]
How the artificial humanoid was to be infused with the sefirot and the powers of speech and soul was, for the kabbalists, thought to depend upon such things as the ritual purity and moral righteousness of the one who would take on this endeavor, the use of virgin soil that had never previously been ploughed or otherwise disturbed to form the humanoid’s body, and the magical incantation of divine names that would activate and ensoul the golem as a living, breathing human. The rabbis and kabbalists could not have dreamed of the possibility of a technological (as opposed to magical or spiritual) foundation for creating both anthropoids and worlds, nor of the production of supposed “digital persons,” many of “whom” now populate the Internet. Neither were they, in my view, in a position to develop a conception of the psyche that would do full justice to their remarkable notion of the sefirot, their archetypes of mind, value and creation. In what follows I develop a contemporary conception of these sefirotic archetypes, which I will then utilize to articulate an understanding of actual and artificial personhood. My “technique” for creating an artificial human differs considerably from those described in the Talmud and other Jewish sources, but as will become clear, it is inspired by the kabbalistic doctrine of the sefirot. My account will be largely theoretical, but it will, in a broad way, be practical as well, as it is intended to serve as a guide to (and test for) those who believe they can, and might be inclined to, create “digital persons” that are on a complete par with ourselves. And in the course of my discussion, I will question whether the creation of such a “digital person” is even possible.
Revisioning the “Sefirot”
As we have seen, the kabbalists envisioned the sefirot as spiritual-axiological entities that served as the elements of creation and the human soul. While the names and precise characteristics of the sefirot varied, and the same sefirah often carried more than one name or referred to more than one value or trait, they were regarded as ten in number, and a typical listing of them involved: Keter/Ratzon/Ta’anug (crown, will, desire, delight), Chochmah (wisdom), Binah (intelligence, understanding), Chesed (loving kindness), Din/Gevurah (judgment, power), Tiferet/Rachamim (beauty, compassion), Netzach (endurance), Hod (splendor), Yesod (foundation), and Malchut/Shekhinah (royalty, receptivity, femininity, time). The kabbalists held that these ten sefirot comprised the body and soul of the primordial human, Adam Kadmon (which they held to be a model for both humanity and the universe), and that they are reflected in each individual human body and soul.
While it is not my intention or purpose to examine or critique the traditional kabbalistic accounts of the sefirot in any depth, even a cursory examination of the above listing reveals that the kabbalists regarded the sefirot as “powers” that straddled the boundary between (or encompassed both) psychological functions and values, and it is precisely in this capacity, as archetypes of mind and value, that they have drawn my interest. While I believe that the kabbalists’ specific tally of these archetypes is rooted in the spiritual and social categories of their time,[5] the general account of the human soul as comprised of archetypes of mind and value is an important insight that can serve us well in any account of what it means to be a person, human, digital or otherwise. I believe that we will do well to follow this general account in developing a contemporary psychological theory of personhood.
By way of anticipation, it is not only the kabbalists’ understanding of the nature of the sefirotic elements of the human soul that serves as a basis for our own view, but their understanding of the sefirotic dynamics as well. What I have in mind is the doctrine of the “breaking of the vessels” (shevirat hakelim) put forth by the kabbalist Isaac Luria (1534-1572) and his followers. Luria held that it was necessary for the sefirot to become displaced and shattered, and for the “sparks” of divine light resulting from the rupture to descend into a nether realm and become enveloped by the shards of the broken vessels, entrapping and concealing them from human view and praxis. According to Luria and his followers, it is humanity’s divinely appointed task, through the performance of ethical and spiritual acts (mitzvot), to restore, emend and perfect the broken vessels, and in the process redeem and perfect both humanity and the world. I will later return to the “breaking of the vessels” and the restoration/emendation (tikkun) of the sefirot in order to complete my account of personhood.
I believe that a revisioning of psycho-axiological categories, constructed along the lines of the sefirot, can provide us with the beginnings of an account of what it means to be a person. With this in mind, I have constructed not ten but fourteen “modes of mind and value” that I will examine in light of the two fundamental concerns of this paper, the nature of artificial and actual personhood. I have previously discussed these modes of mind and value in a purely psychological context.[6]
I will explain both the derivation and explanatory power of these fourteen modes in a moment, but I will begin with their enumeration: experience (“qualia,” feeling), being (substance, existence), desire, action, cognition, equality/inequality, symbolization, (navigation in) time, transcendence (imagination), reflection, personal identity, relationality, sociality, and limitation. As we proceed, we will see that each of these 14 “modes” is correlative to particular (1) psychological functions, (2) values, and (3) theories of “the meaning of life,” and it is precisely these connections that provide them with the power to account for the “person.”
There is nothing strange or occult about these 14 modes of mind; indeed, they are very familiar to each of us, and it is their very familiarity that serves as the guarantee of their foundational status. One cannot but be familiar with experience, existence, desire, action, cognition, equality-inequality, symbolization, time, imagination, reflection, personal identity, relationality, sociality, and limitation. While these modes or functions are typically intertwined in our lives, they comprise what can be thought of as a common-sense account of the human psyche. It is this set of common-sense categories that will serve as a guide for understanding the human person, and, hence, the possibility of a transition from artificial intelligence (AI) to artificial humanity (AH).
We need not, however, simply appeal to common sense in order to arrive at a foundation for the modes of mind model. We can also adopt a “transcendental” approach to their derivation, an approach first suggested by Kant and the Neo-Kantians, which begins with the question of what the necessary conditions are for the existence of human subjectivity and, moreover, personhood. When we adopt this procedure, we soon recognize that to be a fully conscious human subject one must “participate” in each of the 14 modes of mind I have described. To be a person, a fully conscious human subject, one must have qualitative experience, i.e. sensations, feelings, etc., have the sense of one’s own existence (and that of other subjects and a world), have desire and purpose, engage in action in pursuit of one’s goals, think and reason (cognition), judge sameness and difference (equality/inequality), utilize language and other symbols (symbolization), imagine or otherwise reach beyond one’s present experience (transcendence), be aware of, navigate, and develop within time, reflect on one’s experiences, desires, actions, and existence, achieve a sense of personal identity, engage in individual relations with others (relationality), participate in a community of other subjects (sociality), and be subject to a host of limitations (including those created by one’s location in space and time, the limits of perception, memory, and intellect, and mortality).
I believe that the 14 modes of mind are implicit in the idea of personhood. While I recognize that there is a degree of arbitrariness in my choice of modes, I am of the view that this scheme, or one very much like it, provides an account of personhood, and that none of these modes can be eliminated without radically impoverishing the person or self. To see why this is the case, one need only reflect upon what a human life would be like without any one of these modes of consciousness.
The philosopher Charles Siewert once suggested a thought experiment in which we are offered the possibility of trading away our “phenomenal experience” for anything we desired. Siewert argued that no one would be willing to make such a trade, and effectively live their life as a “zombie,” for any benefit that they would not also be willing to trade their life for.[7] I would suggest that (with the possible exception of limitation) the same would be true for each of the other modes of consciousness I have outlined. (Who would be willing to live without cognition, desires, action, relationships, etc.?) I have, however, included limitation as a defining mode of human subjectivity because a consciousness that was not limited, that did not engage its environment from the limited perspective of a location in space and time, would no longer be human, and would in effect be what philosophers and mystics have spoken of as an absolute or infinite God viewing reality sub specie aeternitatis.
The Ingredients of Digital Golemhood
The “modes of mind” model provides a categorial scheme that outlines the psychic functions that are the necessary conditions for human subjectivity and personhood. We have thus arrived at a preliminary assessment of what would be required in order to create a “golem,” an artificial human, and, by extension, what would be necessary to move from the production of digital artificial intelligence to the creation of a “digital person.” A golem, to be designated a person, would have to enter into and perform the “functions” of each of the 14 modes of mind, and do so at a level of performance indistinguishable from that achieved by natural human beings. A similar view of artificial consciousness and personhood has been adopted by others who have attempted to articulate the functions[8] or “indicator properties”[9] that would determine whether we should attribute consciousness to AI. It has been argued that the “strong version” of AI, one that could achieve consciousness, must be able to develop such capacities as imagination, emotional intelligence, self-reference and self-reflection, none of which have as yet been effectively achieved at a human level.[10]
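The checklist character of this functionalist criterion can be made concrete with a minimal sketch, offered purely for illustration. It is not a real assessment procedure, and every name in it (MODES_OF_MIND, PersonhoodAssessment, record, passes_functional_test) is a hypothetical construct introduced here, not something drawn from the literature cited above.

```python
# A minimal, purely illustrative sketch of a functionalist ("indicator property") test
# over the 14 modes of mind. All names here are hypothetical and invented for this example.
from dataclasses import dataclass, field

MODES_OF_MIND = [
    "experience", "being", "desire", "action", "cognition",
    "equality/inequality", "symbolization", "time", "transcendence",
    "reflection", "personal identity", "relationality", "sociality", "limitation",
]

@dataclass
class PersonhoodAssessment:
    # mode -> whether the candidate performs it at a human-indistinguishable level
    scores: dict = field(default_factory=dict)

    def record(self, mode: str, human_level: bool) -> None:
        if mode not in MODES_OF_MIND:
            raise ValueError(f"unknown mode: {mode}")
        self.scores[mode] = human_level

    def passes_functional_test(self) -> bool:
        # The functionalist criterion: every one of the 14 modes must be met.
        return all(self.scores.get(mode, False) for mode in MODES_OF_MIND)

assessment = PersonhoodAssessment()
for mode in MODES_OF_MIND:
    assessment.record(mode, human_level=True)  # placeholder judgments, not real evaluations
print(assessment.passes_functional_test())      # True only if all modes are satisfied
```

Even a system that “passed” such a checklist might, as the following discussion argues, remain a “zombie” lacking qualia, values, and a lived history.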
However, for reasons that I am about to consider, even a broadly conceived functionalist or “Turing test” for personhood may not be nearly sufficient to assess the personhood of our digital golem. Many discussions of whether AI can achieve personhood focus upon its capacity for sentience or consciousness. This question began to receive considerable attention at least as far back as the 1970s. In 1980, John Searle put forth his famous (and controversial) “Chinese Room Argument,” in which he claimed that a computer could perfectly translate and converse in Chinese via a purely rote syntactical process without understanding. Searle argued that a digital computer is inherently limited to syntactical exchanges that manipulate symbols without understanding or consciousness.[11] The question of AI understanding and meaning remains unresolved, and it is widely held that current computers remain wholly syntactical, manipulating symbols without grasping “meaning.” While the output of artificial intelligence programs like ChatGPT may be indistinguishable from human discourse, and hence pass a Turing test, AI manipulates symbols and matches patterns on the basis of learned statistical associations without actual understanding. Various approaches to the so-called “symbol grounding problem” have been proposed, but unless and until it is solved, it is argued, computers will lack consciousness,[12] and hence “personhood.”
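The point about purely syntactic symbol manipulation can be illustrated with a deliberately crude sketch. This is not Searle’s own thought experiment, and the tiny rule table below is an invented stand-in for the “rulebook” his argument envisages:

```python
# A toy sketch (not Searle's actual thought experiment) of a system that "converses"
# by pure syntax: it matches input symbols against a rule table and emits output symbols
# without any grasp of what the symbols mean. The rule table is an invented stand-in.
RULES = {
    "ni hao": "ni hao ma?",   # "hello" -> "hello, how are you?"
    "xie xie": "bu ke qi",    # "thank you" -> "you're welcome"
}

def rote_reply(utterance: str) -> str:
    # Lookup and return: symbol manipulation with no semantics attached.
    return RULES.get(utterance.lower().strip(), "qing zai shuo yi bian")  # "please say it again"

print(rote_reply("Ni hao"))  # produces a plausible reply the system does not understand
```

Scaled up to statistical pattern matching over vast corpora rather than a hand-written table, the structural point the argument trades on remains the same: fluent output need not imply understanding.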
It thus might be suggested that AI will attain personhood if it (1) demonstrates functions (capacities and behaviors) that are indistinguishable from those of humans, and (2) has sensations, feelings, understanding and other qualitative experiences. An AI “golem” that met the first of these criteria, i.e. passed a “Turing test” for all elements of human functionality, but failed the second would be a philosophical “zombie,” i.e. an entity that behaved like a person but lacked “qualia.” Such a zombie might, for example, show the outward signs of relating to humans (or other zombies), but would lack the experience, understanding and “feelings” we associate with friendship, love, empathy, and compassion. In creating our golem we want to avoid the mistake of producing a zombie.
Determining whether an artificial hominid has “qualia” and is not a “zombie” is no easy matter. In fact, it mirrors the problem of how it is that anyone, assured of their own qualitative experience, can be assured that other human beings have such experience as well (the philosophical problem of “other minds”). Some philosophers have actually denied the existence of “qualia” and effectively argued against the distinction between truly sentient beings and zombies. Arguments against qualia have often relied on the Wittgensteinian idea of the impossibility of a “private language.” In part, this argument asserts that because (1) we learn language regarding our so-called inner qualitative states through others labeling our behavior and (2) it would be impossible to create a consistent language by directly labeling our inner states and using that to communicate with others, (3) our mental terms cannot refer to such inner states. Others have followed Daniel Dennett in holding that “qualia” are an illusion and play no functional role in human cognition or behavior, and that our mental states are best conceived in terms of their functional roles or as neuropsychological events and processes.[13] I will not consider these arguments in detail as, at least to my mind, they have the quality of denying the existence of an elephant in the room. For example, while it is clear that we initially teach and learn language about “mental states” through the labeling of outward behavior, we all then come to associate such language with our inner qualitative experience of such things as pain, love, sadness, memory, understanding, etc., and then use that language to communicate that experience to others. While there is no external guarantee that we are using that language consistently (or honestly) (e.g. that you and I label the same inner sensation as “pain” or “love”), much of this language would be empty without its internal referents.
Some philosophers, accepting the reality of qualia, have held that qualitative consciousness is a byproduct of complex “re-entrant” information processing (in which output is continually reprocessed as input),[14] and thus leave open the possibility that future AI could reach at least minimal levels of consciousness.[15] Others have held that consciousness is a peculiar development of biological evolution[16] and that we have no reason to believe that it is present in machine models of intelligence. Much has been written on the topic of the purported critical difference between the “natural consciousness” of human beings and the “artificial intelligence” of computers. The former, which arises in the body and brain of a biological being, is, for lack of a better way of putting it, “rooted all the way down.” By this it is meant that the human mind is systematically embedded or “nested” in a series of somatic, cellular, chemical, physical and even quantum processes, with each of the “higher” processes (e.g. at the cellular level) dependent upon processes at the level below it (in this instance, the chemical). The entire system has evolved over billions of years, each level building upon the level “below” it to produce life and, with it, mind. On some views, the qualitative aspect of the human mind is dependent upon, indeed caused by, events and processes that involve a complex interaction between organ, cellular, chemical, atomic and even quantum levels within the human species. While the production of digital intelligence results in a machine that appears to duplicate (and improve upon) several of the functions we associate with human sentience and subjectivity, it is devoid of this “nesting” character. Computer “intelligence” and other functionality arises at the chip level, and the “deeper” chemical, atomic and even quantum[17] processes that operate within its chips are irrelevant to its functioning. It is in this sense that, unlike the human mind, computer intelligence is not rooted all the way down. The idea that digital entities can have qualitative experiences depends on the hypothesis that such rooting doesn’t matter, that it is the processing of the information that counts, and that the medium doing the processing is irrelevant.
These considerations should, I believe, give us some pause about our digital golem. In some sense, to assume that an AI that is functionally an AH is a person might be akin to holding that a stage set of a forest, one that is completely convincing to the naked eye but whose components are made of plastic, is for all intents and purposes the equivalent of an actual forest. Of course, it can be argued that anyone who gets close enough to the stage set will realize that it is made out of plastic. However, something very similar can be said about anyone who gets close to a “digital golem”: they will realize that it is composed of metal rather than flesh, blood and bone.
The idea, as Carl Jung once put it, that “consciousness can be made in a retort” makes the most sense within a metaphysics which holds that information is the basic, indeed only, stuff of the universe, a philosophy which follows, for example, from David Chalmers’ assertion that it hardly matters at all whether we exist in the so-called “natural world” or in a computer simulation.[18] If the entire universe is a digitally constructed apparatus, then all we can do is play our digitally constructed part. On some views, we may (already) be in a computer simulation ourselves, and AI is no more artificial than we are.[19]
An Axiological Theory of Personhood
The question of whether computers can attain qualitative consciousness is, however, not the only challenge we must face in our efforts to construct a digital golem. We must also confront the question of how to fully involve our artificial hominid in value. While the question of value is rarely brought up in connection with the nature of consciousness, it is, in my view, of paramount importance in ascertaining whether and what functionality is sufficient for humanity. I will argue that in order for a digital or other artificial humanoid to be considered human, it must be an “axiological” being, i.e. one that is both immersed in and generative of values.
A wide range of philosophers and psychologists have recognized the intimate relationship between consciousness and value, holding that virtually all value is inextricably linked to consciousness or mind. For example, the British philosopher David Ross held that all values involve “states of mind” or the relationships between such states.[20] More recently, the neuropsychologist and ethicist Sam Harris has stated, “We can know, through reason alone, that consciousness is the only intelligible domain of value,”[21] and Max Tegmark, an MIT physicist whose book Our Mathematical Universe argues that the multiverse is a vast mathematical object, has expressed the view that it is only through conscious beings that meaning and value arise: “through us humans and perhaps additional life-forms, our Universe had gained an awareness of itself.”[22] Finally, as we have seen, Charles Siewert has argued that sentience is essential for value and that life without conscious experiences would be “little or no better than death.”[23]
There can be little doubt, then, that value is dependent upon conscious experience. Nothing, it would seem, can be valued except to the degree that it is apprehended by, or impacts, a conscious, sentient being. However, what is often ignored is that the relationship between consciousness and value works in the opposite direction as well. Consciousness or mind, and here I am including the 14 modes of mind that I have discussed above, are themselves intrinsically bound up with and conditioned by value.
There are good reasons for holding that while, on the one hand, mind is a condition for the actualization of values, values necessarily enter into experience and each of the other modes of mind that I have outlined above. Values, for example, are essential to both perception and cognition. To understand why this is so we simply need to ask why it is that a human subject selects one object over another, or a particular perspective on each of the objects it perceives. Why, for example, do we view a slab of meat, a building, or a painting, to take three arbitrary examples, as the objects just named, as opposed to a bundle of sensations? We do this because our attention, focus and synthesis of our experience are conditioned by the values which dictate the categories through which we encounter and speak about the world. It is highly questionable whether there could be “facts,” “things” or even “information” in the absence of the very axiological categories through which we sort our experience. As Iain McGilchrist has pointed out, our attention, conditioned by values, has an ontological status that is prior to the things we encounter in the world. A mountain is something very different depending on whether one is a navigator, a prospector, a painter, or a spiritual seeker. We might go so far as to ask why it is that we at certain times see a mountain as an entity rather than focus upon the rocks, soil, trees, and leaves, or even the molecules, that comprise it. Or why it is that we see any of these things as opposed to just shapes and colors? Again, our interests, which reflect our values, condition us in this regard. A similar point can be made with regard to each of the other modes of mind I have described. For example, our desires are revelatory of what we value; our relationships are conditioned by such values as love, compassion, hatred, and envy; and our reflection is directed towards determining, prioritizing and achieving the values and goals of our endeavors. In short, even more so than mind, it is values that make us human.
As indicated in Table 1, there is a link between each of the 14 modes of mind I have described and specific values. The modes of mind model enables us to detail a cartography of values that is more comprehensive than those in earlier axiological systems. For example, Ross (1930/2002), who held a view of the mind that was limited to the traditional elements of “cognition, feeling, and conation,” limited the range of essential values to knowledge, pleasure and virtuous disposition and action, which he saw as corresponding to these mental elements. Those who hope to create a golem, an artificial human, should bear in mind that a requirement for true golemhood, i.e. personhood, is the capacity to recognize, actualize, weigh, reason about, and make choices in connection with each of the modes of mind and their associated values.
Table 1: Modes of Consciousness and Values

Mode of Consciousness | Function: Consciousness… | Associated Values | Meaning in/of Human Life
Experience | has qualitative experience, sensation, “feeling” | pleasure, happiness, beauty | Pursuit of subjective well-being; an abundance of pleasure and a minimum of pain; the experience of beauty
Being/Existence | has and posits existence vs. nonexistence, reality vs. illusion | life, health, material resources, wealth, well-being | Pursuit of a long life, health, prosperity, and offspring
Desire | is motivated and directs its interest and attention | satisfaction, fulfillment | The realization, satisfaction, fulfillment (or overcoming) of one’s desires
Action | enacts its desires and aims to achieve goals through intentional behavior | “freedom from…” and “freedom to…” | Pursuit of a life project
Cognition | endeavors to ascertain factual and propositional truths | knowledge, truth, open-mindedness, love of learning | The search for truth, knowledge and intellectual insight
Symbolization/Language | articulates itself in symbols, language, narration | meaning and culture | Immersion in a collective myth or culture
Personal Identity | coalesces in a center of experience, desire and action, a “person” | commitment, perseverance, recognition, achievement, vitality, responsibility, courage, integrity, humility/modesty, temperance, etc. | Actualization of an individual’s personality or self; the development of virtues
Relationality | relates to other minds | love, empathy, kindness, intimacy, friendship, trust, humor, gratitude, compassion, mercy, forgiveness, peace | Love, friendship, kindness, and/or compassion toward others
Sociality | understands itself within a communal and cultural context | familial and communal welfare, cultural identification, humanitarianism | Identification with a community, a nationality, or humanity as a whole
Equality/Inequality | makes judgments of equality/inequality | justice, fairness, lack of bias, opportunity; excellence, depth, greatness of soul, saintliness | The pursuit of excellence and/or justice
Time | experiences itself as inhabiting and navigating within time | “having” and “giving” time, endurance, remembrance and respect for tradition | Fully leading a life within, or escaping, the cycle of time
Transcendence | transcends itself, its situation and what is present through its capacity for imagination | creativity, humanity, vision, hope, gratitude, spirituality, enlightenment | Meaning through creativity, altruism and/or self-transcendence
Reflection | takes itself as an object | perspective, wisdom, insight, self-knowledge | Attaining psychological and/or spiritual insight
Limitation | is limited in experience, perspective, understanding, memory, achievement | humility, acceptance, openness to the unknown | Accepting, even celebrating, death, chaos and the absurd
Our golem, like we ourselves, cannot primarily be a perceptual/cognitive being but must be a being that exists for love, pleasure, happiness, truth, justice, and each of the values that condition and are conditioned by the 14 modes of mind. We might say that humanity is defined by its role as an axiological (value-oriented) being and not, as Aristotle supposed, simply as a “rational animal.” Since the time of Aristotle there has been a prejudice in favor of cognition over all other modes of mind, a prejudice that has conditioned both epistemology and metaphysics and largely sidelined axiology.
The centrality of values in constituting the human subject prompts us to question whether the modes of mind I have described are simply functional or behavioral states and processes. As we have seen, if this were the case, then the project of creating our golem, while still quite formidable, would be simplified, for all that would be necessary for a digital creation (for example) to be a person would be for it to exhibit the behavior and achieve the (functional) operations we expect for each of the 14 modes of mind, and for us to be convinced that it has qualitative conscious experience.
However, in light of our discussion about values and the modes of our conscious life, it can be argued that unless our golem participates in, prioritizes, and realizes values, including values such as pleasure, happiness, beauty, kindness, love, compassion, forgiveness, justice, wisdom, etc., it can hardly be regarded as human. Further, the question of whether our golem is “infused” with such values is integral to the question of whether it is conscious, because, as we have seen (as Sam Harris has put it), “consciousness is the only intelligible domain of value.” AI “zombies” could not participate in the realm of value precisely because they have no conscious experience, and thus, while they may be of value to us, they have no more value to themselves than a mechanical drummer boy, or a stone.
It has been argued that while it is critical to embed basic values, such as respect for autonomy, non-maleficence, fairness, transparency, and accountability, within AI,[24] computers instantiating moral algorithms are neither autonomous nor morally accountable because they lack consciousness and moral comprehension. “Values” for computers are simply prioritized “items on a list”; they are not experienced emotionally, and therefore cannot be grounds for personhood.[25]
Some have argued that, even without the attribution of consciousness, at least a secondary moral and social status might be granted to superintelligent AI.[26] While it is true that in humans the various “modes of mind” may often function outside of awareness, I believe that the significance and value of such unconscious mental activity is contingent upon its potential to enter conscious experience. This is true even for the “cognitive” and “symbolic” values that are most readily associated with AI. “Truth” and “knowledge” are examples of cognitive values, while “meaning” and “culture” are values associated with the symbolic, but none can be considered worthwhile if they are confined within an automaton that has no qualitative experience. Phenomenal experience penetrates each of the 14 modes of mind in the production of values, and it is for this reason that I call them modes of mind or vehicles of “consciousness” rather than “psychological functions.”
Meaning in (Digital) Life
The relationship between the modes of mind and values leads inevitably to the realm of meaning, and particularly to the meanings that provide purpose in human life. Each of the modes of mind, mediated through its associated values, provides avenues for life-meaning. For example, as can be seen in Table 1, the mode of experience is associated with the notion that meaning in life involves the experience of pleasure and/or beauty, action is associated with meaning gained through the pursuit of a life project, relationality with meaning through love, friendship and compassion, etc. For our golem to be human, it too must enter into the modes of mind and their associated values in a manner that affords it the possibility of achieving meaning in its (digital) life.
Thus, while the realization of multiple psychological functions, in conjunction with qualitative experience and participation in values, constitutes an important person-like achievement for our golem, it is not sufficient for it to achieve “personhood.” Our golem must have the capacity to participate in a meaningful existence, and without such participation it is hard to imagine that the golem’s “values” will be deep and integrated.
Earlier I suggested that we must turn to the kabbalists’ understanding of both the nature and the dynamics of their archetypes of mind and value, the sefirot, in order to gain important insights into how to make a golem. I indicated that the kabbalist Isaac Luria introduced the idea that in order for the sefirot to be fully actualized they must be subject to a displacement and shattering that results in their fragments, along with sparks of divine light, falling into a metaphysical realm of darkness, chaos and evil. Luria described how, as a result of this rupture and descent, the sparks are enveloped and entrapped by the shards of the fallen, broken vessels, and how it is humanity’s task, through the performance of ethical, spiritual and other valuational acts, to extract and release these sparks and thereby enter into the process of restoring, emending and perfecting the vessels, which in turn leads to the redemption and perfection of humanity and the world. As I have described elsewhere,[27] the reason that the sefirotic vessels must be displaced and broken, and later restored, emended and perfected, is that, as Rabbi Adin Steinsaltz once told me,[28] it is only in a displaced, broken world, a world on the brink of disaster, that the values of wisdom, knowledge, kindness, compassion, etc. can be realized. An Eden world offers humanity no opportunity to suffer, struggle and think, and it is thus only after the expulsion from that world, after the vessels have been displaced and shattered, and birth and death made a part of human existence, that we can achieve the ethics, value, and meaning that comprise our humanity.
The implications for the creation of a golem, or a digital human person, should be eminently clear. Beyond mimicking human cognition, even beyond being able to mimic the 14 “functions” I have described as the basic categories of human subjectivity, and even beyond meeting the threshold of having “qualia” or felt experience, an artificial hominid would, in order to be a person, have to participate in the pain, suffering, struggles and looming tragedy of human life; it would need to experience the vicissitudes and dramas of birth, development, joy, anguish, decline and the inevitability of death, which ultimately make human life both valuable and meaningful.
Would it be possible to simply “program” values and a human-like personal history into our golem? Or would this be no different than an author writing in a history to flesh out a character in a novel or a movie? Is ersatz history the same as the real thing? Those who claim that “simulated” experience is as valid as “real” experience or who, like the philosopher Nick Bostrom[29] (as well as Elon Musk![30]), hold that we are already living in a simulation will answer that this is a distinction without a difference. This, of course, assumes that the problem of “qualia” has been solved in favor of digitally based conscious experience, and that our AI golems, so programmed, are not zombies. It also assumes that, for example, having the experience of a life with, and then the death of, a parent one never actually met and who was never actually your parent, being programmed into your circuitry, is somehow the same as actually having been their child and going through the pain and sorrow of their death.
That actual history is a precondition for value is suggested by the fact that a computer can be programmed to produce a painting that is visually indistinguishable from the Mona Lisa and yet essentially valueless. Perhaps the same should be said regarding the value of a computer simulation of a human being.
Digital Soul?
Noam Chomsky has argued that the “golem” that we take for artificial intelligence today is an automaton, a probabilistic machine, a kind of super-sophisticated “auto-complete,” one that is axiologically indifferent and ethically blind.[31] It is, I would suggest, a simulacrum of human cognition devoid of morality and values, devoid of immersion in the struggles, joys and tragedy of human life, devoid of participation in the conscious, felt sense of a human life embedded in the cycle of birth, development and death, and devoid of values and what we might call “soul.” What is frightening is that we will soon have a world comprised of a series of AI machines interacting with us and one another, writing plays and poetry, painting pictures, forging sculptures, composing music, solving mathematical problems and making scientific discoveries, and then writing articles and reviews about all of these things, and even creating images of human beings that act as if they are falling in love with us and one another, without an ounce of soul in any of it. Human beings will very likely see “soul” in all of this machinery and be moved by the AI poetry, drama, music, and art, and maybe the poetry, music and art will be “moving,” but we will be (and already are) fooled, just as many are fooled by psychopathic lovers who have no feelings at all. Perhaps humanity will indeed believe that it can download itself into a matrix-like simulation and exist eternally, cut off from the original natural world, and that we can become eternal artificial hominids ourselves. Human beings are not well equipped to distinguish simulated from genuine values and soul. We are terrible lie detectors, and we have from time immemorial been vulnerable to hucksters, scammers, and seducers. We have certainly not evolved to distinguish digital golems from biological human beings, and, as more of us enter into the world of AI friends, lovers and therapists, we are rapidly being taken in by an image, as if we believed that the people we see on screen in motion pictures are really there in the room, interacting with one another and having consciousness and feelings.
Creating a golem is no simple matter. I have in these few pages provided an outline of what might be regarded as a “manual” for the creation of an artificial human. Artificial Intelligence may appear to be a first step, but even this may be a fool’s errand. As I have argued, there are several other steps necessary to achieve Artificial Humanity, one or more of which may be impossible to achieve in the digital medium that has produced AI. However, long before we can determine whether this can be achieved, AI may not only replace us in many of our roles but also pose other dangers that it may well be too late to forestall.
The rabbis of previous centuries were entranced by the idea of an artificial hominid. Some of them considered whether a golem should be granted ethical status,[32] and held that there are serious dangers when humans attempt to create artificial simulacra of themselves. According to Jewish tradition, a golem is not truly intelligent and as such may perform a task requested of it in a literal, and potentially disastrous, manner. The 18th-century scholar Rabbi Jacob Emden wrote that his own 16th-century ancestor, Gaon R. Eliyahu Ba’al Shem of Chelm, created a golem that grew so large that Eliyahu became concerned that it might destroy the world.[33] Eliyahu was able to destroy the creature by removing the divine name from the golem’s forehead, but was injured and scarred in the process. We may not be so fortunate as to be only injured and scarred.
Personhood
Aristotle defined the human person as a “rational animal,” and Boethius defined the person as “an individual substance of a rational nature.” Locke also later included rationality in his definition, and added that a person “can consider itself as itself, the same thinking thing, in different times and places.” More recently, Harry Frankfurt held that what defines a person is the capacity to reflect on one’s desires and have second-order desires about them, and others have described a person as an entity that should be granted ethical rights, in particular the right to self-determination. In this paper I have put forth a more expanded definition, one that involves the exercise of a range of mental modes or functions, the having of qualitative consciousness, and partaking in birth, development, joy, anguish, decline and the inevitability of death, which make human life and personhood an ultimate challenge and render them both valuable and meaningful.
[1] M. Idel, Golem: Jewish Magical and Mystical Traditions on the Artificial Anthropoid. Albany, NY: State University of New York Press, 1990, p. 128.
[2] Idel, Golem, p. 136.
[3] Idel, Golem, p. 6.
[4] Idel, Golem, pp. 140-2.
[5] The question this raises is whether any account of the human person will, at least to a certain extent be historically or culturally limited. I will consider this question later in this paper in connection with a discussion of whether the values associated with the various modes of mind are objective and “trans-world.”
[6] Drob, S. (2016). An Axiological Model of the Relationship between Consciousness and Value. New Ideas in Psychology 43, 57-63. Drob, S. (2022). Psychology, Values, and the Meaning of Life: Bridging the Philosophy—Psychology Divide. Journal of Humanistic Psychology. DOI: 10.1177/00221678221116170, journals.sagepub.com/home/jhp
[7] Siewert, C., The Significance of Consciousness. Princeton, NJ: Princeton University Press, 1998, p. 329.
[8] Xie, J., An explanation of the relationship between artificial intelligence and human beings from the perspective of consciousness. Cultures of Science, 4(3), 2021, 124-134. https://doi.org/10.1177/20966083211056376
[9] P. Butlin et al., Consciousness in Artificial Intelligence: Insights from the Science of Consciousness. arXiv preprint, 2023. https://doi.org/10.48550/arXiv.2308.08708. Accessed December 28, 2024.
[10] Ng, Gee Wah, Journal of Artificial Intelligence and Consciousness 7(1), 2020, 63-72.
[11] Searle, J. R., Minds, brains, and programs. Behavioral and Brain Sciences 3(3), 1980, 417–424. Cf. Block, Ned, “Psychologism and Behaviorism,” The Philosophical Review 90(1), 1981, 5-43.
[12] Haikonen, Pentti O. A., Journal of Artificial Intelligence and Consciousness 7(1), 2020, 73-82.
[13] Dennett, D. C., Quining qualia. In A. Marcel & E. Bisiach (Eds.), Consciousness in Contemporary Science (pp. 42–77). Oxford University Press, 1988.
[14] Tononi, Giulio, “Consciousness as Integrated Information: A Provisional Manifesto.” The Biological Bulletin 215(3), 2008, 216-242.
[15] Fallon, Francis, Integrated Information Theory of Consciousness. Internet Encyclopedia of Philosophy. https://iep.utm.edu/integrated-information-theory-of-consciousness Accessed December 28, 2024.
[16] John R. Searle, Mind: A Brief Introduction. New York: Oxford University Press, 2004.
[17] Roger Penrose, Shadows of the Mind: A Search for the Missing Science of Consciousness. Oxford: Oxford University Press, 1994. Cf. Bruce Rosenblum and Fred Kuttner, “Consciousness and Quantum Mechanics: The Connection and Analogies.” The Journal of Mind and Behavior 20(3) (Summer 1999), 229-256.
[18] Chalmers, D., Reality+: Virtual Worlds and the Problems of Philosophy. New York: W. W. Norton, 2022.
[19] For some of the philosophical implications and paradoxes that emerge from simulation theory see: Drob, S. L. Are you praying to a videogame God? Some theological and philosophical implications of the simulation hypothesis. International Journal of Philosophy and Theology, 84(1), 2023, 77–91. https://doi.org/10.1080/21692327.2023.2182822
[20] Ross, D. The Right and the Good. New York: Oxford University Press, 1930/2002, p. 140.
[21] Harris, S., The Moral Landscape: How Science Can Determine Human Values. New York: The Free Press, 2011, p. 32.
[22] Tegmark, M., Our Mathematical Universe: My Quest for the Ultimate Nature of Reality. New York: Alfred A. Knopf, 2014, p. 391.
[23] Siewert, C. The Significance of Consciousness, p. 329.
[24] van de Poel, I. Embedding Values in Artificial Intelligence (AI) Systems. Minds & Machines 30, 2020, 385–409. https://doi.org/10.1007/s11023-020-09537-4
[25] Véliz, C. Moral zombies: why algorithms are not moral agents. AI & Society 36, 2021, 487–497. https://doi.org/10.1007/s00146-021-01189-x
[26] Torrance, S., & Roche, D., Does an artificial agent need to be conscious to have ethical status? In Technologies on the Stand: Legal and Ethical Questions in Neuroscience and Robotics, 2011, p. 296.
[27] Tegmark, M. (2014). Our mathematical universe, p. 391.
[28] Steinsaltz, A., “The Mystic as Philosopher: An Interview with Rabbi Adin Steinsaltz.” Interviewed by Sanford L. Drob and Harris Tilevitz, Jewish Review, 1990. www.newkabbalah.com/stein.html/; https://steinsaltz.org/essay/mystic-as-philosopher/
[29] Bostrom, Nick. Are You Living in a Computer Simulation? Philosophical Quarterly 53, 2003: 243–255. Available online: http://www.simulation-argument.com/ (accessed on August 2, 2022).
[30] Musk, Elon. Full Interview. Code Conference, 2016. Available online: https://www.youtube.com/watch?v=wsixsRI-Sz4 (accessed August 3, 2022).
[31] Noam Chomsky: The False Promise of ChatGPT, New York Times, March 8, 2023.
[32] Idel, Golem, pp. 221-3.
[33] Idel, Golem, p. 210.