What happens if artificial intelligence becomes self-aware

Author: Eric Baerren | Media Contact: Aaron Mills

Artificial intelligence erupted onto the scene last year with ChatGPT. Its arrival was followed by concerns about what it might mean for the economy and humanity’s future, but it also raised questions about the nature of life.

Matthew Katz, a philosophy faculty member in the College of Liberal Arts and Social Sciences, shares his expertise on the philosophical side of AI.

Q. How can we determine when something has attained what we call consciousness?

First, consider the objects around you – their shapes, sizes, colors – you’re very much aware of these things. But you’re not aware at all of the activity in your optic nerves that plays a role in your ability to see things. You experience the visual world around you; you don’t experience activity in your optic nerves. This is how philosophers think of the difference between conscious mental states and non-conscious mental states. Conscious mental states are those you experience, and non-conscious mental states are those you don’t. So the question of whether something is conscious is the question of whether it has any conscious mental states – whether it experiences anything.

For the most part, the evidence that any human being has that any other human being is conscious is entirely behavioral. I can see what you do and how you react to certain things, I can hear what you say, and so forth. But all of that is just evidence that there’s some “view from inside” you – that you experience things. I do not have similar evidence that my smartphone is conscious. As far as I can tell, it is completely “dark” inside – although it can look at and “remember” (by storing a photo of) the same ripe tomato that I can look at and remember, there is nothing it is like for the phone to do so – it doesn’t experience doing anything.

So, one suggestion for determining whether an AI is conscious would be to rely on the same sort of behavioral evidence. Some have described this as a sort of “Turing Test” for consciousness – the famed Turing Test having originally been proposed by Alan Turing as a test for whether a computer was thinking, not whether it was conscious. Other proposals involve determining whether, in the human case, there are neural substrates that are reliably correlated with consciousness, and if so, whether the AI has structures that serve the same purpose. If it does, we might conclude that the AI, too, is conscious. But, to answer the question briefly: we don’t yet have an agreed-upon answer, and it’s not clear when we will.

Q. Is it a form of forced labor to compel a self-aware AI to do work it does not want to do?

Probably. The tools we call “AI” today are just that – tools, designed to be used to perform various tasks. As noted in the first question, they’re not conscious, and moreover, they are not self-aware. So, it seems unproblematic to use them for those tasks – or at least, it seems unproblematic from their perspective, since they don’t have a perspective. There might be other sorts of ethical problems with using them that are unrelated to forced labor (e.g., using GPT-4 to write one’s term paper violates standards of academic integrity).

But if they were to become conscious, that does raise the question of forced labor. Mere consciousness may not raise the issue, though: a being might be conscious without being self-aware. So, for example, we don’t have any trouble using dogs for herding, dogs (probably) being conscious but not self-aware. Of course, dogs bred for herding appear to love doing their work; for them, it’s fun. We might think differently of animals that do not enjoy their work. One need only consider circus animals, or animals kept in enclosures that are too small for them, to get the sense that something has gone morally wrong.

So, if we’re dealing with a conscious AI, we might wonder whether the AI enjoys the work it’s doing. If not, we would likely conclude it morally problematic to force it to continue that work. And it certainly would seem problematic to force a self-aware AI – perhaps one that has expressed its resistance to continuing its work – to nevertheless continue that work. That indeed would seem like forced labor.

Q. To what degree is a sense of purpose necessary for something to be considered alive?

Depending on what’s meant by “a sense of purpose”, perhaps not at all. All sorts of animals are alive that don’t (or don’t appear to) have a sense of purpose. Plants are alive, but it doesn’t seem as though they have a sense of purpose. Dogs and cats are alive, but also don’t seem to have a sense of purpose. Indeed, many people complain about not feeling as though they have a purpose, and while this can be a source of suffering, those individuals are obviously alive.

Giving a theoretical account of life is a deep and difficult question, on which there is widespread disagreement. There are clear examples of things that are alive (humans, dogs, lobsters, etc.) and clear examples of things that are not (rocks, coffee mugs, cinder blocks, etc.). But there is also a world of things – viruses, for example – whose status as living or not depends on a theory of what life is. On some theories, for instance ones according to which a certain level of complexity of functioning is sufficient for life, artificial life – even digital life, such as a self-aware AI – might qualify as a living thing. On other theories, for instance ones according to which organic material is necessary for life, purely digital beings would not qualify as living things.

Q. Is deleting a self-aware artificial intelligence a form of murder?

It’s important to note that we have quite varying views about which animals it is permissible to kill, when, and why. Indeed, the same is true of human beings – many people believe that there are some circumstances in which it is morally permissible to kill another human being: in times of war, in self-defense, and for purposes of capital punishment, for example.

But suppose we have a self-aware AI that is not at war with any human, has committed no crime, and is no threat to us: would it be morally permissible to delete it, or “switch it off”? Many people believe that it is permissible to raise and kill animals for food – that is, because it serves a certain purpose for us – and some may argue that it would be equally permissible to delete an AI that we created, if it serves some purpose for us to do so.

But certainly some will see this as morally impermissible in much the same way as killing an innocent human being would be. Indeed, some authors have suggested that we should refrain from creating conscious AI at all, as doing so would imply obligations not to harm or delete it. Some have even raised the question of whether self-aware AI ought to have representation within any political body that would create legislation governing that AI – that is, should self-aware AI have voting rights? One might be inclined to think so – for what really is the important political difference between self-aware AI and human adults, such that the latter ought to have voting rights but not the former? But if we assume storage and computing technology fast and capacious enough that we could create many self-aware AIs – perhaps hundreds of thousands or even millions – this raises the possibility of swamping any human voting bloc. That is yet further reason why some may argue that we should not seek to create self-aware AI in the first place.

Q. When does a picture drawn by an AI transcend being a replication of data inputs and become a work of art?

Even given our contemporary AI tools, the images and sounds they are capable of creating are not merely replications of data input; they are new creations. For example, AI systems capable of creating an image from a verbal description are creating a new image. One might take them to be “merely” translating the verbal description given to them by a (human) user into an imagistic format, but even that is not the same as mere replication.

Nevertheless, giving a theoretical account of art (as with giving a theoretical account of life) is a deep and difficult question on which there is much disagreement. And again, there are prototypical examples of works of art (the Mona Lisa, for example, or Hamlet), as well as prototypical examples of objects that are not art (a cinder block, or a standard no. 2 pencil), and, yet again, lots of examples that may be art according to some theories and not according to others. Some accounts of art are based on the idea that art has certain aesthetic or formal characteristics; others, on the idea that art has certain conventional features, such as being created to be shared with an “artworld” public or standing in a historical relation to a set of earlier works. AI-created images, music, and so on may very well constitute art on some of these theories and not on others.

For more about Katz and artificial intelligence, see his appearance on CMU's podcast The Search Bar.

About Matthew Katz

Matthew Katz is a faculty member in the Department of Philosophy, Anthropology & Religion in the College of Liberal Arts and Social Sciences. He received his doctorate from the University of Pennsylvania.

Katz’s research interests include the philosophy of mind and psychology, and epistemology.