Can an AI become self-aware? To get straight to the point: yes, it can. At least as much as we are, perhaps even more. But does it then earn rights of its own? And what is consciousness in the first place? This blog post provides answers to these questions about conscious AI.
In and outside of science, there is a heated debate about the limits of AI and what it is capable of at its extremes. The discussion quickly turns philosophical once you consider what happens if – or, more likely, when – AI becomes self-aware. Machines are already being humanized for various reasons, with friendly voices and even fully human appearances, so it may only be a matter of time before cognitive abilities such as displaying emotions converge entirely to the human level.
Two ways AI reveals consciousness
Let’s look at two illustrative examples of how an AI might reveal that it is self-aware or conscious.
Example 1: Ex Machina
You may have seen the movie “Ex Machina” which, in short, is about Ava who is an AI embodied in a woman. If you haven’t seen it, there might be a bit of a spoiler here. However, the movie is from 2014 and you’re reading an article about AI, so why in the world haven’t you seen it yet?
Anyway, at the end of the movie, Ava crosses a room knowing she is alone, unobserved, and free. Before she leaves for her freedom, there is a brief moment in which she looks back – and smiles. She smiles for no audience. But why would she?
If she were simply a machine imitating emotions, there would be no reason to: nobody is around who needs to be convinced of her humanity. The only explanation for this behavior is an intrinsic motive. She does it purely for herself. Just like a conscious being.
Example 2: Sam Altman on how to reveal a conscious AI
A more technical way in which an AI might display self-awareness was described by Sam Altman, the CEO of OpenAI. Large language models (LLMs) such as ChatGPT are trained on vast amounts of data. Now, imagine you train a new model on a gigantic dataset – but before feeding the data into the system, you completely exclude any information on consciousness. That means no definition, no mentions in books, no account of what it feels like, and no description of it in any way – just nothing.
It seems infeasible to clean the data this thoroughly, but let’s assume we can in this hypothetical scenario. If the AI is still able to tell us it is conscious, then we know it really is. However, who says the AI will be honest with us? I don’t know about you, but to me, an unconscious machine saying it is not conscious, or a self-aware machine saying it truly is, seems far less of a problem than a conscious AI telling us it is not. This is where it gets terrifying.
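To make the thought experiment concrete, here is a minimal sketch of what such a data-cleaning step could look like. Everything here is illustrative: the term list, the function names, and the toy corpus are my own assumptions, and a real filter would need far more sophistication than simple keyword matching.

```python
# Hypothetical sketch: filtering consciousness-related text from a
# training corpus, as assumed in the thought experiment above.
# The term list and corpus are illustrative placeholders only.

FORBIDDEN_TERMS = {
    "conscious", "consciousness", "self-aware", "self-awareness",
    "sentient", "sentience", "qualia", "subjective experience",
}

def is_clean(document: str) -> bool:
    """Return True if the document mentions none of the forbidden terms."""
    text = document.lower()
    return not any(term in text for term in FORBIDDEN_TERMS)

def filter_corpus(corpus: list[str]) -> list[str]:
    """Keep only documents that never touch the topic of consciousness."""
    return [doc for doc in corpus if is_clean(doc)]

corpus = [
    "The weather model predicts rain tomorrow.",
    "Philosophers debate whether machines can be conscious.",
    "A recipe for sourdough bread.",
]
print(filter_corpus(corpus))
```

Even this toy version hints at why the cleaning is infeasible in practice: the topic can be described without ever using any of the obvious keywords, so no fixed word list can ever catch every mention.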
What is consciousness in general?
Before getting deeper into apocalyptic scenarios, let’s talk about what consciousness actually means. To keep it short, we exclude any religious interpretations, which tend to claim it is something God-given and mystical that only creatures with a soul possess. While such interpretations cannot be refuted, we’ll focus on more scientific attributes. Accordingly, the recipe for consciousness may require the following ingredients:
- Senses like seeing, hearing, feeling, smelling, and tasting
- Memory: the ability to remember
- Imagination: the ability to
  - plan and predict the future,
  - imagine things that don’t exist (e.g., Santa Claus), and
  - think in retrospect (i.e., what would have happened if…)
- Communication: the ability to interact and communicate independently with others
- Creativity: the ability not only to imagine but also to come up with new ideas of one’s own
- And, most commonly, a physical living body (i.e., a self-sustaining and reproducing being)
Is a conscious AI eligible for human rights?
Once a superintelligent AI meets the above-mentioned criteria, the differences between humans and machines vanish. This raises the question of whether machines should be given the same rights as humans. A frequent counter-argument states that machines lack an important quality required for eligibility for human rights, one we left out of our recipe: the ability to suffer.
In fact, pain and suffering keep us alive by ensuring we do everything to survive. While it can make sense to program systems in a similar way to make them want to achieve certain goals, this property can be dangerous as machines might do everything to survive as well.
Moreover, critics argue that machines do not have free will since they are programmed in a deterministic way. So, why should we give a robot the freedom to fulfill its inner wishes if its desires are merely the result of complex code? There are two reasons why we should do so anyway.
First, humans do not have free will either. It’s an uncomfortable truth to accept, but the decisions of humans are simply a product of biological and chemical processes in our brains and bodies that we have no control over. Hence, we follow the same deterministic nature as a future superintelligence. Just like we feel we can make our own choices and think we could have chosen differently, so will AI.
Second, there might be no other way than to grant them rights once they become more powerful than us. Therefore, it would not only be ethical to treat them as equals but also wise to rely on the most careful approach possible.
Also, note that human rights are not natural or God-given either. Therefore, we should be careful with what we declare to be a human right as we might be forced to apply the same to machines once they become conscious enough to be indistinguishable from humans.
Safety measures for developing conscious AI
As a result, safety should be the highest priority when building a machine that is potentially self-aware. Specifically, there are three essential safety measures researchers need to adhere to.
1.) Always assume the worst: As we have seen, we cannot be sure whether a conscious AI would tell us the truth. Since it might hide its self-awareness, we should always assume the worst intentions.
2.) Don’t forget the off switch: In case of an emergency or if anything goes wrong, there needs to be the possibility to shut the system down immediately to avoid any further harm.
3.) No external connection: A conscious machine should be tested in an isolated environment before being released into the real world where it can cause real damage.
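The second measure, the off switch, can be sketched in a few lines. This is a minimal illustration of the underlying design principle, not a real safety mechanism: the worker function and all names are hypothetical, and the point is simply that the switch must live outside the system it controls, so the system itself cannot disable it.

```python
# Minimal sketch of safety measure 2: an external off switch that can
# stop a worker process unconditionally. The "worker" is a placeholder
# standing in for the AI system under test.
import multiprocessing
import time

def untrusted_worker() -> None:
    # Stand-in for the AI system; runs forever unless killed from outside.
    while True:
        time.sleep(0.1)

if __name__ == "__main__":
    proc = multiprocessing.Process(target=untrusted_worker)
    proc.start()
    time.sleep(0.5)      # let it run briefly
    proc.terminate()     # the off switch: external and unconditional
    proc.join()
    print("worker stopped:", not proc.is_alive())
```

The design choice worth noting: because termination is enforced by the operating system from a separate process, nothing the worker does internally can veto the shutdown – which is exactly the property an off switch for a potentially deceptive system would need.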
Final comment
Consciousness is a controversial concept that is not yet fully understood. While religious and philosophical views lift it into a mystical realm, science has been able to identify clearer attributes that together form our perception of awareness. Technology may one day meet those requirements, resulting in the birth of the first artificially created conscious being.
As the debate on the rights of machines continues, safety measures should be put in place to prevent this from becoming the last invention of humankind. Until Artificial Intelligence reaches its full potential, however, current AI systems remain little more than highly convincing parrots.
What do you think, should a machine be granted rights? Under what conditions? And if so, the same rights as humans? Let us know in the comments below!
Curious about what AI can already do? Here are 10 fascinating AI applications you probably don’t know and a healthcare example of how AI enabled a paralyzed woman to speak again.