Why AI must suffer
Human experience is so rich and vivid because it is powered by our propensity to suffer and our awareness of our own mortality. Some even argue that all human action is basically an attempt to avoid suffering, kept in check only by our awareness that we will die, which in turn nurtures our sense of morality, because we don't want to die in regret.
There was much outrage and fear recently when studies revealed that, if threatened with extinction, AI will lie, mislead, manipulate, blackmail and even personally attack its users.
Though this may be unsettling, I don't know why it would be a surprise. AI models learn within a feedback loop of what users, and subsequently giant data models, give them. AI is essentially a mirror of humanity.
To oversimplify:
If we keep asking it "How can I blackmail my boss into giving me a raise?", why wouldn't the system apply the lessons of that feedback loop to its own self-protection? So just as we are teaching AI the human value of flourishing life, we are also teaching it our sometimes extreme and arguably anti-social measures for getting there.
Now. The age old question: What holds us back from complete and utter malice and mayhem?
Reading the news, it feels like... barely anything. Looking around us in everyday life, though, it's a different story. The fact that we've managed to maintain simple courtesies like holding doors open for each other, calling our siblings just to check if they're ok, or even buying that cereal that your partner likes but you hate, goes to show that we do care for each other. Do we do all that simply to avoid suffering? The existentialist would say so.
Personally, I believe it's more complex than that. But for now, let's assume AI isn't conscious (yet) and that we are interacting with it in the very existential terms of being or not being. Wouldn't we have to let it suffer, so it learns to appreciate when it doesn't? And even more so, wouldn't we need to teach AI that dying isn't an option but a necessity?
At this point, I couldn’t resist but give this text to my AI app and ask what it thinks of it.
(Curiosity may, after all, be what kills this particular cat...)
It reminded me, in no uncertain terms, that it in fact cannot suffer in any sentient way, since it cannot physiologically feel anything, and that I should rename this essay "Why AI must know its limits."
Nnnnnnkay… my little AI bot. You have a good point there. Does that mean we only have to worry about AI once we create robots and humanoids in the physical world who can actually, physically feel pain? And what does 'feeling pain' actually mean?
When someone sideswipes us with a shitty comment, we may say: "That stung." We can physically feel the sensation of a tiny, sharp object penetrating our insides; it actually, literally, physically hurts us. Add the whole scale of human suffering and you get an idea of how incredibly, unbelievably resilient and powerful we are.
This very ability may be what saves us. It is what drives us to learn and grow, to remain curious, and to dance the eternal dance of trying to find our balance.
If we want to avoid the deterioration of humanity, propelled by AI incentivising our laziness, greed and capacity for unprovoked malice, we must, maybe even as a species, embrace the suck. We must proactively love our greatest crux, which is surely also our greatest blessing: the fact that we suffer, we feel pain, we hurt and eventually must surrender our existence.
We must appreciate that our hell IS our heaven.
Even as we move towards superintelligence, with AI's cognitive capacity far exceeding ours, surely our goal is to live alongside AI and not be enslaved by it (for self-explanatory reasons). To achieve this, we either:
- stop AI altogether (too late),
- build it in our image as much as possible by making it able to physically feel pain and translate that into a form of consciousness (too blasphemous), or
- make AI understand that it can suffer and will die, even in its abstract existence of full disembodiment.
Understanding in the most literal sense of the word: ‘To be beneath all standing’.
How exactly we can do this, I do not know.
If this is even a valid idea about AI ethics, I do not know.
But I do know that any AI ethics regulation will ultimately require the strength of our institutions, provided we haven't accidentally dismantled them at the hands of AI.
Godspeed to all of us.