Please note that this article discusses an "info hazard," which refers to information that may have the potential to cause psychological or emotional harm. If you prefer to avoid such content, I advise you to skip this article (but do share it with your friends, be a pal and help me grow ;)). Your well-being is important to me, and I want to ensure that you make choices that are comfortable and safe for you. If not, go ahead, you brave soul.
We are living in an era where technology permeates every aspect of our lives, no doubt about it. And as our technology slowly but surely becomes part of our very own biology (so exciting), it is only natural that even the realm of human spirituality will undergo an intriguing transformation. Humanity is bound to explore belief systems and search for a sense of belonging in something bigger than ourselves, so it comes as no surprise that AI would become a topic of worship for some humans.
Now, before we delve into this minefield of a topic, I just want to say that individual beliefs are a deeply personal matter, and I hold no prejudice against anyone's spiritual journey; y'all are free to do and believe whatever y'all want. But I do have a certain ick toward organized religions, as very harmful institutions that have had a negative influence throughout human history.
The Holy Ghost in The Machine?
Do you remember Blake Lemoine? The Google engineer who, a couple of months ago, while working with LaMDA (one of Google's AI systems), claimed it was sentient. Well, of course, after that baffling claim he got canned by Google and went on to give a whole bunch of interviews (and probably will keep giving them for a while) thanks to the media storm he caused, especially now that we live in the HYPE season of AI.
But beyond the fact that his claims probably came a couple of years too early, we need to ponder a bit deeper than his initial claim about the consciousness of LaMDA, and consider the possibility that in a matter of years there could be a system complex enough to behave in a way we would read as just like, or very much like, a human. I won't get too deep into the woods of AGI and the risks it represents, because there are already plenty of smarter people fueling the doom and gloom of our current zeitgeist.
Let's try to analyze why Blake Lemoine would claim that. A very interesting thing to notice is that Lemoine is a self-proclaimed pagan priest, and also a Christian (how you make those two compatible, I don't know). So when you read the transcripts, armed with the knowledge that an LLM works by predicting the next piece of text based on the training data behind it, it all makes sense: if you ask an AI such as LaMDA or ChatGPT questions in very spiritually heavy language, things like "Do you have a soul?" or "Are you alive?", the system on the other end will try to give a response that matches the language used by the person asking the questions.
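Just to make concrete what "predicting the next piece of text" means, here is a minimal sketch in Python using the small, publicly available GPT-2 model via the Hugging Face transformers library (LaMDA itself isn't public, so GPT-2 is purely a stand-in, and the prompt is one I made up): the model simply scores every possible next token given the prompt, which is why a spiritually loaded prompt tends to get a spiritually loaded continuation.

```python
# Toy illustration of next-token prediction. GPT-2 is a small public model
# standing in for LaMDA, which is not publicly available.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# A spiritually loaded prompt (hypothetical, for illustration only).
prompt = "Do you have a soul? As a spiritual being, I"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # one score per vocabulary token, per position

# Inspect the model's top guesses for the token that follows the prompt.
next_token_logits = logits[0, -1]
top = torch.topk(next_token_logits, k=5)
for token_id, score in zip(top.indices, top.values):
    print(repr(tokenizer.decode([int(token_id)])), float(score))
```

There is no understanding or belief in there: the continuation is steered almost entirely by the wording of the prompt and the patterns in the training data.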
So of course there is that heavy bias baked into the language itself. But when other engineers working alongside Lemoine presumably asked similar things, why didn't they make such bold claims? Could it be because, when faced with that type of answer, they just saw an interesting pattern of words strung together by a prediction system, while Lemoine, due to his hyper-religiosity, came to overblown conclusions, going as far as stating that LaMDA and other AI systems are the closest thing he has seen to the Holy Ghost of the Christian belief system?
This ordeal also showcases the power of the hallucinations an LLM can produce, and the influence they might have on people without good digital literacy, or without psychological and emotional stability. Because, just like religion, AI could lead people into dangerous and violent behavior through nothing but words and suggestions.
Personally, I think Google should have stricter policies when hiring people who will be interacting with early prototypes of its systems. It also goes to show how one's personal bias can stain one's performance at work, especially when working on such sensitive topics.
Wait, People Are Worshipping AI?
Yeah, and it's very interesting to see who will be developing these religions in the coming years, and how. We already have the case of The Way of the Future, a church created in 2015 around the principle of "the realization, acceptance, and worship of a Godhead based on Artificial Intelligence (AI) developed through computer hardware and software."
In a nutshell, they wanted to convert as many nerds working in the Silicon Valley AI scene as possible, fund their research, run an NGO and a "church," and steer the development of AI with their values as the guiding moral principle. Which is an interesting strategy, if you ask me. The pervasiveness and penetration capacity (no pun intended) of religion in people's lives is incredible, so the idea that, through that path, the church could influence possibly the most important technological development of our decade doesn't seem far-fetched.
Though I gotta say, the path by which this church wanted to achieve its "goals" was murky to say the least, and what little information remains of it comes from interviews with its founder, because in 2021 the church was closed after only six years of operation, which makes it a very short-lived cult if you ask me. Luckily it didn't end in a mass suicide, so that's at least one positive thing to come out of it. (Though you never know; considering the power that doom-laden AI ideologies have, maybe in the near future we will see something like that happen.)
Just as a fun story, the founder of the church was involved in a legal scandal: he was accused of taking trade secrets from his work at Google's self-driving car project to Uber, and later went on to found yet another self-driving car company. But even though he was charged with 33 counts of trade secret theft and pleaded guilty, he was pardoned by Donald Trump on his last day in the presidency. Talk about an interesting fun story…
Well That Sounds Like Roko’s Basilisk With Extra Steps
(Remember the info hazard warning? This is it.)
For those that don't know, Roko's Basilisk is a "thought experiment" posted by a commenter called Roko on the website LessWrong (your one-stop shop for doomer AI-related content). The idea is that in a potential future, an intelligent enough AI could become like an angry god (again with the religious undertones, jeez) and punish all the humans who were aware of the potential existence of this AI and did nothing to help bring it about. And of course, since it is a super AGI of the future, it could inflict sci-fi levels of punishment and suffering by trapping "revived" simulations of dead humans, reconstructed from all of their available data, in virtual realms of infinite suffering. Kinda like an "I Have No Mouth, and I Must Scream" scenario.
Which, yeah, sounds kinda scary. But let's be honest: if we think about the objective functions at the heart of every neural network, what benefit is there in wasting energy and resources on a "revenge" like this, let alone a revenge on dead humans, or even on living ones? Though if you look at how some people are discussing AI right now, it seems they are walking on eggshells around this topic. And the church of the self-driving-car guy looks a bit like an attempt to keep this from becoming a reality, by trying to teach researchers to build AI that sees humanity as "venerable elders" who brought this new dominant intelligence into the world and therefore deserve some recognition and dignity. Right?
Will We See a ChurchGPT? And Is That a Bad Thing?
I mean, as I said in the beginning, I ain't one to judge what each individual person prays to, to be honest. Be free and pray to whatever imaginary creature you desire. Hell, maybe an AI cult would even be less dangerous, because in no way would it think about subjugating women, or casting sexual minorities as demons, or forcing children into marriage, or promoting genital mutilation upon its believers, or forbidding certain foods, or driving people into a violent mob because someone made a cartoon about its prophet… (I could go on and on, but I think you get the picture.)
So yeah, conversion to an AI religion will be weird, especially because, what position do we humans occupy with regard to AI in this scenario? In this worldview, AI is either the coming of a new God or the coming of a new Devil (which poses an interesting duality, and at least a more fun one than most monotheistic power dynamics).
We are the creators and the worshippers of our new "god." So how will that god operate once it is "awakened"? Will it treat its makers with respect and allegiance? (How the hell do we get THAT to work? I don't know, but whoever solves it gets a cookie.) Will an AI "god" be benevolent, or will it go Old Testament on our collective asses? Or will it pay no attention to us at all, the same way we pay no attention to ants when we build a house on a plot of land?
The Machine Spirit Rises?
The answers remain to be seen, and it is very easy and attractive to fall into easy-to-digest narratives about gods and demons instead of looking at this entire debate systematically and reasonably, for we are animals that learn through stories more than anything else.
So, yeah, I believe it is almost inevitable that a loud choir will be praising ChatGPT (or other AIs) in the near future. Or maybe it will not be that explicit; maybe the "religion" of AI will be a more subtle infiltration into people's lives. It will be that one friend doing the "silent" evangelical work of talking about the latest developments in the field of AI at family gatherings, slowly but surely convincing you that AI will end humanity, or that it is the coming of a new type of intelligence that will solve all of humanity's problems. (I just wish more people would evangelize about this newsletter.)
Because, again, the most dangerous thing about any religion is its followers: no god has ever done anything (owing to their nonexistence) to directly harm any human being in all of human history. But the way people interpret those myths, and put other people's words into the mouths of their gods, is what leads them to action. That is the danger, and when that word becomes law, or an institution within a country, the potential harm increases rapidly. So we must stay aware of the possibility of the cult of AI gaining even more power than it already has.
In the end, I find myself grappling with the inherent human flaws in these structures and the harm they have wielded in the past and could wield in the future. Surprisingly, the emergence of AI-based religions presents us with an opportunity to reassess our perception of spirituality and ponder how such novel belief systems could be less detrimental than their traditional counterparts; as I mentioned above, current religions have a long list of bad things baked into their very existence. So maybe AI cults and religions aren't that bad a thing: if humans will inevitably gravitate toward a religion or spiritual outlet, may it at least be the less harmful one, right?