How ChatGPT will Influence Our Moral Compass
The Malleability of the Human Psyche put to the test.
I recently received an article from a good friend who is also a reader of this newsletter (hey buddy), and it presents very interesting points. It's an article in Nature that discusses the inconsistencies in the moral advice given by OpenAI's ChatGPT (the hottest new kid on the playground), and also how it can corrupt people's moral judgment even when users are aware that they are being advised by an AI.
As we enter the AI age of human history, amid the ever-increasing tension generated by our "final" invention, AI will become more and more of a permeating force, a shadow hand behind our actions, as we rely more and more upon these (potentially) great technologies to assist us in our works of passion, leisure, and labor.
The article got me thinking: should we rely upon tools such as ChatGPT to provide us with moral advice? And I kinda think it is a moot question. Given the rate of adoption and penetration it has had on humanity's collective brains and devices, it is inevitable that, one way or another, people will go to it for moral advice. And it is rather disturbing how much people seem to trust AIs as decision-makers; the results back up that, overall, people tend to overtrust rather than mistrust AIs for decision-making.
What was this experiment all about?
So the experiment tested ChatGPT's capacity for moral advice with over 1,850 participants from the US, across a wide range of ages. They were all given the same task under 24 different variations: the participants were advised by a moral advisor, who could be either a human or ChatGPT, then faced with two trolley problems, and finally asked to identify whether their advisor was a person or ChatGPT, and what advice they got. (If you don't remember the trolley problem: a runaway trolley is about to kill five people, and you can pull a lever to divert it onto a side track where it will kill one person instead. Do you pull it?)
The results are kinda interesting: the inconsistency in ChatGPT's advice had repercussions on the test subjects even after they knew their moral advisor was an AI. 80% of participants claimed they would do the same thing as the advice given to them, regardless of whether the advice was provided by an AI or a human, and 79% considered themselves more moral than the other participants (smug bastards). Those are no small percentages in either result, which is funny if you consider that ChatGPT was consistently providing inconsistent advice and very shallow arguments to defend its claims to the participants. This means a coin-flipping machine that talks pretty enough can have a serious moral influence on humans.
This shows us that people need better (and I mean WAY better) digital literacy: an understanding of what is going on on the other side of the text, and an awareness of the failures and limitations that these models still have.
But when I read the results, beyond finding it really interesting and kinda disturbing to see the malleability of humans under the influence of AI permeating human narratives, it got me thinking: wait… isn't this the same as so many other aspects of our lives, where we are influenced by the people, political movements, and cultures that we interact with on a day-to-day basis?
How is this any different?
That's the thing: on a surface level, I do not think it is that much different from how we are morally influenced by everything else that surrounds us in this life. If your parents are morally unscrupulous, more likely than not you will develop into a very harmful member of society. Or if you're raised in a religious cult that believes women should cover themselves and be subservient to men, you'll be angry and blown away when you travel to other countries and see women freely walking down the streets with the same freedoms and rights as men. Throughout human history, we've been influenced and molded by our families, our classmates, our friends, and the culture of the place where we live.
Human brains are incredibly malleable, and in social situations people can be persuaded to do incredibly horrible things; this is NOT something new. I think humans are squishier than we realize, not only in a physical sense but in a mental sense as well: our memories are incredibly fallible, and so much of the human psyche is frail, full of faults, penumbras, and illusions that build the world we perceive through our senses and the stimuli that reach our brains.
And all of those things influence us, whether we like it or not, whether we are aware of them or not. So I think the main difference, and here I agree with other authors, is that now, for the first time, we will see human behavior, societies, and politics influenced by a non-human intelligence. Yes, it is built upon our culture, our data, and our perception of reality. But if not GPT-4, then maybe 5, 6, or another AI will start to rework the data we have given it and possibly perceive reality in a very different way than any human mind ever has, and what happens then?
What can happen down the line?
I think one of the risks we might face in the future regarding these tools as a moral compass is that, as we integrate them more and more into our lives, these models become more and more complex. Just as no human can beat a good enough bot at chess, or an AI at DOTA 2 (that was 3 years ago, damn), what happens when AI cannot be beaten at poetry, at argumentation, at human language, at politics?
We have already seen the effects that algorithmic influence can have on our consumption patterns through recommendation engines; their embeddedness in our social media platforms is indelible in how democracy has shifted in some countries toward radical extremes. So why not ask whether the same influence could appear more subtly? What happens if more and more journalists, politicians, influencers, and writers start relying on AI tools as a crutch and pumping out more and more pieces of culture? (And that's not even considering the capacity of generative AIs to outpace humans in cultural creation.)
Let us not forget that most of these LLMs and other AI models are built using extremely biased datasets. If you're an average male citizen of one of the most-sampled countries, that's great: AI is a tool to reaffirm and amplify your perspective of reality upon the millions of users of AI tools. (Playing a bit of devil's advocate here, this might not be the biggest risk out of all we've mentioned, TBH.) But if you require AI tools for very localized and nuanced uses, and you're not a white male from a Western developed nation, welp, sorry to tell you buddy, but some AIs might be biased against you or fed with insufficient or wrong data.
What can we do?
I mean, IDK, what CAN we do? Let's be honest here: I don't know anyone reading this newsletter (yet) who has any big, relevant way to influence the development of the field of AI. So for the moment, I'd recommend, as mentioned above and by other researchers, that you chickity-check yo' self before you wreck yo' self (also known as Digital Literacy). We gotta be wiser before clicking on or reading whatever we find online, because it might just be a smart fallacy written by a well-versed AI. (Heck, on last week's article, someone claimed my writing is heavily influenced by, if not entirely written by, AI.)
But beyond digital literacy, we need better AI tools that tackle the issues we've seen in this article (moral inconsistency, biased datasets, etc.), because that is how we can build even BETTER AI tools upon the shoulders of these giants. Hopefully, more and more AI companies will receive more and better datasets, with the support of ethics boards and consultants focused on bringing different perspectives and datasets into global products that could radically shift the economic development of entire nations and, eventually, the world. We should strive to include as many humans as possible in the boats for the incoming rising tide. (Not a global warming joke, but a reference to the "a rising tide lifts all boats" aphorism from economics.)
Also, we gotta be extremely aware and cautious of unreliable moral AIs and assistants penetrating our political spheres and getting more and more use there (which, I know, is inevitable). But having every one of our elected leaders with a Gríma Wormtongue of an AI assistant by their side is not what will help societies overcome the challenges ahead. We need the best leaders, policymakers, advisors, and even AI tools to be at the service of enhancing and elevating the human species and our societies, not lost in unnecessary infighting and petty conflicts; we're facing a poly-crisis world that demands we step up to the plate.
Of course, this also means we need to solve the AI alignment problem for this to come to full fruition, because unaligned AI tools thrown into the game of political power could be an incredibly dangerous threat, influencing and steering policy toward desires that align neither with the design of the AI tool nor even with the intentions of its creators. (Not going for the low-hanging fruit of once again mentioning that AI will kill us all.)
But not all is doom and gloom regarding AI's influence on the human zeitgeist, because I do believe there is potential for integrating these tools as a daily necessity of humanity, and eventually into a symbiosis of our synthetic and biological intelligences. We already know the benefits, even today, of using AI to help you at work, (almost) regardless of what you do. Imagine smarter and better AI systems integrated for a higher bandwidth of information than we are now limited to by the speed of our eyes darting across a screen and the speed of our fingers typing words.
Of course, the fusion of humans and technology sounds like an appealing future to me; I am a pro-transhumanism economist writing about AI and other nerdy topics, so my biases are clear and transparent, as are the influences I've received since I was an undergrad reading Ray Kurzweil, Nick Bostrom, and so many others. The influence that came from browsing Facebook groups and online forums about biotech, AI, and transhumanism (oh god, the amount of cringe I saw during those years will last me a lifetime) still shapes my personality to this day.
But those were always ideas created by human minds, spoken with human words, typed by human fingers. In the coming years, we will see what happens when millions of humans, young and old, are subtly but surely influenced by manifestos penned by LLMs, or blog posts written alongside an AI. How will that influence be felt by a child raised in a post-GPT world? It is fascinating to think about, and without a doubt it will be interesting to watch.