Since the dawn of our species, we have built our perception of reality around narratives we crafted by ourselves, for ourselves: from the myths of old gods explaining the change of seasons and the stars in the sky, to why this year's corn harvest was half of last year's.
The same could be said of our social structures: the systems we have developed have always been based on human narratives. Our gods are birthed from human brains and imaginations. Our societies are structured according to human whims and desires. Our worldviews are built upon the foundations laid by human philosophers who wondered and questioned the way we see reality, what reality is in itself, and how we interact within and without ourselves and the world around us. All of these human narratives and ideas have built the modern world we see today; even our economic models rest on human narratives, human words, and human thoughts. As an example, the focus on responsibility and diligence that most of the modern world calls “work ethic” comes from Protestant values that became the spirit behind modern capitalism. That is the kind of influence human narratives have on building our physical world.
But sooner rather than later we will reach an irreversible point in our development: the creation of a new intelligence on our planet that will soon be able to create its own narratives. That leaves us facing the question: what will the future look like in a world where human narratives are no longer the only ones?
The limits of Human Intelligence
Ok, hear me out: Humanity is DA BOMB. We have done the most awesome things of any species in the universe (that we know of), and even though the universe is unforgivingly vast, Fermi’s Paradox hasn’t been answered yet, so let’s just stick with us as the only conscious, civilization-building species on this planet.
But even with all our great achievements, we are coming to terms with the limits of our capacity to keep up with our ever-accelerating world. As we delve deeper into increasingly complex neural networks, LLMs, or whatever the next dominant architecture for Artificial Intelligence turns out to be, we are already losing track of the complexity taking place inside some of these systems. Not in the sense that we are totally blind to what is happening, but in how these tools actually arrive at the precise results we desire from them. Whether that is good or bad is up for debate; some critics have argued that our over-reliance on automated systems (beyond the field of AI) is actually a detriment to us.
But I personally don’t see it that way. On the contrary: instead of defending a slowdown of automation, I say let’s go all in, let's accelerate this baby full throttle! I have become incredibly aware of my own limitations and shortcomings, and instead of that bringing me down or making me feel inadequate, it pushes me forward to try new things, expand my limits, and work alongside people who are smarter than me, which is a great opportunity to learn and grow. I think we as a species should come to terms with something very similar, but on a more existential level.
We are seeing how AI tools can outpace people in throughput in so many fields: they can categorize thousands of files in an instant, read and interpret text at speeds unimaginable for us, and more and more we are teaching different AIs to expand further and deeper into almost every area of human endeavor. At least this way we can’t complain that the ones taking our jobs are immigrants from less privileged nations, but rather smarter, faster, and far more capable automated AI workers.
The Machine Geist
But what happens when artificial intelligences have an impact beyond our economies? The rise of Large Language Models (such as ChatGPT) is an inevitable force of economics and business at this point (whether that is a good or a bad thing is beyond the scope of this article), so we might as well face the world that will be shaped in the near future.
We have seen more and more applications of these models, ranging from helping students ace their tests to becoming policy advisors, passing the bar exam, probably graduating from top business schools, and coding at the level of a decent or somewhat good programmer. On a surface level that’s fine, I think, and kind of inevitable. For labor at least, I believe humanity is destined for obsolescence: we cannot advance and grow at the speed of our AI tools, and nothing so far makes a strong argument for the uniqueness of humans when it comes to filling out spreadsheets or predicting the performance of a stock in the market.
What is really interesting is whether, in the near future, we will see the effects of non-human narratives on our species and our world. We might be faced with a clash between non-human and human narratives. The most extreme example is, of course, the “alignment problem”, where a misaligned utility function in a sufficiently powerful AI could mean the end of humanity, because we are made of atoms that could be better used to make paperclips at an AI-driven factory.
But what about everything else we have created? Our economic system is built from human narratives; so are our politics and our societies. Everything around us is built from, or for, narratives we have created to accommodate our purposes. We have found that a capitalist system has so far been the best solution for reducing poverty, increasing wealth, and accelerating technological progress. We have seen that, for most of the world, strong institutions under a democratic regime with rule of law and checks and balances, alongside a strong open-market economy, are a great recipe for a country to succeed.
But what do we do when most jobs are automated and humans are no longer needed in a sufficiently large percentage of the job market? Do we just start taxing the few remaining corporations until we can provide a Universal Basic Income for humanity? How does our democracy change when AI systems can collect people's needs and desires in real time, allocate and fund public works projects more transparently, or develop more nuanced policies that improve people's lives better than our fallible, elected human leaders?
What does it mean to be human in an AI world? We have built so much of our idea and definition of ourselves as a smart species, separating ourselves from the rest of the biological world. But what if we get overtaken by other intelligences of our own making that move beyond our capacity to understand them (insert GPT tweet about GPT language)? Will these intelligences care for us or take an interest in our needs, just as we have done for all the other species that share this world with us? And will we stay idle and wait to see how things develop?
Of course not…
The time to act is now
I believe the only pathway for us to remain in a “business as usual”, status-quo world in the coming years would be some sort of worldwide crisis that sets human progress back 50 years and forces us to rebuild. But barring cataclysmic scenarios (which are unlikely but not impossible), the coming decades will be a time of radical change for humanity.
On one hand, we will see the subtle and not-so-subtle shifts that AI tools will bring to our day-to-day lives, to our institutions, and to our definition of humanity as we know it. Surely, in the coming years more and more people will be integrating AI into their jobs and their lives, as some of us already are. But there will come a point where that integration will have to take a form closer to symbiosis for us to keep track of the world around us.
An increase in speed beyond what we can track unless supplemented by synthetic systems could be the norm a couple of decades from now. An accelerated wave of technological progress, driven by the expansion of AI models into different industries, could unleash more and more technologies that help us better integrate with our digital and AI tools.
So these technologies that help us break free from our limitations could at first be an advantage in an ever-growing, ever more competitive job market, but increasingly they will be a basic necessity for day-to-day life, just like smartphones are an integral part of human life in the 2020s: helping us communicate, do business, or just browse for the dankest of memes.
So we must figure out how to avoid the pitfalls of the intelligences we are creating beyond ourselves, and how they can become a tool for our prosperity and an essential part of our next stage of progress; perhaps no longer as an entirely biologically defined species, but as something beyond the limitations biology has given us, moving toward a higher freedom for all of us.
And yes, if we just sit idly by we might be left behind. But if we integrate our tools, we can work alongside them and align them better, instead of having them work by themselves or missing out on the benefits and gains to be had from the power of AI alongside humanity, toward a brighter and better future.
So we might still live in a world built on human narratives, and I believe we can keep it that way while minimizing the existential risks: not by fighting against non-human narratives, but by integrating them and making them a part of us and of our narratives, so that this wave is not a destructive tsunami drowning everything in its path, but one we ride like the cool cyborgs we are, toward a better future created alongside our AIs.