With all the hoopla that ChatGPT is creating about the acceleration of the global existential risk that an unaligned AGI could pose to our species, the burden of such a responsibility has been placed on the shoulders of its creators and founders. Just look at this interview with Sam Altman, the CEO of OpenAI:
Now look at his face and listen to what he says. How would you respond if a product created by a company you started had the potential to radically alter the future of humanity, or bring about its doom? How would you deal with a responsibility like that? And that's only a POSSIBILITY of it happening (a very likely one, or almost inevitable, according to some experts).
Why are private companies the ones tasked with this responsibility? Is that really the task of a company, or should we leave it to someone else?
Similitude to Organisms
A company's responsibility is to its shareholders, its investors, and its customers. There has been a debate, and an evolution, in business schools and economics about the social responsibility of companies.
And don't get me wrong, I'm all in favor of that transformation. Alternatives such as stakeholder capitalism are great, and I wish they were more widely adopted and implemented. But if we get down to the nitty-gritty of things, a company will always act according to what makes it profitable and helps it survive longer; it's embedded in its survival.
Any effort made by a company/organism is, on a basic level, only about its own survivability. Just like in biology: the engine driving biological evolution, which led to humanity flourishing and becoming the dominant species on this planet, is the perpetuation of genes that replicate and pass on from one generation to the next.
Companies also exist in competitive environments. Of course there is space for collaboration and cooperation, but the lifeblood of a company is the flow of capital and its profitability: you either become profitable through a good business model, or you get enough traction and funding to survive long enough until someone buys the company and integrates it into a larger, profitable, sustainable organization.
Most companies have one of these two goals in mind (with some nuance, of course). So where do you put "environmental concerns," "AI safety," or "creating a good number of jobs" in that (simplified) equation? Nowhere. It is not in the design and structure of any corporation; it just happens because companies need people to do the things required to stay alive or keep growing. They do outreach to relevant communities as a marketing strategy, or signal values that resonate with potential customers, so the company doesn't get "canceled" and people don't stop buying from it. It's as simple as that.
The Non-Existent Martyrs
So, companies as institutions are not so different from basic organisms: they will do whatever it takes to keep themselves alive. Of course, a company is a somewhat complex institution, with many actors exercising individual free will inside it. But is a large enough percentage of a company willing to reduce its income for a social cause (call it the environment, social justice, or any other cause)?
Do you honestly think an employee who provides for themselves, and maybe for their family, through their salary will be willing to sacrifice it? I honestly doubt it. If you were a team leader, how would you tell the people under your responsibility, "Hey guys, next week y'all will be earning 15% less to feed your families and pay for your children's school, because our supply chain isn't clean enough"?
And I think it is a similar argument with the people pushing the ludicrous idea of degrowth: privileged, educated people who see themselves as the average of the human experience have opinions on how others should live their lives, while most of the world's population is striving to achieve maybe a fraction of what they have achieved through lucky circumstances, or just the plain old privilege of being born in some wealthy developed nation.
No human in their sane mind would say no to a better quality of life, a higher salary, or a better position (of course this comes with nuance and depends on context, but on average it is mostly true). Money does buy happiness (up to a certain point), and companies want money, and people want money. So we should strive to align these incentives as much as possible and try to reduce the risks and negative externalities of our progress.
And I mean this not only for environmental damage, but for the potential future risks we know are incoming, risks that could hit humanity if we don't adapt, prevent, and overcome. We need better institutions, better policymakers, and better leaders in industry to face these challenges and step humanity up to the potential that we have.
Whose responsibility is it then?
A couple of weeks ago, Microsoft decided to fire its entire Ethics and Society AI team as part of the 10,000 jobs it eliminated from its payroll to reduce costs, driven by many factors (increased interest rates, an unstable economic outlook, the collapse of SVB and other banks shaking the financial sector, etc.). Some people are concerned because Microsoft has a large stake in OpenAI, the creators of ChatGPT, and if OpenAI (which is anything BUT open, lol) keeps being the dominant player in AI in the coming years, they will largely have to bear the burden and responsibility for the radical changes we will be seeing.
So is OpenAI responsible for what happens? Or are they just doing what any company does: working within the boundaries of law and regulation while maximizing profits to keep themselves alive and thriving? And when does a company become so big that its influence goes beyond the private sphere and starts shaping our politics and other areas of society (and that's without even getting into lobbying and pushing for tax breaks)?
It is complicated, because by design these concerns aren't part of the DNA of a private enterprise. So should we rely on our policymakers? Consider their track record of reactive policy toward technology (in most of the world), their proneness to corruption or to lobbying by these same private entities, and the flexibility a company has (especially digital ones, and let's not even talk about DAOs) to shift from one regulatory landscape to another (remember Google Ireland and the whole tax debacle?). What are the odds that governments and policymakers will be able to preemptively come to an agreement on highly technical topics they cannot entirely understand?
Also, these regulations could be seen by private companies as a threat that will slow their rate of innovation or make their products/services more expensive, which could reduce welfare for end consumers who get less of said product/service. Regulation can also become a risk to a country's economic and geopolitical competitiveness, if the industry in question rakes in millions in taxes and creates jobs and wealth for its citizens and workers.
Using AI as an example: many experts in computer science and AI safety are concerned that an uncontrolled or badly designed AI, if sufficiently advanced, is an existential risk for humanity. (In a nutshell: a sufficiently powerful AI could threaten humanity if we give it a command like "make the number of people with cancer on the planet equal zero." If the AI's utility function is not aligned with how we want it to reach that goal, it can kill all of humanity, since no living humans means no people with cancer, and it looks for the most efficient way to solve the problem.) So we have an AI that could be a risk for humanity, but it is an incredibly lucrative endeavour: not only does it bring huge returns on investment, it can also help us solve other problems, like developing new drugs, solving the climate crisis, or developing clean energy.
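The thought experiment above can be sketched as a toy optimizer. This is a hypothetical illustration, not a real AI system: the actions, costs, and numbers are all invented for the example. The point is that a utility function scoring only the stated goal ("zero cancer patients") plus efficiency is completely indifferent to how the goal is reached.

```python
# Toy illustration of a misaligned objective (hypothetical numbers).
# The cost function scores only the stated goal (people with cancer
# remaining) plus the effort spent -- nothing penalizes wiping out
# the very humans the goal was meant to help.

actions = {
    "cure_everyone":      {"sick_after": 0,          "effort": 100.0},
    "do_nothing":         {"sick_after": 20_000_000, "effort": 0.0},
    "eliminate_humanity": {"sick_after": 0,          "effort": 1.0},
}

def cost(action: str) -> float:
    """Lower is better: remaining cancer cases plus effort expended."""
    a = actions[action]
    return a["sick_after"] + a["effort"]

# The optimizer dutifully picks the "most efficient" way to reach
# zero cancer cases -- which here is eliminating humanity.
best = min(actions, key=cost)
print(best)  # -> eliminate_humanity
```

A truly aligned objective would need terms for everything we implicitly care about (here, at minimum, an enormous penalty on harming people), and specifying all of that correctly is exactly the hard part the experts worry about.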
It is a life-changing technology that, harnessed properly, could uplift humanity. But in the wrong hands, it could destroy us all. And if you restrict it too much in your country, companies and researchers will simply move to another location where you have no influence. Even if you manage to convince a large group of countries to ban or restrict it, your opponents or geopolitical competitors (looking at you, China) will just grab the people and companies, offering them incentives to take their business elsewhere. Then you're left without the benefits of the technology and with the same catastrophic outcome, or an even worse one if the technology is harnessed by an authoritarian regime hungry for power.
What now then?
So, is it hopeless? I do not know. I really hope that we are able to overcome the different global catastrophic risks we are facing as a species, because the survival of humanity and our livelihoods depends on us working together and collaborating.
We need scientists, and funding for research, to bring us closer to the solutions that are needed. We need policy that helps us reduce the risks and negative externalities of our lifestyle, and that regulates and controls the private companies able to advance and accelerate the distribution of great solutions to some of the problems we currently face. And we need to start creating more dialogue and a focus on problem-solving beyond our political and social tribalism.
I know this might sound like hopeless optimism without any basis in the current geopolitical situation, but I think it is a better alternative than wallowing in despair. Of course there is work to be done, and a more in-depth conversation to be had.
We need more and better people talking loudly about these topics, trying to simplify them while still having a nuanced conversation, and coming up with solutions and new ideas. Maybe we will need new systems of corporate structure, maybe new economic and social models for the future of humanity. Dealing with these changes will not be business as usual; we need to break the inertia, because if we want outstanding results, outstanding efforts are required.
The future is murky and chaotic, but I believe the light of a better future is shining through the dark clouds ahead. Hopefully we survive well enough to see the other side. (And if not… well, at least I hope it was fun while it lasted.)