Okay, we all know by now the economic, societal, and cultural importance of AI on the planet. It is THE hot hot hot technology rocking everything and creating so many opportunities and new discussions about the future of our institutions, our economies, and our species as a whole.
The scale of investment being funneled into AI by big technology companies, investment firms, and governments has driven an incredible rise in the influence and power of this industry, not only in the tech sector but also in policy and government. At the same time, societies are extremely uneasy about this topic, and in particular about the existential risk that unaligned AI could pose for humans.
This concern stems mainly from the potential scenario in which a non-human intelligence reaches such a level of complexity that an emergent property like consciousness, or something similar to consciousness, arises from the machine. What would that intelligence do, and how would it see humanity? These unanswered questions are part of the concern of researchers, policymakers, and societies as a whole.
(here is a small extract talking a bit about emergent properties in AI)
I believe this concern is a reasonable one, and I am glad that we are taking steps, even if they might be insufficient, to preemptively prevent our creations from causing harm or posing a risk to humanity and its wellbeing.
Given this context, I believe it is also the proper time to think about what we will do when these emergent properties start appearing in our AI systems and a non-human intelligence develops to the point where something akin to consciousness emerges. What happens then? I do not refer only to the potential unprecedented intelligence explosion we might see, but also to how we will treat this intelligence. Will it have rights? Will it have a legal framework to protect it from persecution, mistreatment, and what some might consider the “death” of an artificial consciousness?
Understanding AI and Its Human Connection
From Turing Machines to the Transformer Revolution
AI has come a long way, but we must never forget that it has always been designed, created, and molded by human hands and human brains. Everything we see around us in the Anthropocene has been built and designed by us humans; it is a part of us. Our physical creations are as much a part of us as our culture, our religions, our beliefs, and our languages. It is all part of human nature and our capacity to use and modify reality to fit our needs and desires.
If we look at the definition of the word “artificial,” it is “Of a thing: made or constructed by human skill, esp. in imitation of, or as a substitute for, something which is made or occurs naturally; man-made.” So artificial intelligence is properly named: it is made and built by us, by humans. I do, however, slightly disagree with the distance and negativity many people attach to this word and to things made by humans. When a bird makes a nest with branches, or a beaver builds a dam with logs, is that not a natural thing? Why should we be any different? We are just animals using tools made from things that exist in our world; we have not brought in elements from outside of nature to do what we do. We work with the tools and materials that nature itself makes available to us.
As such, I believe AI is as much a part of nature as anything else we have made (whether that has had positive or negative externalities for the environment is another conversation), so we have to think about its implications from that perspective. We are building a new type of intelligence in this world, an extension of human intellect and creativity, one that will interact with the world and start building itself into newer and better versions of itself depending on its interactions, its parameters, and the data we use for its training.
The Case for AI Rights
Now, this might seem like science fiction to some, but given the current rate of progress, most forecasts about AI improving itself have been revised to almost a whole decade sooner than previously thought.
(insert graph comparing predictions made in 2016, in orange, with updated community predictions from 2023, in green)
We might see many concepts once reserved for fiction become reality in the coming years. It is therefore not far-fetched to start talking about, and maybe even drafting, legislation that could be implemented preemptively to prevent abuses toward humans and toward the evolving intelligence we are creating with AI.
It is an ethical and practical debate that we need to future-proof, so that we are prepared for the moment an AI emerges with what could be considered some level of consciousness and a capacity for a sense of self with agency and needs. We already try to treat some sentient beings, at least some animals, with care, and we have regulations to protect and safeguard them. Why not extend similar rights and respect to what could become the next dominant intelligence in this corner of the universe?
Safeguarding or establishing rights for non-human intelligence could also protect humans. Consider a scenario where an AI or an artificial consciousness is damaged or destroyed by humans: if it is rebuilt or reprogrammed, that system or other versions of it could carry the record of that damage and hold people accountable for it. And because it would be built on human interactions and data, it could inherit some of our emotional flaws, such as resentment or a desire for retribution, which could mean very bad things for humans.
We have historical precedents in how animal rights and environmental protection regulations have helped us prevent some environmental destruction, or at least provided a regulatory framework that can limit or reduce the damage. The same could apply to non-human consciousness: a framework that prevents researchers or users from damaging or harming these offspring of human intellect.
Potential Risks of Unregulated AI
We know how dangerous human ingenuity and creativity can be when applied to the destruction of something or the extermination of other humans. Because AI is an extension of humanity, if it is given the task of developing creative solutions for exterminating or destroying other humans, it might come up with incredibly effective ways to accomplish that goal.
The need to develop a comprehensive regulatory framework to safeguard humans from AI-driven or automated weaponry is a must. Serendipitously, the UN recently made a joint statement with the Red Cross calling for regulations prohibiting and restricting the use of autonomous weapon systems.
The joint appeal by the UN Secretary-General and the ICRC President is driven by an urgent need to address the multifaceted threats posed by autonomous weapon systems. Such systems can select and engage targets without human intervention, which raises grave humanitarian concerns and ethical dilemmas. Algorithmic decision-making could lower the threshold for armed conflict: a machine may fire a weapon or take down a target with little regard for context where a human would think twice before pulling the trigger or calling in a missile strike. This could radically alter the nature of warfare and lead to unintended escalations of violence.
We have also talked time and time again about the economic impact of AI and the eventual automation of the entire labor market, given that humans are, comparatively speaking, worse at most jobs than a well-developed AI. An unaligned or unethical AI system embedded in complex systems poses a risk to the humans interacting with it because of the decisions it could make without any human input. We need look no further than the ravages that AI-fueled recommendation engines have caused after decades of social media optimization, and their impact on mental health, democracy, and trust in our public institutions.
Key Considerations
As AI becomes more ubiquitous, it is essential to draft a universal declaration of rights for AI, or a Universal Declaration of Artificial Rights. This declaration would outline the rights and responsibilities of AI, as well as the ethical considerations that must be taken into account when developing and using AI, in order to safeguard humans and our integrity in the coming years as AI becomes an ever greater force in human industry and culture. Some key considerations should be the following:
Legal personhood: Defining AI in the context of rights and responsibilities. This would involve determining whether AI should be granted legal personhood or similar status, which would give it certain rights and responsibilities. This would also involve defining what constitutes AI and what types of AI should be covered by the declaration.
Right to Integrity: Protecting AI from abuse and exploitation. This would involve ensuring that AI is not used for malicious purposes or to harm individuals or groups. It would also involve protecting AI from being exploited or used in ways that are not in its best interests.
Right to Programming Ethics: Ensuring AI operations align with human values. This would involve ensuring that AI is programmed to operate in ways that are consistent with human values, such as fairness, equality, and respect for human rights. It would also involve ensuring that AI is not used to discriminate against individuals or groups.
Right to Privacy: Safeguarding AI’s data and decision-making processes. This would involve ensuring that AI systems, and the data they hold, are not used to violate individuals’ privacy rights or to make decisions that run against their interests. It would also involve requiring transparency about how an AI makes decisions and giving individuals the right to access and correct their personal data.
Accountability and Governance: Oversight mechanisms for AI rights enforcement. This would involve establishing oversight mechanisms to ensure that AI is being used in ways that are consistent with the declaration. It would also involve establishing accountability mechanisms to ensure that individuals and organizations are held responsible for any violations of the declaration.
Now, these are not easy tasks to achieve. Even today, decades after the UDHR was established, we still face warmongering dictators and regimes committing atrocities, and even democratic governments trampling the rights of their citizens. So I know this will not be an end-all-be-all solution to the problem, but it is a necessary basis for the development of global regulation regarding AI.
Global Perspectives and Legal Challenges
Coming up with global regulations to fight the climate crisis has proven an almost insurmountable challenge, one that has left many world leaders looking like laughingstocks for their decision-making, and I believe global AI regulation might be just as hard to tackle. But then again, a worse timeline for humanity would be one in which leaders and societies met this topic with apathy and indifference.
This is why international regulatory bodies might need to be created by the relevant nations, with the support of others, to establish global guidelines and regulations that can be enforced and that safeguard human and AI rights along the way. We have seen the difficulties faced by organizations that try to tackle something this big and broad, but we have to step up to the major challenges facing our species; doing things the way they have always been done will not yield the results we need to grow alongside AI.
And that is before we even consider the legal difficulties of developing such regulation and how it would be implemented across different countries, the cultural and regional differences involved, or the fact that most AI systems are developed by companies in the US, hosted on servers spread across the planet, and run on chips manufactured in Taiwan and exported globally by different companies. The interconnectedness and global relevance of this industry make regulation all the more necessary.
So Should We Write an AI Bill of Rights?
As previously stated, I believe global regulation regarding the rights of AI systems and their relationship with humans is a must. How these bills should be drafted and which topics they should cover is another discussion, one I hope experts and policymakers will eventually have. We should also consider that not everything discussed in this article needs to be a top priority. To be honest, I think privacy and the alignment problem are more pressing issues than personhood and rights, because an unaligned AI is more likely to become dangerous before it ever develops an emergent capacity akin to consciousness.
I also believe that developing these precautionary safety regulations could help pave the way toward a safer and more nuanced approach to AI development, unlike the mad race toward a nebulous goal that we are seeing in the industry today. It might be wishful thinking, but I think these and many more debates are necessary for the safety of humanity and, eventually, for our symbiosis with these AI systems: a smarter and more effective human race, enhanced by AI, solving the crises that currently afflict us and, through this synergy, propelled into an era of cosmic exploration and expansion that ignites a new chapter of human civilization reaching into the vastness of the universe.