Yes, human history is a tangled mess. That might be the understatement of the millennium. Viewed through the lens of today, much of the past appears barbaric, rife with violence, prejudice, and practices we'd now deem not only primitive but downright horrific. Taking a historical perspective, however, is a valuable exercise. While we grapple with unprecedented challenges, it's important to recognize that humanity has never been better off. We've enshrined human rights in most corners of the globe (with some persistent implementation issues). Environmental awareness is at an all-time high. Women, who make up half the world's population, experience greater freedoms than ever before. Education and life expectancy are at record levels, and child mortality rates are at historic lows.
This isn't to say we should indulge in blind optimism. But it's crucial to maintain a balanced perspective on the past. We shouldn't romanticize a bygone era through rose-tinted glasses, nor should we judge past generations by our present standards. Examining the past objectively allows us to learn from its mistakes and celebrate our advancements.
This is why I think we need to be careful when we look at today's scientific advancements and consider ourselves the pinnacle of human progress, ethics, and morality. In a couple of years (not decades) from now, some of the practices we still engage in today will probably be perceived as barbaric or downright cruel in some corners of our scientific endeavors. How will we be judged through the eyes of the future?
The Shadow of the Past - Lessons from Historical Practices:
In the not-so-distant past, medicine relied on practices that seem barbaric by today's standards. Calling it a field of science, as we do today, is a bit of a stretch: many of the old treatments were a mix of superstition, trial and error, and pure luck.
Bloodletting, for instance, the practice of making cuts and letting blood flow to "release" ailments from the body, was a common treatment for a wide range of conditions, fueled by a flawed understanding of human physiology. Similarly, vivisection, the dissection of live animals, was once a cornerstone of medical research. The reasons behind these practices are telling: a lack of scientific understanding led to misconceptions about how the body functioned, while desperation for cures fueled a willingness to experiment, even on living beings. Though driven by a genuine desire to heal, they caused immense suffering.
The ethical concerns are clear: these practices inflicted unnecessary suffering on both animals and humans. Bloodletting could weaken patients and even kill them, while vivisection subjected animals to pain and distress without always yielding valuable results. This raises a critical question: are we, in our current enthusiasm for Artificial Intelligence (AI), making similar mistakes? Could our haste to develop AI be leading us down a path riddled with unforeseen ethical concerns, much like the bloodletting and vivisection of the past?
Potential Pitfalls of Current Practices:
It is worth thinking about the technologies and tools we are building to further develop and refine our AI systems. For example, a quantum computing company recently raised a not-so-small round of capital to fund the development and expansion of products aimed at optimizing the size of Large Language Models for ease of use. One interesting point in its press release was a product the company calls the Lobotomizer LLM, which presents a novel approach to managing the vast amounts of data within LLMs, particularly the erroneous neural pathways that remain after training.
Unlike traditional methods that filter information before training, the Lobotomizer works on a post-training basis. It essentially identifies and removes unwanted connections within the LLM's network, effectively "forgetting" specific data. This seems like a convenient way to address concerns about bias or sensitive information residing within the AI.
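To make the concern concrete, here is a minimal sketch of what such post-training "connection removal" might look like. To be clear, the Lobotomizer's actual mechanism isn't public; this toy PyTorch example only illustrates the general idea of machine unlearning by weight ablation, and the function name, saliency rule, and threshold in it are my assumptions, not the product's method.

```python
# Toy sketch of post-training "connection removal": attribute weights to the
# data we want forgotten, then zero out the most implicated connections.
import torch
import torch.nn as nn

def ablate_connections(model: nn.Module,
                       forget_inputs: torch.Tensor,
                       forget_targets: torch.Tensor,
                       fraction: float = 0.01) -> None:
    """Zero out the `fraction` of weights most responsible for the model's
    behavior on the to-be-forgotten data (saliency = |w * dL/dw|)."""
    loss = nn.functional.cross_entropy(model(forget_inputs), forget_targets)
    loss.backward()  # populates .grad on every parameter
    for param in model.parameters():
        if param.grad is None:
            continue
        saliency = (param * param.grad).abs()
        k = max(1, int(fraction * saliency.numel()))
        threshold = saliency.flatten().topk(k).values.min()
        with torch.no_grad():
            param[saliency >= threshold] = 0.0  # sever the connection
        param.grad = None

# Toy usage: a small classifier "forgets" one batch of examples.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
x, y = torch.randn(8, 16), torch.randint(0, 4, (8,))
ablate_connections(model, x, y, fraction=0.05)
```

Even in this tiny example, the zeroed weights also carried information about everything else the model learned, which is exactly the oversimplification worry discussed next.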
While seemingly efficient, the Lobotomizer LLM's approach presents several potential pitfalls. One major concern is oversimplification. LLMs learn by identifying complex relationships between data points. Deleting information disrupts these connections, potentially leading to a loss of nuance and a decline in the AI's ability to understand subtle contexts. It is like teaching a human about history by providing only dates and events, without the rich tapestry of social, cultural, and political influences that shape those events and their impact. The resulting understanding would be shallow and potentially misleading.
Furthermore, the Lobotomizer hinges on subjective judgments about what constitutes "bad" data. Who gets to decide which information needs to be removed? Human biases can easily creep into this curation process; indeed, initial research suggests that AI can actually exacerbate inherent human biases. If, for instance, data is deleted because it challenges societal norms or reflects viewpoints deemed undesirable, the resulting AI could perpetuate existing prejudices, reinforcing pre-existing societal inequalities.
The opacity surrounding the deletion process adds another layer of concern. If the Lobotomizer functions as a "black box," it becomes difficult to understand how the LLM reasons and learns after the data is removed. This lack of transparency makes it challenging to debug issues, refine the model, and ensure its overall safety. The analogy of an AI lobotomy seems quite apt for the damage this could do to the model down the line.
Just as the historical lobotomy procedure had devastating consequences for human cognition, removing data from an LLM might limit its potential for growth and learning. An AI lobotomized by data deletion might be hindered in developing critical-thinking skills, and such deletion could even foreclose the possibility of achieving Artificial General Intelligence (AGI) or artificial consciousness.
Perhaps the true path to responsible AI development lies not in data removal, but in building robust algorithms that can critically evaluate information and identify biases within the data itself. Given how modern LLMs develop, and the value of the emergent properties that arise from large swaths of data, fine-tuning such processes can be complicated. A toy sketch of auditing data for bias, rather than deleting it, follows below.
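As an illustration only (not any specific product's or lab's method), a crude data-side audit might flag skewed co-occurrences between group terms and negative words, so that humans can review the flagged text instead of silently deleting it. The term lists below are hypothetical stand-ins:

```python
# Crude data-side bias audit: surface skewed co-occurrence for human review.
from collections import Counter

GROUP_TERMS = {"women", "men", "immigrants", "elderly"}      # hypothetical
NEGATIVE_WORDS = {"weak", "criminal", "burden", "inferior"}  # hypothetical

def audit_corpus(documents: list[str]) -> dict[str, float]:
    """For each group term, return the share of its mentions that
    co-occur with a negative word in the same document."""
    mentions, negatives = Counter(), Counter()
    for doc in documents:
        tokens = set(doc.lower().split())
        for term in GROUP_TERMS & tokens:
            mentions[term] += 1
            if NEGATIVE_WORDS & tokens:
                negatives[term] += 1
    return {t: negatives[t] / mentions[t] for t in mentions}

corpus = [
    "immigrants are a burden on the system",
    "immigrants founded many successful companies",
    "the elderly enjoy the park",
]
print(audit_corpus(corpus))  # {'immigrants': 0.5, 'elderly': 0.0}
```

Real bias detection is far subtler than keyword counting, but the design point stands: the output is a report for human judgment, not an automated deletion.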
Towards a More Ethical Future - Responsible AI Development:
As we delve deeper into the complexities of AI, the Lobotomizer LLM presents a cautionary tale. While the desire to streamline AI development is understandable, we must resist the urge for quick fixes that could cripple future progress. The human mind thrives on a vast and nuanced understanding of the world, and the same is likely true for AI. Just as a skilled sculptor wouldn't chip away at a masterpiece to remove imperfections, we shouldn't lobotomize AI by deleting swathes of data.
The path forward lies in developing a more sophisticated toolbox. Data flagging allows us to identify problematic information while preserving its value as a training point, with appropriate weighting or contextualization; a minimal sketch of the weighting idea follows below. Explainable AI techniques can illuminate the reasoning behind an AI's conclusions, enabling us to make targeted adjustments and course corrections. Human oversight, guided by robust ethical frameworks, remains paramount in data selection and training processes.
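Here is a minimal sketch of that weighting idea, assuming a standard PyTorch training loop; the `flag_weight` value and the flagging criteria are hypothetical choices, not an established recipe:

```python
# Down-weight flagged examples instead of deleting them: the model still
# sees the context, but flagged samples count for less in the loss.
import torch
import torch.nn as nn

def weighted_step(model, optimizer, inputs, targets, flags,
                  flag_weight: float = 0.2):
    """One training step where flagged samples contribute less to the loss."""
    optimizer.zero_grad()
    per_sample = nn.functional.cross_entropy(
        model(inputs), targets, reduction="none")
    weights = torch.ones_like(per_sample)
    weights[flags] = flag_weight  # e.g. 0.2 instead of 1.0 if flagged
    (per_sample * weights).mean().backward()
    optimizer.step()

# Toy usage: the third sample in the batch was flagged by a reviewer.
model = nn.Linear(16, 4)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(4, 16), torch.randint(0, 4, (4,))
flags = torch.tensor([False, False, True, False])
weighted_step(model, opt, x, y, flags)
```

Unlike deletion, this keeps the flagged information recoverable and auditable: the flags, the weights, and the reasoning behind them can all be inspected later.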
Transparency is the cornerstone of responsible AI development (even with the inherent risks that we must always address). By fostering open dialogue and collaboration between researchers, ethicists, and the public, we can ensure that AI is a force for good. The philosophical questions surrounding consciousness in AI are profound. If we are on the cusp of creating a being with subjective experience, then the notion of lobotomizing it becomes not just ethically dubious, but potentially barbaric.
The true potential of AI lies not in mimicking the human mind, but in surpassing it. By embracing the richness of data, fostering responsible development, and remaining open to the unknown, we can usher in a new era of collaboration between humans and intelligent machines. Let us not succumb to the temptation of a quick fix, but rather strive to build a future where AI flourishes alongside humanity, reaching its full potential as a tool for understanding, progress, and perhaps even the unraveling of the universe's greatest mysteries.