A Short Love Letter to the Internet
I love the Internet. As someone who was born in one country (Brazil) but lived most of my life in another, I'm what a lot of people would call a "third culture kid," and while I think I fit that criteria, one key influence on my life whose depth I cannot overstate is the Internet. Probably even heavier than the culture I lived in for most of my life (Chile), because the internet was and is such a deep part of my day-to-day life, of my sense of humor and the references I make, and also because this "common" culture has been a very interesting way for me to connect and bond with people throughout life.
So thank you, oh internet, my boundless well of connection! Through your pipes and threads, I've woven a tapestry of friendships, mentors, and even lovers. The internet brought the world to my doorstep, letting me learn from brilliant minds across oceans and continents. From passionate organizations to inspiring individuals, you've opened doors I never dreamed possible. Heck, I was able to meet the people I built a company with in my 20s thanks to internet forums about transhumanism. So yeah, thank you, Internet, for being the bridge that connects hearts and ignites a thirst for knowledge.
But beyond this small ode of mine to the internet, we are currently witnessing the possible end of this optimistic, "golden" era of the open internet as a place of public discourse, a way to engage and connect with strangers online who might become great people and allies in your life. One of these potential threats is the current trend in online discourse accelerated by generative AI. This is what researchers have dubbed the "Dark Forest Theory of the Internet," which I love as a name (and I'll tell you why).
For the Forest is Dark and Full of Terrors
The Dark Forest is a term that comes as an answer to the Fermi Paradox, which asks: given the vastness and incredible age of our universe, how come we don't see signs of intelligent civilizations in the cosmos that might have expanded beyond their home systems toward the stars?
One possible answer to this was proposed by author Liu Cixin in his book series "The Three-Body Problem" (which a lot of friends have recommended to me; hopefully I'll read it soon). The Dark Forest theory proposes that the reason we haven't found signs of advanced civilizations, despite searching with satellites and telescopes, is that the universe is a dark forest. In this theory, most civilizations actively avoid detection by others.
They fear that revealing themselves will make them targets for more powerful civilizations, or "predators." This suggests the vast emptiness of space might be teeming with civilizations too afraid to reach out, knowing it could be their downfall. (Welp, humanity has done a very crappy job at not announcing itself, so if that theory holds, let's just hope our electromagnetic radiation doesn't reach any other civilizations for a very long time, or that they come in peace.)
OK, Now How Will This Affect the Internet?
Now, the former CEO of Kickstarter, Yancey Strickler, wrote a fascinating article about how the internet is slowly becoming akin to a dark forest because of a couple of factors that are flourishing more and more online:
Predatory behavior: This includes advertisers, tracking bots, clickbait creators, trolls, and anyone who exploits users for personal gain.
Information warfare: The constant barrage of manipulated content, fake news, and deepfakes creates a climate of suspicion and distrust.
These factors are pushing users away from large social media platforms and towards more localized, closed-off networks like Discord servers, Telegram groups, or WhatsApp groups. This trend reflects a retreat from the open internet, driven by a sense of threat among users. They fear the online violence that erupts when a tweet or opinion goes viral, attracting a deluge of unprovoked vitriol and hatred.
So much so that a recent article in The Economist (if you want to skip the paywall, use this) talks a lot about how the landscape has been changing significantly in the 20 years since Facebook's launch. On one hand, we have the controversy and success of Facebook's rise and dominance in the online space, highlighted by Mark Zuckerberg's recent confrontations with American senators and Meta's thriving financial results. On the other, this narrative unfolds against a backdrop of significant transformations within the social media sphere: personal interactions are shifting away from status updates by people in our close networks, feeds are filling up with algorithm-driven content from strangers, trolls, and optimized ads that track us in order to sell us some product or service, and discussions are moving from public posts to more secluded groups.
These changes, while offering some benefits like toned-down political messaging and potentially improved mental health in private groups, also introduce challenges, including the spread of unregulated misinformation, diminished public discourse, and a reduced focus on news in favor of entertainment. Just look at the issues we've been having with echo chambers on social media and their effects on democracy, and imagine a future dark-forest internet where this phenomenon only gets more and more intense.
Overall, we're witnessing an erosion in our ability to trust and feel safe online. This stems from harassment, overstimulation, and the difficulty of expressing opinions and being heard. This trend directly contradicts the initial promise of the Internet as an open space for free speech, association, connection, learning, and trust-building.
There are broader implications to these dynamics. As Harvard professor Shoshana Zuboff's concept of "surveillance capitalism" suggests, we are currently living in economies built upon the commodification of attention: the constant surveillance of people's online behavior is used to tailor ads that can be sold to companies and brands, which showcases the inherent trade-offs of human communication at the core of these issues. Privacy, content extremism, and information quality are key concerns as the social media landscape continues to evolve.
The Rise of the Machines
Speaking of evolution, we haven't even mentioned probably the largest culprit in the acceleration of the darkening of the internet: generative AI. With the explosive rise over the past year of generative AI models such as ChatGPT, Gemini, Midjourney, Sora, and many others, the automation of content creation is becoming more and more effortless. In the times of long ago (which in AI time means more than two years ago), the automation and indiscriminate use of AI came mostly from social media platforms and tech giants that ran incredibly complex and powerful systems to sell us ads and track our every move, the better to recommend content that would keep us engaged with their platforms.
The landscape has shifted radically in recent times: these automated systems are more and more capable, while also being mostly open and democratically available for the world to abuse. Now we have generative AI that can create realistic images, text, voice clips, and soon enough videos that are indistinguishable from reality unless you have a keen eye, or until we develop better software for identifying AI-generated content. This has deep implications for fake news, which can now be created and spread through social media at an alarming rate.
We've already discussed how misinformation threatens the democratic process. Through the generation of fake news articles, images, and videos, a carefully constructed misinformation campaign can be just enough to send undecided voters one way, or to convince the most staunch believers to stay home on election day because they KNOW their candidate is going to win, so why even bother.
We can already see examples of this, like the viral clip of US House of Representatives Speaker Nancy Pelosi speaking at a conference, which a group of political opponents edited to make it seem like she was slurring her speech, as if she were drunk or had senility-related issues that made her unfit for her position. As you can see in the video below, the editing was very basic (just slowing down her voice a bit) and it already had an effect. That wasn't even a complex deepfake, so what will happen in this election cycle, or the next, or five years from now, when these models become better and better? How will we deal with the weaponization of deepfakes?
Automated Agents Rise
We truly are entering an era where trust in what we see online will erode faster and faster. So much so that recently published research cited by Europol estimates that by 2026, as much as 90% of online content may be generated by AI. If we stop to think about it, not only is this incredibly daunting and scary, but also… it might be too optimistic to think we have this much time before most of the internet is AI-generated rather than human-made content.
This near-future scenario would represent an internet shrouded in perpetual twilight, where trust is a crumbling relic. This chilling vision, aptly described in Maggie Appleton's lecture about the Dark Forest of the internet (guess what inspired me to write this article), takes shape with the rise of Language-Model Agents (LMAs).
These aren't benign bots. LMAs are autonomous AI predators, unleashed on the internet with a singular purpose: manipulation. They weave intricate webs of deception, effortlessly spawning a multitude of social media accounts. Their arsenal? Fabricated content: newsletters, articles, even entire books. Meticulously crafted to push a specific agenda. Think of it as a symphony of propaganda, conducted by a shadowy hand.
While individual pieces may appear amateurish, even obviously AI-generated, their true power lies in orchestration. LMAs coordinate a network of seemingly independent "news sources" and websites, all singing the same deceitful tune. Social media becomes their echo chamber, their automated accounts relentlessly amplifying the message. The sheer volume creates the impression of a genuine trend, a relentless barrage of content and noise that erodes our capacity to discern what is human and what is not.
The result? A weaponized narrative, its influence peddled to the highest bidder. These tools are becoming more and more available; soon anyone will be able to try to twist public perception on any topic imaginable with a strength we have never seen before. The chilling truth is that trust online is no longer merely threatened, it's being dismantled.
Seeking Solutions: A Balancing Act
We are in a difficult time, and finding solutions for this conundrum is well beyond the capacity of any one individual. We need industry-wide regulation, technological solutions, and societal changes, all somewhat aligned, if we plan to keep the internet an open place where humans and AI systems can collaborate for our betterment, instead of retreating into the echo chambers of our safety networks.
Possible technical solutions could involve blockchain-based verification systems that certify content as fully human-made, or the development of AI-detection tools that flag (and even filter out) AI-generated content in our browsers, like an ad-blocker but an AI-blocker for our internet. Of course, most technical solutions can be circumvented by malicious actors, or by tech companies trying to protect the income they derive from our attention and engagement on their platforms.
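To make the first of those ideas a bit more concrete, here's a minimal toy sketch in Python of a hash-chained "content registry": every published piece gets fingerprinted and linked to the previous entry, so tampering anywhere breaks the chain. Everything here (the `ContentRegistry` class and the `register`/`verify` names) is hypothetical and invented for illustration; real provenance efforts such as C2PA are far more involved, and nothing in this sketch actually proves a human wrote the content, only that it hasn't been silently altered since it was registered.

```python
# Toy, illustrative sketch of a hash-chained content registry.
# Not a real provenance standard; names and structure are hypothetical.
import hashlib
import json
import time
from dataclasses import dataclass, field


def sha256_hex(data: bytes) -> str:
    """Hex digest of a SHA-256 hash."""
    return hashlib.sha256(data).hexdigest()


@dataclass
class Entry:
    content_hash: str   # fingerprint of the published content
    author: str         # hypothetical author identifier
    timestamp: float    # registration time (seconds since epoch)
    prev_hash: str      # hash of the previous entry, linking the chain
    entry_hash: str = field(init=False)  # computed in __post_init__

    def __post_init__(self) -> None:
        payload = json.dumps(
            [self.content_hash, self.author, self.timestamp, self.prev_hash]
        ).encode()
        self.entry_hash = sha256_hex(payload)


class ContentRegistry:
    """Append-only, hash-linked log of content fingerprints (toy example)."""

    GENESIS = "0" * 64  # placeholder "previous hash" for the first entry

    def __init__(self) -> None:
        self.entries: list[Entry] = []

    def register(self, content: str, author: str) -> Entry:
        """Fingerprint a piece of content and append it to the chain."""
        prev = self.entries[-1].entry_hash if self.entries else self.GENESIS
        entry = Entry(sha256_hex(content.encode()), author, time.time(), prev)
        self.entries.append(entry)
        return entry

    def verify(self, content: str) -> bool:
        """Return True if the chain is intact and the content appears in it."""
        prev, target = self.GENESIS, sha256_hex(content.encode())
        found = False
        for entry in self.entries:
            if entry.prev_hash != prev:
                return False  # a link was tampered with
            prev = entry.entry_hash
            found = found or entry.content_hash == target
        return found


if __name__ == "__main__":
    registry = ContentRegistry()
    registry.register("My very human blog post about the internet.", "alice")
    print(registry.verify("My very human blog post about the internet."))  # True
    print(registry.verify("A silently edited version of that post."))      # False
```

Even in this toy form, the limits of the approach show up quickly: verification only helps if readers actually check it, and nothing stops an AI-operated account from registering its own output as "human," which is exactly the kind of circumvention mentioned above.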
Non-technical solutions are also crucial. One approach involves a "humanity confirmation" system: a physical verification process for online activity. However, this concept faces significant logistical hurdles, including implementation costs, the potential for discrimination, and a direct conflict with the decades-long fight for online privacy and security. While technically possible, such a solution seems ultimately infeasible.
It seems that the old New Yorker cartoon caption, "On the Internet, nobody knows you're a dog," has evolved from a fun gimmick into a dark omen of things to come:
Can we Illuminate the Forest or Should we Burrow and Nest?
Honestly, I don't know. I think there are still many variables and unknowns out there, but so far they make the dark forest outcome the most likely one. I remember a quote from one of the founders of the Center for Humane Technology: "If you know the rules of the race, you know the outcome." Seeing the current business models of the largest tech companies and the influence they have in molding and shaping online discourse, we kinda see where things are heading.
The rise of AI-powered social media presents a stark choice. Will we become passive consumers in a curated echo chamber, or will we reclaim the internet as a platform for connection and diverse perspectives? The current retreat to "cozy webs" and closed groups highlights a yearning for a more human online experience.
We, the users, voters, and consumers, hold the power to shape the future. Demand better regulation, push for ethical product development, and advocate for a more open and human-centered internet. Remember, a healthy democracy thrives on informed citizens. Let's work together to ensure the internet remains a tool for connection, not control.