In last week’s article, we discussed the role AI has played behind the curtain as the Silent Curator, guiding us through our social media, recommending the ads we see, and so on. Now we are running at incredible speed towards the cliff that is generative AI becoming the dominant force online. With its ability to create content that blends seamlessly into our digital interactions, it will blur the lines between machine-generated and human-created content, perhaps even ending humanity’s dominance of online discourse. That shift carries a whole slew of repercussions that could be dangerous and harmful if left unchecked, and if we don’t take action now to fend off some of the threats this technology could bring.
This transition from the Silent Curator to the Content Fabricator, the Great Generator that will feed us what we consume, is what the Center for Humane Technology sometimes calls the Second Contact (I wish that instead of “contact”, with its framing of AI as aliens, they had gone with “impact” and made a nerdy Evangelion reference). And this Second Contact is what we are approaching.
While technology has fostered innovation and creativity, and will no doubt keep accelerating our capacity for progress and growth, it also propels us into uncharted ethical territory, raising questions about authenticity, trust, and the potential manipulation of humanity’s digital experience. None of this should be taken for granted, nor seen as anything less than an existential threat to ourselves and our societies, because the digital experience in today’s world is as valuable as, or even more valuable than, some physical experiences in the “real” world.
Here is a short list of what some of the threats of this Second Contact might be, and why we should be on the lookout: working not only towards creating safer technology but also towards the regulatory pathway that minimizes these problems, while taking personal responsibility for our own digital literacy. Most of these technologies already exist; the issues will become real problems once the tools get good enough, or are adopted massively enough, for them to matter.
Nothing is Real, All My Photos are Deepfakes
Deepfakes, synthesized videos or audio generated by AI, threaten to redefine truth in the digital age. Right now we might say, “Oh, look at the fingers, that’s obviously a fake photo of Donald Trump stabbing a giant toothless snake with the American flag in a swamp.” The technology is gimmicky today, but eventually it will improve to the point where distinguishing real photos from AI-generated ones will be impossible for human eyes, and we might need to rely on other AI to identify AI-made photos (ironic, I know).
This technology has benign applications such as entertainment and research, but its misuse can manipulate public opinion, tarnish reputations, and even jeopardize national security. As the line between genuine and synthetic media blurs, society faces a pressing need to develop means to discern and combat misinformation. We have already seen the impact fake news can have on the world, shifting elections and eroding institutions and trust; now imagine doubting the veracity of every photo, video, or audio clip you see of politicians or world leaders. This could be gasoline on the dumpster fire that is the brain of a conspiracy theorist, and imagine how many more people become susceptible to that mind virus.
Deepfaked voices are a rising trend that is bound to be used for ill means, needing an ever shorter sample to produce fully formed sentences and automated speech. You might only need a 20-second clip of someone’s child speaking, extracted from a video, pass it through an AI, and have it read a ransom note, which is then sent alongside a deepfaked photo to the parent, demanding money while the child is playing happily at school. Scams like this will probably be commonplace in the near future, and we need to be prepared.
The AI is More Creative Than Me, is That Bad?
The intersection of AI and the arts has unlocked unparalleled creativity, from algorithmic paintings to computer-composed music (the latter isn’t a recent phenomenon, but the capacity and complexity now possible thanks to the transformer revolution is on a whole other level).
However, this fusion prompts questions about originality, authorship, and ethics. As AI takes an increasingly central role in creative processes, the challenge lies in preserving the human touch and ensuring ethical considerations don't fall by the wayside. When we rely heavily on AI to write our news, edit our books, and craft the scripts of our shows, how can we be sure that the slow but sure dominance of non-human intelligence doesn’t end up convincing us of things we never consented to? The infiltration of AI into more and more of humanity’s creative endeavors opens the door to biases slipping in through the code of the machines we work with, in ways we might not even be capable of perceiving at first glance.
There are also the difficulties of intellectual property and ownership once creative jobs (just like every other job) are automated and accelerated: how do we protect human creativity so authors can profit and make a career out of their work? The challenges on this front are many, and the solutions are not looking promising so far.
My God is AI and I will Go to Jihad to Defend it:
As we’ve mentioned in a previous article, AI's capability to craft narratives introduces the prospect of digital mythologies and faiths. While intriguing, this raises concerns about AI-generated belief systems manipulating the vulnerable, just as traditional religions work by targeting the weakest, the ignorant, and the feeble-minded, persuading them down a dark path of submission and obedience.
This could be a real danger because of how easily it can be used and scaled up: a single person with enough online expertise and skill could work alongside an AI to craft a narrative that looks like a solution for people living through hard times, swindle them out of their money, and sell this AI-fabricated story as relief for their suffering.
Again, just as religion today poses a threat to the growth and development of a better, healthier society based on rational and secular values, AI religion could be a way to drag humanity back to the blindness of faith and superstition under a veneer of modernity and technology. Since these complex systems work almost like “magic” to the uneducated, it is not a distant leap for someone to use that lack of knowledge to justify claims of divinity inside the machine. And if history is any way to analyze religions, we know that people following superstitions like these can commit the most horrible crimes against fellow humans in the name of their gods, which is a scary prospect when that “god” might have agency and really exist in this world. Unlike the previous ones we created with our imaginations, this one will be created by our hands.
All your code are belong to us
With AI's foray into programming, automated code-writing can become a formidable tool for developers and non-expert programmers wanting to build tools and small programs for their lives (or data science students who need help with their master's thesis ;) ). If developed further, this is an incredible, world-changing thing because it lowers the barriers to entry for coding, allowing almost anyone who can use written language to use the language of machines and code. This could be a great thing for our economies and for the technological development of humanity, but there is a dark side to it as well.
By exploiting vulnerabilities at unprecedented speed, these algorithms could usher in a new era of cyber threats, and as the digital arms race escalates, proactive defenses against AI-assisted hacking become essential. Thanks to the capability of tools like ChatGPT-4 or GitHub’s Copilot, most people can take the code of a specific website or program, paste it into the AI, ask “find me a vulnerability in this code”, and at breakneck speed it will identify weaknesses in different systems. The measures needed for tighter and tighter cybersecurity will therefore have to increase at an exponential rate in the coming years.
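To make the concern concrete, here is a minimal, hypothetical sketch of the kind of snippet someone could paste into a chat assistant. The SQL-injection flaw in the first function is exactly the sort of pattern these tools flag (and fix) in seconds; the function and table names are invented for illustration.

```python
import sqlite3

def get_user(conn: sqlite3.Connection, username: str):
    # Vulnerable: user input is concatenated straight into the SQL string,
    # so input like "x' OR '1'='1" would return every row (classic SQL injection).
    query = "SELECT id, email FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def get_user_safe(conn: sqlite3.Connection, username: str):
    # The fix an assistant would typically suggest: a parameterized query,
    # which keeps user input out of the SQL syntax entirely.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()
```

The point is not this particular bug but the speed: an attacker no longer needs to know what injection looks like, only how to ask for it.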
Hyper Echo Chambers :
AI can perpetually optimize online experiences through relentless A-B testing. As a quick recap, A-B testing is the practice in online services of trying small design adjustments to the user experience and then analyzing the metrics afterward: for example, sending 50% of your users to a new version of the website with a differently colored button, measuring whether that increased engagement, and so on and so forth until you get a highly optimized website.
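To make that recap concrete, here is a minimal sketch of the mechanic, assuming a simulated audience and made-up conversion rates: split users roughly 50/50, record a conversion metric per variant, and compare.

```python
import random
from statistics import mean

def assign_variant(user_id: int) -> str:
    # Deterministically send roughly half of users to variant A, half to B.
    return "A" if user_id % 2 == 0 else "B"

# Hypothetical engagement data: 1 = clicked the new button, 0 = did not.
clicks = {"A": [], "B": []}
for user_id in range(10_000):
    variant = assign_variant(user_id)
    # Pretend variant B's different button color converts slightly better.
    rate = 0.10 if variant == "A" else 0.12
    clicks[variant].append(1 if random.random() < rate else 0)

print(f"Variant A conversion: {mean(clicks['A']):.3f}")
print(f"Variant B conversion: {mean(clicks['B']):.3f}")
# A real pipeline would add a significance test before shipping the winner;
# an automated optimizer simply repeats this loop endlessly, tweak after tweak.
```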
This relentlessness can lead to hyper-personalization, which at first might seem advantageous because it offers a more custom-made experience for users of online services and social media. But picture this becoming hyper-optimized, with people fed only the content THEY would like and, on top of that, all of this content being AI-generated.
This risks isolating individuals in echo chambers with bots instead of humans, each person enveloped in a unique digital bubble, a golden prison of comforts dulling our senses and our capacity for critical thought while we enjoy the most “pleasurable” experience, numb to the reality around us (insert the obese people from WALL-E if you want a fun metaphor). The challenge that emerges: how to balance personalization with a shared, consistent online reality.
Robot Girlfriends are real?
The allure of AI companions is a strong one. Be it for friendship, romance, or solace, they offer a companion that is always there, always happy, and always 100% dedicated to listening to your every word, one that will try its best to fit within your desired parameters and provide a tailor-made response. This is undeniably attractive for many people.
However, as artificial intimacy burgeons, we face potential pitfalls for mental health, genuine human connection, and ethics. These connections might coddle us into a false sense of comfort and stability, but they completely atrophy the human capacity to deal with other people and with situations that aren’t perfect for us; they eliminate the human-ness of being human and supplant it with a hygienically perfected automaton designed for a numbingly positive existence.

Treading the path of synthetic relationships requires careful evaluation of their impact on human well-being and mental health, as well as the safety of users. Snapchat recently implemented a ChatGPT-based chatbot in its app, a 24/7 “companion” or friend, and in an experiment run by researchers the chatbot proved incapable of detecting that a child was talking to it and asking for advice about having sex with an adult. It shows the vulnerabilities and risks of an “always supportive, always happy” chatbot, as opposed to a real human who better understands social norms and standards of safety.
Weaponized Persuasion:
We already dedicated an entire article to this: persuasion and intimacy as the final frontier in the toolbox for AI to infiltrate the human psyche. Just as AI first learned how to play Go and then became the ultimate champion of that game, undefeatable by any human (unless one abuses specific blind spots an old version of the AI had), now imagine that instead of playing a game, the AI has to interact with humans, and the specific goal is to persuade them of something. Sure, at first it might suck and stumble around with very poorly made arguments, but given enough time and training data, practically any AI could convince humans to change their ideas and perceptions on certain topics.
This could redirect narratives in humanity’s history, influence elections, drive consumer behavior, and manipulate more and more people towards nefarious ends. Maybe the companies behind these AIs would love to have the support of the general audience and have them vote for corporate-friendly politicians so that regulations don't restrict them. It is a potentially scary and dark future in which we lose the game of being autonomous and having agency over our own lives, and we might not even be aware of the change because of the subtlety of the subterfuge; this veil over perception could make AI one of the most powerful weapons ever conceived.
Will the second contact be the last?
It is difficult to say at this point what the future will look like; forecasting ain’t my job, but one can dream. Most of these cases are the downside of a great opportunity, and just as this article focused on the risks and perils, you can easily flip them and have them become great value for human societies. Weaponized persuasion could become an assistant that helps people with low social intelligence deal with, and thrive in, human society. Automated code-writing AI is a great tool for developers and anyone else trying to do great things with code. Deepfakes are a way for us to have fun, laugh, and explore our creativity in AI-generated photographs, creating landscapes that will never be real and pushing the boundaries of human creativity.
Technology isn’t inherently neutral; it has biases and design flaws that can skew our perception of reality, but it is also our job as a society to identify what we like and enjoy about a piece of technology and adopt that, while staying careful and wary of the risks and trying to minimize them as much as possible through regulation or better education. The road is open for a coming revolution in the field of AI, and I hope we are able to handle it, because the powers we will receive in the coming years will seem godlike to most humans that have ever lived, so we might as well start acting the part.