War is a horrible thing. It pushes the human condition to its limits, and the amount of suffering it produces is unbearable, both for the people in the direct combat zones and for the civilians who just happen to live where the war is being fought, not to mention the families and everyone close to the people involved. It is without a doubt one of the worst things that can happen anywhere, but sadly, thanks to the thirst for power of some authoritarian leaders and the systematic policies of certain states, war seems to be rearing its head again in the 21st century after a relatively "peaceful" period.
With the invasion of Ukraine by the Russian regime and the conflict between Israel and Hamas, alongside all the other conflicts happening in Africa, Asia, and the Americas, we are starting to see a clear rise in the number of wars happening all over the place.
In this changing landscape, technological development can turn the tide of war while at the same time reducing or multiplying the number of people who will die a futile death in the bloodshed. The UN has been trying to make the world aware of the risks and repercussions of AI in autonomous weaponry, but judging by the rise and proliferation of companies using AI in weapon systems (or thinking about it), the warning seems to be falling on deaf ears.
Curiously enough, some of these companies are defense startups working on military technology, and some of the most high-profile ones have very high-profile founders or backers bringing innovations to the killing... I mean, defense industry. Take Anduril Industries, founded by Palmer Luckey, the inventor of the Oculus Rift VR headset, or White Stork, the "secret" project backed by Google's former CEO Eric Schmidt, who has backed military technology startups before, though with a company that ended up burning billions and seems to be on a slow spiral towards its downfall.
The Rise of Warfare Startups
The military industry is by no means a sleeping one. With billions constantly flowing into defense contracts, it has been something of a silent giant ever since the War on Terror mellowed out (or just became background noise for the world). But the invasion of Ukraine by Russia, and the visibility this conflict has had, brought a breath of fresh air (and capital) to military companies and defense contractors, especially in NATO and European countries, after they realized that Russia is still a military and geopolitical threat to their stability (and let us not forget the potential chaos that an invasion of Taiwan by China would mean for all of us).
So of course this growing threat from authoritarian regimes has sent a whole bunch of countries and investors rushing to fund innovative technologies that might give free countries the upper hand against the dark forces looming over the horizon. It has also brought unconventional players into the ring: more and more Silicon Valley entrepreneurs, investors, and other figures are entering MilTech.
White Storks Falling From The Sky
Eric Schmidt, the former Google CEO, has been discreetly developing a military drone startup, White Stork, in the U.S. and Ukraine, concealed within a series of LLCs. Forbes, which revealed White Stork for the first time, highlights its focus on mass-producible AI-driven drones capable of operating in environments with GPS jamming. The initiative, reflecting Schmidt's investment in military technology, aims to support Ukraine against Russian forces, leveraging his role as a defense tech advisor and financier in the region.
White Stork, named after Ukraine's national bird, seems to have been established in August 2023, with its development centered on drones designed for visual targeting without the need for communications, a critical advantage in electronic warfare scenarios. Despite operating in stealth, White Stork's efforts have attracted attention within the drone community, with Schmidt actively engaging with factories, testing ranges, and startups in Ukraine. This can also be seen in his interest in the Ukraine-based military tech accelerator Dare to Defend Democracy (god, that's a cool name).
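To make the "visual targeting without communications" idea a bit more concrete, here is a minimal sketch in Python. It is purely illustrative and in no way White Stork's actual software: an onboard guidance loop that consumes nothing but the drone's own camera output and produces steering corrections, so jamming GPS or radio links has nothing to jam.

```python
import numpy as np

# Hypothetical illustration only: the drone relies solely on its own camera,
# so there is no external signal to jam. Not White Stork's software.

def locate_target(confidence_map, threshold=0.8):
    """Centroid (row, col) of pixels an onboard vision model flagged as 'target'.

    `confidence_map` stands in for the output of whatever detector runs on the
    drone; a real system would produce it from raw camera frames.
    """
    mask = confidence_map >= threshold
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

def steering_correction(confidence_map):
    """Turn the pixel offset from image centre into normalised pitch/yaw commands."""
    hit = locate_target(confidence_map)
    if hit is None:
        return None  # no lock: hold course or loiter
    row, col = hit
    h, w = confidence_map.shape
    yaw = (col - w / 2) / (w / 2)    # > 0 means the target is right of centre
    pitch = (row - h / 2) / (h / 2)  # > 0 means the target is below centre
    return pitch, yaw

if __name__ == "__main__":
    # Fake confidence map with one bright blob off-centre, standing in for a frame.
    frame = np.zeros((120, 160))
    frame[30:40, 100:110] = 0.95
    print(steering_correction(frame))  # ~(-0.43, 0.31): target is up and to the right
```

The point of the sketch is the design choice, not the math: everything the weapon needs to home in lives on the airframe itself, which is exactly why electronic warfare has so little leverage against it.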
Schmidt's advocacy for drone technology as a countermeasure to Russian forces has been public, stressing the importance of "kamikaze drones" in combat scenarios. However, his direct involvement with White Stork and its mission to equip Ukraine with competitive drone technology has been less known. His efforts align with broader U.S. national security interests, reflecting his continuous engagement with military tech innovation, including his leadership roles within the National Security Commission on Artificial Intelligence and the Defense Innovation Board.
Deep Dive into Anduril Industries
Palmer Luckey, once a celebrated young entrepreneur in Silicon Valley, built a billion-dollar fortune with Oculus VR, the virtual reality company that was bought by Meta, before eventually being ousted from the company for supporting Donald Trump. Afterward, he made a radical shift in sectors, co-founding Anduril Industries in 2017 and venturing into the realm of defense technology.
Anduril focuses on advanced autonomous systems, developing AI-powered solutions like sentry towers, drones, and counter-drone technology. Their core software, Lattice, aims to connect autonomous systems for "mission autonomy," enabling them to react and make decisions without human intervention. This is exactly what the UN is warning about when it comes to the risks of autonomous weapon systems, while some in the defense sector claim it will ultimately reduce unnecessary deaths and offer more humane outcomes by shifting decision-making from humans to machines.
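As a rough mental model (this is a generic publish/subscribe pattern, not Anduril's actual Lattice API), "mission autonomy" looks something like independent systems sharing events on a common bus and reacting to each other with no person in the chain:

```python
from dataclasses import dataclass, field

# Toy sketch of "mission autonomy": sensor and effector nodes publish events on
# a shared bus and react to each other autonomously. Generic pattern only,
# not Anduril's Lattice.

@dataclass
class Bus:
    subscribers: dict = field(default_factory=dict)

    def subscribe(self, topic, handler):
        self.subscribers.setdefault(topic, []).append(handler)

    def publish(self, topic, event):
        for handler in self.subscribers.get(topic, []):
            handler(event)

def sentry_tower(bus, track):
    """Sensor node: classifies a track and shares it with the rest of the mesh."""
    bus.publish("detections", track)

def interceptor_drone(event):
    """Effector node: reacts on its own, which is the step the UN is worried about."""
    if event["kind"] == "uav" and event["confidence"] > 0.9:
        print(f"Tasking interceptor against track {event['id']} (no human approved this)")

if __name__ == "__main__":
    bus = Bus()
    bus.subscribe("detections", interceptor_drone)
    sentry_tower(bus, {"id": 7, "kind": "uav", "confidence": 0.97})
```

Notice that the entire debate lives in one line: whether the condition inside `interceptor_drone` is allowed to fire without a human approving it first.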
But unlike traditional defense contractors that wait for Pentagon requests, Anduril has been walking its own path, actively developing technologies it believes the military will need in the future. This bold approach for a war supplier seems to be paying off, or at least attracting investors, with more than $1.8 billion raised, aiming to speed up acquisition and potentially "disrupt" the defense industry (wink).
Luckey's political views and Anduril's aggressive tactics have drawn criticism. However, the company gained validation through recent contracts, notably securing U.S. Special Operations Command's drone defense work, potentially worth $1 billion, and contributing to the Army's JADC2 program. JADC2 is a Department of Defense program focused on connecting sensors from all branches of the armed forces into a unified network powered by artificial intelligence. It could give generals and other military decision-makers unprecedented real-time information about what is happening under the fog of war. (I played so many RTS games when I was young that I had to use that term at least once in this article.)
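At its core, the publicly described JADC2 idea boils down to fusing reports from every service into one shared picture. Here is a toy sketch with made-up sources and tracks, just to illustrate the concept, not the DoD's actual system:

```python
from dataclasses import dataclass

# Rough illustration of sensor fusion across services: multiple reports about
# the same object collapse into one shared track, so every decision-maker sees
# the same picture. Invented data, not the DoD's system.

@dataclass
class Report:
    track_id: str
    source: str        # e.g. "army_radar", "navy_ship", "air_force_awacs"
    position: tuple    # (lat, lon), invented for the example
    confidence: float

def fuse(reports):
    """Keep, for each track, the single report the network trusts most."""
    picture = {}
    for r in reports:
        best = picture.get(r.track_id)
        if best is None or r.confidence > best.confidence:
            picture[r.track_id] = r
    return picture

if __name__ == "__main__":
    common_picture = fuse([
        Report("hostile-01", "army_radar", (48.5, 35.0), 0.6),
        Report("hostile-01", "air_force_awacs", (48.6, 35.1), 0.9),
        Report("hostile-02", "navy_ship", (44.6, 33.5), 0.7),
    ])
    for track, report in common_picture.items():
        print(track, "reported by", report.source, "at", report.position)
```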
Anduril stands out with its ambitious vision for AI-driven warfare. They face intense competition and ethical challenges, but their success could significantly shape the future of military technology.
The Future of Warfare and AI
Now, with companies like these, venture funds, the DoD, and so many other players actively pursuing innovation in the industry of war, we are left to ponder whether this is a good development or not. And I think it is important to discuss while this side of the industry is still somewhat in its infancy.
Automating the decision to kill another human being on the battlefield reduces the risk for the soldiers of whoever develops the technology first; from that perspective, it can potentially save some lives. The problem is that it dehumanizes the decision to end someone else's life so much that it isn't even a decision made by a person anymore, but a probability calculation done by a machine.
We need to weigh both sides when thinking about how these technologies will affect how we live in the 21st century. The rise of suicide drones that can be made easily and cheaply and then find a target on their own is a very innovative, low-cost solution on the battlefield, as has been seen in Ukraine. But what happens when this technology keeps advancing (which it will) and we get smaller, faster, more nimble drones, until we have a drone the size of a bug with just enough explosive charge to self-detonate on a specific human skull and "terminate the target"? We could release swarms of small, cheap drones and have them track down and eliminate someone almost anywhere in the world.
How can we feel safe in a world where something like that exists and is efficient enough to infiltrate someone's house, or to find a target walking outside, spotted by CCTV cameras, easily tracked thanks to facial recognition software, fed into a central command system that calculates the probability of that person committing some crime, and then handed off to a drone that preemptively kills them? That is assuming this is a bad person; let's assume it was a horrible criminal. But what if the company that handles this decision-making gets hacked and the automated system decides EVERYONE is a threat, launching swarms of suicide drones to kill targets? These are increasingly complex and hard scenarios to deal with.
This chilling scenario becomes even more complex when we consider the "alignment problem": have AI systems been programmed with our values? Are we handing them weapons capable of killing thousands before ensuring they share our morality?
This is not a matter to be taken lightly. Before this technology proliferates further, we must embark on a global conversation about the ethical implications and potential dangers of AI-powered weaponry. As with any powerful technology, the path forward demands careful consideration, lest we walk off a precipice with unforeseen consequences.
What does this all mean?
I am not an expert in warfare and geopolitics, nor am I unbiased (as you can probably tell by this point). I am not in favor of wars, but sadly I believe them to be an inevitability of our times and of the systems we live in (no, capitalism is not the one to blame for everything here). I think automated systems are an interesting alternative and innovation, no doubt. The problem is that they centralize power and decision-making in vulnerable systems, systems that are too complex for us to fully understand and that we have not yet managed to develop in a way that aligns with human values.
AI could of course optimize the logistics of war to reduce waste, and it could deliver bombs more precisely to reduce civilian casualties and collateral damage. But is that upside worth putting even more humans at risk from a nonhuman intelligence if it misaligns with our orders or its interpretation of them?
I think it is a very complex topic, but it also cannot NOT be developed. It is, again, the prisoner's dilemma we are kinda stuck with when it comes to AI: "If we don't develop it first, THEY might develop it as well." And when on the other side you have malicious authoritarian regimes with no respect for personal freedoms, the rule of law, or democracy, it kinda puts you on your toes, always on the lookout, tirelessly developing solutions and technology to counter whatever your opponent might have. But then again, this can be an argument that further accelerates us towards mutually assured destruction. Again, war is complicated, complex, violent, and overall weird as a concept to me. But let me know what you think. Will wars be automated sooner rather than later, or will we continue to have humans pulling the triggers?