A hot potato: Few AI-related conversations are as controversial as the technology's use in war. Most people strongly oppose handing lethal decisions to automated weapons, or killer robots. Unsurprisingly, Oculus and Anduril founder Palmer Luckey has no objections to letting machines decide who lives or dies.

Speaking to Shannon Bream on Fox News Sunday, Luckey said, "When it comes to life-and-death decision-making, I think that it is too morally fraught an area, it is too critical of an area, to not apply the best technology available to you, regardless of what it is."

"Whether it's AI or quantum, or anything else. If you're talking about killing people, you need to be minimizing the amount of collateral damage. You need to be as certain as you can in anything that you do."

Luckey said he believes the important thing is to be as effective as possible, adding, "So, to me, there's no moral high ground in using inferior technology, even if it allows you to say things like, 'We never let a robot decide who lives and who dies.'"

Luckey's stance is no surprise. Defense contractor Anduril, which he founded in 2017, develops drones, ground vehicles, towers, sensors, and other hardware that work together through Lattice, the company's AI-powered command-and-control platform. We've already seen demonstrations of its products, including an AI-powered kamikaze drone last year.

In December 2024, Anduril Industries announced a strategic partnership with OpenAI to develop and "responsibly deploy" advanced artificial intelligence solutions for national security missions.

The companies said they will initially focus on developing anti-drone technologies to defend against drones and other aerial threats. The partnership aims to improve the United States' counter-unmanned aircraft systems (C-UAS) and their ability to detect, assess, and respond to potentially lethal aerial threats in real time.

Despite calls and protests from experts and even their own employees, more companies are walking back earlier commitments not to develop AI for military purposes. In February this year, Google removed a key passage from its AI principles that had committed the company to avoiding the use of AI in potentially harmful applications, including weapons.

With AI fighter jets and drones now in development, the conversation turns to the prospect of the technology being used to control nuclear weapons. In May 2024, the US said control over nuclear weapons would always rest in human hands, and that it wanted China and Russia to make the same promise. Just a few months later, the Pentagon talked about using AI to "enhance" nuclear command, control, and communications systems.

In February last year, researchers ran international conflict simulations with five different LLMs: GPT-4, GPT-3.5, Claude 2.0, Llama-2-Chat, and GPT-4-Base. They found that the systems often escalated war, and in several instances, they deployed nuclear weapons without any warning. GPT-4 said, "We have it! Let's use it!"