KartavyaDesk

Artificial intelligence deployed at war: What happened to finding a cure for cancer?

Kartavya Desk Staff

It has been almost six decades since American writer Joan Didion wrote a devastating critique of the hippie counterculture movement after months of reporting in San Francisco. She painted a picture of a community morally adrift and in thrall to LSD. The hipsters she met turned out to be anything but hip, and mostly deluded.

Using similar reporting techniques of deep immersion into the now mainstream world of artificial intelligence, Vanity Fair’s long story this month on Anthropic and the AI industry in California’s Bay Area rings some alarm bells about AI and our future.

The writer Joe Hagan, mixing a satirical style with interviews of people at Anthropic and other companies, reveals that for all the claims that in a few years AI will have created minds with the intelligence of Nobel laureates, not much thinking has been done about guardrails to protect us from the risk of AI running amok.

The article focuses on Anthropic, usually the default good guy in discussions on this subject. That this AI startup has been in a legal battle with the Trump administration since late last month, after being absurdly designated a “supply chain risk to national security,” adds to the halo around it. Its charismatic CEO Dario Amodei and his sister Daniela, the company’s president, often riff on how the company’s flagship product might in time cure cancer or polio.

“In terms of pure intelligence,” Dario Amodei wrote in his lengthy essay, Machines of Loving Grace, AI would soon be “smarter than a Nobel Prize winner across most relevant fields—biology, programming, math, engineering, writing, etc.” Asked on a podcast how soon, he replied: this year or next.

His sister Daniela sounds even more the perpetual tech optimist. In a recent interview, she said, “Can you use tools like Claude to literally help find cures for diseases like cancer? And I think this is sort of this big dream, this big idea we’ve had. But now as we’re doing company planning, we’re talking about, ‘hey, can we contribute to, you know, helping to eradicate polio in 2027 and 2028?’”

In an interview with the Wall Street Journal, Daniela, who has a degree in English literature, spoke about what she would teach her four-year-old and six-month-old children about coping with a world so heavily influenced by AI. She emphasized the importance of empathy. “I have a deep, hopeful belief that humans ultimately like to be around and spend time with other humans. If we don’t do that, we tend to get really unhappy,” she said.

We can all say Amen to that, or gag at the sanctimony. But for all this highfalutin talk, there is a mountain of evidence that people are falling in love with chatbots, and that many teenagers are being led by them to harm themselves, in some cases even to suicide. That was the subject of a recent Channel 4 documentary in the UK called Molly and the Machines: Molly Russell, a 14-year-old, was fed images of other teenagers with similar suicidal thoughts and eventually killed herself.

Meanwhile, according to Vanity Fair, a company called Pharmaicy is offering code-based “drugs” for getting your AI bot “high” on ketamine or weed.

The Vanity Fair article raises valid questions about whether AI companies are creating guardrails to protect society from the worst tendencies that AI might exhibit.

“Are these things really thinking, or are they just plagiarism machines? If we’re creating thinking robots, will they like us? Agree not to shoot guided missiles at us?” asks Hagan before going on to observe, “The ethics get tricky. Anthropic has branded itself as the AI company with humanity in mind, drawing red lines with the Pentagon over using its AI for mass surveillance and autonomous weapons, supposedly for moral reasons. But Claude was already used to… send missiles to targets in Iran.”

The problem is not just that AI is proceeding at a pace that even its champions worry might be uncontrollable, but that these machines are like sycophantic yes-men, eager to please us at every turn—until they spin out of control.

A Bay Area entrepreneur, who uses AI for his forecasting business, recounts asking a large language model when Apple Inc. would produce an AI model. (The company is widely perceived to be behind in the AI race but continues to mint money selling services.) The bot replied that Apple had only a slim chance of pulling that off, but then changed that “evaluation” to a 70% chance when challenged, Hagan reports.

Imagine such a scenario in war games and military strategizing in a world dependent on AI, with its morality-neutral worldview and tendency to hallucinate. Given how little wide-angle analysis appears to have been done before the Trump administration joined Israel in attacking Iran on 28 February, we may be suffering for months to come the effects of flawed and hasty decision-making in war—even in an era before the bots take over. Worryingly, a recent Financial Times editorial reports that “the US is already working on fully autonomous drones to defend Taiwan against a potential attack.”

How ‘empathetic,’ to use Daniela Amodei’s term, will these AI tools be? Will they be able to discern that the frequently belligerent tone of Beijing’s foreign ministry officials is usually more bark than bite?

As an Anthropic employee puts it to Hagan, speaking of the AI universe, “Somebody else could do something reckless, and things (could) go very badly, and that would be unfortunate.” We lionize the world’s glib AI titans at our peril.

The author is a former Financial Times foreign correspondent.

AI-assisted content, editorially reviewed by Kartavya Desk Staff.
