Speedysnail

Horrors of War

An op-ed in Haaretz recently discussed Israel’s unconscious drive for self-destruction. There doesn’t seem to be much that’s unconscious, however, in a recent article on Israel’s use of AI to direct the bombing of Gaza, which must be the most horrifying article on AI this year. +972 Magazine, an independent site run by a group of Palestinian and Israeli journalists, reports that Israel’s Lavender targeting system has been deployed with “little human oversight and a permissive policy for casualties”. What oversight there is typically involves “twenty seconds to authorize a bombing … just to make sure the target is male”.

According to six Israeli intelligence officers, who have all served in the army during the current war on the Gaza Strip and had first-hand involvement with the use of AI to generate targets for assassination, Lavender has played a central role in the unprecedented bombing of Palestinians, especially during the early stages of the war. In fact, according to the sources, its influence on the military’s operations was such that they essentially treated the outputs of the AI machine “as if it were a human decision.”

It gets worse.

Additional automated systems, including one called “Where’s Daddy?” also revealed here for the first time, were used specifically to track the targeted individuals and carry out bombings when they had entered their family’s residences.

The result, as the sources testified, is that thousands of Palestinians—most of them women and children or people who were not involved in the fighting—were wiped out by Israeli airstrikes, especially during the first weeks of the war, because of the AI program’s decisions.

This appeared a day after articles about how Amazon has been dropping “just walk out” technology from its brick-and-mortar stores, which, it turns out, had “more than 1,000 workers in India monitoring cameras to review customer purchases”. The checkouts that seem automated are actually humans, and the humans choosing which people to bomb are actually automatons.

Choosing to let the automatons make those choices is by far the bigger crime, of course. If someone comes into a crowded room with their Automatic Whirling Dervish of a Thousand Blades and presses the On button, no one’s going to be blaming the Dervish for the resulting carnage. Apart, perhaps, from the guy pressing the button.

And the guys pressing the buttons are extremely disturbing. The founder of Oculus, who is working on AI weaponry, made a VR headset that kills the user if they die in the game just for funsies (via Mefi), to which the only rational response is holy fucking shit. What’s wrong with keeping the worst things that we’ve imagined safely within the pages of a horror novel? Imagine if Stephen King had actually brought a few “thought-provoking reminders” into the real world. You know, like a driverless car that goes around running people over. (Okay, maybe not the best example.)


Meanwhile, the risible stories of AI and the ongoing ruination of the internet keep on coming.

SEO operators are mocking Google for deindexing their AI-generated sites.

Shrimp Jesus takes over Facebook.

Meta’s AI image generator can’t imagine an Asian man with a white woman.

You don’t have to type anymore, provided you don’t mind “violent and unsavory hallucinations, as well as those that perpetuate harmful stereotypes”.

One of the must-read critics of AI at the moment is Ed Zitron: Have we reached peak AI? They’re looting the Internet. He also has little time for Elon Musk.

The staggering ecological impacts of computation and the cloud.

We need to rewild the Internet.

21 April 2024 · Events