Can Advocates Ban a New and Lethal Threat?
In early March, analysts identified metallic wreckage photographed in Kyiv as possibly that of a Russian drone with autonomous capabilities. Lethal autonomous weapons systems are not the killer robots of science fiction. They are weapons for use in the air, at sea, and on land that employ artificial intelligence to identify and attack objects and people without human intervention. These systems exist now and, if reports out of Ukraine are true, they may already be in use.
Before Stuart Russell got involved in advocating for a ban on such autonomous weapons, he thought the way to prevent harm could be a code of conduct for computer scientists. A sensible rule that any normal person could agree with, such as "Don't design algorithms that can decide to kill humans," might resolve the issue.
“Yet I soon learned,” he writes, “that ‘sensible’ and ‘normal’ are not words commonly associated with the geopolitical and diplomatic realm where arms control issues are discussed.” As he began to navigate this world, it became clear to him that a ban on autonomous weapons systems was going to require much more creativity and effort than the standard policy script.
Read more about one professor’s education in international policy advocacy.