There are two things I worry about when I write one of these posts. The first is that I repeat myself. Going through the reading for this chapter, my take was basically the same as the post I wrote on fascism in cyberpunk: the bad thing doesn’t really work, because it’s still designed and created by humans. We can build incredibly lethal robots that can annihilate people in a split second, but they’re basically going to have the same level of competency as Siri. The killbot is going to fuck up and kill the wrong people if it’s left to its own devices. Then again, the US military has been known to murder people via drone strike for being slightly too tall, so it’s probably matching the modus operandi there.
My other main concern is that I’m too cynical. You could probably ascertain that from the last part of the previous paragraph. I like to think of myself as a humanist: someone who believes in the amazing potential of humanity, in the sanctity of all human life, and that people are generally better and smarter and kinder than we often give them credit for. I try to square that belief with my takes on inept and dangerous leadership by targeting the ideologies at play, and the way things tend to work out in the real world.
Looking at RoboCop and ED-209, for example: that’s basically what I was thinking of when I wrote that first paragraph. That, and the fact that my phone keeps resetting for some reason, and I hate it, but I can’t afford a new one. We will undeniably build a robot that can murder stuff with the best of them, but there’s no practical version in the foreseeable future that is better at ascertaining threats than a real live person. And that’s a very low bar, considering how bad humans are at deciding how threatening something is. Machines are only as smart as we can make them: we pick the values for the machine to look for, and that choice is unavoidably informed by real-world bias. Machines can’t fully operate on logic and reason because humans can’t fully operate on logic and reason.
Although, I think that’s a really, really good thing. I don’t know why empathy and emotionality are considered such negative qualities. There’s one bit in “Do Androids Dream of Electric Sheep?” that sticks out in my mind, towards the end, where an android pulls the legs off a spider. It’s a chapter designed to show how terrifying an entity without empathy is: someone who looks at suffering and pain with nothing more than curiosity. It makes sense that Dick was inspired to write about this after researching Nazis for his other book, “The Man in the High Castle.” The movie Blade Runner has different goals, and I respect that, but I think it would’ve been nifty to include that scene.
In the introduction to “War in the Age of Intelligent Machines,” the author talks about how, in simulated nuclear scenarios, the humans were always much more cautious about the nuclear option than the computers were. I should think that being shy about killing millions if not billions of people in nuclear hellfire would be an admirable trait in a person. But I guess it’s mostly fear of what the other side will do, that they are monsters who do not share your value for human life, and so you must become a monster in turn.