Sunday, August 01, 2004

Danger, Will Robinson!

A week or so ago, after seeing the movie I, Robot, I mentioned that it'd gotten me thinking about Asimov and his Laws of Robotics, and about how the movie touched on some things that follow logically from Asimov's postulates but that Asimov himself never really dealt with. And I said I might ramble on about these topics at some point. Well, hey, now seems as good a time as any! I don't think I'm actually going to talk about the movie specifically, but at least a couple of the points I've been thinking about are very relevant to the way the movie works out. I'll leave those who've seen it to identify which points those are.

So, right. The Three Laws of Robotics. You know, I've heard occasional bits of speculation (though how serious, I really don't know) about whether Asimov's Three Laws might be useful to incorporate into real-world robots. Personally, I rather doubt it. They're too vague (a point I am definitely going to come back to). But it's easy to understand where Asimov was coming from when he thought them up and why he formulated them as he did. He'd read one too many stories about crazed robots turning on their creators and, frankly, thought that idea was stupid. Robots, he reasoned, were tools, nothing more, nothing less, and tools can and should be designed to specifications that don't include homicidal rampages and plans for world domination.

The most important feature for any tool, he figured, is that it must be safe to use. And the more potentially dangerous a tool is, the more safeguards inevitably get built into it. There's a reason, he said, why you build a saw with a handle, and there's absolutely no reason why we wouldn't build robots with something fundamentally equivalent. Thus, Law 1: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

The second thing is that if you build a tool, well, hey, you want it to do what it's designed to do. We build robots to do our bidding, thus, Law 2: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. That "except where" proviso makes sense, because, after all, safety takes precedence over efficiency. No machine that performs its job by ripping its operators to shreds is going to be tolerated for long.

Third, a robot is going to be a pretty expensive and valuable tool, and when a tool is expensive and valuable, you want to build safeguards into it to keep it from being damaged. So, Law 3: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. Again, safety is still paramount, and the phrasing of the law recognizes that there may well be times when the machine itself is expendable in the performance of its duty.
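Just to make that ordering concrete (and this is purely my own toy illustration, not anything Asimov ever specified), here's roughly what the three Laws look like if you treat them as a strict priority check. Every name and flag here is invented, and the real difficulty is that somebody would have to compute those flags, which is where all of the problems below come in:

```python
# A toy sketch of the Three Laws as a strict priority ordering. The "action"
# is just a dict of hand-set flags; an actual robot would have to *compute*
# them, and that computation is the hard part.

def permitted(action: dict) -> bool:
    # First Law outranks everything: never take an action that harms a human.
    # (Note that the "through inaction, allow a human being to come to harm"
    # clause isn't even represented here; one sentence of the Law already
    # doesn't fit in one line of code.)
    if action.get("harms_human", False):
        return False
    # Second Law: obey orders from humans, already filtered by the First Law.
    if action.get("ordered_by_human", False):
        return True
    # Third Law: otherwise, avoid actions that destroy the robot itself.
    return not action.get("endangers_self", False)

# The ordering does what the Laws promise: an order to do something harmful
# gets refused, and self-preservation gives way to a legitimate order.
assert not permitted({"harms_human": True, "ordered_by_human": True})
assert permitted({"ordered_by_human": True, "endangers_self": True})
assert not permitted({"endangers_self": True})
```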

So. Those are the Laws. Now, the problems with them...

Problem #1: These laws assume that people will deal with robots fairly intelligently, not giving them conflicting orders or ordering them to do self-destructive things without realizing that they'll be self-destructive. Asimov, like most writers of the era, was mainly writing about extremely intelligent people, and even they occasionally got themselves into trouble with this stuff. Call me a cynic, but I doubt it would work in the real world at all. (Actually, even Asimov seemed to realize this, and allowed for a considerable amount of flexibility in the Second Law, with most robots appearing to have complex additional rules about which orders would be considered to take precedence over which other orders. Still, I don't see any way an Asimovian robot could ever be anything remotely resembling idiot-proof.)

Problem #2: How the heck do you program this stuff in? Asimov, for the most part, was writing puzzle stories: Given these three rules, how can Our Heroes figure out why Robot X is doing Strange Activity Y? For this purpose, he had to assume that the Three Laws were absolute and immutable, built directly into the very structure of a robot's brain. They have to exist on a simple, basic level, but the truth is, they're not simple, basic concepts at all. This leads us into:

Problem #3: These rules rely on a hell of a lot of interpretation. This really is the big problem, and it leaves the door open for all kinds of scary nastiness. In particular, there are a couple of words in the First Law whose strict definitions we really need to know in order to understand exactly what the law is even saying. To begin with, what, exactly, is the definition of "human"? Asimov did touch on this question in his story "Evidence," which considers a robot who is outwardly indistinguishable from a human being, and eventually comes to the conclusion that if such a creature really can't be told apart from a human being, it might as well be treated like one. Which, as far as I'm concerned, is fine. But there's a sinister flip side to that, which is the question of what happens if you can convince a robot to exclude someone, or some group of people, from its definition of "human." Admittedly, it's been a while since I read the stories, but I don't think Asimov ever really deals with that possibility.

The other problem is with the definitions of "injure" and "harm." Does a paper cut constitute harm? How about elective surgery (with the small but very real possibility of something going wrong on the operating table)? What about emotional harm? Taken to its logical extreme (and Asimov's robots are nothing if not logical), the First Law seems like it ought to require robots to do everything in their power to keep humans from ever doing anything dangerous, regardless of the humans' own wishes. Asimov never deals with this, either, but Jack Williamson tackles it head-on in his brilliant and nightmarish story "With Folded Hands." (For the Star Trek fans in the audience, the original series also addresses this issue, in a much more light-hearted vein, in "I, Mudd.")
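To show how much hangs on that one word, here's another invented toy example: pretend the robot assigns every human activity a risk number (the numbers below are completely made up) and applies the "through inaction" clause literally. The only thing separating a helpful robot from a Williamson-style nanny is where the harm threshold gets set:

```python
# Made-up risk numbers, purely for illustration.
RISK_OF_HARM = {
    "paper filing": 0.001,         # paper cuts
    "elective surgery": 0.02,      # small but real chance on the table
    "crossing the street": 0.0001,
    "rock climbing": 0.05,
}

def allow_human_activity(activity: str, harm_threshold: float) -> bool:
    # "...or, through inaction, allow a human being to come to harm."
    # Read literally, the robot must block any activity whose risk exceeds
    # whatever it has decided "harm" means.
    return RISK_OF_HARM[activity] <= harm_threshold

# A lenient reading leaves people mostly alone...
print([a for a in RISK_OF_HARM if allow_human_activity(a, 0.01)])
# ...while a strict one allows nothing at all: "With Folded Hands" in miniature.
print([a for a in RISK_OF_HARM if allow_human_activity(a, 0.0)])
```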

So those are the biggest problems with the Three Laws, as far as I can see. But it doesn't end there. Because, much later in his career, Asimov introduced yet another law, one that certain extraordinary robots were able to develop for themselves. He called it the "Zeroth Law": A robot may not injure humanity or, through inaction, allow humanity to come to harm. As might be surmised from its numbering, this law was given precedence over the other three, being deemed more fundamental and important than even the preservation of individual human lives.

Now, the way Asimov presents this, it's all very noble and moral, and his robotic hero applies it in a gentle and benevolent fashion. But there's no reason to assume that must be the case, and there are some very disturbing possible implications to the Zeroth Law, especially when combined with the aforementioned fuzzy definitions of "injure" and "harm." For instance, given humanity's problems with pollution, over-population, war, etc., etc., might it not, from a strictly logical point of view, be argued that the best way to preserve humanity is to cull out 95% of us and stick the rest on nature preserves? And, really, how many atrocities have humans perpetrated based on the firm belief that what they were doing was in the ultimate best interests of humanity as a whole, and that that noble goal far outweighed the value placed on individual human life?

This is scary, scary stuff, and I really don't think (as Asimov perhaps did) that the fact that robots are clear-headed and unemotional is going to render them exempt from this kind of reasoning. "Garbage in, garbage out" is one of the oldest precepts of computer science, and all it takes is the right sort of faulty or biased input to land you right back at homicidal rampages and world domination plans.
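Here's one last invented sketch of why that old precept applies. Bolt the Zeroth Law onto the top of the toy check from earlier and nothing changes structurally; humanity simply becomes the highest-priority concern. But the entire outcome now rides on whatever routine estimates "harm to humanity," and that estimate is exactly where the garbage gets in:

```python
# The Zeroth Law as a toy rule sitting above the others. Which estimator you
# plug in for "harms humanity" decides whether you get Asimov's gentle hero
# or something much nastier, with no change to the rule itself.

def permitted_with_zeroth(action: dict, harms_humanity) -> bool:
    # Zeroth Law: humanity outranks everything, including individual lives.
    if harms_humanity(action):
        return False
    if action.get("helps_humanity", False):
        return True                 # even if individual humans get hurt
    # Otherwise fall back to a plain First Law check.
    return not action.get("harms_human", False)

drastic = {"harms_human": True, "helps_humanity": True}

benevolent = lambda a: a.get("harms_human", False)  # hurting people hurts humanity
biased = lambda a: False     # "whatever serves the Plan can't harm humanity"

print(permitted_with_zeroth(drastic, benevolent))   # False: still refused
print(permitted_with_zeroth(drastic, biased))       # True: garbage in, garbage out
```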

So, um, yeah. Those are my thoughts on Asimov's Laws of Robotics. Anybody who's actually read this far have anything they'd like to add?
