Never mind the zombies and vampires. Worry about the cyborgs and nanobots—the real things, in other words.
So how do we keep such creatures from killing us in our sleep? That question occupies not just sci-fi writers, but also ethicists such as Wallach (Interdisciplinary Center for Bioethics/Yale Univ.; co-author: Moral Machines: Teaching Robots Right from Wrong, 2008), who works the rich vein that Edward Tenner and Donald Norman explored in their studies of the Murphy’s Law–ish world of unintended consequences wrought by human design. Tinkering with the deepest levels of subatomic particles may produce a big bang sufficient to end our existence; building ever smarter robots may produce one so smart that the robots decide humans are pests. On that score, Wallach notes that though Isaac Asimov’s laws of robotics assert that robots may not hurt us, “in story after story, Asimov illustrates how difficult it would be to design robots that follow these simple ethical rules.” Lest the current generation of robot designers fail to think about these things, Wallach examines some of the broad ethical problems posed by complex systems, reviews hopeful developments in the field of resilience engineering, and generally advocates a more careful approach to building, and thinking about, things that may kill us, whether meant to do so or not. Figuring nicely in his discussion is London’s “Wibbly Wobbly Bridge,” which illustrates the point that “mechanical systems are naturally prone to move from orderly to chaotic behavior.” Alas, human systems are as well, which occasions his call for better monitoring, modeling, and imagining of the what-ifs.
Wallach describes himself as a “friendly skeptic” with respect to some aspects of technology, but readers may be inclined to gloom after learning all the ways things technological can go south. A well-mounted argument that deserves wide consideration.