For the second time now, a student specializing in robotics has expressed to me the sentiment that 'robots should look like robots.' The context of this comment was a discussion of what's being done today with humanoid robots, which can perceive, speak with, and interact with humans. Most of the computer science/automated reasoning/robotics students I've spoken with over the years are attracted to these fields, and enthusiastic about their work, in large part because of the amazing things they can program robots to do. The idea that you could program a machine to hold even a basic conversation with a human is, for many people, exceptionally cool. At the same time, however, this very fascination with how advanced, intuitive, and interactive we can make robots has also proven disquieting. The same students who marvel at how well we can make robots interface with humans also seem to balk at a robot whose appearance too closely resembles that of a real human. In other words, robots without metal and gears showing--designed instead to look like organic beings while mimicking the behavior and capabilities of organic beings--have a propensity to give people the creeps. The reason, presumably, is that without any overt visual confirmation that the being in question is artificial rather than organic, the line between artificial and organic 'life forms' becomes disconcertingly blurry. At present, we humans can handle the shock and awe engendered by increasingly advanced humanoid robots so long as it's easy for us to classify them as unproblematically artificial, just as we tend to classify ourselves as unproblematically organic and human.
What intrigues me about this form of suspension of disbelief is what it suggests about the relationship between human advancement and humanity. On one hand, we're happy to reap the benefits of technological advancement, which are perhaps the fruits of a predictable if not unavoidable trajectory of human progress. On the other, our ability to manipulate the so-called 'natural world' and its materials--including our own organic bodies--poses a threat to our humanity as we know it. Though human evolution could certainly carry us out of or beyond what we understand today to be the fundamentals of our humanity, such a long transition would demand significant grappling with which aspects of our humanity are most valuable and worth preserving, which we could never preserve even if we wanted to, and which we could never discard even if we wanted to. One theory of human progress is deterministic and circular: no matter what we do, we'll unavoidably end up where we end up because of our nature, and the nature of our 'progress.' Another would suggest that human progress is shaped by the choices we make, by humans as agents of our own steady change. I suspect there may come a time when humanoid robots that look identical to humans will be commonplace enough that they cease to give people the creeps; but the more interesting question is not whether and at what point robots will look more like humans, but whether and at what point humans will become more like robots.