Friday, October 7, 2011

Fund My Study on Aliens?

A fascinating op-ed by Notre Dame professor of philosophy Gary Gutting discusses a Penn State and NASA study about the potential outcomes--good, bad, or neutral--of making contact with intelligent extraterrestrial life forms. Gutting makes a good argument against such pursuits, noting the strong possibility (as he sees it) that extraterrestrials won't be the nice kind of aliens, but the nasty kind that may want to enslave us or use us, like lab rats, for research purposes. Gutting's essay prompts some important questions.

For one, what's the difference between doing a study on what aliens might be like and doing a study on what god might be like? Gutting draws the comparison between the question of the existence of a good or evil god and that of good or evil aliens, and frames the question of whether to pursue contact with aliens in terms of Pascal's wager about the existence and temperament of god. But the opening line of Gutting's essay is suspect, especially for a philosophy professor. He writes:

The probability that there is intelligent life somewhere other than earth increases as we discover more and more solar systems that seem capable of sustaining life.

Is this statement true? Whether the subject is god or aliens, to what extent can we calculate the probability of great unknowns? Within the sphere of human knowledge, which takes certain conditions to be necessary for producing life, the discovery of worlds that could theoretically sustain life as we know it might seem to increase the probability that extraterrestrial life exists. But what if life exists in forms other than those we know? Or what if something exists out there that isn't life at all?

Tied to these questions is the smaller question of whether it makes sense to fund studies that rely so substantially on what we already know enough to recognize as wild speculation. Would we fund a study aimed at determining whether we're watched over by a benevolent or evil god? Of course this comparison is flawed: we have no evidence that would point to the existence of a supreme being of any sort, but we do have some evidence that places beyond our planet could theoretically sustain life. Still, the great leap from the mere existence of extraterrestrial life to the assumption that such life would be not only particularly advanced but also positively or negatively interested in humans is not terribly different from the leap from the possibility of god (a question that is unfalsifiable) to the notion that if there is a god, it would be an anthropomorphized one with positive or negative interest in humans.

What is most interesting about Gutting's article, however, is the way he frames the relationship between technological advancement and cruelty. Gutting writes:

But we do know this: for the foreseeable future, contact with ETI would have to result from their coming here, which would in all likelihood mean that they far surpassed us technologically. They would be able to enslave us, hunt us as prey, torture us as objects of scientific experiments, or even exterminate us and leave no trace of our civilization. They would, in other words, be able to treat us as we treat animals — or as our technologically more advanced societies have often treated less advanced ones.

The argument here is difficult to deny: an observable characteristic of technological advancement is its ability to move us in various directions away from our humanity, whether in a transhumanist sense, or by replacing human labor with mechanized labor, human contact with digital contact, human reasoning with automated reasoning, and so on. While technological advancement benefits humans in countless ways, it also comes with a potentially dark externality: a tendency to replace and sometimes overshadow humanity. Many rightly argue that we have the ability to humanize technology, rather than simply allowing technology to 'technologize' (cyborgify?) humanity; but as we progress, will we be able to retain our humanity through continued technological advancement? This is a legitimate and important question. It raises the attendant question of whether, as the transhumanists have it, somehow transcending our humanity, or becoming something different, would be beneficial, or whether it would mean the calamitous end of humanity as we know it.

One thing is sure: technological progress has no intrinsic ethics; it is regulated only by the ethical limitations we, as humans, impose upon it. Removing the human component from technological advancement necessarily means removing ethical guidance. From there, it is not at all difficult to understand Gutting's assumption: contact would require a society advanced enough to travel with facility between star systems, and technological advancement, conceived of in this extreme, bears no trace of what we understand as human ethical concern. It is therefore sensible to assume that such aliens would indeed be, in human terms, cruel, with a propensity to enslave us, hunt us, or use us experimentally to push their scientific and technological advancement further beyond ours. Though we possess, as humans, an understandable drive to transcend our frailties, and see technology as a means of such transcendence, we should be careful what we bargain for. Absent our humanity and the ethical concerns that come with it, we open ourselves to unthinkable worlds of suffering. What sense does it make to alienate our own species?