The idea that the study of the humanities is unrigorous, or that humanities subjects are easier than social science or science, technology, engineering, and math (STEM) subjects, is for proponents of the humanities one of the most insulting ideas around. The reason is threefold: one, because this idea is pervasive and at the same time demonstrably false; two, because it presupposes that the people who study humanities fields are unintelligent, or not intelligent enough to study STEM fields (i.e. that those trained in STEM are smarter); three, because it leads to the related idea that many of the finest products of the human mind are not things to be taken seriously in any systematic way--not things to be studied, nor things we can learn anything from, nor things that play any role in human progress.
When pressed to it, very few people, however adamant they are as detractors of the humanities, or as so-called pragmatists or futurists or techies or transhumanists or plain, self-avowed philistines, will admit to thinking that things like literature and visual art do not enrich humanity, or are not important. In fact, it's often those who despise the study of the humanities the most--who find it the most useless--who actually embody the greatest love of the various products, 'technological' and otherwise, of human creativity, or the creative mind in action.
The problem, rather, is this idea that to study such 'creative objects' is both unrigorous, involving little more than subjective opinion-forming, and, as such, unproductive, because many of the questions raised by literature, art, history, and certain types of philosophy are unfalsifiable in the sense of the scientific method. In simple terms, when people think that studying the humanities is easy, or that anyone who can form an opinion can also, by virtue of that, partake of humanities study at high levels, they come to question the formal study of it altogether. Why should some professor tell the average opinion-former how to read a text? The blame for the pervasiveness of this suite of misguided ideas, and all of the false and misleading notions that attend it, lies squarely with those of us in the humanities. It is our fault.
As I've mentioned before, we have long been at fault for failing to participate actively in the political processes that result in funding allocation, and, more importantly, for failing to engage the lay public with the work we do and its value. This is a difficult task, to be sure--it's much harder to explain value that isn't easily quantified, or consolidated in a consumer product--but we can do much better. Where we've been most negligent, though, and perhaps most meaningfully negligent, is in our most important task as humanities scholars: our teaching. And it is through our failure in teaching and evaluating students in humanities courses that we invite suspicion at best, and outright confirmation at worst, of the malicious ideas surrounding the 'unrigorousness' of our subjects.
As a friend of mine pointed out recently, students often drop out of STEM courses in order to take humanities courses that they believe are easier. I find it difficult to argue with that perception. Whereas a professor of mathematics has reasonable and easily expressed cause to give a lower grade to a student who completes a proof or solves a problem incorrectly, a professor of literature typically has a much harder time explaining to a student why they deserve a C instead of an A-. There's no question that the evaluative process in literary study is partly subjective. There is no way to 'prove,' in the scientific sense, a particular reading of, or insight from, a literary text. Literary texts raise questions, in other words, that are not falsifiable. While the evaluative process for literary texts--the practice of literary criticism--is not wholly subjective, as many believe it to be--which is to say, it is not just an exercise in unrigorous opinion-forming--its subjective element most certainly opens doors, in the teaching and evaluation of students in literature and other humanities courses, to very uncomfortable grading scenarios. As a consequence, students of literature develop the view that they are being graded on entirely subjective grounds (including whether or not the professor likes them), and that a grade is something attributed to them rather than something they have earned through the quality of their work. Professors of literature, under pressure from both their institutions and students' hovering parents--who expect that, after paying so much in tuition, their child 'deserves' a good grade--tend to evaluate students' work less rigorously, and then to assign higher grades than students have actually earned.
In so doing, we fail to properly educate students in the rigors of humanities scholarship, as well as the crucial differences between an argument from opinion (or from logical fallacy) and an argument from evidence (and attendant reason). Along with ethical concerns, these are among the most important things that come out of rigorous study in the humanities (one could of course argue that STEM and social science courses teach this too; they do; but it is precisely the ambiguity found in the objects of humanities study, not found to such an extent in objects of scientific study, that makes such objects ideal for doing the difficult work of separating good arguments from poor ones). Yet instead of engaging students in this difficult work, and being prepared to stand up for the very analytical methods we stand behind, we humanities scholars retreat to the bureaucracy of higher education, abandoning our duty while blaming other institutions for encouraging grade inflation.
The result, then, is both sustained grade inflation (which happens less in STEM subjects, where greater objectivity in problem solving produces greater clarity in the evaluation of students) and, accordingly, a sense that humanities courses are actually easier: mere opportunities to bolster one's GPA, or to avoid learning the subject matter at hand as thoroughly as might be required to do well in a biology class.
Of course, the study of a literary text, for example, is extremely rigorous. Part of that rigor comes with the fact that, because there is no way of arriving scientifically at one distinct conclusion through testing or experimentation, one must be extra diligent in crafting a clear, logical, and plausible argument, and supporting it with sound textual evidence, historical evidence, and evidence from secondary scholarship, all with attention to some degree of linguistic accuracy (one can't simply decide that the word 'car' is 'symbolic' of consumer culture, for example, or that 'to throw the ball' really means 'to jump the fence,' at least not without building a logical and plausible argument for such a reading, such that if the reading isn't valid, the argument, however elegant, is unlikely to stand up, to peer review or otherwise). Naturally, while no humanities scholar can claim to have proven or falsified a literary question in the scientific sense, it is also the case that no literary scholar has ever had the luxury of relying solely on an empirical fact as a justification in itself, without erecting (usually in painstaking fashion) the scaffolding of an argument. How much easier it would be, many of us would say, were we able to simply point to something and say 'look, it's proven,' and move on.
Making sound arguments with strong evidence is much easier said than done in humanities study. If students were actually given the grades that they earn in humanities courses, it's likely that we would not think the study of the humanities is so easy, or so unrigorous. It's more likely that a normal skill or performance distribution would emerge, akin to those in STEM fields, where grades are more properly earned, and success or failure is more readily demonstrated and communicated. A nicely written but sophistic argument shouldn't garner a B+, nor should an incoherently expressed notion with incisive potential. An elaborate argument about Sophocles in an assignment on Shakespeare misses the point--to learn and argue something valid about Shakespeare--and yet in too many cases this type of slippage is rewarded in humanities courses. It's as if we're all going out of our way, idealistically, to see potential for progress and liberation everywhere, including in lousy or intentionally deceptive student work.
This is all to suggest that if teachers, professors, schools, and universities get serious about evaluating students properly in the humanities, without fear of the feeble arguments for 'a better grade' that students and parents too frequently launch without sufficient basis, we will not only be able to teach students better in these fields (and in their written and oral communication); we'll also do our part to work against the false impression that just because the evaluation of humanities work has been softer, the study of humanities objects themselves is somehow easier or less rigorous than that of other fields. This view lies at the heart of the so-called 'crisis in the humanities' today.
Thursday, November 10, 2011
Saturday, October 22, 2011
I look at my hands
I'm working on a short deadline, on a talk I'm giving next week, and decided to take a break to read a little. In a New York Times essay by a mother whose infant son had been diagnosed with Tay-Sachs, and would die by the age of three, I read this line:
But today Ronan is alive and his breath smells like sweet rice.
Involuntarily I begin to hear myself breathing. I start huffing as my chest expands beyond the vertical plane of my downward-angled chin. My eyes become wet. I feel my lips pursing, my teeth locking, and my countenance turning sour. I pick up a hand, turning it over and back, studying it like a foreign object, a baffling attribute: what sort of creature am I, are we? Why do I feel this?
I look up at Law, Labor, and Ideology in the Early American Republic, which is stacked on other books in my tiny office, and I am confused. This is the place where I come to work and to worry about matters pertaining to my future. The only things I'm supposed to feel in here are crushing anxiety, profound self-centeredness, and the need to advance my professional career. And yet, despite how future-oriented I must be at this stage in my life, I have been moved by a story about a person who has no future at all.
In these moments I look at my hands, because they are the parts of me that I can see that remind me of my humanity. When I look at my hands I can also see my brother's hands when he was just born and I, at three years old, was deeply afraid of losing my relevance. And I can see my mother's hands, and my father's hands, my grandparents' hands, the hands of women I've loved; all the hands I've held in my hands. My favorite parts are the tops of the palms, just before the fingers take form, which are slightly puffy and resemble pads or paws. I can look at my pads and see that this is precisely the type of creature I am: like every other creature that has once crawled: finite.
Friday, October 7, 2011
Fund My Study on Aliens?
A fascinating op-ed by Notre Dame professor of philosophy Gary Gutting discusses a Penn State and NASA study about the potential outcomes--good, bad, or neutral--of making contact with intelligent extraterrestrial life forms. Gutting makes a good argument against such pursuits, noting the strong possibility (as he sees it) that extraterrestrials won't be the nice kind of aliens, but the nasty kind that may want to enslave us or use us, like lab rats, for research purposes. Gutting's essay prompts some important questions.
For one, what's the difference between doing a study on what aliens might be like and doing a study on what god might be like? Gutting draws the comparison between the question of the existence of a good or evil god and that of good or evil aliens, and frames the question of whether to pursue contact with aliens in terms of Pascal's wager about the existence and temperament of god. But the opening line of Gutting's essay is suspect, especially for a philosophy professor. He writes:
The probability that there is intelligent life somewhere other than earth increases as we discover more and more solar systems that seem capable of sustaining life.
Is this statement true? Whether it's god or aliens, to what extent can we calculate the probability of great unknowns? Within the sphere of human knowledge, which understands there to be certain conditions for producing life, knowing that there are worlds out there that could theoretically sustain life as we know it might convince us that the probability of extraterrestrial life is higher. But what if life exists in forms other than those we know? Or what if what's out there isn't life at all?
Tied to these questions is the smaller question of whether it makes sense to fund studies that rely so substantially on things we already know enough to recognize as wild speculations. Would we fund a study aimed at determining whether we're watched over by a benevolent or evil god? Of course this comparison is flawed: we have no evidence that would lead us to believe in the existence of a supreme being of any sort, but we do have some evidence that places beyond our planet could theoretically sustain life. Still, the great leap from the mere existence of extraterrestrial life to the assumption that such life would be not only particularly advanced, but also positively or negatively interested in humans, is not terribly different from leaping from the possibility of god (the question of god is unfalsifiable) to the notion that if there is a god, it would be an anthropomorphized one with positive or negative interest in humans.
What is most interesting about Gutting's article, however, is the way he frames the relationship between technological advancement and cruelty. Gutting writes:
But we do know this: for the foreseeable future, contact with ETI would have to result from their coming here, which would in all likelihood mean that they far surpassed us technologically. They would be able to enslave us, hunt us as prey, torture us as objects of scientific experiments, or even exterminate us and leave no trace of our civilization. They would, in other words, be able to treat us as we treat animals — or as our technologically more advanced societies have often treated less advanced ones.
The argument here is difficult to deny: an observable characteristic of technological advancement is its ability to move us in various directions away from our humanity, whether in a transhumanist sense, or by replacing human labor with mechanized labor, human contact with digital contact, human reasoning with automated reasoning, and so on. While technological advancement benefits humans in uncountable ways, it also comes with a potentially dark externality: a tendency to replace and sometimes overshadow humanity. Many argue rightly that we have the ability to humanize technology, rather than simply allowing technology to 'technologize' (cyborgify?) humanity; but as we progress technologically, will we retain the ability to keep our humanity intact? This is a legitimate and important question. It raises the attendant question of whether, as the transhumanists have it, transcending our humanity somehow, or becoming something different, would be beneficial, or whether this would be the calamitous end of humanity as we know it.
One thing is sure: technological progress has no intrinsic ethics, and is regulated only by the ethical limitations we, as humans, impose upon it. Removing the human component from technological advancement means necessarily removing ethical guidance. From such a scenario, it is not at all difficult to understand why Gutting assumes that, because it would take a much more technologically advanced society to travel with facility across interstellar distances to make contact with humans on earth, and because technological advancement, conceived of in this extreme, bears no trace of what we understand as human ethical concerns, it's sensible to assume that such aliens would indeed be, in human terms, cruel, with a propensity to enslave us, hunt us, or use us experimentally to further their scientific and technological advancement beyond us. Though we possess, as humans, an understandable drive to transcend our human frailties, and see technology as a means of such transcendence, we should be careful about what we bargain for. Absent our humanity and the ethical concerns that come with it, we open ourselves up to the possibility of unthinkable worlds of suffering. What sense does it make to alienate our own species?
Sunday, October 2, 2011
The Emptiness of 'Technology, Math, and Science'...and Poetry
Yes, the title is deliberately provocative. The point of this essay is not to suggest that technology, math, and science are themselves empty (I'm not even sure what it would mean to suggest as much), but to note the sheer emptiness with which 'technology, math, and science' are invoked by politicians and media types as buzzwords and panaceas.
You get the sense that something must be wrong when the latest way to allay fears about unstable and failing economies, joblessness, social uprising, terrorist threats, and natural disasters is to deliver some kind of bromide about math and science. When Elmo appears on TV encouraging children to learn their math and science, or Obama pins the future of US global leadership on math and science, we're given the impression that math and science are kind of like comic-book superheroes, who, once adequately funded and foisted upon every child born in the noughties, will rid the world of all its problems, leading us into an enlightened future. Curiously, math and science are invoked by politicians with little more substance than the stating of the words themselves, such that these varied and complex fields of study have become the most trivial of talking points.
While math and science are popularly understood as panaceas for the world's problems, technology is understood more like a god, or a divine muse. Perhaps the dumbest article I've ever read appears in today's edition of the Independent. The ridiculous title, 'Facebook is Muse to Today's Young Poets,' draws a sweeping conclusion based on a single, uninformed quote from a woman named Judith Palmer, who is the chair of something called the Poetry Society. Commenting on an increase in entries for a young persons' poetry contest sponsored by the Society, Palmer suggests offhandedly that...
Teenagers have always written poetry but I think there's something to do with the familiarity with Facebook and Twitter that gives a confidence in sharing your thoughts and feelings publicly.
Well, Judith, that's an interesting opinion. But if I were a decently responsible journalist interested in writing about something other than hideous platitudes, I wouldn't take such an uninformed opinion as a basis to assert, as the article's author Jonathan Owen has, that...
Modern technology, rather than literary history, is fueling an upsurge in poetry.
Nor would I be pleased if my editors took Palmer's offhanded comment to draw the entirely fallacious and unsupported conclusion, stated in the kicker, that...
Record number of entries to competition shows new generation finding inspiration in technology.
In fact, the 'record number of entries' could plausibly be the result of any number of things in addition to or instead of the existence of Facebook and Twitter, random proxies here for 'technology.'
What's going on here, of course, is that a journalist has decided to take an unsupported opinion and convert it into a mask of support for a causal link between an increase in applicants to a poetry contest and the 'muse' of technology. Why, I ask, would anyone draw such an arbitrary and absurd conclusion, and treat it as fact? Certainly it's possible that Facebook and Twitter could be making young people more comfortable with sharing their poetry (though I'm, not surprisingly, skeptical); but there is absolutely no demonstration in the article that this is true, no attempt whatsoever to demonstrate a link (as opposed to simply declaring one) between this year's increase in poetry submissions and Facebook and Twitter. Questions abound: why only an increase this year, when Facebook and Twitter have been around for years? Is the quality of the submissions higher overall? What are the submission numbers over the last 10 years, and is this year an outlier? Beyond these, Owen's article happily quotes 'only one' young poet who cites a 'classic poet' as a poetic inspiration (she cites John Donne, ha); yet Owen includes precisely zero quotes from young poets who entered the contest and cited Facebook or Twitter as their muses. If Facebook and Twitter have so revolutionized the poetry contest, surely Palmer and Owen could have found at least one or two quotes from contestants who were inspired by social media? No? So then, last I checked, my superb math and science education enables me to observe that 'only one' is actually a greater quantity than ZERO.
This is just another instance of a reflexive obsession with the likes of 'technology,' which stands in most pathetically for first-grade-level descriptors such as 'good' or 'nice.' Not only is this misleading, shoddy, irresponsible journalism; it's also an example of, in my estimation, one of the biggest contributions to our problems (and not a story about one of the solutions): people are so bad at basic literacy, textual analysis, and reasoning that we're happy to draw laughably false conclusions based on allusion, suggestion, and coincidence. For example, it's a wonderful irony that an article lauding the generalized, blanket greatness of technology--even in the arcane sphere of poetry--stands itself as an example of how not technology, but a better understanding of text would have solved the interpretive problem at hand. I need not get into the specifics, I hope, of the grave dangers of mis- or unguided scientific or technological pursuit; but I will close with a warning: once something, however important, becomes reduced to a daily buzzword in the mouths of politicians and journalists, it's time to take a closer look between the lines.
Friday, September 23, 2011
Three Arguments Against The Death Penalty, Which Is An Abomination
The death penalty figures to become a prominent issue in US national politics in the months to come, thanks to two events: the very probably wrongful execution of Troy Davis in Georgia (US), despite heaps of evidence that tarnish the reliability of his conviction, and the boisterously positive (and highly publicized) reaction of a conservative audience to Texas Governor and leading Republican presidential candidate Rick Perry's boasting, during a debate, that Texas under his governorship executed 234 people. Though some people will tell you that the death penalty has a deterrent effect on murder rates, and others will tell you it actually increases murder rates, there is a strong case to be made that these data are, in either direction, reliant upon too many assumptions to be conclusive. It makes sense, therefore, to consider other, more powerful reasons beyond the utilitarian reductionism of deterrence measurements. Below are a few arguments against the death penalty, which is one of the most repulsive abominations of the modern world, and should be ended immediately, unequivocally, and without remorse.
1) Constitutional. Shortsighted people, including many judges, will often argue that there is not enough precedent available for us to understand the death penalty as a Federal issue. They argue instead that it's the right of states to decide whether to have the death penalty or not (hence, some do and some do not). Of course there is constitutional-law merit to this argument; but it's not the most valid argument, even along constitutional lines. For one, whatever challenges might eventually be raised against the constitutionality of a Federal law banning the death penalty throughout the US, it's of course well within the constitutional remit of Congress to legislate such a law (which means constitutionality is already a weak excuse for Federal government inaction on this issue).
But beyond this, the Eighth Amendment to the Constitution does explicitly prohibit 'cruel and unusual punishment' (and the Supreme Court has subsequently ruled that this also applies to the states). The phrase 'cruel and unusual punishment,' taken from the English Bill of Rights of 1689, has traditionally referred to brutal executions like disembowelment or boiling to death, for example. Today, we assume that execution by electric chair and lethal injection are neither cruel nor unusual. By medieval standards, that's a fair assumption. By contemporary standards, we have already begun to phase out the electric chair because of its brutality. Beyond this, however, lies the very legitimate question of whether 'cruel and unusual' ought only to apply to matters of physical or sensory pain. Is it not both cruel and unusual to systematically execute a human being, even (and perhaps especially) by subjecting them to the ultimate authority of the state, strapping them down to a bed, and pronouncing that they, in such a position, will be executed? Peter Moskos, a professor at the John Jay College of Criminal Justice, has recently caused a stir by proffering the notion that, given the choice, one would be more likely to accept five lashes with a cane (à la Singapore) than to endure five years in prison. A fundamental aspect of this logic is that the most painful punishment is not necessarily the most cruel. Applied against the logic that underpins our understanding of 'cruel and unusual' as a matter of physical pain, it would be hard for a reasonable person to deny two things: one, the cruelest punishment is death (hence, 'upon pain of death'); two, though a caning or a beating theoretically causes more physical pain than a lethal injection, the horrific nature of systematic execution--the pain of death--has as much to do with the fact of execution itself as the means by which the execution is carried out.
If boiling to death is cruel and unusual because of the physical pain and duress it causes, it is also cruel and unusual because of the finality of that pain, as well as the anticipation that one's last living moments will be spent suffering. Simply because the physical pain is reduced in more 'humane' means of execution, should we then disregard the cruelty of the very act of putting to death?
2) Limitations on the powers of the state. By assuming that the state has the sovereignty to execute one of its citizens, we put more faith in the state and its competence than perhaps we'd like to believe. Conservatives frequently support the death penalty from 'tough on crime,' religious, or retributive justice standpoints; but how do conservatives square placing the ultimate authority--the authority to execute--in the hands of the state while denying that the state should have much authority otherwise? What does it mean to suggest that the state has the right to kill citizens, but not provide them with a health insurance option; to execute, but not to levy taxes on carbon emissions; to carry out 'final justice,' but not to regulate the corporate consumption of natural resources? If there's one power with which we ought not to entrust the government, it's the systematic execution of its citizens. Once execution happens, there is of course no turning back. And there are plenty of instances in which the government gets it wrong.
3) The function of the law. Some believe that the function of the law is to punish people. Others believe it is to carry out revenge or provide retribution (these first two beliefs often go hand in hand). Still others believe that the law can provide parameters for the rehabilitation of criminals. All of these positions are seriously flawed, and lead to significant problems in the criminal justice system. For one, the medieval-religious notion of punitive or retributive justice leads us to make emotional and irrational decisions about crime.
In the case of the death penalty, such views feed bloodlust and an archaic 'eye for an eye' mentality. We condemn these kinds of measures when they take place abroad, in countries we presume to be inferior and barbaric: how can death by stoning persist in Iran or Saudi Arabia, canings in Singapore, or torture in China? Yet, when we apprehend crime (especially violent crime) on our own soil, we direct our attention not to the suffering of the victims of crimes and their families, but to the desire that the criminal suffer as payback (instead of five lashes, five years of sanctioned beatings and rape in US prisons...but 'they deserve it,' we say). The same mentality drives the notion, commonly held by USonians, that if you take a life, you deserve to have your own life taken from you (or you 'forfeit your right to live')--an eye for an eye. This is sometimes self-righteous religiosity (Biblical or Sharia approaches to criminal justice), but it's also religious self-righteousness--the belief that we can and should be arbiters of justice.
The fact is, we are not arbiters of justice, but generally well-meaning people who want to keep those who harm others off the streets and out of the way of those who abide by the law and play fair. When you strip away our emotional responses to crime, what's left is the general sentiment that we just don't want anyone to hurt us, steal from us, kill us, etc. We therefore create a justice system not to deliver final justice or judgment, nor to take retribution into the hands of the state on behalf of the wronged, but to keep one another safe by separating those who do criminal harm from those who do not. Taken this way, the death penalty, with all the costs it entails to make sure it's done with adequate due process (even though this is often not even enough to assure accuracy and rightful convictions), is not worth it compared with locking criminals up and making them work to earn their keep until they've demonstrated an ability to rejoin broader society without criminally harming others.
The reasons for abolishing the death penalty are sensible and manifold, but they are not easy to digest when we take emotional and archaic approaches to criminal justice. We ought to be beyond such solutions as state-sponsored execution; and until we are, we fall well short of the degree of civilization whose lack we decry in our foreign enemies, and whose benefits we tout for ourselves as a supposed global exemplar.
Saturday, August 20, 2011
Cut Now, Think Later: An Undiscussed Problem
It's no secret that in a struggling or contracting economy, people want government to scale back on spending, and government responds, sometimes rightfully, with an assessment of which services and spending items are absolutely necessary and useful, and which are frivolous. In the present situation, in which government has managed its budget irresponsibly, and racked up a not-so-healthy amount of debt in the process, calls for a no-frills assessment of spending and spending priorities are especially relevant. And on top of these circumstances, beyond the general, bipartisan understanding that the US government needs to do a better job of maintaining its finances, whether by reducing spending, closing tax loopholes and raising taxes, or some combination of the two, conservatives and Republicans have almost monolithically taken the 'tea party' position that we need to cut, cut, and cut some more. No tax increases, just cuts.
There are many, frequently discussed problems with this mentality: it's economically nonsensical, it directly contributes to job reduction, it places an unreasonable burden of 'sacrifice' on the lower and middle classes, and, given the unyielding nature of its proponents, it has put in jeopardy the full faith and credit of the US government. You might wholly disagree with PMB's assessments here, but you've certainly heard or read about these issues, and their surrounding debates.
What no one seems to be discussing, however, is a separate set of unforeseen consequences of a cut now, think later approach to spending cuts.
What PMB is talking about here, more explicitly, is crude valuation, the process by which we decide whether something is worth it or not. With a cut now, think later approach, we'll surely take a hasty, oversimplified approach to valuation. In higher education, this type of hasty approach takes the predictable turn of funding science, technology, math, and engineering programs (with each of these being construed in such blanket, general terms as to be entirely useless in the valuation process), and cutting the arts and humanities (incidentally, an unintended consequence of this policy in Britain has been an oversupply of workers in these fields, leaving many STEM-field graduates with higher unemployment rates than graduates in other fields). In fiscal policy, the equivalent of these hastily designated mainstays is perhaps defense spending, along with some science and medical institutes (the NIH, for example); but more or less everything else, for the rabid budget-cutters, is 'on the chopping block' (from large and expensive programs like Social Security and Medicare to tiny and inexpensive ones like NPR, the NEA, and the NEH).
The problem with making the same old assumptions about what's a staple and what isn't, or what's useful and what's not, especially when we have the economic gun pressed to our heads (or when we drum up that kind of hyperbolic reaction to our current economic difficulties), is that we can't predict what our future will hold, and what we'll need to address its challenges. We can't predict whether tearing down the National Endowment for the Humanities, perhaps the only organization with any Federal advocacy influence for nearly half of what we esteem as 'the liberal arts and sciences,' will deal a final blow to the teaching and learning of foreign languages in the US in an era of globalization. We can't predict whether de-funding NASA will deprive us of future morale-boosting (and residual-economic-benefit-producing) endeavors, like landing on the moon, or sending commercial aircraft beyond the atmosphere. These kinds of things can sound ridiculous, but, as Gregory Petsko, professor of biochemistry at Brandeis, has so convincingly argued, it also seemed ridiculous to continue funding virology programs after we figured out vaccines but before HIV came around, just as it seemed ridiculous to think that anyone would need to know anything about Arabic or cultures of the Middle East before September 11, 2001.
Tuesday, August 16, 2011
Time For Welfare
There are strong arguments to be made for having a smaller government versus a larger one, but these arguments are rarely made, even, and perhaps especially, among Republicans who want smaller government.
Libertarians argue powerfully for the virtues of a small government that allows for the least possible interference in the lives of individual citizens going about their business freely. They argue that government is, as James Madison suggested ('If Men were Angels, no Government would be necessary'), a necessary evil, one that is best kept on as small a scale as possible, and only intervenes in the lives of citizens in order to protect the unbridled self-interest of one against that of another: to provide baseline order for the State of Nature such that individuals cannot trample upon the Natural Rights of others.
Republicans, on the other hand, typically skip the nuts and bolts of the libertarian argument in order to arrive at a general understanding that government is bad. This is manifested frequently in the talking point 'government ruins everything,' a sibling of 'the private sector does everything better than the government,' and 'when was the last time the government did something well' (you get the picture). As Milton Friedman once said, 'If you put the Federal government in charge of the Sahara Desert, in five years there'd be a shortage of sand.'
Republicans use this partially developed argument to justify a wider ideology, that which proclaims that because government is bad, less government is good. As you can see, this is blatantly tautological: the assumption that government is bad, that it's worse at solving certain problems than the private sector, and that it poses a greater threat to liberty than certain institutions within the private sector, is rarely substantiated. This assumption that government is bad is then taken by Republicans as the foundational reason for arguing that the absence of government is good. The tautology of this thinking is actually put into practice via the Republican political strategy of 'starving the beast': by fighting to cut government revenues and slash government funding, programs, and provisions, Republicans limit the effectiveness of the government in doing the jobs we assign it, thereby producing many of the very ill effects, inefficiencies, and inadequacies that constitute the Republican charge against government. This process is further complicated by the 'soft' but influential relationship between individuals in government and the private sector: the blame for problems created by and in the private sector can be easily deferred to the government, since the government is ultimately pushed (via lobbying and financial contributions) in many cases to do the bidding of powerful private-sector interests. Republicans can plausibly blame the mortgage crisis, for example, on government policies to get more Americans to own their own homes than was ever reasonable, despite the fact that such a policy was precisely what the private-sector real estate and lending lobbies asked for.
That background aside, the purpose of this article is to address another Republican strategy aimed at dismantling and discouraging the government's role in providing unemployment and welfare benefits to underserved (not undeserved) Americans.
When we engage in partisan debates about welfare, we hear two basic positions. Democrats argue that the government should play a leading role in providing for society's poor, given the lack of systematic, private-sector care for the poor. They argue that individual charitable contributions aren't enough, because individuals often don't donate their charity dollars efficiently or systematically. Republicans argue that private and faith-based charitable endeavors are not just adequate, but more effective than government welfare and social support programs. For Republicans, government handouts only breed laziness and dependence on more government handouts.
In a time of high unemployment and plenty of civil unrest, it's time to come to terms with a few things, and then reframe the welfare debate.
For one, we must realize that, while many things are indeed better handled by the private sector, some kinds of things are better left to government. These things usually involve systematizing something required on a large and relatively uniform scale. Regulating industries like air travel and food, for example, is a job for which the profit incentive isn't always the most efficient way to ensure safety. Basic environmental protection (and resource protection), as well, is something that private companies will fail to do, so government agencies are well positioned to provide guidelines and enforcement to make sure the water tables are toxin-free, or a company can't dump biohazardous materials in a local park. Government need not control all aspects of these endeavors; but government is the best entity to take the lead on safety and regulation where other entities fail to do so, to the detriment of the people.
Providing systematic welfare is another one of these tasks that is best led by government. The profit motive does not provide incentive to provide for those who, by the very definition of their neediness, have already fallen through the system. Further, a private-sector meritocracy is a fine way to distribute resources to a point; but it provides no guidelines whatsoever for tending to the baseline needs of individuals who, for lack of a better way of putting it, 'lose.' Because very few would be so callous as to suggest that the 'losers' in such a system should simply wither away and die--that they somehow deserve it because they are either incapable or lazy--it makes sense to develop systematic charitable solutions for the inevitable problem of caring for those who, for whatever circumstances, fail to care adequately for themselves. This is perhaps the sine qua non of a civilization, a civil society. And no matter how many church groups and individual philanthropists we have to help the needy and contribute charitable funds, such a slapdash method does not reach everyone. More importantly, however, it does not provide the framework for everyone who needs to be reached to gain clear-cut access to assistance. The Republican-hijacked virtue of self-reliance is jeopardized when needy individuals, without internet access or even a phone book (if they still make those), have no clear-cut, systematized, widely located place they can go for help in the first instance.
At the same time, we need to understand that government assistance doesn't have to come in the form of handouts. It's perfectly reasonable to require needy individuals who are physically and mentally capable to work for their welfare checks, for example, by cleaning up and organizing within their communities. And for those who are not capable--the mentally ill, the chronically sick, for example--how could anyone dare say that government provisions are only making these people 'lazy,' or that their dependence comes from government, rather than personal circumstances, disability, illness, etc.?
A crucial mark of a truly rich and advanced society is the ability to care for its poorest citizens. Anything less than that isn't civilization.
Libertarians argue powerfully for the virtues of a small government that allows for the least possible interference in the lives of individual citizens going about their business freely. They argue that government is, as James Madison suggested ('If Men were Angels, no Government would be necessary'), a necessary evil, one that is best kept on as small a scale as possible, and only intervenes in the lives of citizens in order to protect the unbridled self-interest of one against that of another: to provide baseline order for the State of Nature such that individuals cannot trample upon the Natural Rights of others.
Republicans, on the other hand, typically skip the nuts and bolts of the libertarian argument in order to arrive at a general understanding that government is bad. This is manifested frequently in the talking point 'government ruins everything,' a sibling of 'the private sector does everything better than the government,' and 'when was the last time the government did something well' (you get the picture). As Milton Friedman once said, 'If you put the Federal government in charge of the Sahara Desert, in five years there'd be a shortage of sand.'
Republicans use this partially developed argument to justify a wider ideology, one that proclaims that because government is bad, less government is good. As you can see, this is blatantly tautological: the assumption that government is bad, that it's worse at solving certain problems than the private sector, and that it poses a greater threat to liberty than certain institutions within the private sector, is rarely substantiated. This assumption that government is bad is then taken by Republicans as the foundational reason for arguing that the absence of government is good. The tautology of this thinking is actually put into practice via the Republican political strategy of 'starving the beast': by fighting to cut government revenues and slash government funding, programs, and provisions, Republicans limit the effectiveness of the government in doing the jobs we assign it, thereby producing many of the very ill effects, inefficiencies, and inadequacies that constitute the Republican charge against government. This process is further complicated by the 'soft' but influential relationship between individuals in government and the private sector: the blame for problems created by and in the private sector can be easily deferred to the government, since the government is ultimately pushed (via lobbying and financial contributions) in many cases to do the bidding of powerful private-sector interests. Republicans can plausibly blame the mortgage crisis, for example, on government policies to get more Americans to own their own homes than was ever reasonable, despite the fact that such a policy was precisely what the private-sector real estate and lending lobbies asked for.
That background aside, the purpose of this article is to address another Republican strategy aimed at dismantling and discouraging the government's role in providing unemployment and welfare benefits to underserved (not undeserved) Americans.
When we engage in partisan debates about welfare, we hear two basic positions. Democrats argue that the government should play a leading role in providing for society's poor, given the lack of systematic, private-sector care for the poor. They argue that individual charitable contributions aren't enough, because individuals often don't donate their charity dollars efficiently or systematically. Republicans argue that private and faith-based charitable endeavors are not just adequate, but more effective than government welfare and social support programs. For Republicans, government handouts only breed laziness and dependence on more government handouts.
In a time of high unemployment and plenty of civil unrest, it's time to come to terms with a few things, and then reframe the welfare debate.
For one, we must realize that, while many things are indeed better handled by the private sector, some kinds of things are better left to government. These things usually involve systematizing something required on a large and relatively uniform scale. Regulating industries like air travel and food, for example, is a job for which the profit incentive isn't always the most efficient way to ensure safety. Basic environmental protection (and resource protection), as well, is something private companies will fail to do on their own; government agencies are well positioned to provide guidelines and enforcement to make sure the water tables are toxin-free, or that a company can't dump biohazardous materials in a local park. Government need not control all aspects of these endeavors; but government is the best entity to take the lead on safety and regulation where other entities fail to do so, to the detriment of the people.
Providing systematic welfare is another one of these tasks that is best led by government. The profit motive offers no incentive to provide for those who, by the very definition of their neediness, have already fallen through the system. Further, a private-sector meritocracy is a fine way to distribute resources to a point; but it provides no guidelines whatsoever for tending to the baseline needs of individuals who, for lack of a better way of putting it, 'lose.' Because very few would be so callous as to suggest that the 'losers' in such a system should simply wither away and die--that they somehow deserve it because they are either incapable or lazy--it makes sense to develop systematic charitable solutions for the inevitable problem of caring for those who, whatever their circumstances, fail to care adequately for themselves. This is perhaps the sine qua non of a civilization, a civil society. And no matter how many church groups and individual philanthropists we have to help the needy and contribute charitable funds, such a slapdash method does not reach everyone. More importantly, however, it does not provide the framework for everyone who needs to be reached to gain clear-cut access to assistance. The Republican-hijacked virtue of self-reliance is jeopardized when needy individuals, without internet access or even a phone book (if they still make those), have no clear-cut, systematized, widely located place they can go for help in the first instance.
At the same time, we need to understand that government assistance doesn't have to come in the form of handouts. It's perfectly reasonable to require needy individuals who are physically and mentally capable to work for their welfare checks, for example, by cleaning up and organizing within their communities. And for those who are not capable--the mentally ill, the chronically sick, for example--how could anyone dare say that government provisions are only making these people 'lazy,' or that their dependence comes from government, rather than personal circumstances, disability, illness, etc.?
A crucial mark of a truly rich and advanced society is the ability to care for its poorest citizens. Anything less than that isn't civilization.
Sunday, July 31, 2011
On 'Storytelling Science': How To Communicate Science To The Lay Public
PMB has been fortunate to attend several talks in a series at Oxford University called 'Storytelling Science,' the premise of which is that scientists give 30-minute science talks, usually about an aspect of their own research, pitched to a general audience. After each talk, the audience is encouraged to ask questions from a variety of viewpoints (lay and specialist alike).
The series is great for a number of reasons, not least of which is that, even among those whose delivery or presentation is lacking, virtually every talk covers material that is absolutely fascinating. And, given the composition of the audience, the talks seem to be engaging for specialists who can relate to the material on a more sophisticated level, specialist academics in non-science fields, and generalists and nonacademics as well. Beyond the immediate benefits of the talks, communicating science concepts and research is important because...science concepts and research are often important (as is generating public interest in science, and encouraging youth to engage with science)! Further, when done well, the process of demystifying any kind of specialist knowledge for non-specialists is a rewarding exercise unto itself. For these reasons PMB is an avid supporter of the Storytelling Science series, and other endeavors like it.
Nonetheless, after hearing a number of 'generalist' science talks, PMB has a few observations, and some advice, for scientists engaged in public outreach projects like Storytelling Science.
Consider, firstly, the following phrases (as quoted), used recurrently by scientists at Storytelling Science talks, and beyond:
'Taking a skeptical view--because that's what scientists are, we're paid skeptics...'
'As a scientist I have an analytical mind.'
'And this part is quite technical--it's science, so it's quite difficult...'
'As a scientist it's often quite difficult to explain to people what I do.'
This is just a small sample of comments, the likes of which many of you will have heard before. Generally, comments of this nature presuppose several things about science and scientists, including: scientists, especially if not exclusively, are skeptical, analytical, work with more intrinsically difficult material than do non-scientists, and work with material that is inherently more difficult to explain than that of non-scientists.
There is considerable truth in all of these assumptions; however, all of these assumptions also contain a great deal of falsehood, and not without tinges of condescension and self-satisfied mystification. Put simply, when a scientist makes comments like those listed above, it's not without some genuine and wholly innocent sense that scientists are special people, and science is the work of the intellectual equivalent of the elect, work that is above and beyond non-scientists. Further complicating this innocent but not innocuous belief is the fact that it often takes the form of genuine self-deprecation. 'Nerd,' and, especially, 'geek,' are badges of honor, proudly and sometimes smugly worn, almost always self-assigned, yet passed off as soft commentary on one's allegedly antisocial preoccupation with the rigorously technical. A 'geek' interrupts your unscientific conversation with a snippet of specialized, peer-reviewed knowledge, then excuses himself as a geek, someone whose social awkwardness in the given moment is but a small price to pay for intellectual superiority (at least, that's the sine qua non of geekdom).
Sometimes, with respect to the complex economics of scientist identity, PMB protesteth too much, and he knows it. On one end of the scientist identity spectrum is certainly the simple sense that after amassing tons of detailed, technical knowledge, the task of teaching a non-specialist seems especially daunting. In the middle is the sense that without science there can be no skepticism, no questioning of the apparent and the given, and no use of 'scientific thinking' by non-scientists outside the realm of 'doing science.' On the extreme end of smugness is the belief that scientists are, as such, simply smarter than everyone else, and are, as such, humanity's last hope.
For the purposes of science communication, wherever one sits along this spectrum, the first step toward demystification of science research and concepts--the first step toward good communication to the lay audience--is a genuine belief that your audience is capable of understanding what it is you want to say. Thus, the most important thing for good science communication is the purging of one's mind of all of these myths about the supreme difficulty of science and the heightened understanding of scientists. PMB has seen excellent scientists do just this, opening up to a generalist audience aspects of fascinating scientific research that otherwise might not have seen the light of day (at least, beyond regular readers of specialist science journals).
The fact is, all specialist knowledge is difficult to explain to non-specialist audiences (the literary scholars reading this might well have experienced the difficulty of talking to a lay audience about a novel, even, and maybe even especially, if all members of the lay audience have read the novel). Just as well, of course, no one will be able to teach the sum of specialist understanding to a non-specialist in a 30-minute talk, whatever the subject matter. But the task of the specialist in non-specialist communication is not to understand such communication as 'dumbing down' and proceed from there, but to maintain in the talk a strong sense of the real complexity of the topic while framing that complexity in non-jargon or non-specialist terms--to communicate in clear and compelling metaphor, in other words. The best Storytelling Science talks use metaphor and analogy to bring the minute to the level of the general without losing too much complexity, just as the best literary generalist talks bring the esoteric and abstract to the level of the concrete without losing too much complexity. The material itself--be it a convoluted process at the intracellular level, or an explanation of why sometimes plagiarism isn't plagiarism--is not the thing. The thing is giving enough credit to your audience that you expect them to understand, and enough credit to yourself that you can understand your own research well enough on a conceptual level to teach it without falling back on the specialisms--the jargon, the notation, etc.--that make what you do sound to the layperson a lot harder and more intimidating than it really is. This is absolutely central to the art of storytelling, which, despite its childhood connotations, ain't as easy as it sounds.
Addendum: It's a shame that colleagues in literature departments haven't come up with a parallel lecture series called something like 'Storytelling Stories,' given the difficulty and awkwardness with which most of us seem to discuss our work with non-specialists.
Thursday, July 21, 2011
Why PMB Hates Corporations
PMB hates corporations. Usually when someone says 'I hate corporations,' the presumption is that such hatred is fueled or animated by a problem with red-in-tooth-and-claw capitalism, or a distaste for profiteering, or some general, amorphous leftyish hippie sentimentalism of the sort you'd encounter at music festivals. Down with the man, man.
It's not unreasonable to criticize corporations--especially the largest and most unfeeling of them--for putting profit ahead of people, or for behaving in some cases like reckless authoritarians, or for purchasing intimate access to governors and policymakers that the rest of us can't afford. But surely these are primarily the faults of large corporations like Wal-Mart and Google, not small businesses or medium-sized manufacturers who provide us with access to so many comforts and commodities.
But PMB doesn't just hate large corporations. PMB hates corporations in general. Because one of the greatest constants of corporations, large and small, domestic and international, for-profit and nonprofit virtually alike, is that they set the standards for workplace culture; and 'workplace culture' is really just a euphemism for 'controlling as much of your entire life as is humanly or legally possible.'
And this goes beyond the old 'we'll pay for your Blackberry!' (subtext: we expect you to check and respond to e-mail 24 hours/day); this includes inexcusably invasive policies even before they hire you. All of which is nothing, mind you, compared to the least-questioned and arguably most-oppressive facet of modern industrial society: the fact that working adults have to report to an office every day, remain there all day, and, regardless of productivity, rely on maybe two or three weeks in a given year during which we might be 'excused' by our in loco parentis employers to 'go on vacation' (the explicit purpose of which, mind you, is not so much to enjoy your life as it is to 'refresh' yourself for the work you have when you return).
The problem here isn't that societies require work and productivity to grow and provide for everyone; so the argument here is not necessarily that we should be able to work less. It's just that, when we become grownups, we should be able to work like grownups, on deadlines, but without being rounded up into a workplace whose 'culture' is primarily oppressive, stifling, Orwellian, insufferable, and, in many cases, completely unnecessary. With the technology that we have--cheap wireless networks, teleconferencing, the good ol' fashioned telephone, transportation like the subway, the bicycle, the bus, and the automobile, and the ever-important coffee shop--many of us don't need an office to report to in order to get our work done. And when we need to have meetings (as opposed to when a critical mass of people occupying an office, bored out of their minds, decides that it's time to have a meeting because there's nothing better to do), can't we initiate and coordinate them ourselves, between ourselves and our colleagues? Shouldn't we just abandon this whole notion of reporting to the office like a child reports to homeroom every morning on a school day?
This isn't exactly a novel idea--Jason Fried, of a web-based company called 37signals, gave a TED talk about it. Probably most people reading this have thought, many times over, that if only they didn't have their day at the office compartmentalized into 30-minute bits, with people asking for this and that simply because they're there and you're there and this is how the 'workplace culture' works, they could actually get some work done. Certainly when corporations advertise jobs with statements indicating preference for 'motivated' individuals, 'self-starters,' with 'the ability to work independently,' etc., they're not envisioning an employee who needs a boss sipping coffee in the next room (or cubicle; or open space) to the left occasionally checking in, micromanaging, or simply working independently a few feet away from you...for what?
Yes, some people will argue that 'if I didn't have an office with set work hours, I wouldn't have separation between my work life and my life life.' To this PMB would say that if you honestly think about the idea of compartmentalizing your 'work life' and 'life life,' you'd be depressed to find out that your 'work life' takes up so much of the sum total of your life that sectioning off your 'life life' is just kind of pathetic. A better approach, as far as PMB is concerned, would be to admit that spending much of our lives working can actually be very natural and very fulfilling, just not under the conditions that presently constitute 'working' in a corporate or corporate-influenced environment. Time after time after time we report higher levels of satisfaction from doing our work in a self-directed manner, and, accordingly, feeling some sense of ownership over it. Rather than letting an employer decide for you how to compartmentalize your life and on what terms and in what environment to complete your work, why not control it yourself? On the surface, it might be easier to accept the readymade boundaries handed you, just as drawing new boundaries between work and leisure might be harder to do at first when the artifice of the workplace is taken away; but certainly we want more agency over our lives and our schedules, not less.
It seems that what set out to be a screed against corporate influence over workplace culture has turned into a general condemnation of workplace culture itself. Though we shouldn't forget that corporate interests are indeed pushing for greater control over employees' lives, even before they become employees. Imagine, for a moment, living in a society in which your government told you where to be at what time every day; in which your government compiled files of your internet activity, text messages, and photos, and used them to evaluate your social worth; in which your government required you to piss in a cup every month to screen you for drug use; in which your government told you when you could and couldn't leave the country, go to the beach, or spend a few hours in the afternoon in the park with your children; in which your government controlled who you bought health insurance from, and which doctors you were allowed to see. It's probably not too difficult to imagine, actually, if you simply replace the word 'government' with 'employer.'
Sunday, July 3, 2011
When Did Conservatives Become So Thin-Skinned?
I used to have a lot of respect for American conservatism. There was a time, in my lifetime, when it seemed like the thing that most riled a conservative was victimhood. Conservatives couldn't stand the idea of someone taking a welfare check instead of getting a job, someone claiming exceptional treatment because of their race or ethnicity, or someone wanting clemency for crimes committed. You didn't have to agree with the conservative take on these issues--that welfare 'handouts' only breed idleness, racial or ethnic discrimination is a thing of the past, and justice should be swift and retributive--in order to respect the basic worldview that underlies these positions: we should all take personal responsibility for ourselves, our wellbeing, and our actions.
And before my lifetime, when notable conservatives like Phyllis Schlafly railed against the Equal Rights Amendment out of fear that such a law might preclude our ability to 'deny a homosexual the right to teach in the schools, or to adopt children,' you could kind of respect conservatives for standing tall on their worldviews, however repulsive they sometimes were. Schlafly fought against equal rights for women with a force and vitality that make Sarah Palin and Michele Bachmann look like PBS telethon hosts on a slow night. You get the sense that if Facebook were around in ol' Phyllis' heyday, her wall wouldn't exactly be filled with passive-aggressive and sometimes weepy e-scrawlings amounting to 'why is errybody always pickin on me (and Bristol).'
The fact is, at some point, conservatives started conceiving of themselves as that which they've always loathed: victims.
It was most notable for me as an undergraduate. At the university, the Caucasian majority, regardless of political orientation, was generally bright and open-minded, and, even if at times lacking in understanding of particular histories that might make certain kinds of jokes or comments inappropriate or even racist, was not a racist bunch. Our generation came of age in a political and legal environment that had long since granted equal rights and equal treatment under law to all people (except homosexuals), regardless of race or creed. So, I gather, it was often difficult for bunches of smart, mostly well-meaning white kids who harbored no conscious prejudice to square the fact that in the university environment, there were special offices, clubs, resources, scholarships, etc. for seemingly everyone but them. Such were the preconditions for the conservative fight against multiculturalism: white conservatives felt simultaneously neglected and deracinated, and minority conservatives were rightfully sick and tired of being placed in identity boxes, or looked at askance, as though they only managed success in the admissions tournament because they got some kind of preferential treatment or racial boost as an historical corrective. Both groups of conservatives had reasonable claims against the multiculturalist agenda: no one likes to be blanco any more than s/he likes to be reduced to any color at all. So a generation of young conservatives, mostly white, mostly of educated and privileged classes, began thinking of themselves as victims.
On top of this notion of 'white victimhood' that arose in large part, as I see it, as a reaction to the rise of multiculturalism, conservatives came to understand, with some validity, that, at least in the academy, in Hollywood, and in the establishment media, their politics were also out of fashion (owning Wall Street, the Chamber of Commerce, and most of the Supreme Court apparently wasn't enough). Left-leaning professors challenged the assumed supremacy of neocapitalism, the idea of American exceptionalism, the myth of the 'welfare queen,' and the profit motive in healthcare. Movies and sitcoms mocked Reaganites as stodgy, dorky, and sexually inept. And the major media outlets--network news and big urban papers like the New York Times--were crawling with skinny-tied sophisticates who thought and ridiculed liberally. So conservatives took a page out of the liberals' playbook, further playing up their marginalization, their victimhood.
Today, conservatives are perhaps our whiniest victims. Over at Brainstorm, Naomi Schaefer Riley succumbs to verbal hyperventilation over the fact that the liberal Rebecca Mead--for Riley, representing 'the attitude of the establishment' [!!!!!!!!]--had the nerve to write snidely about Wal-Mart heiress Alice Walton for buying up a bunch of American art and displaying it in Arkansas. Riley's problem boils down to the fact that the patriotic Alice Walton, who has 'never...considered collecting anything but American art,' might vaguely stand in for conservatism, and the 'establishment' liberal Mead (never mind Riley's Harvard education) has taken a mild shot at Walton and the company she represents. So what? Aside: between Rebecca Mead--whom you've probably never heard of--and Wal-Mart, who represents the establishment, again?
Meanwhile, congressional conservatives are fighting to preserve a series of corporate tax loopholes, among them one that grants tax breaks for the corporate use of private jets. The apparent victimization of those who occupy the world of corporate private jetting has thus interrupted discussions about how the country might pull itself out from under its crushing debt.
And let's not forget about new author of Not Afraid of Life, Bristol Palin, who fell victim recently to a Bill Maher joke about her prudish explanation of how she 'accidentally' got black-out drunk on wine coolers (which she didn't know contained alcohol) and conceived a child with her boyfriend. Bristol, who is evidently mature enough to have a sexual relationship and a child and a memoir, is so much the victim that Fox News brought on a psychiatrist to remote-analyze Maher for mental illness.
Arguably, today's conservative worldview necessarily comes with an orientation toward victimhood, or the feeling of being constantly embattled. The Phyllis Schlaflys and Richard Nixons--the types that weren't too prudish to grab you by the balls and squeeze if it meant winning the issue--have been replaced by a bunch of big softies. It's as if they're compensating for something when they take photo ops with large rifles. It's as if the world is continually moving on, and the progress has got them down.
Saturday, July 2, 2011
Maid in Manhattan: How PMB Would Turn the Tables on DSK
By now we all know about former IMF head and leading French politician Dominique Strauss-Kahn's alleged sexual assault of a New York hotel maid. Predictably, DSK's defense team has begun to dig up information about the maid's past that is designed to call her credibility (and, transitively, the credibility of her accusation) into question. In this disheartening article in the NYTimes, which details the apparent 'collapse' of the prosecution's case against DSK, the maid and alleged victim of sexual assault is accused of having a drug-dealer boyfriend, being connected with charges of money laundering and drug dealing, and discussing with a criminal the potential remunerative benefits of pursuing a case against DSK the day after the incident between DSK and the maid took place (that a sexual encounter took place between the two is not in dispute; the nature of that encounter is). Among other things, these bits of information would, indeed, seem to endanger the maid's case against her alleged assailant, striking at the heart of her credibility.
Well, actually, not really.
The fact is, this case is about more than the conventional problem of he said/she said in cases of rape and sexual assault. This case is a prime example of one of the most prevalent and under-acknowledged injustices in the US and similarly developed countries: the widespread exploitation, sexual and otherwise, of female asylum seekers, illegal or conditional immigrants, etc. And the under-acknowledgement of this problem is so systematically ingrained that what would be its starkest manifestation in the internationally prominent DSK case--the fact that the accuser is a documented Guinean asylum-seeker in the US--has been used by DSK's defense team, and not the prosecution, to damage the maid's credibility. The NYTimes' coverage of this aspect of the story plays right into the hands of DSK's defense, treating information about the maid's asylum application as a blow to her credibility (in light of inconsistencies between the text of her application and subsequent comments she made to police after the DSK affair about her asylum bid). Instead, both the prosecution and the media need to acknowledge the very real circumstances of disenfranchisement faced by female asylum seekers and trafficked and illegal immigrants.
We know that DSK's accuser may have lied about some of the details of her past and her asylum bid to authorities, may affiliate with a drug dealer, and may have actually tried to benefit financially from her accusation against DSK.
We also know, however, that many women seeking asylum, a better life, etc. in countries like the US come from places where police and other authorities are corrupt and untrustworthy. Many women facing harsh circumstances in their former countries, many of them places where women are systematically raped, tortured, sold, enslaved, disenfranchised, etc., will lie and withhold information in order to avoid being deported back to harsh circumstances.
Likewise, many such women have dark pasts and drug-dealer boyfriends, and will have latched onto powerful and exploitative figures thinking it a means to get out or get away (as many places in the world are largely controlled by people like drug dealers). Many will have tolerated and endured sexual abuse from the men who traffic them, bring them to the US, keep them, look after them, etc. Many will have been willing to go through hell for the prospect of a new life.
And many women who have been systematically disenfranchised, both home and abroad in the US--women who have had to live and survive by their wiles and sometimes by their bodies--will certainly turn to opportunism when they are exploited by rich and powerful men. Would you blame them?
The point here is that, even if much or all of this does not apply directly to DSK's maid (though, from what we do know about her, an asylum seeker, she fits the profile pretty damn well), the kind of behavior that journalists and defense attorneys are calling damaging to the maid's credibility is actually very common and very justifiable behavior for women in like circumstances of disenfranchisement, or existence outside the protection of the law. When such women have their circumstances of poverty and desperation leveraged against them for sex by exploitative men, the lying, equivocation, unsavory liaisons, and even opportunism are all not just predictable, but justifiable pasts and modes of behavior. Rather than treating this reality like a character assessment in a vacuum, we need to consider it within the broader context of the systematic and widespread exploitation and sexual abuse of women in harsh circumstances by predatory men who know that their victims would rather endure abuse in the US than seek legal recourse, risking deportation, imprisonment, further threats and abuse, etc.
And this is something the 'near-collapsing' prosecution needs to take into serious account.
Monday, May 30, 2011
'Your an Idiot': Why Literacy Standards Need To Change
After receiving the link from several friends, PMB has had the pleasure of viewing this, a depressing but hilarious blog that archives Facebook responses to articles from The Onion from people who are apparently taking the Onion articles as real news. The most prevalent example is a series of Facebook posts calling for Americans to repent, lamenting the fake Onion headline 'Planned Parenthood Opens $8 Billion Abortionplex' (one of these includes a further lamentation over the comic material in the Onion article: 'They will give pedicures to the moms after their abortion! Federally funded and all! So sad!!!!').
Most of the people PMB comes across--people who are generally very educated, well informed, and operate with high levels of literacy--react the same way to the (it must be said again) absolutely hilarious inability of the Facebook posters to comprehend that The Onion is a comic publication for entertainment purposes, and its 'news' is not real news, but satirical fiction about news. We tend to laugh at first, and then, after a little reflection, deem this misapprehension a very sad phenomenon. We think it's generally sad that there are 'stupid people' out there; that, in the abstract sense, 'people are stupid.' We might blame failed education systems, or the puerility of popular culture, or socioeconomic disparity, or, crudely, some 'inherent' inequality of aptitude among humans. In any case, there's something depressing about the fact that these people didn't get the joke.
What not getting the joke boils down to in this case, however, is not necessarily individual aptitude, pop culture, or lack of educational opportunity, but rather a misplaced educational focus, along with a horrifically dated understanding of what literacy really means in the 21st-century, industrialized world.
It's not that we're not aware of multiple layers and types of literacy. That someone could read an Onion article as a serious news piece shows potential deficits in several of these layers and types. Assuming a baseline ability to read--to understand an alphabet, to have a sizeable vocabulary, and to make reasonably accurate meaning of the symbols on the page--it would also take some degree of cultural literacy to properly comprehend the Onion article: it would certainly help to know what The Onion is, what kind of articles it produces, what kind of audience it writes for, etc. As a subset of this kind of cultural literacy, it would certainly aid comprehension to have some political literacy--to know that, for example, Planned Parenthood has recently been under attack by conservative politicians. This would give helpful context to what might appear to be exaggerations of actual claims made recently by politicians in the actual news. And to gain some sense of what constitutes exaggeration, or hyperbole, or metaphor, or satirical tone, a certain level of (for lack of a better term) 'literary' literacy is necessary for good comprehension.
One would think that such an awkward phrase as 'literary literacy' is silly and redundant, but, in fact, attention to literary devices and effects--subtle ways of producing meaning--has lately been minimized in favor of information design and information management skills that align more ostensibly with the needs and features of the so-called Information Age. Here again is a prime site for the divergence of information from meaning, the consequences of which produce the kinds of hilarious but also sad misapprehensions featured on the 'Literally Unbelievable' blog.
The point here is that, after the vast, vast majority of people in the developed and industrialized world gained literacy in its most basic, narrow, and traditional sense--the baseline-functional ability to read and write--we've begun to take literacy for granted. We've lost track of what a meaningful, up-to-date definition of literacy would be for our present situation. And we've failed one another in so doing. PMB (and the majority of his readers) loves to have a good laugh, from a position of extreme educational privilege, at these 'idiots' who really think that Kansas is building a multi-billion-dollar abortion megaplex where women can get pedicures after their abortions, but we should consider that these kinds of misreadings are far more widespread, and occur in more serious contexts (Obama is a secret Muslim who faked his US birth certificate; the Qur'an says women should be covered from head to toe) than we'd like to admit.
We love to obsess over technological advancement and pretend like our societies, filled with people who can read and write, have evolved beyond the need for serious consideration of literacy, rhetoric, etc. We like to claim that such advancements have democratized information and education. But we'll struggle in myriad ways until we update our understanding of what it means to be literate, and shape our curricula accordingly. Because, strange as it may sound, true literacy today remains a relatively elite prerogative.
Saturday, May 28, 2011
An Aimless Rant
Hey, cyclist who thinks that, just because she's not operating a car, things like traffic lights do not apply to her: shouldn't it have occurred to you that the nature of the very means of transportation that apparently justifies your exceptionality is also what makes it very probable that, should you encounter one of those larger, heavier metal objects in the wrong sort of way as a result of your inability to follow traffic laws, it's you who will end up dead?
When you use a shared or public toilet, do you enjoy forcing your hand up into the toilet paper dispenser to fish out the end of the roll because some asshole who was in there before you carelessly tugged the paper from the bottom, causing it to break off from up inside the dispenser? Assuming the answer is no, why do you leave it that way for other people?
Do you think you're adept at walking and texting at the same time? Because you're not.
If you're walking along the sidewalk (pavement) abreast with two or more people, and I'm walking toward you, it's incumbent upon one or more of you to move out of my way by tucking behind one another in single file. Why? Because we both have the same right to the shared space, whereas you and your friends have no right to monopolize it by walking side-by-side. I understand that sometimes, unthinkingly, you expect me to step out into the road to get hit by a cyclist so you can all pass by. And that's why, when I didn't move, you ran into my shoulder, winced, and thought I was the one being an asshole.
If you don't satisfy any of the following conditions, you have no business wearing a New York Yankees hat: 1) You live or lived in the Bronx. 2) Your friend, partner, spouse, or family member plays or played for the Yankees. 3) You play or played for the Yankees.
Have you noticed how academics, you know, sort of, come up with our own sort of verbal fillers to replace 'like' and 'um,' which we roundly despise in the speech of others?
I don't like cake. For the love of god, stop offering me cake.
Double-wide baby strollers should be illegal. People who carelessly thrust their single-infant-occupied strollers out in front of you as a way of forcing themselves through a crowd are obnoxious enough; the double-stroller pushers should be shamed off the sidewalk (pavement) and relegated to the cycle lane until we can pass the due legislation: an international ban.
If you accuse me of spending my time 'figuring out what the author means,' I write you off as an idiot then and there.
If you represent a corporation or a society of people interested in joining or representing corporations, don't ask me for favors. Ever. Once I asked my boss if I could, like, have something for nothing, and she said that's not how corporations work. Well, that's not how I work either.
Too many people go around feeling smug about all they do to save and improve lives. Too few of them have ever stopped to think about what makes lives worth saving and improving.
Thursday, May 26, 2011
Tenure-Track/Partner-Track: What Law Firms Can Learn From Academics' Mistakes
There has been much talk lately of what is euphemistically called a 'restructuring' of law-firm labor: the growth of full-time lawyer positions that are off the 'partner track,' or for which there is no expectation that one be considered for partnership at the firm. As a recent New York Times article explains, non-partner-track lawyers are paid significantly less (around $60,000 a year) than their partner-track colleagues, largely for the same amount and type of work.
Those who favor this two-track system argue that the non-partner track allows more flexibility for those who aren't sure they want to dedicate eight years of their lives to the long and arduous slog toward partnership at a big law firm.
What proponents of the two-track system are missing, however, is that, beyond the lower salary, the podunk work locations, and the contingency of the labor (a non-partner-track attorney will certainly have less job security and would be easier to replace on shorter notice), such a system will create an especially nasty effect: non-partner-track lawyers will become a kind of legal underclass, looked down upon (and sometimes pitied) by their partner-track colleagues, abused by those in positions of managerial and budget-setting power, and, ultimately, multiplied to create a growing army of lower-paid lawyers with fewer benefits who will become the standard of the profession rather than the exception. Why does PMB think this?
Just look at academia.
Since managerial and administrative types in higher education figured out that it's cheaper to take advantage of the market glut of competent people vying to work in academia--and the concomitant 'price' inelasticity of demand for academic work--by hiring adjunct, non-tenure-track faculty to carry the burden of university teaching instead of tenure-track faculty (who, unlike adjuncts, actually get health and retirement benefits), the number of adjunct professors has risen dramatically while the number of tenure-track faculty has declined just as dramatically.
The effects of this shift in academic labor are manifold. For one, adjuncts are often taken advantage of, and treated like an underclass by tenured professors. Two, their abundance relative to tenured professors, combined with their lack of job security and high degree of expendability, means that large numbers of faculty in a given department can disappear at any given time (not so great for students, or for the continuity needed to build a better department). Three, the dignity of the profession, and the quality of higher education as a whole, have taken a serious hit. The holy grail of tenure is virtually the last aspect of the profession that attracts the brightest and best to choose to make less money as educators and pass up more lucrative jobs in industry; and that last incentive is rapidly eroding.
What would it mean for our economy, our political discourse, our ability to compete with other countries, and to preserve our knowledge and traditions and ways of life, if, as with primary and secondary school teaching (at which we are, on a global scale, pretty awful), the brightest and best were scared off to do something else?
The answer to this question is in many ways tied to the legal profession. It, too, is one of our most important institutions. And we can't afford to gut it from the inside out the way we've already begun to gut academia. Lawyers need to be incentivized and rewarded properly for their high-skilled and (cynics, turn away!) crucial labor. Building a poorly compensated underclass of lawyers to replace what has for so long been one of our most dignified and desirable professions won't just harm lawyers--it will have negative consequences for broader society.
Astute consumers of what is loosely and perilously called 'culture,' and what is tellingly called 'history,' will recognize that when budgets are tight and economic times are tough, the short-sighted instincts of managerial types point to budgetary slicing and dicing and cost-cutting all over the place. Some things, however, need to be preserved through the tough times. Just because savvy law firms have figured out how to keep up their profits by 'restructuring' their labor doesn't mean this is good for the profession in the long term. Lawyers, take it from an academic: fight for what you deserve before it's too late.
Wednesday, March 30, 2011
Cronon Affair Not About Academic Freedom, But Political Persecution
By now you might have heard about the latest national media mini-sensation, the sticky situation over University of Wisconsin-Madison history professor William Cronon's work e-mail.
In case you need a synopsis, the Wisconsin Republican Party has filed a standard Freedom of Information Act request to obtain e-mails that include any of a set of keywords from Professor Cronon's university e-mail account. They did this after Cronon started blogging and writing op-ed pieces critical of Gov. Scott Walker and Wisconsin Republicans. They did this because they want to know if Cronon, as a sort-of-public employee in his capacity as a professor at a public university that receives 20 percent of its total funding from public sources, might have been in violation of the University IT policy on "commercial, political, and non-university activities," which is as follows:
Persons may not use University IT resources to sell or solicit sales for any goods, services or contributions unless such use conforms to UW-Madison rules and regulations governing the use of University resources. University employees may not use these resources to support the nomination of any person for political office or to influence a vote in any election or referendum. No one may use University IT resources to represent the interests of any non-University group or organization unless authorized by an appropriate University department.
Cronon's supporters claim the FOIA request amounts to a witch-hunt and potentially a violation of academic freedom, while other pundits, mostly conservative, argue correctly that the FOIA request is legally sound.
The FOIA request is legally sound, and academic freedom is a pretty sketchy defense against such a request. For the record, academic freedom would be a relevant defense should the fruits of the FOIA request place Cronon's job in jeopardy on account of his political views. But we're not even there yet, and I doubt we'll get there either.
More importantly, however, this whole affair is not so much about academic freedom or the legality of the FOIA request, but rather the purpose and implications of this request. The government of the state of Wisconsin, which is more or less overrun at this point by the Republican party, has some recourse to oversight in matters of publicly funded institutions; however, it's abundantly clear in this case that Professor Cronon is only being targeted because he has expressed in public, as is his right, some political views that the Republican investigators don't like.
Though they have legal recourse, technically, to request Cronon's e-mails, the Republican investigators in this case aren't actually worried about the possibility that Cronon is somehow operating a vast and secret and well-funded political machine out of his University e-mail account, or that he's in any way abusing his position as a history professor at a public university (civic engagement with contemporary political issues is explicitly part of the job description of a university professor of history). Rather, Republicans are going after Cronon because his personal political views are different from theirs. The Republican investigation of Cronon is an attempt to penalize someone for espousing an opposing ideology, and, accordingly, to make political hay about the fact that Professor Cronon is all at once an academic, a public employee, and a liberal--all things that Wisconsin Republicans can't stand.
What we should take away from this episode is not that there's some erosion of academic freedom going on, or that Cronon is a "tenured radical" for having political views that Republicans wouldn't support, but that American politics has undergone a considerable shift in the last several years. Though we remain a two-party country, our opposing parties are no longer composed of what could be called "liberals" and "conservatives," left and right.
What we have now is pluralists and singularists.
Some Americans, pluralists, believe that freedom includes the permission of dissent and the toleration of ideological diversity. For the pluralists, it's not OK to go after someone for espousing views with which you might disagree. For the pluralists, the very core of American history and the American experience is polyvocality, grown out of this mishmash of people from all ends of the world. Pluralists are more likely to find commonalities in what America is than instances of what America is not.
Some Americans, singularists, believe that freedom is the privilege only of a specific group of people--Americans--who think a certain way about what America is and what specifically constitutes American values. Singularists are willing to use financial, legal, and military means to enforce their ideological position on what's best for America and Americans. Singularists are more likely to understand diversity and dissent as instances of anti-Americanism and antagonism than as part and parcel of the American experience.
The Republican attack on Cronon is a singularist attack, waged by people who are, at the very least, uncomfortable with the idea of a professor at a state university publicly owning a political position that is critical of their party in general, and its state leader specifically.
Monday, March 28, 2011
In Defense of Play: A Person's Manifesto
Bears know more about people than people know about themselves, because bears are quite happy to be bears, while people struggle endlessly to dehumanize themselves. While people indulge this peculiar blend of human insecurity and human arrogance, bears observe with the placid bewilderment of creatures that still understand play.
Consider Richard Dawkins.
Richard Dawkins is a leader among a vast and variegated group of people who generally believe that anyone who believes in a god or practices a religion accordingly is an idiot. The basis for Dawkins' belief is science. For Dawkins and people of a similar persuasion, any human behavior that is not driven by scientific knowledge is irrational and may lead to idiocy. What frustrates, enervates, motivates, and ultimately compensates the likes of Richard Dawkins is the tendency of humans to behave in certain ways that do not comport with scientific knowledge.
Were the Dawkinses suddenly and ironically imbued with godly powers, they would undoubtedly order the universe precisely as it is, changing only humans. Instead of making humans human, the Dawkinses would make humans into scientific beings who apprehend with perfect accuracy and adroitness the empirical truth of the world. These Dawkinsian humans would know everything knowable, and lack any desire to know anything more; indeed, the concept of the unknowable would be entirely foreign to these humans, a non-concept. A rigid scientific curiosity for the unknown points toward its own obsolescence, which culminates in Dawkinsian humans. These humans would not have an imagination, for they would have no need for one. They could stand on the shoreline and look out into the sea, and what would they see? A taxonomic cornucopia spread out over a visual field of 2.9 miles, or 2.52 nautical miles (depending on the height of the person and the clarity of the sky in the given moment). Dawkinsian humans would not practice religion or believe in gods, as they know all that is knowable. They would never fight or disagree over concepts, as all empirical truths would be evident to all Dawkinsian humans, and no concepts that are not empirical truths would exist. That being the case, there would be no intellectual or ideological diversity among them, which means there would be no ideological wars between them. Instead, their wars would be fought over things that contemporary humans find deeply immoral and disturbing: observations of phenotypical difference, racial difference, and disparities in physical strength or natural fitness. Indeed, all conflicts among Dawkinsian humans would be the result of, as contemporary humans would put it, racists and bigots. Without the ability to espouse differences in what Martin Luther King, Jr. would call "the content of one's character," Dawkinsian humans, red in tooth and claw as all humans, nay all creatures, are, would fight, oppress, and enslave those who were, in and of themselves, through and through, empirically different looking (as no character-content differences would exist). Dawkinsian humans would be ruthlessly hierarchical, for empirical differentiation necessitates hierarchies (we may not know whether Joe Montana was a better quarterback than Dan Marino, but, given a common set of metrics across the board, we know with certainty that David Lekuta Rudisha, the new 800m world record holder, is a faster 800m runner than former world record holder Wilson Kipketer, and is thus higher on the records list).
You may find these assumptions and extrapolations about Dawkinsian humans unlikely or unsubstantiated, in large part because, as a non-Dawkinsian human, your powers of empirical knowing are quite limited. In fact, before scientists began to pretend that the word "empirical" means "evidence-based" and not "based on human sensory perception"--that is, before contemporary humans brought about this clever shift in the etymology of the word "empirical"--human sensory perception was perceived as enough to produce reliable evidence. Now, however, the Dawkinses scorn and ridicule flawed human perception. This is why "empirical" must now mean "evidence-based" instead of "based on human sensory perception": because the transhumanist Dawkinses must elide any traces of human frailty and subjectivity that must necessarily (but unspeakably) be involved in the processes of rendering scientific evidence. In other words, the problem of humans being such unscientific beings--which, for the Dawkinses, produces so many of our disgustingly human problems--is why we need to evolve into as close approximations of Dawkinsian humans as we can. For the Dawkinses, human subjectivity is a stain best rubbed out by striving for scientific objectivity.
This is in large part what is meant by "scientific progress." More practically, "scientific progress" means the patronage of society by scientists, who scoff at any judgment that is not "empirically" derived. Our scientific patrons provide (or consume mounds of resources trying to provide) contemporary humans with various comforts and amenities, from the life-ameliorating (nicer televisions, longer-lasting batteries, etc.) to the life-changing (semiconductors, electronic networks, etc.) to the life-saving (biomedical technologies, vaccines, etc.). These amenities are crucial to "scientific progress," because while scientists are busy providing us with nice things, many are also busy theorizing the complete suffusion of all human qualities and variabilities with scientific knowledge. To put it economically: have this mechanical heart, so that you may live to see the day when we make a computer that writes better than Nabokov.
In case it hasn't become clear by now, there are two ends to scientific progress. One is the forfeiture of human intellectual diversity, creativity, and play; the other is the embarrassing realization that despite all that humans have tried to do to deny, transcend, and forfeit their humanity, it was all a big ruse, a god delusion.
To say nothing of their ethical implications, both of these outcomes sound pretty fucking boring.
By this point in the manifesto the Dawkinses are feeling attacked. The scientists who wouldn't align themselves with the Dawkinses are feeling ill-used and victimized. The armies who daily make disparaging remarks about the arts and humanities from their own positions of societal and academic privilege are incensed about the possibility that their evidence may be of an insufficient standard to convince not gods, not religious nuts, not politicians, but mainstream humans that we should all lie down for this iteration of progress. You who have become oppressors of humanity (and the humanities), who have conditioned yourselves to receive all criticisms of your scientific telos as idiocy, ignorance, anti-scientific ideology, or even an attempted resuscitation of the days when the arts, religion, and philosophy unjustly presided over the kingdom of knowledge, are modern clergy. You boffins, it's no longer you who are marginalized. The victim is yours, and the victim is play.
Claim your bits and pieces of this manifesto as you inevitably will, human as you are. But here PMB affirmeth nothing but play, that delight in endless variability and purposelessness which is the hallmark of all creatures great and small.
Consider Richard Dawkins.
Richard Dawkins is a leader among a vast and variegated group of people who generally believe that anyone who believes in a god or practices a religion accordingly is an idiot. The basis for Dawkins' belief is science. For Dawkins and people of a similar persuasion, any human behavior that is not driven by scientific knowledge is irrational and may lead to idiocy. What frustrates, enervates, motivates, and ultimately compensates the likes of Richard Dawkins is the tendency of humans to behave in certain ways that do not comport with scientific knowledge.
Were the Dawkinses suddenly and ironically imbued with godly powers, they would undoubtedly order the universe precisely as it is, changing only humans. Instead of making humans human, the Dawkinses would make humans into scientific beings who apprehend with perfect accuracy and adroitness the empirical truth of the world. These Dawkinsian humans would know everything knowable, and lack any desire to know anything more; indeed, the concept of the unknowable would be entirely foreign to these humans, a non-concept. A rigid scientific curiosity for the unknown points toward its own obsolescence, which culminates in Dawkinsian humans. These humans would not have an imagination, for they would have no need for one. They could stand on the shoreline and look out into the sea, and what would they see: a taxonomic cornucopia spread out over a visual field of 2.9 miles or 2.52 nautical miles (depending on the height of the person and the clarity of the sky in the given moment). Dawkinsian humans would not practice religion or believe in gods, as they know all that is knowable. They would never fight or disagree over concepts, as all empirical truths would be evident to all Dawkinsian humans, and no concepts that are not empirical truths would exist. That being the case, there would be no intellectual or ideological diversity among them, which means there would be no ideological wars between them. Instead, their wars would be fought over things that contemporary humans find deeply immoral and disturbing: observations of phenotypical difference, racial difference, and disparities in physical strength or natural fitness. Indeed, all conflicts among Dawkinsian humans would be the result of, as contemporary humans would put it, racists and bigots. Without the ability to espouse differences in what Martin Luther King, Jr. 
would call "the content of one's character," Dawkinsian humans, red in tooth and claw as all humans, nay all creatures are, would fight, oppress, and enslave those who were, in and of themselves, through and through, empirically different looking (as no character content differences would exist). Dawkinsian humans would be ruthlessly hierarchical, for empirical differentiation necessitates hierarchies (we may not know whether Joe Montana was a better quarterback than Dan Marino, but, given a common set of metrics across-the-board, we know with certainty that David Lekuta Rudisha, the new 800m world record holder, is a faster 800m runner than former world record holder Wilson Kipketer, and is thus higher on the records list).
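The horizon figures in the passage above are consistent with the standard line-of-sight geometry for a spherical Earth. As a quick sketch (the 5.6 ft eye height is an assumed value, chosen because it reproduces the 2.9-mile figure; the function name is illustrative, not from any source):

```python
import math

def horizon_distance_miles(eye_height_ft, earth_radius_mi=3959.0):
    """Geometric distance to the horizon in statute miles,
    ignoring atmospheric refraction."""
    h = eye_height_ft / 5280.0  # convert feet to miles
    return math.sqrt(2 * earth_radius_mi * h + h ** 2)

d = horizon_distance_miles(5.6)   # eye level of a roughly six-foot person
print(round(d, 2))                # → 2.9 statute miles
print(round(d * 0.869, 2))        # → 2.52 nautical miles (1 mi ≈ 0.869 nmi)
```

The distance scales with the square root of eye height, which is why the passage hedges the figure on "the height of the person."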
You may find these assumptions and extrapolations about Dawkinsian humans unlikely or unsubstantiated, in large part because, as a non-Dawkinsian human, your powers of empirical knowing are quite limited. In fact, before scientists began to pretend that the word "empirical" means "evidence-based" and not "based on human sensory perception"--that is, before contemporary humans brought about this clever shift in the meaning of the word "empirical"--human sensory perception was regarded as sufficient to produce reliable evidence. Now, however, the Dawkinses scorn and ridicule flawed human perception. This is why "empirical" must now mean "evidence-based" instead of "based on human sensory perception": because the transhumanist Dawkinses must elide any traces of human frailty and subjectivity that must necessarily (but unspeakably) be involved in the processes of rendering scientific evidence. In other words, the problem of humans being such unscientific beings--which, for the Dawkinses, produces so many of our disgustingly human problems--is why we need to evolve into as close approximations of Dawkinsian humans as we can. For the Dawkinses, human subjectivity is a stain best rubbed out by striving for scientific objectivity.
This is in large part what is meant by "scientific progress." More practically, "scientific progress" means the patronage of society by scientists, who scoff at any judgment that is not "empirically" derived. Our scientific patrons provide (or consume mounds of resources trying to provide) contemporary humans with various comforts and amenities, from the life-ameliorating (nicer televisions, longer-lasting batteries, etc.) to the life-changing (semiconductors, electronic networks, etc.) to the life-saving (biomedical technologies, vaccines, etc.). These amenities are crucial to "scientific progress," because while scientists are busy providing us with nice things, many are also busy theorizing the complete suffusion of all human qualities and variabilities with scientific knowledge. To put it economically: have this mechanical heart, so that you may live to see the day when we make a computer that writes better than Nabokov.
In case it hasn't become clear by now, there are two ends to scientific progress. One is the forfeiture of human intellectual diversity, creativity, and play; the other is the embarrassing realization that despite all that humans have tried to do to deny, transcend, and forfeit their humanity, it was all a big ruse, a god delusion.
To say nothing of their ethical implications, both of these outcomes sound pretty fucking boring.
By this point in the manifesto the Dawkinses are feeling attacked. The scientists who wouldn't align themselves with the Dawkinses are feeling ill-used and victimized. The armies who daily make disparaging remarks about the arts and humanities from their own positions of societal and academic privilege are incensed about the possibility that their evidence may be of an insufficient standard to convince not gods, not religious nuts, not politicians, but mainstream humans that we should all lie down for this iteration of progress. You who have become oppressors of humanity (and the humanities), who have conditioned yourselves to receive all criticisms of your scientific telos as idiocy, ignorance, anti-scientific ideology, or even an attempted resuscitation of the days when the arts, religion, and philosophy unjustly presided over the kingdom of knowledge, are modern clergy. You boffins, it's no longer you who are marginalized. The victory is yours, and the victim is play.
Claim your bits and pieces of this manifesto as you inevitably will, human as you are. But here PMB affirmeth nothing but play, that delight in endless variability and purposelessness which is the hallmark of all creatures great and small.
Sunday, March 27, 2011
Fixing Student Ghettos
A good bit of news lately has touched on the increasingly visible issue of the student ghetto, like this piece on the ghettos in Albany, NY. What is a student ghetto?
It's more or less exactly what it sounds like. In college towns, where students constitute a large enough population relative to the total population, civilian residents generally prefer not to live next to or immediately around students. The reason for this is simple and understandable: student lifestyles typically mean late nights, loud parties, raucous behavior, loud music, irregular waking hours, and other fun things that most non-students would rather not subject themselves or their children to indirectly by living nearby. Once a "student" section of town emerges, other factors contribute toward the deterioration of student neighborhoods and the properties students occupy.
For one, students typically require cheap housing, which gives landlords the opportunity to treat student housing in the same hands-off manner as they might an urban slum. Two, student partying, excessive drinking, and general immaturity contribute to property damage, littering, and abundant housing code violations in student housing sections, making student housing even more slum-like, and the landlords even less likely to enforce code or repair damage year after year, as new students come in and wreck things anew. Three, the partying and code violations attract police officers, many of whom will have come from an entirely different socioeconomic background than the privileged college students--or at least as much is often presumed by both police and students alike. The combination of all of these things creates, quite literally, a ghetto in the student housing section: low property value, unregulated and neglected destruction of the neighborhood, bad behavior, and an antagonistic relationship between law enforcement and the residents of the student ghetto.
PMB lived in a student ghetto for the better part of five years, which was long enough to witness a shocking deterioration of an otherwise fine part of town at the hands of a predominantly affluent and intelligent student population. Recalling his experiences in the student ghetto, PMB offers the following suggestions for improving student ghettos, restoring college neighborhoods, and healing town-gown relations:
1) A friendlier approach to policing. Frustrated with rampant drug-dealing on every corner of his district, a station commander in the HBO series The Wire decides to round up all the corner boys and junkies and transport them to one of three sections of town where police will monitor violence, but otherwise turn a blind eye to the drug trade. At first these ghetto zones look disastrous, until community volunteers and public safety workers intervene to pass out clean needles and contraception, and police and volunteers help children organize basketball and boxing groups. Police begin to interact with drug dealers and addicts like human beings, rather than simply going around looking to knock heads.
Because the student ghetto is by definition a self-contained area of student housing, and because students will drink and party regardless of what police do to try and stop them, police would do better to take a friendlier and more realistic approach to policing the student ghetto. Running into and searching every house with a party, often without a complaint call and sometimes in clear violation of constitutional rights, and bashing the heads of drunk and belligerent students have proven widely ineffective. Instead, were police officers to patrol student areas with an eye out for violence, destruction, or brazen violations (public urination, etc.), rather than storming houses, students would likely become more cooperative, and learn to respect their boundaries.
2) Crack down on absentee landlords. When responsible students can't get their landlords to fix basic problems with the house, like a faulty heater or a leaky faucet, students rapidly develop the attitude that because the landlord doesn't care about them or the property, they ought not to bother with keeping it clean and keeping damage to a minimum. The result is usually an accumulation of little damages and a few large ones over the years, all of which go unfixed by the landlord, who lives on the other side of the country and only shows up once a year to collect rent money from the local agency he employs to "look after" his properties in the area. If landlords will live remotely, as they have every right to do, students need better resources and support from both the municipalities and the universities to crack down on landlord violations and slumlord practices. It's a lot easier to trash a property that's long since been in the process of being trashed, and about which nobody seems to care.
3) Affordable university housing. One answer universities have provided to deal with student ghettos is building more university owned and regulated housing for students. The problem: this kind of housing is typically of the revenue-generating mold, meaning it's expensive and aimed at the richest students. Universities then justify the high cost of such accommodations by packing them with amenities like gyms, commercial food courts, expensive cable television plans, and redundant computer labs. Low-income students who might otherwise live in university accommodation end up saving hundreds of dollars per month in rent by living in the student ghetto. The price disparity between university housing and private housing in the ghetto diffuses any sense of competition for the business of large groups of students, which means that university housing can continue to be overpriced (taking money from rich students who can pay) and ghetto landlords continue without any pressure to make their properties more attractive to students by cleaning them up.
4) Enforce basic housing code policies during the day. Rather than tussling with drunk students at night, send police vans around during the day. If houses have indoor couches on the porches and on the roofs, remove them. If a student's front lawn is strewn with beer cans so thick you have to kick them away to reach the front door, fine the house. If the residents ignore the fines, arrest them and bring them to court. Few students will take resistance far enough to get cuffed and dragged out of their house unexpectedly during a game of Madden.
5) Don't conspire with the university. Work with it. In PMB's student ghetto, a common police trick went as follows: university police, who had no jurisdiction outside of university property, would ride by off-campus houses in the student ghetto and hear parties. Since they had no jurisdiction, they would phone in a fake noise complaint to the municipal police who did have jurisdiction, so the municipal police could come in and bash heads. Once students were cited with municipal violations, and punished accordingly by the municipality, that information was shared with the university. The university would then level a second punishment, effectively doubling the punishment for what was a non-violation in the first place.
Instead of practicing these kinds of bullying tactics, universities and municipalities should get together and determine in what ways the university can compel (or require) students to complete community projects as part of their studies. PMB was fortunate enough to partake of one such project, a work-study internship with the local (you might have guessed it) housing code administrator. Learning about the inner workings of community building and community policing can be an illuminating experience for students, who too often fail to grasp the notion that they live in a community beyond that of the university.