Listening to years of bickering over the ACA--and bickering is indeed the word to use here--has left me with a sense that we need to restart this 'conversation' from first principles. And after thinking about health insurance from first principles, I've come to the realization that the biggest problem with health insurance isn't whether it's controlled by the government or private insurers (although there are indeed problems associated with both of these). The biggest problem with health insurance is the very idea of health insurance itself. 'Health' and 'insurance,' as we understand them, are wholly incompatible.
Consider, for example, the bickering over the 'individual mandate' imposed by the ACA. Conservative opponents of the ACA and the mandate argue that it's immoral to force healthy, young people to buy health insurance. Supporters of the individual mandate (which, notably, began as a conservative idea) argue that it's necessary to accomplish the broader aim of the ACA: to bring down healthcare costs and keep premiums low by having a broad, 'diverse' pool of insurance customers.
At what point does someone realize that this is a completely stupid argument?
Looked at from an insurance perspective--which is to say through minute calculations of risk akin to those done by car insurers, hedge fund managers, and casino operators--it would certainly seem silly to force a healthy 25-year-old to buy health insurance. At the same time, it would certainly seem necessary to have that healthy 25-year-old as part of the client pool to contribute favorably to an insurance company's risk profile, to offset all those expensive old people who run a higher risk of needing care, and needing it paid for. But this is an ass-backwards way of looking at the situation for one very simple reason: it's both socially and morally irresponsible to gamble with human life.
To explain this further, consider that healthy 25-year-old. Most people that age don't get sick, don't require hospital care, don't require long-term care...except when some of them do. Young people can get catastrophically sick; they get hit by cars, break bones, overdose, get pregnant. And when these things happen, as they inevitably do, however atypical they are from a risk-assessment standpoint, no reasonable person in our society would say that we should deny these people care if they're not 'insured.' A physician is under an ethical obligation to provide care. A hospital can't turn away a gunshot patient with no health insurance. And no one but the most extreme Randian sociopaths would argue that it should. As a society we've already made it clear, on a number of fronts, that even if our policies don't reflect our values, our values favor the preservation of human life. The left concerns itself with the risk to life borne by the uninsured; the right concerns itself with the prospect of 'death panels' that decide precisely when care will no longer be administered to the dying. And the private medical sector makes these choices every day--never without regard for doing the most we can, with what we have, to preserve life.
And herein lies the fundamental problem: unlike when you wreck your car without auto insurance, you can't just junk a catastrophically sick person, insured or not. Healthcare reformers have argued for ages now that because of this dynamic, we have to be mindful of the costs to government and taxpayers of 'free riders' who go without insurance but must nevertheless go to the hospital for care when they need it. The risk associated with ill health, then, is not really a risk at all; care is always guaranteed, at least from a moral standpoint. You can insure cars; you can't insure people. It's a moral impossibility.
So if we know that we'll always provide care to the 25-year-old who didn't purchase insurance, why are we treating this situation as a matter of risk management, or as a matter of insurance?
The alternative is to be honest with ourselves about our values, rather than abstracting them (and thus contorting them) by running them through competing risk-analysis protocols under the guise of 'efficient' provision of 'insurance.' Access to healthcare, therefore, must be understood as a right, not a risk. In ideological terms, plenty of people will lose their heads arguing against the idea that healthcare is a right; and yet if you ask any of these people whether someone who hadn't the foresight (or the money) to insure themselves should therefore be left to suffer and die, none of them will actually say we should just let people die. That would be mortifying--indeed, sociopathic. In practical terms, then, we're actually fairly united in understanding access to healthcare as a pretty basic right. It's time we start treating it that way in policy terms.
Monday, March 24, 2014
Understanding the Politics of Normal in 3 Easy Examples
Despite all the money in politics, a skilled, educated reader is a very powerful thing. Those able to influence policy with their money spend billions of dollars putting words in front of readers, words designed to manipulate the way we think about issues. And yet skilled readers who understand rhetoric can pull apart manipulative words and images, exposing them for the propaganda that they are. One manipulative technique we should be paying more attention to is the rhetorical definition of normal.
In academia, the politics of normal is alive and well. One can take a class in '20th century American literature,' and one can take a class in '20th century feminist American literature.' The designation 'feminist' added to the subject matter 'American literature' is doing something very important here. By modifying the baseline 'American literature,' 'feminist American literature' reads like a special subset of American literature. In fact, when we title courses this way, this is exactly what we mean. What we perhaps don't mean to do, however, is to define the normal--'American literature'--by way of an implicit assumption about what constitutes 'American literature' but not 'feminist American literature.' Put simply, you might expect to read authors like F. Scott Fitzgerald, Thomas Pynchon, Philip Roth, and T.S. Eliot in a '20th century American literature' course; you might even expect to read female authors, like Toni Morrison or Joyce Carol Oates, in such a course. But you won't expect this course to have anything to do with 'feminist literature,' even though the American 20th century featured landmark developments in women's rights, as well as an abundance of feminist literatures. The designation of 'feminist' as a special subset of the normal '20th century American literature' makes the implicit assumption that feminist literature is not part of the 20th century American 'normal,' when in fact it very much was. This might seem like a small, even nit-picky detail; but it fundamentally changes the expectations we have about what constitutes 'normal' and what is an 'additional' or 'special' category in relation to normal.
A second and more overtly political example of the rhetorical definition of normalcy is the discourse of 'class warfare.' When we invoke 'class warfare' in contemporary American politics, we mean to suggest that those in the poorer classes of American society are agitating with jealousy and scorn to bring the rich back to earth. Thus, when a celebrity billionaire complains that he feels persecuted by those who are jealous of his wealth, the plea is to stop the 'class warfare.' The reason such a plea is so effective is that it rhetorically constructs normalcy as the rich already having justly earned their money and the poor already having failed to do so for themselves. The rhetoric of class warfare depicts a scenario in which everybody started from the same line, and those losing the race are just sore losers. From this standpoint of normalcy, the poor complaining about what the rich have does indeed look like an act of aggression; hence, 'class warfare.' Looking at it another way, however, we might see redistributive policies that the rich lobby for and that benefit the rich as acts of 'class warfare,' too. If we rhetorically redefine the normal as a competitive market system in which those who already have an advantage work effectively to manipulate that system to solidify further advantage at the expense of the poor, this, too, looks like 'class warfare.' A tax policy that redistributes wealth from the poor to the rich--like the mortgage interest deduction and the carried-interest loophole--might also look like class warfare. But if you use rhetoric to set the status of 'normal' at a point in the political process AFTER all of this kind of policy manipulation has happened, then 'class warfare' becomes the poor's aggression rather than their reaction. A similar resetting of normalcy is at work in discussions of who gets 'government welfare,' even though so much government spending is, via the tax code and via subsidies for large industries and corporations, spent on the rich, not the poor.
A third and final example of resetting the normal has to do with how we think about the 'free market' itself. The current debate over the minimum wage is a good place to start with this example. Opponents of minimum wage increases--or of the minimum wage itself--argue that since the 'free market' isn't setting the higher wage, any increase would be a regulatory and 'artificial' imposition on the 'free market.' By defining as the normal the 'free market' as it stands BEFORE any minimum wage adjustment, critics of minimum wage increases can make it seem like anything above this normal is manipulation. But by defining the 'free market' this way, these critics are simply labeling the market manipulations they like as 'free' and the policies they don't like as 'artificial' interventions into the 'free market.' Of course, the 'free market' is subject to countless manipulations, from tariffs to interest rates (and other kinds of currency manipulation) to government subsidies to governments outright banning some enterprises while awarding government business directly to others (as in New Jersey, where the state has moved to shut down Tesla's direct sales operations). What we know rhetorically as the 'free market' is really not free, unregulated, or unmanipulated at all. What one person sees as 'artificial' regulation and what another sees as a product of the 'free' market can change entirely depending on how we set and argue for the definitions of 'artificial' and 'free.' So again, by setting 'normal' AFTER the manipulations that give employers leverage over employee wages--wages which have remained stagnant for decades now, even as the US economy has continued to grow--those with personal and ideological interests in keeping wages low can claim that a minimum wage increase is an 'artificial' act that violates normalcy.
As you can see, then, much of our political discourse is crucially shaped by how and where we use rhetoric to establish standards of normalcy that favor one side over another. Another way to envision how this works in everyday life: imagine a tug-of-war match in which one side has to pull 3 feet to the other side's 7, while the judges are working under the assumption that the distance is 5 to 5. Better reading and rhetorical awareness enables us to judge more accurately in an environment in which we're constantly manipulated to think that there are no ideological advantages, only 'facts.'
Friday, February 21, 2014
Our Crisis of Value (It's not what you think)
We are experiencing at present a crisis of value. This is not a crisis of values, of moral deterioration or the decline of the family or the redefinition of marriage. When we (and the media) talk about values, we tend to gravitate toward binary positions on hot-button issues, like those just mentioned. We take positions on these issues in two prominent ways. One, we claim affinity with a stance on an issue, signaling our affiliation with a wider political identity: it remains difficult for a conservative to support gay marriage, or for a liberal to question aggressive policies to combat climate change, because each of these positions conflicts with a broader political identity. Two, we justify such affinity positions by invoking an authority: the Bible says marriage is between a man and a woman, or science says we must act swiftly and decisively to stave off climate disaster. Here my point is not to take issue with either of these authorities, nor to evaluate science or religion as authorities, but simply to point out that this has become our mode of argumentation: argument from authority.
Indeed, on virtually every issue we encounter, we find arguments started (and, curiously, never finished) by statements like 'studies have shown...' or 'Leviticus states...,' statements meant to be conclusive and authoritative, but that nevertheless only generate more arguments from other authorities. While we are wise to rely significantly on scientific expertise for questions of fact about the natural world--like 'to what extent is climate change happening, and to what extent is it anthropogenic?'--we are living in an historical moment in which we regularly and, I think, shortsightedly confuse questions of fact with questions of value. In other words, the fact that the scientific consensus on climate change answers a question of fact is not enough to address the associated questions of value: what sacrifices are we willing to make, for whom, how, and why, in light of this fact?
Of course scientific experts and authorities can and should guide us in addressing these follow-up questions of value; but we must understand that when it comes to value, authorities can only guide. Indeed, few of us would be content to accept the dictates of an authority on a question of value when that authority disagrees with our own position. We only appeal to authority when it backs up what we already think or want to think. Otherwise, we rely on other ways of determining our values, like affinity, emotion, or reason.
Consider, further, questions of value for which scientific authority is perhaps more difficult to apply: is this politician a trustworthy and moral character? Is it ethical to slaughter animals for food? Should abortion be legal? Indeed, experts and authorities can tell us a great deal about whether a statistically significant number of people think a politician is trustworthy and moral, whether certain animals are likely to experience consciousness in such a way that we might not want to raise them for food, or whether a fetus can experience pain. But these guiding facts are not solutions to these value problems. We may still want to weigh considerations like how politicians might manipulate public opinion (and public opinion studies), whether it's possible to know animal consciousness any more than to know the experience of another human, or whether the rights and autonomy of the mother outweigh our concerns about fetal pain.
Our crisis of value lies, then, with the very fact that we have largely abandoned rigorous and expert ways of making value distinctions. The unintended and counterintuitive consequence of our desire to drag scientific, quantitative, and 'big data' methods of inquiry into value debates, not as guides, but as authorities, is that we have sterilized our ability to reason through value problems. Hence, the more aggressively our proponents of Scientism promote the authority of science over questions of value, the more thoroughly they push reason and critical thinking out of these debates. The more frequently we turn to scientific studies not as information with which to help make complex value distinctions, but as trump cards that replace reasoned arguments, the more we court that other authority--religion--to fill the void. This is how we arrive at absurd 'debates' between science educator Bill Nye and religious science denialist Congresswoman Marsha Blackburn, conversations that amount to little more than one side stating a fact backed by the authority of science, and the other side stating a belief backed by the authority of religion. When it comes to actually expressing reasoned justifications for our positions, we falter when we can't rely on some authority to back us up.
Prominent social psychologist Jonathan Haidt has recently promoted a similar idea, what he calls 'moral dumbfounding': we intuitively or instinctively take moral positions on things like incest, yet when pressed to explain why incest is wrong, we struggle. If you ask your friends and family why they think incest is wrong (or pose any number of other value questions), I expect you would find, as Haidt has in his work, that people absolutely struggle to provide reasons. Yet as Paul Bloom argues, it's not that we lack the capability to reason this way--that we're simply irrational machines programmed instinctively to take moral positions--but rather:
the existence of moral dumbfounding is less damning than it might seem. It is not the rule. People are not at a loss when asked why drunk driving is wrong, or why a company shouldn't pay a woman less than a man for the same job, or why you should hold the door open for someone on crutches. We can easily justify these views by referring to fundamental concerns about harm, equity, and kindness. Moreover, when faced with difficult problems, we think about them--we mull, deliberate, argue. I'm thinking here not so much about grand questions such as abortion, capital punishment, just war, and so on, but rather about the problems of everyday life. Is it right to cross a picket line? Should I give money to the homeless man in front of the bookstore? Was it appropriate for our friend to start dating so soon after her husband died? What do I do about the colleague who is apparently not intending to pay me back the money she owes me?

Here I think Bloom is correct to point out that when it comes to 'everyday' value distinctions--the kind for which it would be odd and maybe even suffocating to look to an authority to make our decisions for us--we do apply reason and deliberation.
The big question, then, is why we don't apply reason and deliberation when we form positions on larger and perhaps more consequential questions (like abortion, gay marriage, animal rights, environmental policy, etc.). In some ways this question answers itself: these questions are difficult, and it's all too easy to find an authority on each of them, ever ready to take up new followers. Our third kind of authority--experts in the humanities and humanistic social sciences who tackle questions of value with rigor, evidence, reasoned argument, and (I wish) clear and careful articulation--gets lost in this perpetual exchange between fact and belief. Perhaps this is because we're very much in the business of teaching how to approach questions of value more rigorously and reliably, rather than what to think or whom to grant authority. But one need not be a humanities scholar to acknowledge or benefit from a more rigorous approach to questions of value. Every semester I tell my students that at any given moment they should be prepared to deliver a thorough, clear, logical, and evidence-based articulation of everything they believe, every political stance they take, every value they hold. The point of this exercise is to prompt them to examine their beliefs not through me as an authority, but in their own minds, for themselves. The great conservative fear of liberal humanities professors indoctrinating students in the classroom is premised on the idea that students can't critically evaluate their own positions and those of others; that they can only accept value distinctions from figures of authority. But if such passivity is really a problem, it's less a problem of the personal stances of faculty than of a wider culture of argument from authority rather than from reason and evidence.
As technologies that track our preferences and make our decisions for us emerge as yet another kind of authority that relies on deference rather than active thinking and reasoning (hence 'automated reasoning'), we find ourselves with fewer and fewer inclinations (if not opportunities) to tackle questions of value. From quotidian decisions about which movies to buy or which news will appear in our Facebook feeds, to our authority-based denial of the ways in which larger political and policy issues require us to make careful value distinctions, we are becoming entirely too accustomed to the idea that we live in a post-value, post-ideological world, where everyone who agrees with us is a fact-based authority and everyone who disagrees is an 'ideological' person with 'bad data,' 'flawed studies,' or 'bias.' The only way out of this mess is to reclaim reason and critical discourse as ways of making value distinctions, which first requires us to acknowledge that value distinctions still exist between the authorities of fact and opinion.
Thursday, January 30, 2014
Professor to World: If You're Gonna Troll My Job, At Least Get It Right
This article is in response to two different but related events: that I've started watching The Following on Netflix, and that someone recently broke out the old 'those who can, do; those who can't, teach' cliche in an effort to demean my livelihood.
First, The Following: we learn in the pilot episode that the villain is a charismatic and manipulative English professor/serial killer. This character possesses a few very curious details that reflect just how willfully ignorant the public is about the work that professors do. For example, his book, whose poor critical reception triggers a series of vengeful killings, is not a work of scholarship, but of fiction--of creative writing. But of course English professors--with the exception of the rare creative writing professor in an English or literature department--don't publish fiction or poetry, but analytical studies of other people's fiction and poetry. If you want to troll the profession, by all means argue for the irrelevance of scholarly studies of fiction and poetry; but at least demonstrate that you know the difference between a fiction writer and a professor.
We learn also that our English professor/serial killer is an Edgar Allan Poe scholar; and further, that he's of British origin (not just the actor; the character speaks with a British accent). In the US we love attributing British accents to two extremes: highly desirable, sexualized men, and intellectuals. Now, I love Poe; but the academy doesn't. In fact I don't know a single Poe scholar. Furthermore, I only know a handful of British scholars interested in American literature at all--and that's mainly because I did my PhD in Britain, where, understandably, most of the world's British scholars of American literature reside. I'm not trying to argue that there aren't British scholars of American literature out there, nor that no one is or should be interested in studying Poe, nor that it's impossible for all these things--British origins, a scholarly interest in Poe and American literature, and an academic job in the US--to coincide, but rather that this is a very odd and unlikely combination of traits. The reason is that the character in The Following is just a hodgepodge of English-professor stereotypes: charismatic and manipulative, a lover of Romantic poetry, an insecure writer of fiction, a posh British accent, a penchant for preying on naive, young female undergraduates. In other words, this is not simple ignorance, but willful misrepresentation that plays into all the sick fears and fantasies that the average person has about English professors.
Of course, I don't really mind all that much that we have these distortions in the name of fiction. Yes, they're harmful to the reputation of my profession; but it's ultimately the responsibility of people in my profession to correct misconceptions ourselves, rather than to censor or complain about those who create them (hence this bit I'm doing here).
Far more harmful and ignorant is the 'those who can, do; those who can't, teach' mentality (I also find it humorous, and not at all ironic, that those who typically trot out this phrase are too ignorant to punctuate it properly). The first thing to address is that it's insulting both to professors and to teachers to call professors teachers. The two jobs, despite having some overlap, demand very different skill sets and involve very different kinds of labor. I don't like being called a 'teacher' not because I don't respect teachers--on the contrary, I respect them more, I think, than any profession there is--but because calling me a 'teacher' devalues my research, which is a major part of my job. And if I were a teacher (I can only speak for myself here), I'd be insulted by someone confusing me with a professor. After all, there's no way in hell I could walk into a K-12 classroom, at any grade level, with my experience, and teach effectively. I'm trained and experienced in teaching college students college material in a college classroom setting; I'm not trained to handle the needs, preferred modes of instruction, parental and home-life concerns, and administrative and learning challenges of K-12 students.
One thing professors and teachers do have in common, though, is the 'doing' part of that stupid cliche. And this is something that even college-educated people, who have spent plenty of time in both high school and college classrooms, remain amazingly and willfully ignorant of: in the strictest sense, a class is a product. That class that happens before your eyes every day or every week doesn't just magically appear there any more than Betty Crocker snaps her fingers and a cherry pie appears out of thin air. Someone had to plan that class--never mind execute it for you--and that takes a lot of time, effort, and logistical acumen. I'm not even talking about staying up late grading, or answering student e-mails at all hours of the day and night, or meeting with students in office hours or on your own time, or supervising theses and student research projects, or mentoring student clubs, interest groups, and sports teams--I'm just talking about making a class. That takes a lot of 'doing.' To be more precise, it takes about 60-80 hours a week of 'doing,' comparable to such prominent 'doers' as investment bankers, management consultants, code-junkies, and other model citizens of busyness and 'productivity.'
Of course, teaching is realistically only about 40% of my job as an English professor. So when you very kindly ask people like me 'how's teaching?' or 'how're the kiddos?' or something like that, what I hear is 'how was your week, specifically on Tuesdays and Thursdays from 12-6pm?' I spend another 40% of my labor hours doing research under a lot of pressure to publish in a very competitive field, and the remaining 20% serving on administrative committees that design things like, say, the curriculum, or an entire graduate program, or a faculty mentoring scheme. So let me remind you, again, that this all takes many hours of 'doing.' You might say this 'doing' is at least as doingy as sitting in a 4-hour business meeting, going on a company retreat, answering 100 emails a day, giving a 20-minute PowerPoint presentation, or networking with a client (although, of course, professors do all of this stuff--literally all of this stuff--as part of our jobs as well; it's just a little bit different). So now I begin to wonder: what exactly are all the 'doers' doing, and what makes your doing more doing than my doing? These are the sort of fucked-up questions you get when you troll a professor with a cliche.
Of course, I'm told that as an English professor my work is not only apparently not a kind of 'doing,' but definitely a kind of 'useless.' Yet again I wonder how we're computing use-value here. Is it useful to wake up at 7am, go to an office, read two hours of Buzzfeed, start a memo in hackneyed prose, wander dead-eyed into a 2-hour meeting about 'cannibalizing business models,' eat couscous at your desk, frantically work your brains out for 2.5 hours in the late afternoon on a 'deliverable,' wait around for 2 more hours until everyone has decided it's the hour at which it's socially and professionally acceptable to leave the office, and head home? Is it useful to teach the next generation of national and world leaders how to read carefully, construct evidence-based, reason-driven arguments, write persuasively, master rhetoric, and speak publicly with articulacy; to generate new knowledge on a regular basis, sometimes only for its own sake? I guess it's all so, you know, sub-JECT-ive.
I don't mean to be so harsh, really; but the truth is that the world doesn't like a professor with bravado, and the professor doesn't like it either. But every so often, when so many forces manipulating public opinion of what you do coincide with such brash and specious defenses of the world's 'doers,' a scrupulous person must intervene. What a professor does can be very rewarding for the professor; but, even at private universities, it's a major public service. The same politicians who prattle on about the 'knowledge economy,' the same CEOs and business types who complain about how poor their young employees' critical thinking and writing and speaking skills are, are also apt to use 'English professor' as the punchline to some joke about uselessness. But let's be honest with ourselves: if you're a biz-world kind of person, and you're standing next to me, that makes two of us who have worked in and maintain contacts in the biz-world--and one of us who knows what it's like to work as a professor. When I look around, I see many a friend, family member, and acquaintance who works a white-collar office job, so I have plenty of examples in my life of both incredibly talented and thoughtful and incredibly shitty and shallow white-collar office employees. But when you look around, there just aren't many professors in your life to tug your sleeve when you're watching The Following and let you know that Joe Carroll would never realistically expect his novel to count toward his tenure and promotion reviews.
One last thing, because I owe it to you now: a lot of professors can be really shitty people. Some (though few, in my experience) really are cut off from mainstream life as we know it. And in the history of the academy, it's only a relatively recent development that professors have become professionalized. In the 18th century, going to college was something that a select few elites did; in many cases the children of the gentry went merely to gain social cachet and learn to get along as gentlemen (women not accepted). In the modern era, even well into the 20th century, college was still something of an elite enterprise, attended mostly by those who were either wealthy and wanting to be more marriageable, or specifically interested in a profession that required a college education (in other words, you could once be a Wall St. broker with no college degree; now Wall St. firms will barely glance at you if you don't know somebody's daddy or don't have an Ivy League degree). But now college, at least in the US, is something that many people (whether this is good or not) aspire to, and something that is largely expected as a reasonable goal of non-elites and elites alike. Enrollments are huge now. And gone are the days of the gentleman scholar who slides into a sinecure at Harvard because he's one of 50 people with a PhD or a Master's in all of Massachusetts. Now there are 50 people with PhDs in one university department in Boston, and another 500 applying for each of those jobs. So it's time we drop the image of the easy-going, posturing gentleman scholar of the Ivory Tower and get it right when we're talking and writing about professors. Once you get the basics right, I'm happy to have a civil and most certainly vigorous discussion with you about the value of my profession. But until then, as my students well know, you need to crawl before you can walk.