Legal punishment is normally justified by appeal to Wrongdoing (the criminal act) and Culpability (”the guilty mind”). These features focus on the perpetrator, which makes sense, as it is he (nearly always a ”he”) who will carry the burden of the punishment. We want to make sure that the punishment is deserved.
But it is also typically justified by appeal to societal well-being: to protect citizens from harm, to promote a sense of safety, to reinforce certain values, to prevent crime by the threat of punishment, to rehabilitate or at least contain the dangerous. According to so-called ”hybrid” theories, punishment is justified when these functions are served, but only when it befalls the guilty, and in proportion to their guilt (this being a function of wrongdoing and culpability). Responsibility/culpability constrains the utilitarian function. Desert-based justification is backward-looking, while the utilitarian, pro-social justification is forward-looking. (Arguably, the pro-social function depends on the perceived adherence to the responsibility constraint.)
Neuroscientist and total media-presence David Eagleman had a very interesting article in The Atlantic a while ago, pointing out that revealing the neural mechanisms behind certain crimes tends to weaken our confidence in assigning culpability. Rather than removing the justification for punishment, Eagleman suggests that we move on from that question:
(Dedicated to the 300-year-old David Hume, with whom, according to widespread scholarly opinion, one would have liked to chat. I sort of think a contemporary version exists in the form of Simon Blackburn.)
Normative/evaluative concepts are difficult to analyze all the way down. Attempts to do so tend to leave one with a normative ”residue”. ”Value” is one such concept, one that I’ve spent the best part of my youth trying to get to grips with. ”Ought” is another, one that I’ve spent the best part of my youth neglecting. ”Reason”, of course, is the current darling of the moral theory set.
G E Moore, famously, took this difficult residue as evidence for the fundamental irreducibility of value. Value is a simple notion, one which we grasp but don’t know how we grasp, and nothing more can be said about it. It’s notable (and noted) that this statement comes rather early in Moore’s Principia Ethica and that he then goes on to say quite a lot about value. (Wittgenstein at least had the decency to END his Tractatus with his similar, but more general, claim.)
Clearly, as Moore realised, things can be said about simple notions, otherwise how could we distinguish between different simple notions? For instance, notions such as ‘ought’ carry implications. This blogpost, which hasn’t quite started yet, is about the implications of ‘ought’. Goodbye, Youth.
Here are a handful of suggestions/observations: three about the ”implications” of ‘ought’, and one negative one about the inferability of ‘ought’.
- ‘Ought’ implies ‘If’: the Hypothetical Imperative, an ”instrumental” ought. IF you want to get to the station in time, you OUGHT to take this short-cut. Some will say this is the ONLY sense of ‘ought’ that makes sense. Even the moral ‘ought’ carries conditionals of this sort.
- ‘Ought’ implies ‘can’: If you ought to do something, you can do it. What you ought to do is, for instance, the act that has the best possible consequences of all the acts that you can perform. These may not be very good, but what can you do? You can’t be blamed for not doing what you cannot do. The complications here regard the scope of the ”can”.
- ‘Ought’ implies ‘Would’: Not a strict implication, but this is an often used, though seldom recognised, method of reverse-engineering an inductive argument: If Utilitarianism is correct, we ought to push the fat man down onto the tracks to stop the runaway trolley. But I/you/most people wouldn’t. Thus, Utilitarianism cannot be correct. Utilitarians can, and do, reply that what you would do does not imply anything about what you should do, but still: this is awkward, and in need of explanation. If your moral theory implies that you should do something that you are reluctant to do, the theory suffers. It’s basically a sort of reductio argument, but with the ”absurd” replaced by the ”icky”. And then there is my pet peeve:
- You CANNOT infer an ‘ought’ from an ‘is’. Try as you might, the self-styled Humean blurts out, gather all the evidence you can: as long as you have only descriptions, you cannot infer what we ought to do. My main objection against this line of argument is that it is LAZY. People, professional philosophers not excluded, use this argument in order to save themselves additional work – they find the normative claim, fail to identify any normative premiss, and then don’t bother with the rest of the reading. Hume was right, you cannot infer an ‘ought’ from an ‘is’, at least not until you know what ‘ought’ means, how it is circumscribed by other normatively charged terms, and, in turn, what they mean and, possibly, refer to.
For the past year or so, I’ve been writing applications to fund my research. Most of these applications concern a project that I believe holds a lot of promise. In very broad terms, it is about the relation between meta-ethics and psychopathy research. The thing about the project, which I believed was the great thing about it, is that it is not merely a philosopher reading about psychopathy and then working his/her philosophical magic on the material. Nor is it a narrowly designed experiment to test some limited hypothesis. Both of these modi operandi (I’m sorry if I butcher the Latin here) have serious flaws. The former is too isolated an affair as, unless the philosopher holds some additional degree, he/she is bound to misunderstand how the science works. The latter is too limited, in that we have not arrived at the stage where philosophically interesting propositions can properly be said to be empirically tested.
What is needed is careful theoretical and collaborative work, where researchers from the respective disciplines get together and enlighten each other about their peculiarities. This stage is often glossed over, leading to the theoretically overstated ”experiments in ethics” that have gotten so much attention lately. My research proposal, then, was deliberately vague on the testing part, but very vocal on the need for serious inter-disciplinary collaboration. Indeed, establishing such a collaboration, I believe, is the bigger challenge of the project.
Turns out, this is no way to get a post-doc funded, not here at least. There is no market for it. Possibly, I could get funding for doing the theory part at a pure philosophy department, which I could certainly do, but it would be a lot less exciting and important. Or, I could design some experiments and work at the scientific department, which I could currently not do, as I lack the training. The important work, the theoretically interesting work that I happen to be fairly qualified and very eager to perform, can’t get arrested in this town. What I thought was my nice, optimistic, promising and clearly visionary approach to what arguably will become a serious direction in both moral philosophy and psychological research, can’t get started.
I don’t want your pity (alright then, just a little bit, then). I just got a research position in a quite different project, so I’ll be alright. And hopefully, I’ll be able to return to this project later on. It just seems like an opportunity wasted.
So ethics month(s) just ended. On Thursday, I sent around 50 critically acclaimed essays on applied, normative and meta-ethics back to their authors. Leaving me pondering the proposition that there is now a group of people, the sort of university educated people that invariably turn out ruling the world, the media, the arts and so on, that I taught ethics. For future readers of the web-archives: I’m sorry. Alternatively: You’re welcome.
Normatively, they are all over the place. Utilitarianism is probably the strongest contender, but not by a majority vote. Meta-ethically the interest has a clear tendency towards epistemology, and a weaker tendency towards coherentism. In general, they are very much able to relate their moral judgements in particular cases not only to the normative theory they favor, but also to other theories they know are held by other people. Let’s go out on a limb and call it a good thing.
Those propositions are now passed on to you, as I turn my attention to other things, assuming other perspectives. I’m on parental leave. My main objective for the next few months is play. There will be drumming, there will be crawling and toddling, there will be incomprehensible talk and the provision of feedback. There will, in all likelihood, be a sharp decline in vocabulary, grammar and level of abstraction in the blog posts to come.
Can basic moral questions be answered by science? The, oh, how to put this nicely, vocal moral theorist Sam Harris believes so. And so, as I will keep reminding you, do I. But, hopefully unlike me, he seems not to make a very good case for it. The marvelous Kwame Anthony Appiah (whose book ”Experiments in Ethics” is a very good read indeed, if you’re interested in experimental moral philosophy. Good, but somehow non-committal) made that much clear in his review in the New York Times the other day (the equally marvelous Roger Crisp agreed).
I’m very much torn about this issue. First, it’s a good thing that the attempt to address fundamental ethical and metaethical questions with scientific means gets this much attention. But the key issue at this stage is the justification of this project. If that’s lacking, the attention will just lead people to dismiss it, and likewise to dismiss any other, better thought-through attempts that come along later. This happens all the time: when something is claimed to be a carcinogen and the study is shown to be flawed, then the next time around, even if the study is better, people won’t heed the warning.
So, while the meta-ethical framework required to justify the scientific approach to moral questions is highly controversial and far from settled, one wishes that Harris would have made at least some effort to provide us with such a framework. So what am I saying? ”Call me”, I guess.
I’m a big fan of october and november, and don’t care who knows it. September is nice too, and has that crispness of air which implies clarity of thought, if you’re into that sort of thing, but then again, there’s all that fuss about the beginning of term and I’m no fan of fuss. October and november mean business as usual. Things have achieved a state of being usual, enough for business to adjust accordingly. Oh, David. What are you on about?
Beginning today, we are into what I, assuming that the world pretty much revolves around me and my interests, am calling ethics month. It is the month during which I teach ethics at the department for philosophy, linguistics and theory of science. Today it’s ”introduction to ethics” or, informally: ”What’s all this, then?”. Tomorrow, it’s ”the meaning of life”. The course is very cleverly structured (I didn’t design it, but if I had, I still wouldn’t hesitate to call it clever. Try to keep up): It begins with applied ethics, about selling organs, animal ethics, abortion and so on. When these questions turn difficult, we turn to normative ethics, about what makes things right and wrong – the principles against whose background the applied questions may be answered. When this turns out difficult, we turn to meta-ethics, dealing with the meaning of moral terms and the nature of moral facts and moral knowledge, if such is to be found. When this turns difficult, which it does quite soon, the course is over and the questions will have multiplied. If I’m any good, the students will have learned to cope with that fact.
Philosophy is often like that, as someone tweeted recently: climbing a very high tower, and then looking up.
Teaching this course here is fun for me, for personal reasons. I attended my first philosophy lecture here, at the age of 17, and got to talk to the professor who, merely by being nice, helped me decide to go into philosophy myself. Secondly, it’s ten years since I first took this course which I’m now teaching. Having spent most of the time in between in metaethics, it’s great and very useful to become reacquainted with the applied and normative side of ethics. As a meta-ethicist, it’s often easy to forget that those things exist as well.
Tomorrow, I’m giving a short presentation at a lab meeting with the sinisterly named MERG (Metro Experimental Research Group) at NYU. The title is ”Value-theory meets the affective sciences – and then what happens?”. For once, the question tacked on for effect at the end will be a proper one (normally when using this title, I just go ahead and tell the participants what happens). I really want to know what should happen, and how the ideas I’ve been exploring could be translated into a proper research program. I’m constantly finding experimental ”confirmation” of my pet ideas in every branch of psychology I dip my toes into, but there are obvious risks with this way of doing ”research”. The question is whether, and how, those ideas might actually help design new experiments and studies better suited to confirm (or disconfirm) them.
I believe meta-ethics could and should be naturalized, and I have certain ideas about what would happen if it were. Now, we prepare for the scary part.
What is wrong with psychopaths? Seriously? I’m not asking in a semi-mocking, Seinfeld-esque ”what is the deal with X” kind of way. I’m seriously interested in finding out. Is there something they’re not getting, or something they don’t care about? And is caring about something really that different from understanding it? (In the Simpsons episode ”Lisa’s substitute”, Homer, trying to comfort Lisa, memorably says ”Hey, just because I don’t care doesn’t mean I don’t understand”).
Like most people interested in philosophy, I’ve been accused of being ”too rational” and, by implication, deficient in the feelings department. And, like most people interested in philosophy would, I’ve dealt with this accusation not by throwing a tantrum, but by taking the argument apart. To the accuser’s face, if he/she sticks around long enough to hear it. When people tell me I’m a know-it-all, I start off on a ”This is why you’re wrong” list.
So, when it happens that someone compliments me on some human insight or displayed emotional sensitivity, I tend to make the in-poor-taste sort of joke: ”Psychopath College can’t have been a complete waste of time and money, then”.
Psychopath College, you see, is a fictional institution (Aren’t they all? No.) that I’ve made up. It refers to the things you do when you don’t have the instincts or the normal emotional and behavioral reactions, but still want to fit in. You learn about them by careful observation, you try to find a rationale for them, a mechanism that will help you understand them. In the end, you manage to mimic normal behavior and make the right predictions. (Like all intellectuals, led by the editors of Le Monde diplomatique, I learned to ”care” about football during the 1998 World Cup, not in the ”normal” way, but for, you know, pretentious reasons.)
It’s commonly believed that psychopaths’ ability to manipulate people depends on precisely this fact: they don’t rely on non-inferred keen instinct and intuition but actually need to possess knowledge of what makes people behave and react the way they do. And this knowledge can be turned into power, especially as psychopaths are not as betrayed by unmediated emotional reactions as the rest of us are.
A recent study reported in the journal Psychiatry Research: Neuroimaging found that psychopathic and non-psychopathic offenders performed equally well on a task of judging what someone whose intentions were fulfilled, or unfulfilled, would feel. But when they do so, different parts of the brain are activated. In psychopaths, the attribution of emotions is associated with activity in the orbitofrontal cortex, believed to be concerned with outcome monitoring and attention. (This said, the authors admit that the role of the OFC in psychopathy is highly debated.) In non-psychopaths, on the other hand, the attribution is correlated with the ”mirror-neuron system”. In short, psychopaths don’t do emotional simulation, but rational calculation, and the successful ones reach the right conclusions.
The task described in the paper (”In psychopathic patients emotion attribution modulates activity in outcome-related brain areas”) is a very simple one, and offers no information on which ”method” performs better when the task is complex, or whether they may be optimal under different conditions.
Since knowing and caring about the emotional states of others is, arguably, at the heart of morality, studies like these are of great interest and importance. What, and how, do psychopaths know about the emotional states of others? And might the reason that they don’t seem to care about it be that they know about it in a non-standard way? Jackson and Pettit argued in their minor classic of a paper ”Moral functionalism and moral motivation” that moral beliefs are normally motivating because they are normally emotional states. You can have a belief with the same content, but in a non-emotional, ”off-line” way, and then it seems possible not to care about morality. Arguably, this is what psychopaths do, when they seem to understand, but not to care.
As Blair et al. (The Psychopath) argue, one of the deficits associated with psychopathy concerns emotional learning. This makes perfect sense: if you learn about the feelings of others in a non-emotional way, you don’t get the kind of emphasis on the relevant that emotions usually convey. Since moral learning is arguably based on a long socialization process in which emotional cues play a central part, it is no wonder if psychopaths end up deficient in that area.
What can Psychopath College accomplish by way of moving from knowing to caring? It is not that psychopaths don’t care about anything; they are usually fairly concerned with their own well-being, for instance. So the architecture for caring is in place; why can’t we bring it to bear on moral issues? Perhaps we can. Due to the emphasis on the anti-social in the psychopathy checklists, we might miss out on a large group of people who actually ”cope” with psychopathy and construe morality by independent means.
One thing that interests me about psychopaths, who clearly care about themselves and, I believe, care about being treated fairly and with respect, is this: why can’t they generalize their emotional reactions? This is highly relevant, since a classic argument – at least from Mill and Sidgwick, and memorably from Peter Singer – holds that generalising moral values where there is no relevant difference is a pure requirement of rationality. The thought is that you establish what’s good by emotional experiences, and then you realise that if it’s good for me, there is no reason why the same experience would not be good for others as well. So the justification of the generalisation is a rational one. But the mechanism by which this generalisation gets its force is probably not; it depends on successful emotional simulation, a direct, non-considered emotional reactivity (then again, whether you manage to ”simulate” animals like slugs or, pace Nagel, bats, might be a matter of imagination, not rationality or emotionality).
So what does this possibility say about the epistemic status of our moral convictions, eh?
(Not friends of mine)
Terracotta, the material, makes me nauseous. Looking at it, or just hearing the word, makes me cringe. Touching it is out of the question. One may say that my reaction to terracotta is quite irrational: I have no discernible reason for it. But rationality seems to have little to do with it – it’s not the sort of thing for which one has reasons. My aversion is something to be explained, not justified. It is not the kind of thing that a revealed lack of justification would have an effect on, and is thus different from most beliefs and at least some judgments.
The lack of reason for my terracotta-aversion means that I don’t (and shouldn’t) try to persuade others to have the same sort of reactions. Or, insofar as I do, it is pure prudential egotism, in order to make sure that I won’t encounter terracotta (aaargh, that word again!) when I go visiting.
So here is this thing that reliably causes a negative reaction in me. For me, terracotta belongs to a significant, abhorrent, class. It partly overlaps with other significant classes like the cringe-worthy – the class of things for which there are reasons to react in a cringing way. The unproblematic subclass of this class refers to instrumental reasons: we should react aversely to things that are dangerous, poisonous, etc. for the sake of our wellbeing. But there might be a class of things that are just bad, full stop. They are intrinsically cringe-worthy, we might say. They merit the reaction. (It is still not intrinsically good that such cringings occur, though, even when they’re apt – the reaction is instrumental, even when its object is not).
Indeed, these things might be what the reactions are there for, in order to detect the intrinsically bad. Perhaps cringing, basically, represents badness. If we take a common version of the representational theory of perception as our model, the fact that there is a reliable mechanism between type of object and experience means that the experience represents that type of object.
Terracotta seems to be precisely the kind of thing that should not be included in such a class. But what is the difference between this case and other evaluative ”opinions” (I wouldn’t say that my reaction to terracotta is an opinion, although I have sometimes felt the need to convert it into one), those that track proper values? Mine towards terracotta is systematic and resistant enough to be more than a whim, or even a prejudice, but it doesn’t suffice to make terracotta intrinsically bad. Is it that it is just mine? It would seem that if everyone had it, this would be a reason to abolish the material, but it wouldn’t be the material’s ”fault”, as it were. Is terracotta intrinsically bad for me?
How many of our emotional reactions should be discarded (though respected. Seriously, don’t give me terracotta) on the basis of their irrelevant origins? If the reaction isn’t based on reason, does that mean that reason cannot be used to discard it? This might be what distinguishes value-basing/constituting emotional reactions from ”mere” unpleasant emotional reactions. Proper values would simply be this – the domain of emotional reactions that can be reasoned with.
”Does it contain any experimental reasoning, concerning matter of fact and existence?” – David Hume
In last week’s installment of the notorious radio show that I’ve haunted recently, I spoke to the lovely lady on my left in the picture below about the use of empirical methods in moral philosophy. The ”use of empirical methods” of which I speak so fondly is, on my part, restricted to reading what other people have written, complaining about the experiments that haven’t been done yet, and then speculating on the results I believe those experiments (not yet designed) would yield.
Anyway: I have a general interest in experimental philosophy, but I haven’t signed anything yet, you know what I mean? That is: I don’t think (what the host of the radio show wanted me to say) that ”pure” armchair philosophy is uninteresting. Indeed, I believe that any self-respecting empirical scientist ought to spend at least some time in the metaphorical armchair, or nothing good, not even data, can come out of the process.
When coming across a philosophically interesting subject matter (and, let’s face it, they’re all philosophically interesting, if you just stare at them long enough. Much of our discipline is like saying the word ”spoon” over and over again until it seems to lose its meaning, only to regain it through strenuous conceptual work) I often find it relevant to ask ”what happens in the brain?” What are we doing with the concept? It is obviously not all that matters, but it seems to matter a little. Especially when we disagree about how to analyze a concept, there might be something we agree on. Notoriously, with regard to morality, we can disagree as much as we like about the analysis of moral concepts, yet agree on what to do, and on what to expect from someone who employs a moral concept, no matter what their meta-ethical stance. Then, surely, we agree on something, and armchair reasoning just isn’t the method to coax it out.
I try to be careful to emphasize that empirical science is relevant to value-theory, on my view, given a certain meta-ethical outlook – given a particular way to treat concepts. If we treat value as a scientific problem, what can be explained? Since there is no consensus on value, we might as well try this method. Whether we should or not is not something we can assess in advance, before we have seen what explanatory power the theory comes up with.
Treating ”value” as something to find out about, employing all the knowledge we can gather about the processes surrounding evaluation etc., is, in effect, to ”Quine” it. It seems people don’t Quine things anymore, or rather: people don’t acknowledge that this is what they’re doing. To Quine something is not the same as to operationalize it, i.e. to stipulate a function for the concept under investigation, and to say that from now on, I’m studying this. To Quine it is to take into consideration what functions are being performed, which have some claim to be relevant to the role played by the concept, and to ask what would be lost, or gained, if we were to accept one of these functions as capturing the ”essence” of it. It is to ask a lot of roundabout questions about how the concept is used, what processes influence that use and so on, and to use this as data to be accounted for by an acceptable theory of it.
A lamp, David Brax (yours truly) and Birgitta Forsman (I cannot speak for her, but I’m sure she likes you too). The lamp did not volunteer any opinions on the subject matter, but has offered to participate in a show on a certain development in 19th-century philosophy. Photo: Thomas Lunderqvist