Legal punishment is normally justified by appeal to Wrongdoing (the criminal act) and Culpability ("the guilty mind"). These features focus on the perpetrator, which makes sense, as it is he (nearly always a "he") who will carry the burden of the punishment. We want to make sure that the punishment is deserved.
But punishment is also typically justified by appeal to societal well-being: to protect citizens from harm, to promote a sense of safety, to reinforce certain values, to prevent crime by threatening to punish, to rehabilitate or at least contain the dangerous. According to so-called "Hybrid" theories, punishment is justified when these functions are served, but only when it befalls the guilty, and in proportion to their guilt (this being a function of wrongdoing and culpability). Responsibility/culpability constrain the utilitarian function. Desert-based justification is backward-looking, while the utilitarian, pro-social justification is forward-looking. (Arguably, the pro-social function depends on perceived adherence to the responsibility constraint.)
Neuroscientist and total media presence David Eagleman had a very interesting article in The Atlantic a while ago, pointing out that revealing the neural mechanisms behind certain crimes tends to weaken our confidence in assigning culpability. Rather than removing the justification for punishment, Eagleman suggests that we move on from that question:
(Dedicated to the 300-year-old David Hume, with whom, according to widespread scholarly opinion, one would have liked to chat. I sort of think a contemporary version exists in the form of Simon Blackburn.)
Normative/evaluative concepts are difficult to analyze all the way down. Attempts to do so tend to leave one with a normative "residue". "Value" is one such concept, one that I've spent the best part of my youth trying to get to grips with. "Ought" is another, one that I've spent the best part of my youth neglecting. "Reason", of course, is the current darling of the moral theory set.
G. E. Moore, famously, took this difficult residue as evidence for the fundamental irreducibility of value. Value is a simple notion, one which we grasp but don't know how we grasp, and nothing more can be said about it. It's notable (and noted) that this statement comes rather early in Moore's Principia Ethica, and that he then goes on to say quite a lot about value. (Wittgenstein at least had the decency to END his Tractatus with his similar, but more general, claim.)
Clearly, as Moore realised, things can be said about simple notions, otherwise how could we distinguish between different simple notions? For instance, notions such as ‘ought’ carry implications. This blogpost, which hasn’t quite started yet, is about the implications of ‘ought’. Goodbye, Youth.
Here are a handful of suggestions/observations: three about the "implications" of 'ought', and one negative point about the inferability of 'ought'.
- ‘Ought’ implies ‘If‘: the Hypothetical Imperative, an ”instrumental” ought. IF you want to get to the station in time, you OUGHT to take this short-cut. Some will say this is the ONLY sense of ‘ought’ that makes sense. Even the moral ‘ought’ carries conditionals of this sort.
- 'Ought' implies 'can': If you ought to do something, you can do it. What you ought to do is, for instance, the act that has the best possible consequences of all the acts that you can perform. These may not be very good, but what can you do? You can't be blamed for not doing what you cannot do. The complication here concerns the scope of the "can".
- 'Ought' implies 'would': Not a strict implication, but this is an often used, though seldom recognised, method to reverse-engineer an inductive argument: If Utilitarianism is correct, we ought to push the fat man onto the tracks to stop the runaway trolley. But I/you/most people wouldn't. Thus, Utilitarianism cannot be correct. Utilitarians can, and do, reply that what you would do does not imply anything about what you should do, but still: this is awkward, and in need of explanation. If your moral theory implies that you should do something that you are reluctant to do, the theory suffers. It's basically a sort of reductio argument, but with the "absurd" replaced by the "icky". And then there is my pet peeve:
- You CANNOT infer an 'ought' from an 'is'. Try as you might, the self-styled Humean blurts out, gather all the evidence you can: as long as you have only descriptions, you cannot infer what we ought to do. My main objection to this line of argument is that it is LAZY. People, professional philosophers not excluded, use this argument in order to save themselves from additional work – they find the normative claim, fail to identify any normative premiss, and then don't bother with the rest of the reading. Hume was right: you cannot infer an 'ought' from an 'is', at least not until you know what 'ought' means, how it is circumscribed by other normatively charged terms, and, in turn, what those terms mean and, possibly, refer to.
Developmental issues in general have, for obvious reasons, been much on my mind lately. It strikes me, as it struck Alison Gopnik (thus causing the book The Philosophical Baby to be written), as strange that the importance of the development of certain capabilities – such as morality, belief-acquisition, language, the understanding of objects and other persons – has not been seriously attended to in the theories of those things. Surely, a proper understanding of any domain needs to involve an understanding of how we come to know about it. The cognitive operations that the adult mind is capable of didn't start out that way, and part of solving the mysteries of cognition is to investigate how it got that way. As Gopnik pointed out in her earlier book The Scientist in the Crib, babies learn the way science proceeds: by testing hypotheses, revising previous concepts and explanations to fit with the facts, and by thinking up new experiments. We start out with very little, but not nothing, and then we build on that. People generally start out the same – babies everywhere can learn any language – but at some point, when we've found what sorts of sounds typically occur in communication, we start to interpret, and eventually to ignore small vocal nuances in favor of more effective and more charitable interpretation within the language we thus acquire.
Understanding development is important in itself, and for understanding that which has thus developed, but it is also important for treatment. If we know how certain capabilities develop, we might understand what happens when they don't.
But here comes the first kink: scientists disagree about a key feature of development: whether we actually learn "the hard way", or whether certain developmental stages, such as understanding that others may have beliefs different from ours, just "kick in" at a certain age. Some knowledge may develop, not like conscious, or even non-conscious, belief-revision, but like facial hair or breasts. Presumably, these things start due to some biological signal too, but it seems to be a different process from the sort of learning involved in science. It is also possible that the "signal" in question must appear within a certain window of time. The intense developmental period known as childhood doesn't last forever. For instance, if you cover the eyes of a cat from birth until a certain time, it won't develop eyesight at all.
These things are even more important in the case of treatment. If I fail to develop certain forms of understanding, such as understanding false beliefs, it is very important whether I can learn to understand it, or whether I need the biological signal. And, of course, whether this biological signal can be provided later on, or if it is too late.
Understanding these features when it comes to morality is clearly of immense interest. How does morality develop? We often hear that children can distinguish between moral and conventional rules at the age of 2 1/2 – 3. But how does this happen? How does one learn the difference? Clearly, we are born with a sense of good and bad (as I've argued, this is the capacity to feel pleasure and displeasure, together with certain objects and situations that cue these feelings), and with the early stages of social neediness. From this, arguably, morality is created. But how? Is it just the persistent association of the needs/desires/interests of others with hedonic reactions in oneself? Or is a further developmental stage needed?
This is a crucial thing, if we want to understand and do something about immorality. Immorality may, of course, arise in many ways. It may not have been nurtured, so that the right association wasn’t made in the crucial developmental window. But it may also be that the mechanism didn’t kick in, due to some cognitive disorder. And finally, there are cases where the moral reaction is just outnumbered by other interests: morality isn’t all of evaluative motivation. Which of these is the origin of a certain immoral act or immoral person is of immense interest when it comes to treatment, and also when it comes to assigning responsibility.
Tomorrow, I'm giving a short presentation at a lab meeting with the sinisterly named MERG (Metro Experimental Research Group) at NYU. The title is "Value-theory meets the affective sciences – and then what happens?". For once, the question tacked on for effect at the end will be a proper one (normally, when using this title, I just go ahead and tell the participants what happens). I really want to know what should happen, and how the ideas I've been exploring could be translated into a proper research program. I'm constantly finding experimental "confirmation" of my pet ideas in every branch of psychology I dip my toes into, but there are obvious risks with this way of doing "research". The question is whether, and how, those ideas might actually help design new experiments and studies better suited to confirm (or disconfirm) them.
I believe meta-ethics could and should be naturalized, and I have certain ideas about what would happen if it were. Now, we prepare for the scary part.
The last few years have seen a number of different approaches to morality become trendy and arouse media interest: evolutionary, primatological, cognitive-scientific, neuroscientific. Next in line are developmental approaches. How, and when, does morality develop? From what origins can something like morality be constructed?
Alison Gopnik devoted a chapter of The Philosophical Baby to this topic, calling it "Love and Law: the origins of morality". And just the other day, Paul Bloom had an article in The New York Times reporting on the admirable and adorable work being done at the Infant Cognition Center at Yale.
Basically, we used to think (under the influence of Piaget/Kohlberg) that babies were amoral, and in need of socialization in order to become proper, moral beings. But work at the lab shows that babies have preferences for kind characters over mean characters quite early, maybe as early as 6 months, even when the kindness/meanness doesn't affect the baby personally. The babies observe a scene in which a character (in some cases a puppet, in others a triangle or square with eyes attached) either helps or hinders another. Afterwards, they are shown both characters, and they tend to choose the helping one. Slightly older babies, around the age of 1, even choose to punish the mean character. Bloom's article begins:
Not long ago, a team of researchers watched a 1-year-old boy take justice into his own hands. The boy had just seen a puppet show in which one puppet played with a ball while interacting with two other puppets. The center puppet would slide the ball to the puppet on the right, who would pass it back. And the center puppet would slide the ball to the puppet on the left . . . who would run away with it. Then the two puppets on the ends were brought down from the stage and set before the toddler. Each was placed next to a pile of treats. At this point, the toddler was asked to take a treat away from one puppet. Like most children in this situation, the boy took it from the pile of the “naughty” one. But this punishment wasn’t enough — he then leaned over and smacked the puppet in the head.
In a further twist on the scenario, babies (at 8 months) were asked to choose between still other characters, who had either rewarded or punished the behavior displayed in the first scenario. In this experiment, the babies tended to go for the "just" character. This is quite amazing, seeing how the last part of the exchange would have been a punishment (which is something bad happening, though to a deserving agent). It takes quite extraordinary mental capacities to pick the "right" alternative in this scenario.
If babies are born amoral, and are socialized into accepting moral standards, something like relativism would arguably be true, at least descriptively. Descriptively, too, relativism often seems to hold: we value different things, and a lot of moral disagreement seems impossible to resolve. In some moral disagreements, we reach rock-bottom, non-inferred moral opinions, and the debate can go no further. This is what happens when we ask people for reasons: they come to an end somewhere, and if no commonality is found there, there is nothing more to do.
A common feature of the evolutionary, biological, neurological, etc. approaches to morality is that they don't want to leave it at that. If no commonality is found in what we value, or in the reasons we present for our values, we should look elsewhere, to other forms of explanation. We want to find the common origin of moral judgments, if nothing else in order to diagnose our seemingly relativistic moral world. But possibly, this project can be made more ambitious, and claim to found an objective morality on whatever common origins occur in those explanations.
If the earlier view on babies is false, if we actually start off with at least some moral views (which might then be modulated by culture to the extent that we seem to have no commonality at all), and these keep at least some of their hold on us, we do seem to have a kind of universal morality.
We start life, not as moral blank slates, but pre-set to the attitude that certain things matter. Some facts and actions are evaluatively marked for us by our emotional reactions, and can be revealed by our earliest preferences. Preferences can be conditioned into almost any kind of state (even though some types of objects will always be better at evoking them), so it's often hard to find this mutual ground for reconciliation in adults – and that is precisely why it's such a splendid idea to do this sort of research on babies.
”Does it contain any experimental reasoning, concerning matter of fact and existence?” – David Hume
In last week's installment of the notorious radio show that I've haunted recently, I spoke to the lovely lady on my left in the picture below about the use of empirical methods in moral philosophy. The "use of empirical methods" of which I speak so fondly is, on my part, restricted to reading what other people have written, complaining about the experiments that haven't been done yet, and then speculating on the results I believe those experiments (not yet designed) would yield.
Anyway: I have a general interest in experimental philosophy, but I haven’t signed anything yet, you know what I mean? That is: I don’t think (what the host of the radio show wanted me to say) that ”pure” armchair philosophy is uninteresting. Indeed, I believe that any self-respecting empirical scientist ought to spend at least some time in the metaphorical armchair, or nothing good, not even data, can come out of the process.
When coming across a philosophically interesting subject matter (and, let's face it, they're all philosophically interesting, if you just stare at them long enough – much of our discipline is like saying the word "spoon" over and over again until it seems to lose its meaning, only to regain it through strenuous conceptual work), I often find it relevant to ask "what happens in the brain?" What are we doing with the concept? It is obviously not all that matters, but it seems to matter a little. Especially when we disagree about how to analyze a concept, there might be something we agree on. Notoriously, with regard to morality, we can disagree as much as we like about the analysis of moral concepts, but agree on what to do, and on what to expect from someone who employs a moral concept, no matter what their meta-ethical stance. Then, surely, we agree on something, and armchair reasoning just isn't the method to coax it out.
I try to be careful to emphasize that empirical science is relevant to value-theory, on my view, given a certain meta-ethical outlook – given a particular way to treat concepts. If we treat value as a scientific problem, what can be explained? Since there is no consensus on value, we might as well try this method. Whether we should or not is not something we can assess in advance, before we have seen what explanatory power the resulting theory turns out to have.
Treating "value" as something to find out about, employing all the knowledge we can gather about the processes surrounding evaluation etc., is, in effect, to "Quine" it. It seems people don't Quine things anymore – or rather, people don't acknowledge that this is what they're doing. To Quine something is not the same as to operationalize it, i.e. to stipulate a function for the concept under investigation and to say that from now on, I'm studying this. To Quine it is to take into consideration which functions are being performed, which of them have some claim to be relevant to the role played by the concept, and to ask what would be lost, or gained, if we were to accept one of these functions as capturing the "essence" of it. It is to ask a lot of roundabout questions about how the concept is used, what processes influence that use and so on, and to use this as data to be accounted for by an acceptable theory of it.
A lamp, David Brax (yours truly) and Birgitta Forsman (I cannot speak for her, but I'm sure she likes you too). The lamp did not volunteer any opinions on the subject matter, but has offered to participate in a show on a certain development in 19th-century philosophy. Photo: Thomas Lunderqvist
I am a fan of keeping options open, and of not leaving open options undeveloped. When we find ourselves with conflicting intuitions in situations where intuition is our only ground for theoretical decisions, it is basically an act of charity to develop a theoretical option anyway, in case someone will find it in their heart to – as we so endearingly say in philosophy – entertain the proposition. (My attitude here, you might have noticed, runs a bit counter to my lament about a certain trope in the post below.)
Store that in some cognitive pocket (memory, David, it’s called ‘memory’ ) for the duration of this post.
Response-dependency. Some concepts, and some properties, are response-dependent. That means that the analysis of the concept, and the nature of the property, is at least partly made up by some response. To be scary, for instance, is to have a tendency to cause a fear-response. There is nothing else that scary things have in common. Things are different with the concept of danger: dangerous things are usually scary too, but that is not their essence. Their essence consists in the threat they pose to something we care about, or should care about. Fear is usually a reasonable response to danger; fear is usually how we detect it. Danger might still be response-dependent, but then fear is not the crucial response.
Response-dependency accounts have been developed for many things: quite sensibly for notions such as being disgusting; famously by Hume for aesthetic value; and arguably first and foremost, under the name "secondary qualities", by Locke – and unceasingly since by other philosophers – for colours. Morality, too, has been judged response-dependent, and a great many things have been written about whether this amounts to relativism or not, and whether that would be a point against it.
Response-dependency accounts of value and of moral properties have a lot going for them. Famously, beliefs about moral properties are supposed to involve some essential engagement of our motivational capacities. And if the relevant property is one our knowledge of which depends on some motivational response, say an emotion, then we're all set. Further, if these concepts/properties are response-dependent, this would account for many instances of moral disagreement – we disagree on moral issues when our moral responses differ, and when the difference is not accounted for by a difference in other factual beliefs and perceptions. If we accept that responses are all there is to moral issues, we might have to learn to live with the existence of some fundamental disagreements between conflicting responses, and moral views. Relativism follows if there is nothing that moral responses (for the most part, the moral response involved is some kind of emotion) track. What the account gives us is a common source of evaluative meaning, located in the fact that we all share the same basic type of responses. We just disagree about what causes those responses, and about what objects merit the response.
If we insist on locating the value (moral or otherwise) in the object/cause of the response (note that the object and the cause might be different things – we might project an emotion onto something that did not cause it; this happens all the time), the response-dependency account results in a form of moral relativism. If one finds relativism objectionable, and there is no way to provide firm moral properties in the cause/object structure of typical moral responses, one might therefore want to reject response-dependency wholesale. I think this is a mistake. If we agree that there is such a thing as a moral/evaluative response, and this response is something that all conceptually competent evaluators have in common, we have our common ground right there, in the response. This is not the kind of relativism where we find that seemingly disagreeing parties are actually speaking about different things altogether. In fact, there is a common core of evaluative meaning, and that meaning is provided by the relevant response.
So, to the suggestion then, our theoretical option left open for development: that moral/whatever value is in the response, not in the object of the response. This seems to be the obvious solution once we've established that the value is metaphysically dependent on the response, and that there is no commonality in what causes the response. If the responses themselves can seem valuable, and emotions usually do, we should develop that option, and disregard the fact that we tend to project value onto the objects of mental states. (If we keep on, as I do, and argue that the evaluative component of any emotion consists in its valence, and valence is cashed out in terms of pleasantness–unpleasantness, we have a kind of hedonism on our hands, but this option is open for any response you like.)
Nothing is metaphysically more response-dependent than the responses themselves, and yet, this move avoids any objectionable form of relativism, while explaining the appearance of relativism. And, given that the response is motivationally potent, we have an inside track to the motivational power of moral/evaluative properties/beliefs. This, I’d say, makes it a theoretical option worth pursuing.
Lately, the question "what is your dissertation about?" has been asked of me somewhat more frequently. Forty-five minutes later, I usually get the impertinent question whether I'd mind making the answer a bit shorter. Well, I do mind, but all right. So I end up experimenting with different short versions, none of which is unqualifiedly true. But then again, to be unqualifiedly true is pretty much too much to ask of any theory. After all, my argument is that hedonism is true enough. Anyway: I thought I'd give it another go, and give you, in less than, say, 200 words, the gist of my theory of value. Ready? Here we go:
What’s good? Opinions diverge, so we turn to the more basic question: What do we say when we say that something’s good? What would make that statement true? Theories are wildly at odds with each other. What to do? It seems we are dealing with different uses of the term ‘good’, and we must decide how to treat this problem. The first decision is to look for their common origin, so we can say that these uses are variations on a common theme. The other decision is to treat goodness as a natural property.
Whatever value is, it must correspond to what we believe about it. We might be mistaken about it, but we cannot be totally wrong. Value should fit with our beliefs about value and be part of the causal explanation of those beliefs. I argue that pleasure fits with many of our value beliefs, especially regarding how value relates to motivation, and it is universally believed to be valuable. Hedonic processes are also a key part of the causal explanation of our evaluations, and of our evaluating abilities. This means that the common beliefs that the theory does not make true, it can explain away. Pleasure is value.
How did I do?