Most blogs are abandoned within a year, according to statistics that I just made up: due to distractions, lack of readers or time, or just the failure to make blogging part of the unforced everyday writing that just sort of happens in a writing person’s life. The sort of writing that doesn’t take more time to produce than it does to read. Just thinking in a slightly more public medium.
What I’m saying is that I intend to, and kind of think I ought to. Just not when it competes with the much superior activity of hanging with this guy.
So ethics month(s) just ended. On Thursday, I sent around 50 critically acclaimed essays on applied, normative and meta-ethics back to their authors. This leaves me pondering the proposition that there is now a group of people, the sort of university-educated people who invariably turn out ruling the world, the media, the arts and so on, whom I taught ethics. For future readers of the web archives: I’m sorry. Alternatively: you’re welcome.
Normatively, they are all over the place. Utilitarianism is probably the strongest contender, but not by a majority vote. Meta-ethically the interest has a clear tendency towards epistemology, and a weaker tendency towards coherentism. In general, they are very much able to relate their moral judgements in particular cases not only to the normative theory they favor, but also to other theories they know are held by other people. Let’s go out on a limb and call it a good thing.
Those propositions are now passed on to you, as I turn my attention to other things, assuming other perspectives. I’m on parental leave. My main objective for the next few months is play. There will be drumming, there will be crawling and toddling, there will be incomprehensible talk and the provision of feedback. There will, in all likelihood, be a sharp decline in vocabulary, grammar and level of abstraction in the blog posts to come.
How do you get to do philosophy? As a profession, I mean. How do you go from being genuinely interested in the stuff to doing it as a full-time job? For an initiated look at the profession, and some perfectly juicy gossip to boot, you would do well to turn to Leiter Reports. If you happen to be female, or actually: no matter what you happen to be, you might want to look into the excellent but sometimes somewhat discouraging Being a Woman in Philosophy. Talking to members of staff at your local, or any, department of philosophy is generally advisable. This being said, my academic advisor, when I asked about the prospects for a career in philosophy, said that it’s perfectly possible. If you’re independently wealthy.
Arriving at university very much a Man with Ideas, I found the question mostly a practical one. How could I get to spend my days working on those ideas, ideally while getting paid and duly appreciated?
I find, and others have shared this experience, that it goes something like this. You read and you listen and you talk a lot. You are quite impressed, but not discouraged, by the amount of excellent thinking that has already taken place. You’re not discouraged, either, by the fact that the history of philosophy has an unfair advantage: Kant, the lucky s.o.b., had the good fortune of being born into a world where Kantian ideas had not already been developed and twisted and turned every which way. You find something that’s particularly interesting. Your teacher, if you have one, has probably given you an assortment of topics to choose from, and you look around for stuff that treats the topic you find yourself drawn to.
And then it happens. You find that something is missing. Despite initial appearances, there are philosophical theories that have not been developed yet. This is the opening. If you are really lucky, the position is not only unoccupied, but also quite plausible. And then you go about developing that missing theory, defending its merits over other, neighboring theories. More likely than not, you’re not going to convince anyone at this stage, but you may earn their respect, and the opportunity to spend a couple of years making the best possible case for your theory.
Along the line, as your vacuuming of the field progresses, it is likely that you will find something that is awkwardly close to what you wanted to say to begin with. But by then you’re already so entrenched in it, from head to toe, that you go on to develop an enhanced variation of it anyway.
For me, the missing theory bit happened when I was dealing with preferentialism. The ongoing debate was between objective and subjective satisfaction versions of the theory:
Either it is good that what you want to be the case is the case, whether you know it or not, or the only thing that matters is that you think that your wish is satisfied: what is good is the conjunction of the desire and the belief that it is satisfied. While the latter version struck me as more plausible, the mere conjunction of desire and belief did not seem to me to be close enough. It’s not enough that I have a desire that P while having the belief that P is the case. I may have both, and fail to make the connection, so to speak. Neither would it be sufficient that I have the desire that P and the belief that this desire is satisfied. No, I thought there should be a more concrete relation between the two. Beliefs and desires being mental states, they had to actually meet up and form a new mental state: the desire being satisfied as a result of its ingoing components meeting up was the real value-bearer here.
The ultimate result of that train of thought became part 1 of the dissertation, the theory of pleasure.
First of all: I like Susan Blackmore. In fact, I met her once, at the first proper conference I ever attended (the ”Toward a Science of Consciousness” conference in Tucson 2004, hosted by David ”madman at the helm” Chalmers). She came and sat next to me during the introductory speech and asked me what had just been said. I said I hadn’t paid that much attention, to be honest, but I seemed to remember a name being uttered. We then proceeded to reconstruct the message and ended up having a short, exciting discussion about sensory memory traces. From now on, I remember thinking (having to dig deeper than just the sensory memory traces, which will all have evaporated by now), this is what life will be like. It hasn’t been, quite.
ANYWAY: So I like Susan Blackmore, but today, I’m using her to set an example.
I recently had occasion to read her very short introduction to consciousness, in which she takes us through the main issues and peccadilloes in and of consciousness research. One of the sections deals with change blindness, and she describes one of the funniest experiments ever devised: the experimenter approaches a pedestrian (this is at Cornell, for all of you looking to make a cheap point at a talk) and asks for directions. Then two assistants, dressing the part, walk between the experimenter and the pedestrian carrying a door. The experimenter grabs the back end of the door and wanders off, leaving the pedestrian facing one of the assistants instead. And here’s the thing: only 50% of the subjects notice the switch. The other 50% keep on giving directions to the freshly arrived person, as if nothing had happened.
This is a wonderful illustration of change blindness, and it’s a great conversation piece. You can go ahead and use it to illustrate almost any point you like, but here comes the problem: there is a tendency to overstate the case, especially among philosophers (I’m very much prone to this sort of misuse myself), due to the fact that we usually don’t know, or don’t care much, about statistics. Blackmore ends the section in the following manner:
When people are asked whether they think they would detect such a change they are convinced that they would – but they are wrong.
We have a surprising effect: people don’t notice a change that should be apparent, and as a result you can catch people having faulty assumptions about their own abilities, and no greater fun is to be had anywhere in life. But Blackmore makes a mistake here: people would not be wrong. Only 50% of them would be. It’s not even a case of ”odds are, they are wrong”.
I would use this as an example of some other cognitive bias – something to do with our tendency to remember only the exciting bit of a story and then run with it, perhaps – only I’m afraid of committing the same mistake myself.
(Btw: I also considered naming this post ”so Sue me”)
”Does it contain any experimental reasoning, concerning matter of fact and existence?” – David Hume
In last week’s installment of the notorious radio show that I’ve haunted recently, I spoke to the lovely lady on my left in the picture below about the use of empirical methods in moral philosophy. The ”use of empirical methods” of which I speak so fondly is, on my part, restricted to reading what other people have written, complaining about the experiments that haven’t been done yet, and then speculating on the results I believe those experiments (not yet designed) would yield.
Anyway: I have a general interest in experimental philosophy, but I haven’t signed anything yet, you know what I mean? That is: I don’t think (what the host of the radio show wanted me to say) that ”pure” armchair philosophy is uninteresting. Indeed, I believe that any self-respecting empirical scientist ought to spend at least some time in the metaphorical armchair, or nothing good, not even data, can come out of the process.
When coming across a philosophically interesting subject matter (and, let’s face it, they’re all philosophically interesting, if you just stare at them long enough. Much of our discipline is like saying the word ”spoon” over and over again until it seems to lose its meaning, only to regain it through strenuous conceptual work) I often find it relevant to ask: what happens in the brain? What are we doing with the concept? It is obviously not all that matters, but it seems to matter a little. Especially when we disagree about how to analyze a concept, there might be something we agree on. Notoriously, with regard to morality, we can disagree as much as we like about the analysis of moral concepts, but agree on what to do, and on what to expect from someone who employs a moral concept, no matter what their meta-ethical stance. Then, surely, we agree on something, and armchair reasoning just isn’t the method to coax it out.
I try to be careful to emphasize that empirical science is relevant to value theory, on my view, given a certain meta-ethical outlook: a particular way to treat concepts. If we treat value as a scientific problem, it becomes something that can be explained. Since there is no consensus on value, we might as well try this method. Whether we should or not is not something we can assess in advance, before we have seen what explanatory power the theory comes up with.
Treating ”value” as something to find out about, employing all the knowledge we can gather about the processes surrounding evaluation etc., is, in effect, to ”Quine” it. It seems people don’t Quine things anymore, or rather: people don’t acknowledge that this is what they’re doing. To Quine something is not the same as to operationalize it, i.e. to stipulate a function for the concept under investigation and to say that, from now on, I’m studying this. To Quine it is to take into consideration what functions are being performed, which have some claim to be relevant to the role played by the concept, and to ask what would be lost, or gained, if we were to accept one of these functions as capturing the ”essence” of it. It is to ask a lot of roundabout questions about how the concept is used, what processes influence that use and so on, and to use this as data to be accounted for by an acceptable theory of it.
A Lamp, David Brax (yours truly) and Birgitta Forsman (I cannot speak for her, but I’m sure she likes you too). The lamp did not volunteer any opinions on the subject matter, but has offered to participate in a show on a certain development in nineteenth-century philosophy. Photo: Thomas Lunderqvist
My first all too serious philosophical essay was on Heidegger (well, actually, I did a number on the ”positionality” concept in the work of Sartre earlier still, but it would take an insane amount of scholarly obsession for anyone to ever dig that up). The nicest thing said about it was that it is ”not as incomprehensible as these things usually are”. The literature I discussed I found at the University Library, actually going through a number of philosophy journals. I had a computer at the time, which was just barely hooked up to the internet, but I didn’t use it for literature searches, just for writing and the occasional email. I spent a lot of time thinking about the subject of my essay, and used a very limited number of sources.
A year or so later, while working on a different essay, I discovered JSTOR, and for about a month and a half, the printer didn’t get a rest. It suddenly dawned on me that everything interesting had been written about, at length, from almost every perspective, and the goal to find a theoretical position that was not currently occupied, and then to occupy it, suddenly struck me as much more difficult than I’d imagined. I spent the next few years reading more, too much probably, and thinking and writing less.
I used to do all my best thinking during walks and while running (or, derogatorily: ”jogging”). Usually in very dull environments, so as not to distract from the thinking. Then I got an iPod, and started to listen to lectures, podcasts and audiobooks during those walks and runs. (iTunes U has some great stuff; the podcasts from Nature, from TED and from the RSA are excellent. BBC Radio 4’s ”Thinking Allowed” and ”In Our Time” just have me in stitches.) And instead of thinking about what I had just heard, I tended to listen to another lecture, podcast or audiobook. Similarly with papers, even books. Before I start working on this chapter, I argued, I just need to read this paper, or that book. One wouldn’t like to be caught out ignorant, now, would one? No, one would not.
The all too great availability of other people’s writing and thinking made me quite heavy on the consumer side of science and philosophy, and much less of a producer. It is, of course, a great thing to learn, and to listen, but in order to become a philosopher, it is necessary to start doing it for yourself. To actually not care, for a bit, whether someone has written that same thing before, and been better read while doing so.
My dissertation took longer than it should have, and I know people who have been, and still are, in that state where they just can’t seem to finish their texts. Partly, I believe, for this reason. They are excellent, well-read consumers and thoughtful, accomplished critics, but seem almost to have forgotten how to actually do philosophy. (The dominance of ”critical” philosophy among published articles is a testament to how common this tendency is.) The kind of second-order thinking where you are constantly reflecting on how what you are writing relates to what other people have written tends to stand in the way of confident, genuinely original and interesting work. At some point, you just have to get out of reading mode, and enter writing mode.
I am considering banning the phrase ”one might argue that…” from all my future writings. Since these writings have a better than average chance of being writings in philosophy, this ban would be the equivalent of Perec’s banning the letter ‘e’ from that novel you know you ought to have read already.
You see, it’s such a useful phrase, and it rids you of all responsibility for anything that follows. No referee, however anonymous, is likely to insert the comment ”no, one might not”. Since philosophy consists to such a large extent of stating propositions to which one does not assent, dispensing with this trope is perhaps masochistic of me, and might well prove near impossible, but I believe it would be an improving exercise.
I don’t sing in the shower. It’s not that I don’t sing, period; I will join in when people let me, and have even been known to attempt the occasional added fifth or fourth or thereabouts. It’s just not a shower thing. I do, however, prepare lectures in that setting. Trying out how they sound and so on. Thinking is usually a rather unstructured, associative affair, and even writing, in these post-typewriter days, has lost some of its definitiveness (as these rather freefloating associations of mine amply demonstrate), but Thinking Out Loud helps in getting the clarity and simplicity required for getting the ideas across. Just ask the neighbors in the poorly soundproofed apartment next to ours.
When philosophical questions are being asked, my go-to answer is usually ”‘Yes’, with an ‘if’, or ‘No’, with a ‘but’”.
Do we have free will? Well, yes, if you’re prepared to accept being within a certain range of mental parameters as having free will. No, if you are looking for something else entirely. Then there is no free will, but there is a pretty good substitute, one that quite closely fits what we expect from free will, and makes it possible to distinguish between free and unfree acts in a way corresponding to our intentions in using that term. And so on for God, virtues, consciousness, value, causation, etc.
As the discerning reader will no doubt have noticed by now, the ”yes” and ”no” answers can be roughly the same: the qualifications push them into roughly the same position. Which is why one should never rely too much on the fact that one philosopher says ”yes” and another ”no” to a particular question. Nor, of course, should one rely on the fact (if it ever is one) that two philosophers seem to agree. But mostly, one should take to heart the words of Robert Webb, playing the part of the raging host of the radio show ”Big Talk”:
Guest (being asked whether there is a God): Well, there is no yes or no answer.
Webb: What? I can think of two yes or no answers right off the top of my head!