Perception: a discriminating capacity or a predictive capacity?

By John Joseph Dorsch

Abstract: Schellenberg (2016) believes that perception is an essentially discriminating capacity. On the basis of this belief, she develops the particularity argument, which holds that perceptual states are constituted by particulars. The first premise of the argument reads “If a subject S perceives a particular α, then S discriminates and singles out α” (p. 11). This premise can be rejected by arguing that it is possible to perceive a ganzfeld, viz. a homogeneous field without any particulars to discriminate. Schellenberg dismisses this counterargument by referring to the ganzfeld effect, which she describes as “experiencing a sense of blindness”, concluding that “we cannot in fact see a completely uniform wall that fills out our entire field of vision” (p. 13).

I discuss two challenges for Schellenberg’s dismissal of the ganzfeld. The first challenge is this: participants in ganzfeld experiments report experiencing a sense of blindness only after approx. 30 minutes of exposure (Wackermann et al. 2008). Surely it is plausible that viewers see the ganzfeld during this half hour before the sense of blindness is reported to occur. The second challenge is this: the ganzfeld effect only occasionally results in viewers reporting a sense of blindness; far more often, viewers report seeing hallucinations. I show that Schellenberg’s theory of perception as an essentially discriminating capacity cannot adequately respond to this challenge and argue that a more plausible alternative is the predictive processing theory of perception/action (Hohwy 2013; Clark 2016). I conclude with possible implications for Schellenberg’s particularity argument and her belief that perception is an essentially discriminating capacity.

 

Introduction

The ganzfeld effect can be easily reproduced. Take a white ping-pong ball and slice it in half; now place both halves over the viewer’s eyes, making sure her visual field is completely covered; set a light source in front of the viewer and adjust the illumination to be as uniform as possible. Within a short period of time, the viewer should begin hallucinating just like participants in ganzfeld experiments. These hallucinations typically develop in complexity throughout ganzfeld exposure; most participants report seeing a dense, monochromatic fog at first (Metzger 1930). As time progresses, more complex hallucinations appear, such as dreamlike imagery of people and places; sometimes participants report seeing “blackness” or experiencing a “sense of blindness” (Metzger 1930; Cohen 1960; Gibson 1979; Pütz 2006; Wackermann et al. 2008).

Schellenberg draws upon the ganzfeld effect to bolster her belief that perception is an essentially discriminating capacity. She begins by claiming that if you perceive something, say, a table, it is only because you discriminate the table from its surround. She concludes from this that perceiving a homogeneous field, a ganzfeld, is impossible, since there would be nothing in a ganzfeld to discriminate. She believes further that this conclusion is supported by empirical evidence from ganzfeld experiments: some participants report experiencing a sense of blindness. That said, the ganzfeld effect is constituted by more than experiencing a sense of blindness; more often, ganzfeld viewers report seeing hallucinations.

The goal of this paper is to argue that, contrary to Schellenberg’s belief, you can see a ganzfeld. One possible challenge, which I call the argument from delay, is that a considerable amount of ganzfeld exposure is required before viewers report experiencing a sense of blindness. However, I will argue that Schellenberg may sufficiently counter this argument, provided an appeal is made to a system of object indexes (Leslie et al. 1998; Scholl and Pylyshyn 1999; Carey and Xu 2001; Scholl 2007). In the second section, I anticipate Schellenberg’s response to the argument from hallucination, i.e. the argument that participants in ganzfeld experiments more commonly report seeing hallucinations than experiencing a sense of blindness. I show, however, that this anticipated response fails to explain how hallucinations emerge. Therefore, I argue that an alternative theory of perception, called predictive processing, is needed to account for the ganzfeld effect. In the conclusion, I consider what this discussion means for Schellenberg’s particularity argument and her belief that perception is an essentially discriminating capacity.

 

1. Argument from Delay

I will refer to Schellenberg’s view of perception by the neatly expressed maxim: no perception without discrimination, which shares similarities with Merleau-Ponty’s maxim: no perception without a figure-background structure. Merleau-Ponty would claim that perception can occur only on the basis of some discriminated figure vis-à-vis some discriminated background. Thus, Merleau-Ponty (1945) also denies the possibility of perceiving a ganzfeld: “A truly homogeneous area, offering nothing to perceive, cannot be given to any perception” (p. 4). In responding to arguments based on the ganzfeld, Merleau-Ponty has avenues at his disposal that Schellenberg does not. Merleau-Ponty might claim that even when viewing a ganzfeld, one still perceives a figure-background structure: the figure is the ganzfeld, while the background is provided by other sensory modalities, such as proprioception. Schellenberg cannot pursue this route, however, because her argument focuses on a single modality. Therefore, when confronted with arguments based on the ganzfeld, she emphasizes that viewers report experiencing a sense of blindness during ganzfeld exposure. For this section, I assume that a sense of blindness is the only ganzfeld effect viewers report; but assuming this does not remove the need to explain the delay between ganzfeld onset and ganzfeld effect (approx. 30 minutes).

One might respond by claiming that the visual system requires time before the ganzfeld is perceived as a truly homogeneous field. This means that between ganzfeld onset and ganzfeld effect there is an interval in which the visual system can still discriminate. Since the ganzfeld is a homogeneous field with nothing to discriminate, the source of discriminata cannot be the visual field. Instead, the source must be the visual system itself. Thus, I propose that the visual system continues to discriminate its own so-called object indexes (Leslie et al. 1998; Scholl and Pylyshyn 1999; Carey and Xu 2001; Scholl 2007).

Conjectured to underpin visual perception, object indexes function like pointers for perceptible objects by representing them and tracking them in the visual field. Imagine the following scenario. You see a completely homogeneous field before you. There are no differences in the field to discriminate, but you can still single out points in the field. Though each point appears identical to the others, each point is differentiated by the direction of your visual focus; e.g. you observe the top-left area, then you observe the bottom-right area. While shifting your focus from one area to the next, you continue to discriminate. It is claimed that these discriminated points are underpinned by a system of object indexes.

 

Though this response accounts for the delay in the ganzfeld effect, it fails to account for its very occurrence. An event is needed wherein even discrimination underpinned by object indexes is no longer possible. Recall the scenario above. Notice that in order to discriminate the left from the right, or the top from the bottom, some distance between these areas must be perceived. And yet, it seems difficult to imagine how distance can be perceived in the absence of any perceptible difference, such as during ganzfeld exposure. This argument is consistent with empirical evidence from ganzfeld experiments: as ganzfeld exposure increases, participants’ ability to judge distance becomes progressively diminished (Metzger 1930). Therefore, it seems plausible that discriminating between top and bottom, left and right, becomes increasingly difficult as discerning the distance between these areas becomes increasingly difficult. Possibly, during prolonged ganzfeld exposure, the system of object indexes that enables perception to continue operating as a discriminating capacity fails once the object indexes themselves, which depend on some perceptible distance, can no longer serve as aids to discrimination.

In responding to the argument from delay, Schellenberg, believing perception to be an essentially discriminating capacity, may appeal to a system of object indexes to account for the delay between ganzfeld onset and ganzfeld effect. This response thus posits two classes of discriminating capacities: allocentric and egocentric (see Klatzky 1998). The allocentric capacity discriminates based on a world-centered reference frame, i.e. how things in the world relate to each other. The egocentric capacity discriminates based on a self-centered reference frame, i.e. how the self relates to itself, the visual field and, if possible, things in that field. In the case of ganzfeld exposure, only the egocentric capacity can be employed, so points in the visual field are discriminated from other points by shifts in visual focus, all of which are underpinned by a system of object indexes, which is integrated into the visual system itself. That is not to say that object indexes only represent egocentrically; object indexes can represent allocentrically as well (Alæs et al. 2015). Instead, I mean to say that if things in the visual field can only be discriminated egocentrically, such as during ganzfeld exposure, this egocentric discriminating capacity is enabled by a system of object indexes. Once the discriminata of the egocentric capacity are no longer available, the ganzfeld fills the visual field, and, since perceiving a homogeneous field is believed to be impossible, the viewer experiences a sense of blindness. That said, the ganzfeld effect is more than a sense of blindness; more often, ganzfeld viewers report seeing hallucinations.

 

2. Argument from Hallucination

Discovering how ganzfeld exposure causes viewers to hallucinate has interested psychologists and philosophers for nearly a century now. In the late 1950s, it was postulated that hallucinations emerge as the result of the breakdown of the perceptual system. Discussing possible reasons for the ganzfeld effect, Cohen (1957) conjectures “…that the perceptual mechanism has evolved to cope with a differentiated field, and, in the absence of differentiation, there is a temporary breakdown of the mechanism” (p. 407). I believe Schellenberg would respond to the argument from hallucination in a similar fashion: ganzfeld hallucinations actually support the view that there is no perception without discrimination; since there are no discriminata, it is plausible to think, in light of the emergence of hallucinations, that employing perceptual capacities generates discriminata. Though this response indicates that ganzfeld exposure leads to the generation of hallucinations, it does not explain how hallucinations are generated.

The theory of perception called predictive processing postulates that perceptual systems are essentially predictive capacities. If the environment is noisy, such that little in the environment can be reliably predicted, say, in a dense fog, where just about anything can emerge, then predictions about the environment will be processed as more reliable than sensory data from the environment. When the conditions are such that the sensory data are weighted as very unreliable, as is thought to be the case during ganzfeld exposure, the generation of predictions can lead to the emergence of hallucinations. So if the predictive processing theory of perception is correct, hallucinations do not result from the breakdown of the perceptual system but from its proper functioning. In other words, predictive processing maintains that hallucinations emerge during ganzfeld exposure because you can and do see a homogeneous field.

A predictive-processing perceptual system copes with the environment by continuously estimating its own uncertainty regarding sensory data. Through a balancing act of bottom-up influences, such as sensory data, and top-down influences, such as predictions, the perceptual system discloses percepts at the juncture between what is sensed and what is predicted (Hohwy 2013; Clark 2016). If sense data are processed as uncertain, less weight is given to errors resulting from the sense data: the less weight given to the errors, the less impact the data have on what is perceived. In other words, if the environment is processed as very unreliable, then the sensory data of the environment will have little impact on what is perceived. Instead, in these error-prone environments, more weight is given to top-down influences that generate predictions.
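To make the weighting idea concrete, here is a minimal sketch of the precision-weighted update standardly used in predictive-coding models; the notation is an illustrative assumption on my part, not a formalism taken from Hohwy, Clark, or Schellenberg:

\[ \hat{\mu} = \mu_{p} + \frac{\pi_{s}}{\pi_{s} + \pi_{p}}\,(s - \mu_{p}) \]

where \(\mu_{p}\) is the predicted value, \(s\) the incoming sensory sample, and \(\pi_{p}\) and \(\pi_{s}\) the precisions (inverse variances) assigned to the prediction and the sensation. When sensory precision \(\pi_{s}\) is estimated to be very low, as it plausibly is during ganzfeld exposure, the prediction error \((s - \mu_{p})\) is almost entirely discounted and the resulting percept \(\hat{\mu}\) is dominated by the prediction.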

 

This is how predictive processing explains the emergence of hallucinations during ganzfeld exposure. You might imagine seeing nothing but blue sky on a cloudless day. In that event, you would see a natural ganzfeld. That aside, the ganzfeld is a rare natural occurrence. Since its occurrence is so uncommon, sensory data in the ganzfeld are processed as very unreliable. This means that the sensory data are inhibited from revising top-down predictions. This suppression of bottom-up sensory data leads to an increase in the activity of top-down prediction data. Normally, predictions are regulated by sensory data, but, during ganzfeld exposure, errors resulting from sensory data are weighted so low that predictions begin to progressively regulate themselves and hallucinations emerge as a result—like a self-fulfilling prophecy.
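To illustrate this self-regulating dynamic, the following toy simulation is a sketch of my own and not a model drawn from the cited literature; the function name, noise level, and parameter values are assumptions chosen purely for illustration. It shows that when sensory precision is set very low, the running estimate follows its own drifting predictions rather than the constant, featureless input.

import random

def simulate_percept(sensory_precision, prior_precision=1.0, steps=50):
    # The "percept" is the system's current best estimate; the ganzfeld input is
    # a constant, featureless signal fixed at 0.
    percept = 0.0
    signal = 0.0
    for _ in range(steps):
        # The top-down prediction drifts slightly on its own at each step.
        prediction = percept + random.gauss(0, 0.3)
        # Precision-weighted update: the gain determines how strongly the
        # prediction error (signal - prediction) corrects the prediction.
        gain = sensory_precision / (sensory_precision + prior_precision)
        percept = prediction + gain * (signal - prediction)
    return percept

print(simulate_percept(sensory_precision=5.0))   # reliable input: the estimate stays pinned near the signal
print(simulate_percept(sensory_precision=0.01))  # ganzfeld-like input: the estimate wanders with its own predictions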

This explanation can provide a response to the argument from delay. As mentioned above, the hallucinatory imagery often begins as a dense, monochromatic fog that slowly fills the visual field. Over time, more complex images appear within or outside of the fog. So the delay can perhaps be better understood as a slow progression from simple to more complex hallucinations as ganzfeld exposure increases. Thus, it is plausible to think that this progression develops in parallel with predictions generated by top-down processing: when predictions adapt to the unreliable environment and begin to regulate themselves, more complex hallucinations emerge.

In addition to the argument above, the predictive processing explanation of the emergence of hallucinations during ganzfeld exposure is supported by neurological evidence. Pütz (2006) investigated the EEG correlates of ganzfeld-induced imagery. When participants reported hallucinations, Pütz discovered an increase in alpha wave activity, which suggests “the retrieval, activation, and embedding of memory content in the ganzfeld imagery” (p. 175) and “[the inhibition] of the processing of optical and acoustical sensory input”, which “needs to be suppressed to allow for internally directed attention” (p. 177). This “internally directed attention” corresponds to the ganzfeld-induced mental state in which hallucinatory imagery emerges. If you assume a correlation between the phenomenology of experiencing hallucinatory imagery and the observed electrical activity in the brain, then it is plausible to think that Pütz’s findings support the conclusion that hallucinations emerge as the result of higher-level processing coupled with the inhibition of lower-level processing. Thus, regarding ganzfeld exposure, the case for predictive processing is supported by these findings.

 

Conclusion

I would like to summarize what this discussion means for Schellenberg’s particularity argument and her belief that perception is an essentially discriminating capacity. The alternative theory of perception, predictive processing, is able to account for hallucinations during ganzfeld exposure, whereas Schellenberg’s theory of perception cannot. One conclusion to be drawn from the predictive processing theory of perception is that we can and do see a homogeneous field. If this is correct, then the first premise of Schellenberg’s particularity argument is false and, furthermore, doubt is cast on her belief that perception is an essentially discriminating capacity; instead, perception may be an essentially predictive capacity. However, predictive processing, if correct, would need to explain why some ganzfeld viewers experience a sense of blindness, which I find difficult to account for without appealing to perception as an essentially discriminating capacity. So regarding future investigation, perhaps a unification of the two theories should be sought.

Forms of subjective bodily experience

 

By Masa Urbancic

Introduction

In our everyday lives, we experience and relate to our bodies in a variety of ways. There are moments when the body lies at the forefront of our attention, whereas in other circumstances it is at the back of our minds. In this paper, I will discuss Legrand’s (2007) differentiating modes[1] of subjective bodily experience and argue that observation of one’s own body does not always imply that the body is experienced as an object; the body can thus be experienced as a subject even when observed. The latter experience hinges upon a different kind of observation. It is essential to note that this account does not contradict Legrand’s forms of bodily experience but adds an important point that her account omits.

Firstly, I will introduce Legrand’s modes of bodily experience. Secondly, I will proceed with my argument concerning different kinds of observation and show that the body can be experienced as a subject when it is observed. Lastly, the argument will be extended to the case of expert dancers, which Legrand also draws on in her distinction of different modes of subjective bodily experience.

Legrand’s (2007) forms of subjective bodily experience

Based on empirical and phenomenological explorations of self-consciousness, Legrand argues that there are observational reflective[2] and non-observational pre-reflective forms of self-consciousness. Proceeding from this starting point, she distinguishes four different ways in which the body can be experienced from a first-person stance (p. 493). On one side of the spectrum lies the opaque body, where the body is taken as the object of our attention. Here the body is observed and “one does not look through [the body], but at [the body]” (p. 500, my emphasis). On the other side lies the invisible body, which corresponds to the state where one is unconscious of the body and the latter is not experienced at all, such as in the case of loss of proprioception (p. 500). The other two modes of bodily experience lie between these two ends of the spectrum; both fall under the sphere of bodily pre-reflective self-consciousness, and this is how most people experience their bodies in everyday life (ibid.). They are divided between the pre-reflective experience of the body (the performative body) and the pre-reflective bodily experience of the world (the transparent body) (p. 500).[3] The performative body is how expert dancers normally experience their bodies: the body lies at the front of one’s experience, but it is not being reflected upon or observed (Legrand 2007, 501-2). The body is thus experienced as a subject-agent (p. 506), and the emphasis in this mode lies with the experience of the body in a pre-reflective, non-observational way (ibid.).

The other is the transparent body, where the focus is directed at the experience of the world “in a bodily way” (p. 506, emphasis in original). In other words, the experience of the world is given through this transparent body – one looks “through it to the world” (p. 504, emphasis in original). In this mode, the body is also experienced as a subject that perceives and acts and is experienced as being in the world (p. 504), but the body is not positioned at the front of one’s experience in this mode. In both modes therefore, the body is not seen as an “object of identification” (p. 506), but is experienced as a subject on a pre-reflective level.[4]

The body as observed and experienced as a subject

In this part, I will attempt to show that the body can be experienced in a way different from the ones mentioned above. I concur with the division of possible bodily experiences into reflective and pre-reflective, but I argue that observation of one’s body, though it sometimes does, does not always imply that the body is experienced as an object. The attempt will be to demonstrate that the body can be observed while simultaneously being experienced as a subject in the world; this is an important addition to Legrand’s account. Legrand’s use of the word observation is not clear enough, and it seems to be mostly associated with eyesight (e.g. she uses the word look – one looks at or through the body (p. 500)). It is my proposition that different kinds of observation are responsible for whether the body is experienced as a subject or an object.

This subject-object perception, I argue, depends on the angle one takes towards the body, i.e. where the locus of attention is and what direction it takes from there. At least in our culture, observation is usually associated with eyesight. In this sense, when I look down at my body, my attention travels from my eyes onto (the surface of) my body. Hence my body is being perceived as an object, especially if I am consciously concentrating on how to move it, as in the case of a dancer learning new choreography. This is what happens in the case of the opaque body, and it reflects a metaphor we tend to live by: that only by seeing do we know and understand (Lakoff and Johnson 1980, 470).

However, I can deliberately shift my locus of attention from my eyes by positioning it within my body and directing it towards the world. In other words, attention commences inside one’s body, e.g. at the core abdominal muscles, and is directed towards the outer world. In this way, despite the attention being directed at the world, the body is held in awareness at the same time, because the attention originates within one’s body and hence permeates the body and the world. Precisely this change of locus creates the experience of the body as a subject, since the body is no longer being observed from the location of eyesight, but from within itself. In this manner, the body can still be observed. I can still consciously focus on my movements, but if the attention begins within the body, it is no longer an object. This is an addition to Legrand’s view, where the body can only be experienced as a subject at the pre-reflective level, either in the performative or the transparent body mode.

For further discussion, let us take Merleau-Ponty’s example of the touching-touched hand, which Legrand also refers to in her discussion. “If I touch with my left hand my right hand while it touches an object, the right hand object is not the right hand touching” (Merleau-Ponty 2012, 95). What the touched hand is experiencing, Legrand equates with observational consciousness, whereas the experience of the touching hand is consistent with pre-reflective bodily consciousness (p. 499). The touching hand, therefore, is the experience where the body is the “subject of experience and is experienced as such” (ibid.). I agree with Legrand that the touched hand is experienced as an object of observation, be it by the touching hand or the eyes. In the case of the touching hand, I would add that the touching hand is experienced as touching because that is where the sense of agency lies in that moment, i.e. where attention commences. I am in agreement that in this case the direction of attention is not held in conscious awareness and, as such, the touching hand is experienced pre-reflectively. In everyday circumstances, the movement of bodily parts is not done on reflection, i.e. what Gallagher (2005, 74) calls performative awareness. I do not need to specifically reflect that these are my arms that are moving (ibid.). However, it is unclear why the touched hand should be experienced as consistent with observational consciousness, since in our usual experience we are not consciously observing or reflecting on our body either, and yet it is supposedly still experienced as an object. Both the touching and the touched hand are experienced pre-reflectively in this case, so the touched hand cannot be observed if Legrand’s argument follows. That is why I claim it is the interplay between the origin and direction of attention that matters in the bodily experience of subject-object, not simply observation or reflection. Hence, although not usual, the experience of the touching hand is possible when observed as well.

Here a potential objection might be raised: observational bodily experience as a subject is not applicable, since it scarcely resembles the everyday bodily experience of most people. However, in the same way that Legrand’s mode of the performative body is experienced differently by dancers and non-dancers (p. 502), the observational bodily experience as a subject will be experienced differently by those who have practiced the skill of consciously observing their bodies in different ways and those who have not. It is an experience of the body that can be enhanced through training (e.g. meditation or dance), not an experience people would be lacking altogether.

There are two further reasons to find it credible that the body is not necessarily always experienced as an object at a reflective level and can be perceived as a subject. First, the term subject or agent implies action and activity in the world. It therefore appears plausible that the body is experienced as a subject at the reflective level as well, where the body is explicitly present in our conscious experience, which in turn increases the sense of agency in the world. It could be said here that focus on the body takes the attention away from the activity. The reply to this is the second reason: the assertion that the body is viewed as an object at a reflective level carries with it a hidden assumption, namely that the mind is not capable of being aware of more than one thing at a time. However, there is no reason to suppose one could not be aware of both the world and the body simultaneously.

To emphasise once more, there is no denial that there exists a distinction between pre-reflective and reflective levels of bodily experience. But the subject/object dichotomy hinges upon the origin and direction of awareness and its interchange, not simply on reflection. This was demonstrated by showing that the body can be experienced as a subject at the reflective level.

Dancers: Experiencing the body

Finally, I will briefly look at the case of dancers, since they are mentioned by Legrand and deemed more familiar with bodily experiences than most people. In her paper, Legrand maintains that a beginner dancer, or a dancer starting to learn a new choreography, will need to consciously control her movements and position, hence taking an observational perspective on the body (p. 501). She is in the mode of the opaque body. An expert dancer, on the other hand, is embodying the dance, having a pre-reflective experience of the body (ibid.). However, I would argue there is also a state where the body is observed and experienced as a subject, and that this depends on the origin and direction of attention. When a dancer is learning the dance, she is in fact learning how the movements feel from the inside. In other words, she is moving from looking at the body from the eyes to experiencing it from the inside. As a former dancer said to me[5]: “When I’m learning the choreography and I’ve got it, I’m taking it from the outside and putting it on the body; I’m learning how it feels like.” On the other hand, when the dancer performs, her attention seems to be situated within the body:

“Where is your mind when you already know the dance? – In my body. But the attention is not on the movements because that is a separate programme that is running.”

“It’s all one [music and body]. …. You can just walk in, switch the programme on, all you do is feel the body.”

“When I am on stage, my intention will be to perform. Your focus is on how you are performing or telling the story, depending where the story needs to be for the choreography. The body just does it, it is a separate programme.” (personal interview)

In dancing, the attention is not directed towards the movements but towards the feeling of the body and performing. I would argue here that because attention is on how the body feels, it is still held in conscious awareness (whereas the sequence of movements is held pre-reflectively). Therefore, the body is experienced as a subject because the feeling of the body originates within the body. This bodily awareness is also felt in daily life:

“When I am sitting here I am feeling my pelvis and that I was too leaning forward there /…/ so there’s lots of things that while I am talking to you, I am just going around my body… So your body is always in awareness? – Yes, pretty much… not always in awareness, but a lot of times in awareness. I’d say it’s a good 50% of my life in any day that I got one part of my mind on my body.”

“Do you experience it as a subject or an object? – Yes, it’s a subject, it’s not an object. It’s a consciousness… my body is a conscious being that I inhabit. Can you observe it and still experience it as a subject? – Yes.”

From this testimony it seems that the attention is still directed into the outer world in everyday life but simultaneously also encompasses the body in a reflective way, since the attention is on how the body feels. The body, however, is not being looked at, but felt from the inside.

Conclusion

The purpose of this essay was to present Legrand’s different forms of subjective bodily experience and to add that observational consciousness does not imply that the body is always experienced as an object. There is the possibility that the body is observed and simultaneously experienced as a subject. This was presented by demonstrating that a bodily experience as subject or object is determined by different kinds of observation, i.e. by the location from which attention originates and its direction.

 

 

BIBLIOGRAPHY:

Colombetti, G. 2011. Varieties of pre-reflective self-awareness: foreground and background bodily feelings in emotion experience. Inquiry: An Interdisciplinary Journal of Philosophy 54 (3): 293-313.

Gallagher, S. 2005. How the body shapes the mind. Oxford: Oxford University Press.

Lakoff, G. and M. Johnson. 1980. Conceptual Metaphor in Everyday Language. The Journal of Philosophy 77 (8): 453-86.

Legrand, D. 2007. Pre-reflective Self-Consciousness: On Being Bodily in the World. Janus Head 9 (2): 493-519.

Merleau-Ponty, M. 2012. Phenomenology of Perception. Abingdon: Routledge.

Pinku, G. and J. Tzelgov. 2006. Consciousness of the self (COS) and explicit knowledge. Consciousness and Cognition 15 (4): 654-61.

[1] I use the words mode and form interchangeably.

[2] Legrand (2007, p. 497) states her observational reflective self-consciousness corresponds to Pinku and Tzelgov’s (2006) notion of consciousness of the self as object.

[3] Legrand states there are three forms of experience: the body is experienced as opaque, as pre-reflective or as invisible (p. 500). Note that in terms of bodily self-consciousness, there are three forms, but the pre-reflective experience is further subdivided into the performative and transparent body (hence, four modes of bodily experience). I argue that reflective bodily experience does not always imply the body is experienced as an object.

[4] I take these two modes as corresponding to Colombetti’s (2011) distinction between foreground and background bodily feelings. Both kinds of feeling are experienced on a pre-reflective level, lying either at the foreground or the background of our experience. In a similar way, in the performative body the body lies at the front, whereas in the transparent body it lies at the background of our pre-reflective experience.

[5] This was part of a personal interview with Chris Blagdon, who is a former professional ballet dancer and a Pilates instructor.

Should we allow freedom of reproductive choices?

Gabriela Arriagada Bruneau

As scientific progress moves at a faster rate than our moral understanding, we encounter scenarios that increase our moral qualms about the limits of human genetic intervention. Consequently, we ask ourselves whether we should permit human genetic interventions and, if they are permissible, to what extent. In this essay, I claim that our concerns and hesitations towards the implementation of such practices are well founded. Here I will argue against Savulescu’s proposal in his article “Deaf lesbians, ‘designer disability’, and the future of medicine”,[1] which claims that we should extend the freedom of reproductive choices by accepting the non-identity argument. Savulescu states that if a conceived child is not worse off than non-existence, there is no harm done to that child. Therefore, a wide range of reproductive choices become permissible. In contrast, I claim that even if we do accept the non-identity argument of his view, there are still reasons to narrow the scope of such a practice. There must be a limit to the scope of human intervention.

 

I. Introduction

 

Before entering the discussion about reproductive choices, it is important to highlight what falls into the category of reproductive human genetic intervention. When I refer to reproductive human genetic intervention, I have in mind an external interference that deliberately determines how a human being will be by altering its natural conception. This can include positive or negative interventions. Positive interventions are those that intend to prevent an overall bad outcome for the patient, such as having a lethal disease. In such cases, progenitors can, for example, choose to eliminate a gene that causes cancer in their offspring. On the other hand, negative interventions are those which seek the enhancement or deprivation of certain qualities based on motivations that fulfil an ulterior motive or desire of the progenitors. Having clarified this distinction, I will reformulate Savulescu’s argument.

 

 

 

II. Savulescu’s argument

 

While some couples might want to use the practice of genetic intervention to avoid diseases or improve the healthiness of their offspring, Savulescu defends the case of choosing a disability, an irreversible life option for a child.

 

A lesbian couple deliberately decided to create a deaf child, having the option not to do so. The procedure involved the use of sperm from a friend whose family had carried the condition for five generations.[2] The main argument in favour of the intervention is based on the future well-being of the child. The couple considers the deaf community part of their cultural identity and wants to share this with their child. Also, for them being deaf is not conceived as a disability but rather as a “sophisticated […] language that enables them to communicate fully with other signers as the defining and unifying feature of their culture”.[3]

 

Savulescu argues that we should extend the freedom of reproductive choices so that we can meet the best possible life prospect for the future child. We must give individual couples the freedom to act under their own value judgement of what is constitutive of the best possible life prospect. A couple’s judgement will be morally acceptable if no child is harmed. And because a negative intervention does not inflict any harm on the child (had the intervention been prevented, a different child would have existed), the child is not worse off than it would otherwise have been. Therefore, negative interventions are morally permissible.

 

III. Analysing the argument

 

If we accept Savulescu’s argument while acknowledging there is no identity problem, the action of creating a deaf child produces no harm. Yet even if we accept that argument as plausible, intuition still suggests that something might be morally wrong; we must clarify what.

 

IV. Parental duties

It could be argued that the parents have the duty to look after the well-being of their child. In this case, however, the progenitors impose a physical and psychological burden on the child. This contradicts what we generally consider as parental duties, i.e., providing the best well-being possible. Savulescu’s argument that the child is not worse off is insufficient. Once the child does come to exist, the damage is activated. By activated I mean that causing the existence of a disabled child damages the existent child by giving him a burden. Imagine that you had the choice of being conceived with a burden-bag or without one. Your parents, having the choice to give you no burden-bag, decide to give you one, arguing that this is the best option for them to raise you and, therefore, to give you the best possible life prospect. However, although you are not directly harmed by coming into existence with the burden-bag, their decision is causally related to the fact that you indeed have it. This bag activates over time, as the effects of the burden begin to restrict the possibilities for increasing your well-being. In the case of the two deaf lesbians, the deaf child is not harmed by being conceived deaf per se, but because the implications of that choice do not seem to increase the child’s well-being, but rather to limit it.

 

The damage arises from the parents’ decision to create a disabled child; in other words, the motivations behind that decision do not seem morally sufficient to justify the creation of a child carrying such a burden.

 

V. External and internal motivations

The motivations behind the decision to create a disabled child are, in this case, related to an internal motive. I take an internal motivation to be related to the fulfilment of a desire, for example, arguing that it will be the most convenient way for the parents to raise a child because of their cultural beliefs. By contrast, external motivations are linked to a profit or good obtained, for example, deciding on a genetic intervention for my child in order to get financial aid from the government. In the case of the two deaf lesbians, it is argued that their motives are not founded on personal greed. However, if the argument for accepting such a practice is that they consider deafness part of their culture, they could still include that culture in the child’s life without depriving him of his ability to hear. Furthermore, there seems to be an element of contradiction. If what matters is the inclusion of their child in the deaf community, it does not follow that the child must necessarily be deaf and use sign language as a requirement for sharing their cultural identity. The best way to achieve cultural identity and protect the well-being of the child will certainly not be making the child deaf. The apparently well-intended claim about personal beliefs could easily be identified as selfish or ultimately greedy.

 

Allowing negative interventions makes it harder to justify any value judgement based on the personal beliefs of the progenitors. It is not clear why we should accept or encourage some of them as a morally permissible practice. The lack of clear limitations overlooks the potential danger that this freedom of reproductive choices entails. But to discuss the limits of reproductive freedom, we first need to review the concept of the best possible life prospect as stated by Savulescu.

 

VI. The concept of the best life prospect

Savulescu constantly mentions the best life prospect without giving a clear definition of the concept. The closest he comes to delimiting it is this:

[…] my value judgment should not be imposed on couples who must bear and rear the child. Nor should the value judgment of doctors, politicians, or the state be imposed directly or indirectly […] on them. The Nazi eugenic programme imposed a blueprint of perfection on couples seeking to have children by forcing sterilisation of the “unfit,” thereby removing their reproductive freedom. There are good reasons to engage people in dialogue about their decisions, […] but in the end we should respect their decisions about their own lives.[4]

 

However, following this logic to argue for reproductive freedom can end up in an undesirable outcome. Savulescu does not regard the Nazi policy as immoral but only as unfair, because it restricted reproductive freedom. According to Savulescu, we should respect the parents’ decisions even if we do not share their value judgement. The problem with supporting policies that allow this extended freedom is that we face the endorsement of wrongdoing, e.g. permitting parents to victimise a child with a perverse life prospect, as in conceiving a deaf child. It is necessary to keep in mind that by accepting such permissibility, we are also endorsing policies that will allow extreme practices that could bring morally undesirable consequences if not carefully narrowed.

 

VII. A conceptual discrepancy: reproductive freedom

As stated before, for Savulescu, endorsing a practice that allows negative interventions also implies that no person or government entity can interfere with the progenitors’ decisions. In a way he is right: ultimately, the decision whether or not to have a child should be respected. However, there is a difference between deciding to have a child and deciding what type of child I want. Reproductive freedom strikes me as the freedom to choose whether I want to procreate, with whom, by what method and at what point in my life. A completely different concept is the freedom to choose your offspring based on your own value judgement. It is not clear why we should have any right to decide what our offspring will look like or what type of abilities they should have. What we are discussing here are the limits of genetic intervention applied in reproductive processes, not freedom of reproduction, which can be slightly misleading. Regardless of this distinction, Savulescu is not wrong to raise this wider concept of freedom of human genetic intervention in reproduction. Nevertheless, the problem remains: do we have enough reasons to extend this freedom without restriction?

 

VIII. Resentfulness and the scope for the best life

As I mentioned before, I believe that allowing negative interventions could lead, overall, to a disastrous scenario. But how?

 

By allowing progenitors to select traits for their children, we give children grounds to resent their parents. In the case of the two deaf lesbians, we can wonder whether that child will not resent them for making him deaf when they had the possibility to avoid it without further trouble. Under what authority do they think they can choose whether I get to have a ‘good life’ or not? – the child might ask. What may seem a good enough reason to the progenitors could just as well seem a selfish motivation to the affected child. If the argument for supporting an extended freedom of genetic interventions in reproductive processes is to give the best life prospect to the child, then it is highly debatable whether a wide range of negative interventions fall under that category. This resentment, a direct consequence of the progenitors’ decision, also highlights an important aspect of this debate: the best life prospect is, in the end, decided by the child. What parents must do is guarantee the maximization of the scope for the child’s autonomy, i.e., not intervene in a way that will abolish or override it.

 

To explain why conceiving a deaf child is, on my analysis, morally incorrect, the concept of need presented by David Wiggins can help clarify why the progenitors’ intervention goes against the needs of the child, i.e., his well-being. “[…] a person needs x [absolutely] if and only if, whatever morally and socially acceptable variation it is […] possible to envisage occurring within the relevant time-span, he will be harmed if he goes without x”.[5] Under this definition of need, we can see that a child conceived deaf will be harmed: the ability to hear is something the child needs. There are further factors that can help us classify needs more accurately. The five factors stated by Wiggins are: urgency (the imperativeness of the need), gravity (the significance of the harm), basicness (how primary the need is for survival), entrenchment (how easily we can dispose of the need) and substitutability (the extent to which the need can be substituted). These factors can play a crucial role in the generation of public policies regarding the limitations of negative interventions related to human genetic interventions in reproduction, since these dimensions allow us to prioritize needs. Hence, by the given guidelines, a public policy will have to restrict the freedom of reproductive choices by not allowing negative interventions, which in most – if not all – cases imply a direct neglect of a highly ranked need.

 

IX. Conclusion

 

If we were to consider negative interventions morally permissible, we would be agreeing to an intervention that goes beyond the best life prospect for the child and becomes the best life prospect imposed by the progenitors’ requirements. Savulescu’s argument recklessly overlooks the potential harms of extending reproductive choices. It is inevitable that, given the development of scientific progress, a wide range of possibilities will be at our disposal. Notwithstanding this, what should be essential in deciding on general policies for genetic interventions is respect for autonomy and the preservation of the well-being of the child-to-be. By autonomy, I understand the idea of freedom from external control. Part of what preserves our autonomy is our right to choose, and if we allow progenitors to interfere with the aleatory process of conception in a negative way, we are overriding that child’s autonomy, coercing him to be a product of the desire of his progenitors. Furthermore, if we endorse these types of interventions, we end up infringing on the child’s well-being, as in the case of the two deaf lesbians. While they argue that they are choosing for the child to be deaf for his well-being, this does not imply that the deprivation of hearing will be what constitutes the best possible outcome to achieve such an end. And by well-being I understand a state of the highest healthiness and happiness.[6] This means that if we come to live in a damaged environment in which having white skin is dangerous to your health, then intervening in a child to avoid the trait of being white-skinned will become a positive intervention instead of a negative one, as it is now.

 

Our environment changes, and the moral understanding of our needs should change with it. But based on our present situation, I claim that we should only allow positive human genetic interventions in reproduction, limiting the practice of negative interventions. This will most likely change; but only when a modification is required will it be prudent to reconsider those limitations. For now, we should refrain from endorsing interventions such as intentionally conceiving a deaf child on the basis of cultural or personal belief arguments. If we can recognize such an intervention as negative, and we can objectively state that being deaf implies the lack of a useful and desirable ability for any human life, then the two deaf lesbians are overstepping the morally acceptable limit for reproductive genetic intervention.

 

 

 

 

 

[1] Savulescu, J (2002) Deaf lesbians, ‘designer disability’, and the future of medicine, British Medical Journal 325, pp. 771-773.

[2] Spriggs, M. (2002) Lesbian couple create a child who is deaf like them. Journal of Medical Ethics 28, p. 283.

[3] Mundy L. A world of their own. The Washington Post 2002 Mar 31: W22. http://www.washingtonpost.com/wp-dyn/articles/A23194-2002Mar27.html

[4] Savulescu, J (2002) p. 772.

[5] Wiggins, D. (1987) Needs, Values, Truth. Essays in the Philosophy of Value, Basil Blackwell: Oxford.

[6] I will consider happiness, for the sake of the argument, as an overall state of fulfilment of needs, these needs being physical and psychological.

Grounding Strawson’s social claim in folk-psychology

Louis Ramirez

According to most contemporary philosophers, being morally responsible amounts to being an appropriate subject of the Strawsonian reactive attitudes.[1] Strawson (1993) makes two claims about them: they signal a human disposition to react to others’ quality of will and, because of this, they are crucial to human society. In this paper, I develop his view (1), propose an empirically based relativism charge (2), and proceed to counter it (3). Reactive attitudes, I argue, fulfill a species-wide need to be intelligible to one another by scaffolding social-cognitive competence. Because of this, I continue, they are indeed necessary for human society. With Strawson, I conclude that, while the form that these attitudes will take may vary among cultures, their existence will not.

 

1. Reactive Attitudes and Human Society

 

Strawson’s landmark essay, ‘Freedom and Resentment’ (1993), begins by inviting the reader to remember that actions such as blaming and praising involve emotions. Blame involves resentment. Likewise, praise involves admiration or gratitude. Strawson argues that all variations on the act ‘holding accountable’ involve a class of sentiments called the reactive attitudes. Following him, most contemporary philosophers agree that accountability ascriptions depend on these emotions.[2]

 

Given that reactive attitudes ground accountability ascriptions, they are conceptually tied to moral responsibility. Most philosophers agree that ‘moral responsibility’ is the property of agents that makes holding them accountable for their actions appropriate (see e.g. Wallace 1994). Thus, for those who claim that holding accountable hinges on reactive attitudes, the property of being morally responsible is the property of being an appropriate recipient of these.[3]

 

Strawson claims that the reactive attitudes express our attachment to others’ quality of will. In calling attention to them, he wants to stress ‘the very great importance that we attach to the attitudes and intentions towards us of other human beings, and the great extent to which our personal feelings and reactions depend upon, or involve, our beliefs about these attitudes and intentions’ (1993, 48). He contends that understanding moral responsibility requires us to recall ‘how much we actually mind, how much it matters to us, whether the actions of other people … reflect attitudes towards us of goodwill’ (49). Reactive attitudes, in his view, express the general human fact that ‘we demand some degree of goodwill or regard on the part of those who stand in … relationships to us’ (Ibid.). In sum, they are ‘essentially natural human reactions to the good or ill will or indifference of others towards us, as displayed in their attitudes and actions’ (53). In what follows, I call this claim the ‘quality of will’ thesis. Reactive attitudes, as Strawson sees them, express our human disposition to react to others’ attitudes.

 

Strawson’s second claim is that reactive attitudes are crucial to intelligible human interactions. ‘The existence of the general framework of attitudes itself,’ he writes, ‘is something we are given with the fact of human society’ (1993, 64). According to him, without a framework of attitudes for expressing our demand for goodwill, human society would be impossible. In such a case, he continues, ‘it is doubtful whether we should have anything that we could find intelligible as a system of human relationships, as human society’ (1993, 65). Expressing our demands for goodwill, argues Strawson, is the hallmark of what we understand by human society. This is Strawson’s ‘social’ claim.

 

It is true, Strawson concedes, that the specific form of reactive attitudes will vary among cultures (1993, 64). It may be that what counts as goodwill varies. It may also be that the attitudes that express our demand will vary too. What will not vary, however, is the existence of a framework of reactive attitudes qua vehicles for expressing our demand for goodwill. Without such a framework, Strawson argues, there would be nothing intelligible as a system of human relationships.

 

2. Empirically based relativism charge

Strawson underestimates the extent of the psychological differences between different populations. In this section, I discuss evidence suggesting that reactive attitudes will vary in problematic ways that he does not anticipate. More specifically, the existence of their framework may be limited to some cultures. Thus, I argue that tying responsibility to reactive attitudes risks making it culturally local and entailing that only members of some cultures are persons.

 

Evidence from psychology

 

Emotions are often thought of as natural kinds (cf. Ekman 1992). Yet, despite the fact that people feel and identify discrete emotions, a century of research has failed to validate this experience. There is, to date, no ‘objective’ way of determining whether or not someone is in a given emotional state (Barrett 2006; 2015). As Lisa Barrett sees it, this problem sets the goal of accounting for ‘experiences of anger, sadness, fear … without assuming that their phenomenological character derives from stereotyped, specific patterns of somatovisceral activity’ (2006, 30). Her solution is to view emotions as ‘conceptual acts’.

 

The idea behind the conceptual act theory is that data from perception and somatic states count as emotions when they are conceptualized as such in a given situation. As Barrett puts it, ‘a momentary array of sensations from the world … combined with sensations from the body (X) counts as experience of emotion or a perception of emotion (Y) when categorized as such during a situated conceptualization (C)’ (2015, 420).[4] Via this process, she continues, ‘sensations acquire functions that are not intrinsic to them … As a result, new functions are not based solely on the physical properties of sensations alone’ (Ibid.). Instead, the new functions stem from what these physical properties come to represent, namely a given emotion. So, for example, feeling a lump in your throat might serve the function of coordinating with others in the activity of grieving, when conceptualized as grief in the context of a wake.[5]

 

The conceptual act theory claims emotions are part of social reality: they are ontologically dependent on the collective intentionality of a given society. Counter-intuitively, one way of making sense of this claim is to compare emotions to weeds. Grant that a plant (X) counts as a weed (Y) when it has not been planted and yet is in a flower patch (C). For this to be the case, ‘there must be a group of people who agree that certain instances [of plants] … serve particular functions [those of weeds]’ (Barrett 2015, 420). Likewise, for it to be the case that somatic states and sensations can come to represent an emotion, there needs to be some sort of collective agreement about which sensations count as which emotion in what situation. And, as a result, the novel functions of somatic states that make them emotions are dependent on the interests and values of a given culture. In other words, emotions are ontologically dependent on their cultural context.

 

There is some debate in cognitive psychology and anthropology about whether or not emotions are universal (cf. Lutz & White 1986; Oatley 1992). Some cognitive psychologists claim that affect (arousal, pleasure) is universal, and that emotions usually are not (Barrett 2006). Other psychologists propose that there are basic emotions such as anger and sadness. Keith Oatley thinks there are five (1992). Yet none of the reactive attitudes features in any such list. Given that emotions depend on the interests of a given culture, Strawson’s social claim can only be defended if the reactive attitudes can be tied to interests that transcend cultural differences. Of course, he claims that they reflect a human interest in goodwill (this is his quality of will thesis). I take issue with this response in the next subsection.

 

Evidence from anthropology

Strawson’s quality of will thesis is consonant with the ‘moral intent hypothesis’. As the anthropologist Clark Barrett and his colleagues formulate it, the hypothesis states that ‘it is a species-typical property of humans to take an agent’s reasons for action into account in making most types of moral judgments’ (2016, 1). Strawson’s thesis, recall, is that humans naturally care about others’ attitudes, as expressed in action, and that this is what grounds a disposition to respond reactively. Both share the assumption that agents’ attitudes naturally fit into our reactive evaluations of their actions.

Evidential support for the moral intent hypothesis stems almost exclusively from Western, educated, industrialized, rich, and democratic (WEIRD) societies (Barrett et al. 2016). Yet, as the cognitive scientist Joe Henrich and his colleagues note, individuals with such backgrounds are ‘some of the most psychologically unusual people on Earth’ (2010, 29). And indeed, this is reflected in Barrett’s results: he and his team investigated variations in moral judgment across ten societies on six continents, asking participants to morally evaluate norm-breaking actions in light of information about intentions and circumstances. Rather than revealing a species-general trait, the degree to which ‘an individual’s intentions influence moral judgments,’ they write, shows ‘substantial variation … with intentions in some cases playing no role at all’ (2016, 1).[6]

There are human societies, intelligible as societies, in which individuals attach little to no importance to others’ attitudes. Thus, it is not true that all humans demand ‘goodwill’ from one another. If reactive attitudes are to define intelligible human community, they must reflect some other interest. And if they do indeed depend on a demand for goodwill, they will vary in ways that Strawson does not anticipate: both their form and the very existence of their framework will vary.

Relativism challenge:

Reactive attitudes are key for holding people accountable. If they were exclusive to some cultures, this would mean that only members of those cultures are capable of holding others accountable. The problem with this is that the capacity to hold accountable is thought by some to be a precondition for being responsible (see e.g. Darwall 2006; McKenna 2011). In turn, being responsible is standardly thought of as a defining mark of persons (Frankfurt 1971; Fischer & Ravizza 1998). Thus, cultural specificity for responsibility entails cultural specificity for personhood. This is my relativism challenge.

 

  • Solution: grounding responsibility-practices in folk-psychology

 

To avoid relativism for responsibility, I propose to ground the ‘social’ claim in folk-psychology: reactive attitudes are a necessary scaffold for social-cognitive skills. Thus, given the importance of social cognition for human community, it is indeed doubtful we would find anything intelligible as a human society in the absence of reactive attitudes (Strawson 1993).

 

Social cognitive skills ground a human capacity to see other creatures (particularly other humans) as ‘minded’. Doing so enables us to coordinate, predict, and understand behavior – all crucial for living in society. On the standard view, such skills are a matter of discovering facts about each other’s mental states. The regulative view, which is my concern here, challenges this paradigm. It views folk-psychology as principally a matter of forming and regulating our own mental states in accordance with an array of socially shared and maintained sense-making norms (McGeer 2015).

 

On the regulative view, folk-psychology is not an individual project but a communal, norm-governed enterprise. One implication of this is that being a competent folk-psychological agent is a matter of acquiring a certain degree of know-how as well as know-that: one must grasp the rules of folk-psychology. But one must also learn to follow them. As one learns to apply the rules of shaping one’s mind intelligibly, one becomes more intelligible to others. Yet, since those others apply these same rules, mastering them also enables one to understand other people who partake in the practice. McGeer labels this emergent mutual understanding a ‘practice-dependent epistemic gain’ (2015).

‘Practice-dependent epistemic gains’, writes McGeer, ‘are also vulnerable to non-conforming thought and action.’ As she explains, ‘you will have difficulty understanding what your opponent is up to – and vice versa – if either one of you fails to conform to the rules and strategies [of the practice]’ (2015, 262). The result, according to her, is that norm-governed practices are often supported by a disposition for corrigibility: communicative mechanisms remedy the potential for norm-deviating behavior by adjusting one another’s behavior. When someone deviates and intelligibility breaks down, they are called out. This is the ‘scaffolded’ nature of regulative folk-psychology: even as one’s capacity to operate increases, it continues to depend on the regulative interventions of others (McGeer 2015, 262).

The scaffolded nature of folk-psychology suggests that we have a rich enforcement mechanism for sense-making norms. ‘And indeed we do,’ writes McGeer: ‘It is just the practice that P.F. Strawson celebrates in … “Freedom and Resentment”’ (2015, 272). Reactive attitudes suggest that we are reactively sensitive to something. Yet it need not be others’ quality of will. According to the regulative picture, we are sensitive to deviations from public sense-making norms. When interlocutors’ deeds do not fit together intelligibly, when they profess to have mental states that are incommensurate with their acts, or indeed when they deviate from moral norms, we react. This is not grounded in an assessment of their quality of will. It is grounded in an effort to interact intelligibly.

Reactive attitudes are, in effect, the backbone of social interactions. This is Strawson’s ‘social’ claim. Yet they do not fulfill this function in the way Strawson thought. Rather than giving voice to our attachment to others’ quality of will, I suggest they operate as the enforcement mechanism for communal intelligibility norms. In other words, the Strawsonian reactive attitudes scaffold our capacity to shape our minds in a way that makes us intelligible to others. Without this capacity, I submit, it would be hard to see anything intelligible as human society.

 

Conclusion

In conclusion, the quality of will thesis is implausible. Yet Strawson’s picture of ‘bounded’ cultural variations is correct. Different societies will have different sense-making norms. They may even have different specific reactive attitudes that enforce them. But they will nevertheless have a framework of reactive attitudes. This is his social claim. Because moral responsibility practices hinge on this framework, rather than the specific form its attitudes take, the relativism problem is avoided.

 

[1] Prominent examples include Strawson (1993), Wallace (1994), Fischer & Ravizza (1998), McKenna (2011). Moreover, Todd (2016) argues that this view is available to libertarians too. For one notable example see Zimmerman (1988) and other ‘ledger theorists’.

[2] They disagree, however, about what defines them and what else holding accountable involves. See in this connection Wallace (1994), who influentially claims that holding accountable implies a belief that the attitudes are appropriate. See Bennett (1980) and McKenna (2011) for different viewpoints as to the nature of these emotions qua mental states. See McGeer (2014), Watson (2004), and Wallace (1994) for different views as to what defines reactive attitudes as a class. And see Todd (2016) for critical discussions.

[3] This view is a standard ‘point of agreement’ for different theories of responsibility. See Neil Levy (2005) in this connection.

[4] Accurately describing the phenomenon of situated conceptualization is not necessary for my purposes here. Nor is it within the scope of the paper. See Wilson-Mendenhall et al. (2011) for a detailed discussion.

[5] Barrett (2006) discusses the evidence that this process can explain discrete emotional experiences.

[6] For the full experimental detail, see Barrett et al. ‘Small-scale societies exhibit fundamental variation in the role of intentions in moral judgment’ (2016). The experiment tested for variations in moral judgment relative to several variables: intentional vs. accidental behavior, motivated vs. non-motivated behavior, justified vs. unjustified behavior, and the presence or absence of mitigating factors.

Works Cited:

Barrett, Lisa Feldman. “Emotions Are Real.” Emotion 12.3 (2012): 413-29. Web.

Barrett, Lisa Feldman. “Solving the Emotion Paradox: Categorization and the Experience of Emotion.” Personality and Social Psychology Review 10.1 (2006): 20-46. Web.

Bennett, J. “Accountability.” Philosophical Subjects: Essays Presented to P.F. Strawson. Ed. Z. van Straaten. Oxford: Clarendon Press, 1980. 14–47. Print.

Ekman, P. “An Argument for Basic Emotions.” Cognition and Emotion 6 (1992): 169-200.

Levy, Neil. “The Good, the Bad, and the Blameworthy.” Journal of Ethics and Social Philosophy 1.2 (2005): 1-15.

Lutz, Catherine, and G. M. White. “The Anthropology of Emotions.” Annual Review of Anthropology 15 (1986): 405-436.

McGeer, Victoria. “P. F. Strawson’s Consequentialism.” Oxford Studies in Agency and Responsibility, Volume 2: ‘Freedom and Resentment’ at 50 (2014): 64-92. Web.

McGeer, Victoria. “Mind-making Practices: The Social Infrastructure of Self-knowing Agency and Responsibility.” Philosophical Explorations 18.2 (2015): 259-81. Web.

McKenna, Michael. Conversation & Responsibility. New York: Oxford UP, 2011. Print.

Oatley, Keith. Best Laid Schemes: The Psychology of Emotions. Cambridge: Cambridge UP, 1992. Print.

Strawson, P. F. “Freedom and Resentment.” Perspectives on Moral Responsibility. Ed. John Martin Fischer and Mark Ravizza. Ithaca, NY: Cornell UP, 1993. 45-67. Print.

Todd, Patrick. “Strawson, Moral Responsibility, and the Order of Explanation: An Intervention.” Ethics 127.1 (2016): 208-40. Web.

Wallace, R. Jay. Responsibility and the Moral Sentiments. Cambridge, MA: Harvard UP, 1994. Print.

Watson, Gary. “Responsibility and the Limits of Evil: Variations on a Strawsonian Theme.” Agency and Answerability (2004): 219-59. Web.

Wilson-Mendenhall, Christine D., Lisa Feldman Barrett, W. Kyle Simmons, and Lawrence W. Barsalou. “Grounding Emotion in Situated Conceptualization.” Neuropsychologia 49.5 (2011): 1105-1127. Web.

Zimmerman, Michael J. An Essay on Moral Responsibility. Totowa, NJ: Rowman and Littlefield, 1988. Print.

 

Does a competing account of moral history by Abram plausibly undermine Leopold’s argument for the Land Ethic?

By Benjamin Edwards

This essay will show that the account of moral history Leopold argues for in the Land Ethic can be undermined by an alternative account. Having shown this, I will further suggest that although the account of moral history appears to be a minor part of Leopold’s Land Ethic, its undermining has damaging implications for how convincing the remainder of his argument is (Leopold, 2001). The alternative account I will explore in opposition to Leopold’s is David Abram’s in The Spell of the Sensuous (Abram, 1996). There are a number of social-anthropological accounts of moral history that could be given as an alternative to Leopold’s. I have chosen to focus on Abram’s account alone as it seems closely related to Leopold’s in many respects and can be described and discussed within this relatively short essay. I will, however, return to the topic of other alternative accounts towards the end.

To begin the essay, I will set out a reconstruction of how Leopold’s account of moral history links to his overall argument for the Land Ethic. I will argue in the following section that Abram’s account undermines a specific premise of this reconstruction. In Section 3 I will present two possible responses in defence of Leopold. Finally, in the last section, I will suggest that even though there could be questions as to the validity of Abram’s account, a weaker claim can be levelled against Leopold which successfully undermines his argument without requiring a conclusive answer as to whether Abram’s account is valid.

  • 1: The Land Ethic and Moral History

Aldo Leopold begins his argument for the Land Ethic with an example of Ancient Greek morality as contrasted with our current moral framework (Leopold, 2001, p.168). Although it appears at first sight merely as an example, it plays a significant rhetorical role in making his entire argument plausible: we have expanded our moral sphere before, therefore we can expand it again to include the land. The reasoning can be reconstructed as follows (a brief formal sketch of its dependency structure follows the list):

P1: Past humans had a limited sphere of social co-operation

P2: Humans now have a larger sphere of social co-operation

SC1: [from P1 & P2] The sphere of human social co-operation has expanded

P3: An ethic is a system of social co-operation

SC2: [from SC1 & P3] Our ethic has expanded

P4: The Land Ethic would be an expansion of our ethic

P5: [from SC2] Humans have expanded their ethic in the past

P6: If an expansion has occurred in the past then it is plausible that it will occur in the future

C: [from P4, P5 & P6] It is plausible that the Land Ethic will occur in the future
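
A minimal formal sketch of this dependency structure is given below in Lean, offered purely as an illustration: the proposition names are invented labels for the premises, and nothing here is Leopold’s own formalism or part of the essay’s sources. Its only purpose is to make visible which premises the conclusion rests on; remove the hypothesis playing the role of P1 and the final term can no longer be constructed, which is precisely the vulnerability exploited in the next section.

```lean
-- Illustrative labels only: each Prop stands in for one claim of the
-- reconstruction above (P1–P6, SC1, SC2), not for Leopold's own wording.
variable (LimitedPast LargerNow SphereExpanded EthicIsCooperation
          EthicExpanded LandEthicIsExpansion LandEthicPlausible : Prop)

example
    (p1 : LimitedPast)                                                -- P1
    (p2 : LargerNow)                                                  -- P2
    (toSC1 : LimitedPast → LargerNow → SphereExpanded)                -- step to SC1
    (p3 : EthicIsCooperation)                                         -- P3
    (toSC2 : SphereExpanded → EthicIsCooperation → EthicExpanded)     -- step to SC2 (= P5)
    (p4 : LandEthicIsExpansion)                                       -- P4
    (p6 : EthicExpanded → LandEthicIsExpansion → LandEthicPlausible)  -- P6 (inductive step)
    : LandEthicPlausible :=
  -- C is obtained by chaining SC1, SC2/P5, P4 and P6.
  p6 (toSC2 (toSC1 p1 p2) p3) p4
```

Read this way, Abram’s challenge in the next section amounts to withholding p1: without it, nothing further in the chain can be derived.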

Leopold builds on this claim in the rest of his argument for the Land Ethic, going on to suggest that we ought to act as if we are citizens of a biotic community rather than dominant over it. He ends with the loosely ethical claim that we ought to encourage actions that “preserve the integrity, stability, and beauty of the biotic community” (Leopold, 2001, p.190; p.189). Although there is interesting discussion to be had about the other elements of his argument, undermining the argument concerning moral history – something of a keystone of the Land Ethic – constitutes a serious objection to Leopold’s position even without addressing those other elements.

  • 2: Abram’s Alternative Account of Moral History

The alternative historical account presented in Abram’s The Spell of the Sensuous will undermine [P1] of Leopold’s argument. Abram argues that early human cultures had an enlarged ethical sphere that included that which Leopold conceives of as ‘the land’. Abram’s account is based on a rejection of the kind of stories of moral history told by the likes of Leopold:

There are those who suggest that a generally exploitative relation to the rest of nature is part and parcel of being human, and hence that the human species has from the start been at war with other organisms and the earth. Others, however, have come to recognise that long-established indigenous cultures often display a remarkable solidarity with the land that they inhabit, as well as a basic respect, or even reverence for the other species that inhabit those lands (Abram, 1996, p.93)

Abram is therefore arguing that our current hostility and dominating attitude towards nature is something of a modern (post-Greek) aberration. For much of human history, we have had a far more convivial, co-dependent relationship with the land – the kind of relationship Leopold suggests we need to move towards in his vision of the land ethic. If we accept Abram’s picture we turn Leopold’s argument on its head: there has not been a gradual expansion of our ethical sphere; instead there has been a considerable contraction. If P1 is false, the inference to SC1 is blocked, and with it SC2 and P5; the conditional at P6 is then left with no antecedent to apply to. This filters through Leopold’s argument and seriously unsettles the plausibility of the Land Ethic.

Abram finds evidence of commonalities amongst many pre-Greek human cultures in the way they respected nature as part of their ethical sphere. This often manifested in a form of animism, a common theme being the imbuing of features of nature – such as the air – with a pseudo-religious significance (Abram, 1996, p.226). Pursuing the example of the air, Abram finds a varied selection of current and now-extinct indigenous cultures whose religious connection to the air motivated a relationship with nature similar to that which Leopold suggests we ought to cultivate in terms of our relation with ‘the land’.

A final important element of Abram’s book to consider is the mechanism by which this expanded ethical sphere of indigenous communities came to contract. Abram suggests the reason for this was the advent and spread of alphabetic writing systems (Abram, 1996, p.240; p.250). In pre-alphabetic societies, knowledge serving human needs such as finding food and water was necessarily externalised through an oral tradition (Abram, 1996, p.120). The anthropomorphising of natural phenomena was a common method of memorising important information, and this anthropomorphising was a major driver of the animism that gave impetus to an ethical sphere including the land. Once stories and instructions as to how to serve these needs could be written down, the anthropomorphising of the land was no longer necessary. This, in turn, led to the collapse of the animist religious traditions in these communities; respect for the land evaporated, and with it the ethical sphere contracted (Abram, 1996, p.121). Abram’s account thus suggests that [P1] of Leopold’s argument is an incorrect story of the development of human ethical systems.

  • 3: Two Responses in Defence of Leopold

I suggest there are two possible responses that could be levelled against Abram. Both assume the validity of Abram’s account, since I will come to the question of validity in §4. I will address each in turn, presenting rebuttals to both.

Firstly, Leopold could salvage his argument by suggesting that although [P1] may not be true if we include the entirety of human culture, he can dispense with it and simply restrict the historical claim to the post-Greek period: there has been an ethical expansion ever since the Greeks (Leopold, 2001, p.167). From this more limited post-Greek claim we can still derive the rest of Leopold’s argument successfully.

However, I would suggest that for Leopold to pursue this line of argument would lead him to accept some implications that may be troubling for proponents of the Land Ethic. Looking back at Abram’s argument for his account of moral history, we see that he takes respect for the land to be fundamentally bundled with the kind of animist oral tradition that preceded alphabetic cultures. To be an alphabetic culture is necessarily to be one that does not see nature as significant to one’s own existence (Abram, 1996, 263). This shows us that if Leopold accepts Abram’s picture of pre-Greek history, he must also accept the implications Abram sets out for the kind of culture that has the expanded ethical sphere the Land Ethic seeks to create. Leopold would have to go beyond the claim he makes about the ethical maxim of the land ethic – that of promoting those actions which “preserve the integrity, stability, and beauty of the biotic community”. Instead he would have to bundle that claim with those that Abram suggests would go along with such an arrangement: an animist, oral culture. Although this is not technically contradictory with Leopold’s project and he could simply bite the bullet, it is a significant departure from the Land Ethic as it stands.

It could be replied that nothing in Leopold implies he would have to accept that the way in which the expanded moral sphere worked in the past would necessarily be the way it functions in the future. However, Leopold’s argument, as reconstructed, relies on the inductive premise [P6]. If he is to employ this premise in the argument concerning moral history, he is thereby committed to accepting that the cultural features of the non-literate societies Abram describes would be carried over into future manifestations of the expanded ethical sphere.

The second response Leopold could level against Abram would, again, accept the validity of Abram’s picture of moral history, yet say that it does not do the work it may seem to do in presenting an objection to his argument for the Land Ethic. Leopold may instead say that the non-literate communities Abram describes do not truly have an expanded ethical sphere; at base, they are acting in a self-interested manner. Leopold appears to suggest that self-interest and the expanded ethical sphere of the Land Ethic are essentially incompatible:

a system of conservation based solely on economic self-interest is hopelessly lopsided. It tends to ignore, and thus eventually to eliminate, many elements in the land community that lack commercial value, but that are (as far as we know) essential to its healthy functioning. (Leopold, 2001, p.179)

Having stated this, Leopold could argue that the mnemonic function (of remembering where edible plants are, and so on) described in §2 shows that the respect for nature non-literate societies possessed had self-interested foundations. Given that, as stated in the above quote, self-interest always leads to an undervaluing of some aspects of the biosphere, the kind of respect for nature these communities had was not truly analogous to the Land Ethic. It can therefore still be said that an expansion of our ethical sphere did occur, because this pre-Greek enlarged ethical sphere never existed in the truest sense.

I would counter this response by showing that Leopold’s own understanding of self-interest does not preclude communities that respect nature out of self-interest from embodying the Land Ethic. As the quote above shows, self-interest is not bad simply because it is self-interest; it is bad because respect for the land based on self-interest is necessarily self-defeating. We can, however, use one of Leopold’s own examples to show that non-literate communities (even if they are essentially self-interested) do not perform actions that are self-defeating in the way Leopold would need to suggest:

This same landscape was ‘developed’ once before, but with quite different results. The Pueblo Indians settled the Southwest in pre-Columbian times, but they happened not to be equipped with range livestock. Their civilization expired, but not because their land expired (Leopold, 2001, p.172).

To contrast with the way in which post-Columbian settlers cultivated the land, Leopold offers the example of the Pueblo Indians. The Pueblo Indians were exactly the kind of culture Abram describes in his characterisation of non-literate societies: their religion had large elements of animism, their traditions were oral and they had no alphabetic written language (Vecsey, 1983). If we are to say that this society was, at base, self-interested, and that it did not, in fact, embody the land ethic, we would expect Leopold to say that its land did “expire” because of the “lopsided” way in which self-interest functions. However, this is the opposite of what Leopold actually suggests.

Therefore, if Leopold follows this response he has two options: either he recants his claim that self-interest is contrary to the land ethic, or he maintains that claim but recants the claim that non-literate cultures are essentially self-interested. The first option has significant negative consequences for the Land Ethic generally, and the second means this response does not stand. I have here presented two responses that could be made in defence of Leopold. I will now briefly move on to assess whether Abram’s account is a valid account of moral history.

  • 4: Is Abram’s Account Accurate?

At this juncture, there are two paths I can go down. I could make the strong claim that Abram’s account is the correct characterisation of moral history, affirming my above argument wholeheartedly and concluding that Leopold’s argument for the Land Ethic is fundamentally flawed. Alternatively, I could make the weaker claim that merely presenting a plausible, competing story of moral history – even a limited one – is an effective counter to Leopold.

I would like to make the case for the weaker claim in this final section. The strong claim would, of course, be more convincing, but I have neither the space nor the expertise to close the book on which account is the more anthropologically valid. I would, however, like to note that there are various sources that support an account of moral history broadly in line with Abram’s. Although they are not making entirely the same argument, writers such as Peterson and Campbell make claims that suggest that nature was, for much of human history, treated as part of an enlarged ethical sphere (Peterson, 1999; Campbell, 1959).

The weaker claim is therefore the only route to pursue in this essay. Leopold’s account of moral history is tremendously limited; the evidence supporting it amounts to a few examples contemporary to the period being discussed. Although this limited account does the work it needs to push the Land Ethic forward, the lack of evidence to back it up makes it precarious. Hence, simply presenting an at least equally plausible account of moral history – as I have done with Abram – is in many ways sufficient to dislodge Leopold’s fragile claim. In the vein of Nietzschean and Foucauldian genealogy, all that needs to be done to dislodge a particular theoretical picture of history (especially one as delicate as Leopold’s) is to present another, competing and coherent account (Foucault, 1998, p.386; Nietzsche, GM Pref: 5; Nietzsche, WP 254).

This weaker claim is enough, I think, to make the project of this essay successful: Leopold’s account of moral history can be undermined by a competing one from Abram. With both more space and a deeper knowledge of social anthropology, the strong claim would be more convincing. Within this essay, however, the weaker claim gets the job done without succumbing to the two responses in defence of Leopold considered above.

Bibliography

Abram, D. 1996. The Spell of the Sensuous. New York: Vintage.

Campbell, J. 1959. The Masks of God: Primitive Mythology. UK: Secker & Warburg.

Foucault, M. 1998 [1971]. ‘Nietzsche, Genealogy, History’. In Aesthetics, Method, and Epistemology: Essential Works of Foucault 1954-1984, Volume Two, ed. Faubion, J. Trans. Hurley, R. USA: The New Press, pp.369-393.

Leopold, A. 2001 [1949]. A Sand County Almanac. Oxford: Oxford University Press.

Nietzsche, F. 1967 [1887]. The Genealogy of Morals. Trans. Kaufmann, W. New York: Vintage.

Nietzsche, F. 1967 [1901]. The Will to Power. Trans. Kaufmann, W. New York: Vintage.

Peterson, J. 1999. Maps of Meaning. London: Routledge.

Vecsey, C. 1983. ‘The Emergence of the Hopi People’. In American Indian Quarterly, 7:3, pp.69-92.

THE CLIMATE IS CHANGING: WHAT DO I MORALLY HAVE TO DO ABOUT IT?

by Eva Maria Parisi

Ludwig Maximilian University of Munich

PREFACE

The climate is changing. Scientific evidence shows unequivocally that the emissions of human society are strongly influencing the earth, causing a rise in global temperatures and sea levels, shrinking ice sheets, glacial retreat, ocean acidification and an increase in the number of natural catastrophes. These phenomena, however, are so massive that it can be difficult for single individuals to recognize their responsibility in causing them. It is, in fact, the emissions of millions of individuals, not merely a few of them, which make a difference in ruining – or saving – the planet and the species living on it. But still, something has gone wrong: even setting aside who is to blame, the world has developed differently than it should have. (See Lawford-Smith 2014, p. 392.) What is my moral obligation to remedy this situation?

The aim of the following essay is to confront this question. This will be done in two main steps: first, it will be argued that, in spite of a general skepticism about the difference we actually make by contributing to the earth’s pollution with our emissions, our actions do make a significant difference in defining our identities as persons. Then the focus will shift to the social structures to which we belong, which play an important role in determining the responsibility we bear toward one another, even in the context of climate change. It is exactly through these structures that, as we will see, we must be determinant in some specific sense.

PART I – DO I MAKE A DIFFERENCE?

The question our considerations start with is a provocative and much discussed one concerning individual actions and their connection to the issue of global warming: Do I really make a difference? The philosophical as well as the political debate of recent years has attempted to answer this question with moral arguments, data and statistics. On the one hand, it has been shown that in the field of global warming we make a difference: I, together with thousands of other individuals, and my acts, considered within the setting of acts these others find themselves involved in (see Parfit 1986, pp. 67–86). On the other hand, it has not been shown that I make a difference on my own. If I were the only person emitting carbon dioxide, my act would not affect the process of climate change (see Kagan 2011, p. 109). Even if I stopped emitting carbon dioxide, planted trees and became active in political parties defending the earth, the climate would still change and I, through my actions, would not be in a position to make any difference in the condition of the climate.

These considerations, I assume, are shared by many of us. However, I shall argue they are based on a common mistake: that of taking the rightness or wrongness of our actions to depend merely upon the effects they produce. Although consequentialism, as has been shown, can be saved from objections which consider it an unsuitable theory for defending individual responsibilities within social processes producing unjust effects (see, among others, Kagan 2011, pp. 105–141), I fear it is not well suited to showing why my individual pollution should be considered morally wrong. Therefore, let us approach the phenomenon from a Kantian perspective, according to which the judgement of the rightness or wrongness of my acts is not to be based on their consequences, but on my own will and on my intended actions. According to such a perspective, the question whether I, by my individual acts, really make a difference – whether my individual impact on the climate is significant or not – loses part of its meaning. That is, the question “Do I make a difference?” admits of a new answer: I make a significant and not a negligible difference by acting in one way rather than another, a difference which concerns the way I perceive and constitute myself as the person I am. (See Korsgaard 2009, pp. 20–26.) Therefore, even if (and I emphasize “if”) my actions made no difference to the process of global warming, they would still make a difference to the definition of my moral integrity and personal identity.

PART II – CONSTRUCTING THE KIND OF PERSON I WANT TO BE

As considered in the previous section, asking ourselves what we morally have to do about climate change is strictly connected to a deeper question about how we define the kind of person we want to be. I will argue that the answer to this question is based on the definition of what “I” is or, more generally, what individuals are: namely, parts of social structures which, in part, determine them and, in part, make them determinant in some relevant sense.

Part of our personal identity is defined by the interpersonal structures that we, voluntarily or not, belong to. We come into the world as sons or daughters of someone else and live in the world as citizens of a given nation, customers of chosen products, friends of selected people, and so on. These structures confer upon us certain rights, impose upon us certain duties, and in this way constitute a source of responsibility: it can plausibly be argued, in fact, that insofar as the social structures we belong to produce unjust outcomes – as in the case of global warming – we bear responsibility for those outcomes and must strive to find a remedy for them. (See Young 2006, pp. 102-130.) If we consider the case of global warming, the mistakes and omissions we make every day make it easy to recognize our responsibility in destroying the planet: as parents, for not educating our children to have a deep respect for nature; as citizens, for not putting enough pressure on our representatives to reduce national emissions; as customers, for putting our own comfort above the basic needs of future generations. Now, even if we do not want to be responsible for the problem of global warming, and even if we started acting in the best possible way, we would still be part of structures – such as political or economic ones – that lead to climate change. Escaping our responsibilities, escaping moral mistakes, would be impossible for us.

However, pointing the finger at ourselves is still not sufficient to remedy the problem of global warming. Instead, we risk creating a general skepticism which fosters the attitude that, since everyone is causally responsible for ruining the planet and no one really has the power to save it, even trying to do so is useless. Let us keep in mind that the actions we choose make a difference in the construction of the persons we are, and let us begin by considering the social structures we are part of as an opportunity to be determinant rather than as structures which condemn us to moral failure. For the sake of the following argumentation we will assume for a moment that culpability could be put aside in order to focus on the case “where we might intuitively think the world has gone other than it ought, without any agent doing other than she ought, which is to say, without culpability.” (Lawford-Smith 2014, p. 393.) Considering this scenario, we will see that even if none of us were blameworthy for the problem of climate change, even if no outcome or causal responsibility could be identified or assigned to any of our actions, each of us would still have an obligation to remedy it. And this is because of the structures connecting us to one another and the relational dimension that constitutes human society, in which relationships are the bonds that connect people, each single person with the others, in the web which turns a crowd of single individuals into a group of people living together.

There is, I shall argue, a moral obligation to help that is due to the relational dimension binding individuals together, obliging us to renounce the maximization of our own interests in order to secure the basic needs of all human beings. Now, someone could object that there is no relational structure binding someone like oneself to the individuals in Bangladesh, for example, one of the countries most vulnerable to the possible effects of climate change (see Bose 2015). This consideration is, in my opinion, wrong. There are, I could argue, economic structures binding me to the people living in that faraway country, where many of the products I buy are produced, and there are political structures binding our nations together, as evidenced by the existence of embassies. But even if the economic and political structures did not exist, there would still be a relationship binding me to the community in Bangladesh, vulnerable to the phenomenon of climate change, and this relationship is given, I would say, by the fact that I am in a position to know about that community’s vulnerability. This means I have the choice of acting, or not acting, in order to provide help. As long as someone is in a position to know about the existence of someone else, as I am in a position to know about the situation of those individuals living in Bangladesh, then even doing nothing to help them is an action within an interactional structure binding one to those individuals. And it is this fact – that there is an interpersonal relationship binding me to someone in need, to someone vulnerable (see Goodin 1985, ch. 4, and Scheffler 1997, pp. 189–209) – that gives me reasons to provide aid.

PART III – WHAT DO I MORALLY HAVE TO DO ABOUT CLIMATE CHANGE?

The climate is changing. What do I morally have to do about it? This essay has argued that there is something I morally have to do about it and that this something is strictly related to the interactional structures I am part of. What is missing is an answer to the question of how to be determinant in a concrete sense. What I would need is a list of those actions I should morally undertake in order to do my part. This omission is not fortuitous, just as it is not fortuitous that the question is set in the first person singular: its answer can only be found from the first-person perspective. Given that we all morally have to be determinant through and because of the interpersonal structures we are part of, each of us will be in a position to be determinant in different ways, according to his or her means, his or her role within a political, economic or religious community, and the interpersonal bonds he or she is part of.

Being determinant. This is what I morally have to do about climate change, even if I were the only one reducing my emissions, teaching myself and others respect for nature, and placing the safety of global basic health standards above the optimization of my own comfort. Even if the climate still changes, I have to put pressure on my political representatives to find proper solutions to the problem of global warming, and to become a proper representative of my political community myself if none of my representatives will defend the interests of our earth – which is in my own interest and the interest of those who are most vulnerable to climate change, most vulnerable to my choices and actions. As a human being and as the person I want to construct through my choices and actions, I morally have to provide them with help.

REFERENCES

  • Bose, Pablo S., “Vulnerabilities and displacements: adaptation and mitigation to climate change as a new development mantra,” Area, doi: 10.1111/area.12178, 2015.
  • Goodin, Robert, Protecting the Vulnerable (Chicago, IL: University of Chicago Press, 1985).
  • Kagan, Shelly, “Do I Make a Difference?,” Philosophy and Public Affairs, Vol. 39, Issue 2, Spring 2011.
  • Korsgaard, Christine, Self-Constitution: Agency, Identity, and Integrity, Oxford University Press, 2009.
  • Lawford-Smith, Holly, “Benefiting from Failures to Address Climate Change,” Journal of Applied Philosophy, Vol. 31, No. 4, 2014.
  • Parfit, Derek, “Five Mistakes in Moral Mathematics,” in Reasons and Persons, Oxford University Press, pp. 67–86, 1986.
  • Scheffler, Samuel, “Relationships and Responsibilities,” Philosophy & Public Affairs 26, pp. 189–209, 1997.
  • Young, Iris Marion, “Responsibility and Global Justice: A Social Connection Model,” Social Philosophy and Policy 23, pp. 102–130, 2006.

The Doctrine of the Mean in Aristotle’s Virtue Ethics: Still Useful?

by Owen Kelly

University of Edinburgh

This essay offers a description of Aristotle’s Doctrine of the Mean and the context within which it operates in his moral theory, and considers how it could be applied to some characteristically modern virtues.

  1. Introduction and theoretical context

The Doctrine of the Mean is central to Aristotle’s account of the virtues. It is not only a tool for describing, analysing and codifying them but is also intrinsic to a correct understanding of their nature. Whether a particular virtue is an excellence of character or of intellect, a distinction Aristotle draws at the beginning of Book II of the Nicomachean Ethics, it necessarily lies on a mean. This emerges where he introduces the doctrine as part of the very definition of excellence of character. He first identifies the genus of such excellence as a ‘state’. Within that genus, he then looks for what distinguishes excellence of character from other states:

“Excellence, then, is a disposition issuing in decisions, depending on intermediacy of the kind relative to us, this being determined by rational prescription and in the way in which the wise person would determine it.” (NE 1106b 36 – 1107a 2 – trans. Rowe, 2002 – all references in this essay to the Nicomachean Ethics (NE) are to this translation).

So a virtue – an excellence – is aimed at making choices from a range of possibilities that lie on a continuum, seeking the mean point on that continuum, which is determined by reference to the moral agent him or herself and all the relevant circumstances. Moreover, the choice is made by the use of reason; and the measure of whether all these criteria are met is, in turn, whether the choice is made in the way a person of ‘practical wisdom’ (a central Aristotelian virtue) would make it.

More broadly, Aristotle’s approach to ethics is based on the working assumptions that the good and fulfilled life is the objective for all people; that exercising virtue as an excellence of character contributes to the realisation of that life; and that the overall goal of investigating morality in the first place is practical, rather than theoretical (“So when one looks at everything that has been said up to this point, one should be bringing it to bear on one’s life as actually lived, and if it is in harmony with what one actually does, it should be accepted, while if there is discord, it should be supposed mere words.” NE 1179a 20 – 23).

  2. The nature of the mean

Aristotle is careful to explain that the mean is not arrived at by finding equidistance between the excess and the deficiency – an arithmetic process – but by finding the point most appropriate, all circumstances and factors considered. We must “look to what suits the occasion” (1104a10). In particular, an act must be ‘for the sake of the noble’ (1117b31). It follows from this that there will be occasions when the extreme of emotion – extreme anger, for example – is justified and, indeed, virtuous.

Aristotle gives practical guidance on how to find the mean. He recommends we first avoid the extreme or deficiency that is most vicious, since going to that extreme would be the worst error we could make (1109a 30). If we are aiming, for example, to be courageous but not foolhardy, we should first aim to avoid cowardice, which is the vice of deficiency in this case. He further recommends (1109b 3) that we consider our own inclinations and tendencies, and compensate for them. So if I know myself to be naturally fearful, I should take that into account in seeking the mean that represents the excellence of being courageous. He also notes (1109b 8) that it is difficult to be impartial about pleasures, since we are naturally drawn towards them, and suggests we apply the same discount to them as the elders of Troy did to Helen, who saw the threat she presented to their city while still admiring her beauty.

In Book VII of the Nicomachean Ethics, Aristotle explains that the good person will identify the mean without effort or self-doubt, because he or she enjoys acting virtuously and is disposed towards doing so; that the self-controlled person will find the mean despite desires or proclivities that would, without self-control, lead him or her astray; that the person lacking in self-control will not find the mean but will feel guilty about missing it, recognizing that the mean exists; and the person of bad character will miss the mean but feel good about doing so.

Aristotle’s doctrine is not simply an appeal for moderation in all things. It is more sophisticated, holding that deficiency and excess are equally in error. Failing to show appropriate anger at gross injustice is as bad as losing one’s temper over a triviality. And the mean concerns frequency of action as well: one can respond in accordance with the mean on one occasion, but consistency is also required, and one needs to find the mean on each occasion it is called for. To act in accordance with the mean is to “feel and manifest each emotion at such times, on such matters, toward such people, for such reasons, and in such ways as are proper.” (Urmson, p.161). Moderation as a virtue is compatible with the doctrine but it is not part of it.

Aristotle asserts that a virtue is not purely an intellectual condition but a state, or disposition, entailing emotion as well as knowledge and reason. Excellence of character, in exercising its choices, involves emotions as well as actions. The virtuous person aims for the mean between extremes of emotion and between extremes of action.

  3. Putting the mean to modern use

One obvious criticism of Aristotle’s approach to ethics, and perhaps of virtue ethics in general, is that it is of little practical value – it doesn’t tell us how to act in any given situation. His emphasis on the need for judgement based on circumstances is, however, a recognition of an inescapable feature of existence for any moral agent, in the world as it is known to us, namely that situations requiring moral choice are infinitely variable. While Aristotle’s theory, of which the doctrine is a central part, is not easy to grasp and put to practical use, it is nevertheless immune to the positing of endless variations in circumstances that undermines utilitarian or deontological attempts to create rules or codes of morality.

Losin’s analysis of the Doctrine of the Mean brings out how much self-analysis and self-management the doctrine involves. It calls to mind some of the structured approaches to personality control and projection used in management education (Myers-Briggs is a prominent and widely-used example) and has an obvious relevance in such settings.

Thinking of virtues with modern currency but perhaps unacknowledged in Aristotle’s time, how might we apply the doctrine to them? Independence of mind would be one such virtue, at least in the Western world. Since the Enlightenment, the ability to reason independently and to question authority has been seen as a virtue. If we take this as the mean state, the excess would be dogmatism; and the deficiency would be an unquestioning or supine acceptance of authority. This virtue, if it is such, is intellectual in nature rather than a disposition of character.

Another modern virtue might be open-mindedness, or lack of prejudice. If we take that to be the mean state, the excess would be naivety, or a lack of a critical faculty; and the deficiency would be racism and other forms of prejudice. This, again, is an intellectual virtue.

Humanity, on the other hand, is a disposition of character with particular modern currency, living as we do in an age when human rights are generally accepted as existing and as morally important. How could humanity fit into the Doctrine of the Mean?

An excess of humanity might be described as adopting an exclusively anthropocentric view of things and failing to take into account the interests of, say, wild animals. But this would be an unreasonably literal interpretation of the word ‘humanity’ and would not reflect the moral content of the word ‘humane’ as it is used every day, which centres on the treatment of human beings in an appropriate fashion, avoiding cruelty and degradation.

So perhaps an excess of humanity would be a failure to punish or otherwise act in a retributive manner where doing so is accepted as necessary to support justice within society; being ‘too soft’ and excessively merciful, in other words. That would also be a form of acting unjustly, in not giving the appropriate punishment to secure rectificatory justice. So it would be a form of moral weakness: ‘not having the stomach’ for harsh but necessary action. The deficiency, by contrast, would be inhumanity; but punishment need not be inhumane to be effective in achieving rectificatory goals, and it is difficult to imagine how inhumanity could be morally justified in Aristotelian terms.

So perhaps ‘humanity’ is not a virtue on a mean and is, in Aristotle’s terms, in the same category as malice, in that it has an absolute quality. He says of some aspects of bad character that “…in some cases they have been named in such a way that they are combined with badness from the start, as eg with malice, shamelessness, grudging ill will and, in the case of actions, fornication, theft, murder: for all these, and others like them, owe their names to the fact that they themselves – not excessive versions of them, or deficient ones – are bad.” (NE 1107a 9 – 13). Humanity and, perhaps, unconditional love, may be examples of the unreservedly virtuous counterpoints to these unreservedly bad emotions and actions.

  4. Conclusion

The Doctrine of the Mean is complex, though deceptively simple to the casual eye, which easily mistakes it for an argument for moderation in all things. Its strengths lie in its recognition and accommodation of the limitless permutations of moral decision-making; its analytical and descriptive power; and its grounding in human decision-making, in finding a mean ‘relative to us’, rather than in appeals to transcendental forms or entities. Its weaknesses lie in its formulaic, almost tabular imposition of structure on emotions, which are subject to almost infinite calibration; and the need to reinterpret it to accommodate concepts such as justice and humanity, which cannot be fitted into a trichotomous framework of an excess, a mean, and a deficiency.

 

 

Owen Kelly

March 2016

 

 

 

References

Aristotle, Nicomachean Ethics, translated by Christopher Rowe (with commentary by Sarah Broadie) Oxford University Press, 2002

Losin, Peter, ‘Aristotle’s Doctrine of the Mean’, History of Philosophy Quarterly 4/3, July 1987

Urmson, J O, ‘Aristotle’s Doctrine of the Mean’, in Rorty, Amélie Oksenberg (ed.), ‘Essays on Aristotle’s Ethics’, University of California Press, 1980

Mixed Inferences: Not a Problem for Pluralism about Truth

1. Background

1.1 Roadmap

Christine Tappolet argues that mixed inferences pose a problem for pluralistic theories about truth (Tappolet, 1997; 2000). I argue that responses to the problem on behalf of pluralists successfully meet Tappolet’s challenge, and that the most viable responses turn, at heart, on the idea that while a stipulated generic truth predicate seems to do the work of explaining the validity of mixed inferences, it is the other truth predicates that do the work of explaining why the premises and conclusion are true to begin with. The premises and conclusion of Tappolet’s mixed inference might be true in the generic sense, but only because they are first true in some other way—as a result of social agreement or correspondence to the facts, or some further way.

I will first sketch a background on pluralistic theories about truth and review the challenges posed by mixed inferences. After this, I will analyze several philosophers’ responses to these problems and explain how (some of) these responses successfully reply to Tappolet’s challenge.

1.2 Pluralism About Truth

Pluralism is the idea that there are different ways of being true. More specifically, in different discourses, or subject matters, the truth predicate attributes different properties (Wrenn, 2015: 133). For example, a pluralist about truth might place the proposition Wet cats are funny in the domain of comedy, where one way of being true, Tsa, would apply because all propositions that fall under this domain are truth-assessable in terms of social agreement. The proposition This cat is wet might be placed in the domain of scientific claims or claims about the state of affairs of the world, and a different way of being true, Ta, would apply because all propositions that fall under this domain are truth-assessable in terms of correspondence to the facts.

Tappolet’s problem of mixed inferences might seem especially well placed to deal a deathblow to Strong Alethic Pluralism, which is the view that there is more than one truth property, and that no one truth property can explain the truth of all true propositions (Pedersen, 2006: 106).

1.3 Mixed Inferences

Tappolet demonstrates that mixed inferences pose a problem to pluralist theories about truth by testing the pluralists’ central claim that different types of truth predicates correspond to sentences of different subject matters (Tappolet, 1997: 209–210). She presents the following deductively valid argument, or ‘mixed inference,’ whose premises are sentences from two different subject matters—and thus, two different truth predicates would apply, if the pluralist about truth is correct (Tappolet, 1997: 209–210):

  1. Wet cats are funny.
  2. This cat is wet.
  3. Therefore, this cat is funny.

In the above argument, (i) is a type of sentence that, according to the pluralist about truth, does not involve realism about the entities of the sentence, and is truth-assessable in terms of a minimal or ‘lightweight’ truth (pluralists would classify comical or moral sentences under this type). For (i), the pluralist about truth might think of the truth predicate in the same way a coherence theorist about truth would.

And (ii) is a type of sentence that the pluralist about truth holds is truth-assessable in terms of a ‘heavyweight’ truth that implies a realist view of the subject matter. While (i) is an allegedly non-descriptive sentence, in that it does not assert any fact of the matter about the world, (ii) is a descriptive sentence. A pluralist about truth might think of the truth predicate here in the same way a correspondence theorist about truth would.

Tappolet argues that if the pluralist about truth were correct, and one type of truth predicate explained the truth of (i) while a different type of truth predicate explained the truth of (ii), then the argument would not be valid. However, it clearly is valid, and therefore, Tappolet argues, there must be one truth predicate that applies to all three sentences: since the argument is valid, both (i) and (ii) must be assessable in terms of the same truth predicate—not different ones, as the pluralist about truth maintains.

In explaining the above argument, Tappolet appeals to the idea that truth is what is preserved in valid inferences. Her challenge to the pluralist about truth is this: if we already have one truth predicate that seems to apply to all three sentences in the argument above, why do we need multiple truth predicates?

1.4 Tappolet’s Trilemma

According to Tappolet, in the face of the above challenge, the pluralist must do one of three things: first, claim that in addition to the one generic truth predicate that seems to apply to all three sentences, there are different truth predicates that apply to different sorts of sentences (Tappolet, 2000: 382–383). The problem is, Occam’s razor should hold. That is, if monistic and pluralistic theories about truth can explain the validity of the argument equally well, then we should rally behind the simplest theory (i.e. monistic theories about truth in general, or a generic truth predicate in particular) (Tappolet, 2000: 383). The bulk of responses to Tappolet’s challenge focus on this horn of the trilemma.

Second, the pluralist about truth might deny that mixed inferences are valid. The problem is, they clearly are valid. None of the responses this paper concerns itself with go this route—and it would seem strange to do so, because mixed inferences seem intuitively valid.

Third, the pluralist about truth may deny the classical account of validity, which says that an argument is valid if and only if the truth of the premises necessitates the truth of the conclusion. The problem is, it seems that the classical account of validity should hold and our theories about truth should not haphazardly throw it out. While none of the responses this paper concerns itself with go this route either, one plausible alternative might be to expand the definition to account for mixed inferences, if needed, rather than throw out the classical account of validity altogether. However, this paper will say nothing more on this matter, and instead turn to responses that tackle the first horn of Tappolet’s trilemma, which seems a far more fruitful approach.

2. Proposed Solutions to the Problem of Mixed Inferences

2.1 Beall’s Solution

JC Beall sidesteps Tappolet’s trilemma by appealing to many-valued logics, which allow for the possibility of there being more than one way for a proposition to be true and do not restrict the number of truth values to just two, true and false; rather, Beall seems to imply there are as many designated values as there are different ways of being true (Beall, 2000: 381–382). By appealing to the concept of a designated value, where every way of being true counts as a designated value, the standard account of validity as necessary truth preservation can be retained (Lynch, 2004: 388–389).

Using many-valued logics, we can represent the truth of (i) by 1 and the (different) truth of (ii) by ½ (Beall, 2000: 381–382). For this example, Beall assumes there is exactly one way to be not-true, which we can represent by 0 (Beall, 2000: 381–382). In many-valued logics, an argument is valid if and only if the conclusion cannot be false (0) if all the premises are designated (1 or ½) (Beall, 2000: 382). Alternatively, an argument is valid if there is no case where the premises are designated (1 or ½) and the conclusion fails to be designated (0) (Beall, 2000: 382).

Thus, Beall argues, pluralists may simultaneously maintain (1) that (i) and (ii) represent different ways of being true, and (2) that the argument above is valid, since there is no case where the premises are designated (1 or ½) and the conclusion fails to be designated (Beall, 2000: 382).
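
To see how the designation-based account of validity is meant to work, here is an illustrative valuation of Tappolet’s mixed inference. The particular assignment is my own sketch, following the mapping suggested later in this section (1 for truth by something like social agreement, ½ for truth by correspondence to the facts); Beall himself stipulates only that 1 and ½ are designated and 0 is not.

  • Wet cats are funny: value 1 (true by something like social agreement; designated).
  • This cat is wet: value ½ (true by correspondence to the facts; designated).
  • This cat is funny: value 1 (true, again, by something like social agreement; designated).
  • Validity check: the inference is valid just in case there is no admissible valuation on which both premises take a designated value (1 or ½) while the conclusion takes 0; since, as Beall maintains, there is no such case, the mixed inference comes out valid without appeal to a single generic truth value.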

Beall’s appeal to many-valued logics fails to deliver a deathblow to Tappolet’s argument. Tappolet herself makes a good point: if all three sentences are designated, is that not a kind of truth? (Tappolet, 2000: 384). Michael P. Lynch makes a similar point: designation is a thinly veiled truth concept and is doing all the work in explaining the validity of the mixed inference (Lynch, 2004: 389).

However, a response to Tappolet’s rejoinder might go something like this: Beall himself says that designated values just are the different ways of being true, and so he does not dispute that they are kinds of truth. What is at issue is Tappolet’s contention that sentences with designated values (e.g. 1 or ½) can be further reduced to something more elemental, namely to the bare fact that the sentences are designated. This reduction is not compelling, since each designated value maps onto a different way of being true, a way of being true that cannot itself be further reduced. For example, the designated value of 1 maps onto something being true as a result of social agreement, and a designated value of ½ maps onto something being true as a result of correspondence to the facts. Being designated in the general sense might be akin to being true in some generic sense. However, while the premises and conclusion of Tappolet’s mixed inference might be true in the generic sense, this is only because they are first true in some other way: as a result of social agreement or correspondence to the facts, or some further way. Being designated in the general sense is thus dependent on being designated in a particular sense (e.g. 1 or ½). So it seems that Tappolet has it backwards: the designation of 1 or ½ cannot be reduced to being designated in general; being designated in general depends on first being designated in some particular way.

2.2 Cotnoir’s Solution

Aaron J. Cotnoir’s solution to Tappolet’s problem of mixed conjunctions, while only indirectly related to the problem of mixed inferences, can also be applied to meet Tappolet’s challenge. Mixed conjunctions are those with conjuncts from different subject areas (for example, ‘this cat is wet and it is funny’) (Tappolet, 2000: 384–385). Tappolet’s challenge is this: mixed conjunctions can obviously be true (Tappolet, 2000: 384–385). But if the pluralist about truth is correct, and each conjunct is true in a different way, in what way is the conjunction itself true? It seems that the conjunction must be true in some single way, and the natural candidate is the generic sense.

Cotnoir sees no reason to think that a generic truth predicate would make other ways of being true redundant, and suggests that a generic truth predicate is not incompatible with pluralism, if the generic property is defined by, or dependent on, the other ways of being true (Cotnoir, 2009: 478). This sounds very close to Douglas Edwards’s view about mixed conjunctions, where the truth of the conjunction is entirely dependent on the truth of its conjuncts (i.e. p & q is true because p is true in one way, T1, and q is true in a different way, T2) (Edwards, 2008: 147).

Cotnoir successfully answers Tappolet’s challenge about why we should admit other ways of being true into a theory of truth, as opposed to just the one generic truth property. The generic truth property, in and of itself, lacks any explanatory power as to why a proposition is true to begin with: a proposition is generically true only if it is true in some further way. Being true in the generic sense is defined by, or dependent on, the other ways of being true.

2.3 Pedersen’s Solution

Nikolaj Jang Linding Pedersen maintains that mixed inferences fail to make a dent in strong alethic pluralism (Pedersen, 2006: 107). He posits that alethic pluralists can appeal to a sparse view of properties to get around Tappolet’s assertion that there must be one generic truth predicate that applies to all three sentences in her mixed inference (Pedersen, 2006: 108). Sparse properties ‘carve things up at the qualitative joints’; abundant properties do not (Pedersen, 2006: 108). For example, the property of being a cat is a sparse property: all items in the set of cats are qualitatively similar, sharing similarities in appearance, behavior, evolutionary history, et cetera (Pedersen, 2006: 108). However, the property of being either a cat or a real number is an abundant property, since there is no qualitative similarity between being a cat and being a real number (Pedersen, 2006: 108). That is, the only property that all items in the set of cats-or-real-numbers have in common is being a member of that set (Pedersen, 2006: 108).

It seems that Pedersen’s point can be expanded as follows. Say we take Tappolet’s lead and maintain that all three propositions (Wet cats are funny, This cat is wet, and the conclusion that necessarily follows, This cat is funny) must belong to the set generically true in order for the traditional notion of validity to hold. Under the sparse conception of properties, there is no qualitative similarity between the ways the propositions are true: what makes it true that wet cats are funny, which might be the result of something like social agreement, is not qualitatively similar to what makes it true that a particular cat is wet, which might be true as a result of something like correspondence to the facts. The property of being generically true therefore does not qualify as a property at all (Pedersen, 2006: 109). This is because Tappolet’s posited generic truth property is not a qualitative property, but only a logical one (Pedersen, 2006: 109).

While Pedersen makes an interesting point, he seems to leave something out. Since mixed inferences are deductively valid, it seems evident that there is a generic truth property, whether it is qualitative or not. That is, truth is what is preserved in any valid inference, and for our purposes, we will assume that the truth that is preserved is the generic truth property. The generic truth property plays an important role: it makes the truth of Wet cats are funny comparable to the truth of This cat is wet, and so facilitates the logical leap to the conclusion, This cat is funny, which necessarily follows. However, upon close examination, the generic truth property is merely a bucket that captures all other ways of being true, one which arises out of logical necessity. It does no work in explaining why the premises or conclusion are true to begin with (or, as Pedersen might say, the generic truth property is an abundant property, and the only thing that the premises and conclusion have in common is that they are in the same set, generically true) (Pedersen, 2006: 109). In other words, the fact that the premises and conclusion are generically true depends on the ways that each of them is (independently) true.

For example, it is true, Tsa, that Wet cats are funny, because in the domain of comedy, in which funniness is decided by something like social agreement, it is the case that wet cats are funny; and it is true, Tc, that This cat is wet, because in the domain of scientific claims or states of affairs of the world, in which whether or not something is wet is decided by correspondence to the facts, it is the case that this cat is wet. Taking this example further, if a proposition is true (in this case, by correspondence to the facts or social agreement), then it is true in a further way: generically true, Tg. This provides an answer to Tappolet’s question, ‘why should we need the many truth predicates instead of the one that does the inferential job…?’ (Tappolet, 2000: 384). The generic truth property facilitates logical inference, but holds no other meaning in itself.
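
The dependence claim just sketched can be set out schematically. The notation is mine and is only meant to summarise the view described above, with Tsa, Tc and Tg standing for truth by social agreement, truth by correspondence to the facts, and generic truth respectively.

  • Tsa(Wet cats are funny): true in the domain of comedy, by something like social agreement.
  • Tc(This cat is wet): true in the domain of worldly states of affairs, by correspondence to the facts.
  • For any proposition p: if p is true in some domain-specific way (Tsa, Tc, or some further way), then p is also generically true, Tg(p).
  • Not conversely: Tg(p) never holds on its own, but only because some domain-specific truth predicate already applies to p.

On this picture, Tg does the inferential bookkeeping in the mixed inference, while Tsa and Tc do the explanatory work.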

3. Conclusion

One of the main motivations behind pluralism about truth is that all monistic theories of truth share a common weakness: the Scope Problem (Wrenn, 2015: 134). That is, monistic theories about truth do not seem to apply successfully to truths of all subject matters (Wrenn, 2015: 134). For example, a correspondence theory of truth seems to explain very well why a particular cat might be wet, but seems to do a poor job of explaining why wet cats might be funny. A generic truth predicate would explain the truth of ‘a particular cat is wet’ and of ‘wet cats are funny’ (and of all true propositions, for that matter) in the same way, which seems wrong, because it sidesteps why the propositions are true to begin with.

In this paper, I pursued two main tasks. First, I sketched a background on pluralism in general and strong alethic pluralism in particular, and on Tappolet’s problem of mixed inferences; second, I outlined several responses to the problem and reflected on each of these in turn.

In the first horn of her trilemma (and the horn that is most easily tackled), Tappolet argues that the pluralist about truth must claim that in addition to the one generic truth predicate that seems to apply to all three parts of the mixed inference, there are further truth predicates that apply (Tappolet, 2000: 382–383). I argue that the most viable responses to Tappolet’s challenge share, at heart, the idea that while a stipulated generic truth predicate seems to do the work of explaining the validity of mixed inferences, it is the other truth predicates, for example Tsa and Tc, that do the work of explaining why the component propositions are true to begin with. The premises and conclusion of Tappolet’s mixed inference might be true in the generic sense, but only because they are first true in some other way, as a result of social agreement or correspondence to the facts.

Mixed inferences might pose a problem for pluralists, but they also pose a problem for monists about truth—in a different way. A correspondence theorist about truth, for example, might very easily explain why it is true that a particular cat is wet, but struggle to explain why wet cats are funny; a theorist who subscribes to a social agreement theory about truth might very easily explain why it is true that wet cats are funny but struggle to explain why a particular cat is wet. Monistic theories about truth would have a hard time explaining why all parts of a mixed inference are true. How, then, could we ever make the logical leap to this cat is funny?

References

Beall, JC. (2000). On Mixed Inferences and Pluralism about Truth Predicates. The Philosophical Quarterly, 50.200: 380–382.

Cotnoir, A. J. (2009). Generic truth and mixed conjunctions: some alternatives. Analysis, 69.3: 473–479.

Edwards, D. (2008). How to Solve the Problem of Mixed Conjunctions. Analysis, 68.2: 143–149.

Lynch, M. P. (2004). Truth and Multiple Realizability. Australasian Journal of Philosophy, 82.3: 384–408.

Pedersen, N. J. L. (2006). What Can the Problem of Mixed Inferences Teach Us About Alethic Pluralism? The Monist, 89.1: 102–117.

Tappolet, C. (1997). Mixed inferences: a problem for pluralism about truth predicates. Analysis, 57.3: 209–210.

Tappolet, C. (2000). Truth Pluralism and Many-Valued Logics: A Reply to Beall. The Philosophical Quarterly, 50.200: 382–385.

Wrenn, C. (2015). Truth. Cambridge: Polity Press.

 

The Epidemic of Attention-Deficit Hyperactive Disorder

by Laurie Scarborough

University of Cape Town

Attention-Deficit/Hyperactivity Disorder (ADHD) symptoms began to be documented about a century ago (Kos & Richdale, 2004). Epidemiological studies show that prevalence rates have soared since then, from about 3% in 1980 (American Psychiatric Association, 1980) to modern estimates as high as 18% (Rowland et al., 2002). Why the sudden spike in prevalence rates? There are multiple reasons. One could be that ADHD has become a desired diagnosis for certain parties. Another is that modern society has created an environment in which behaviours associated with ADHD thrive.

 

ADHD: A desired diagnosis

 

With ADHD prevalence on the rise, one must ask why these rates are so high. Is it simply that more and more people with ADHD are being noticed? Or are there factors that make ADHD a desired diagnosis for certain parties? This may seem a foreign concept in the realm of mental illness, because nobody wants to have a mental illness. Of course, when I say “desired” I do not mean that the child desires the diagnosis, or that the parent, teacher or anyone else wishes the behaviour on the child, but rather that certain parties may have agendas and interests that motivate seeking out diagnoses, and that this continues to increase prevalence rates of the disorder.

 

Parents, for example, may grow tired of having an inattentive, hyperactive, or what they perceive as a “badly behaved” child, and a diagnosis allows the blame for bad parenting to shift away from them and onto the child (Blackman, 1999; Graham, 2006; Stolzer, 2007; Timimi & Taylor, 2004). A diagnosis thus meets the needs of parents who do not want to take responsibility for bad parenting or for their child’s behaviour (Smelter et al., 1996). With the help of a diagnosis, parents also gain access to medication for their children, which can be seen as an easy way out of parenting, a quick fix: the problem can be solved with a pill, and suddenly the parent becomes the model parent with a well-behaved child (Rafalovich, 2005; Smelter et al., 1996). The absolution from effort and blame on behalf of the parents of ADHD children may be motivating parents to seek diagnoses for their children.

 

Teachers are another party with an interest in diagnosis. Teachers want control of their classroom, including control over the disruptive child in their class. Teachers have become very involved in the diagnosis of ADHD (Saddichha, 2010), and they could also be pushing parents to have their children seen by clinicians. Schools in America receive funding from the government for educating disabled children, including children with ADHD (Stolzer, 2007), so it is in their interests to educate more children with ADHD. This could lead schools to encourage parents and teachers to have children taken to specialists and diagnosed. The financial implications of medical aid reimbursements for ADHD medications also make an ADHD diagnosis more attractive to parents and to adults with ADHD who need these rebates (Blackman, 1999; Rafalovich, 2005).

 

There is incredible pressure to perform in the educational sphere and at a professional level (Manne, 2001), and the idea of abusing ADHD medication as a performance enhancer has taken hold in modern culture. Adults and teenagers may therefore seek diagnoses of ADHD in order to gain access to ADHD medications that boost performance at school, university or work, even if they do not truly meet the criteria for ADHD. This would obviously inflate prevalence rates of the disorder in the teenage and adult cohort.

 

Studies show that 99% of people diagnosed with ADHD are treated with stimulant medication (Stolzer, 2007; Stolzer, 2009). This means that pharmaceutical companies have something to gain every time someone is diagnosed with ADHD, and so it is in their interests to push for more diagnoses to be made (Stolzer, 2007; Stolzer, 2009). Saddichha (2010) goes so far as to suggest that pharmaceutical companies “disease monger”, convincing people that they are sick so that they meet criteria for a diagnosis when in fact they do not. More worrying still is that our own diagnostic manual, the DSM, has been influenced by these pharmaceutical companies. Lardizabal (2012) reports that 56% of the APA members who contributed to the DSM-IV and DSM-IV-TR diagnostic criteria have financial ties to pharmaceutical companies, which means they have a financial incentive to keep disorders that are commonly treated with psychoactive drugs, ADHD among them, in the DSM. This is a clear conflict of interest, and it adds yet another party with an interest in seeing ADHD diagnoses made.

 

As we can see, there are several groups that have an interest in promoting the continued diagnosis of ADHD: parents, teachers, schools, medical aids, adults seeking medication, and pharmaceutical companies all have their reasons. It is important to note that the child, of course, does not want a diagnosis of ADHD. As Hacking puts it, human kinds have moral value, and so we care about how we are labelled and categorised (Hacking, 1995); that is why we must be careful and mindful of these diagnoses. While a diagnosis may earn a child sympathy and make people want to help (Smelter et al., 1996), it could also lead to stigmatisation and bullying.

 

The integrity of the ADHD diagnosis

 

The incidence of ADHD is ever increasing, and the fact that ADHD is such a highly diagnosed disorder may be due to the way it is diagnosed. Perhaps clinicians are diagnosing too readily, or misdiagnosing. Are they really diagnosing disordered behaviour, or are they merely noting individual differences between people and pathologizing those behaviours?

Diagnostic tests need to be valid, reliable, normed on appropriate samples and used under standardised conditions (Hunsley et al., 2003). However, there are often discrepancies in diagnosis between assessors (Armstrong, 1996), suggesting that the tests do not correlate well across raters and are not very reliable.

 

The assessment for ADHD is very subjective (Stolzer, 2009) because there are no physical, biological, metabolic or “disease” markers of ADHD (Baughman, 1999; Grigg, 2003; Stolzer, 2007). The assessments are based on the subjective opinions of teachers and parents (Stolzer, 2007), who have vested interests in the outcome, rather than on speaking to the child in question (Bratter, 2007), which brings into question the integrity of the eventual diagnosis. A teacher wants the child to behave in their classroom, while a parent wants a well-behaved child. The teacher also does not want to appear incompetent, or to seem as if they are boring the child into being an inattentive, hyperactive monster (Bratter, 2007). The opinion of the teacher thus becomes biased and problematic in a diagnostic situation. The questionnaires used for ADHD diagnosis usually consist of phrases followed by “never”, “rarely”, “often”, “sometimes” and “always” (Stolzer, 2007). However, “rarely”, “often” and “sometimes” are not objectively quantifiable values; they are subjective linguistic terms that can be interpreted in various ways (Stolzer, 2007), making an ADHD diagnosis even more subjective. Tests like these also raise the question of whether traits and behaviours like those prevalent in ADHD are really quantifiable, and whether we should be measuring them in a way that abstracts them quantitatively (Hornstein, 1988). Are we losing something about these qualities by doing this to make them more “scientifically measurable”, when actually they are qualitative concepts, such as behaviour patterns (Hornstein, 1988)? Perhaps assessments for ADHD should not rely on subjective questionnaires that attempt to quantify qualitative concepts, and should instead be more child-focused, at the very least involving the child in part of the interview by asking them about their own behaviour.

 

Another criticism of ADHD diagnoses is that ADHD often appears to pathologize normal childhood behaviour (Grigg, 2003). Studies in Canada found that being born in December rather than January was a strong predictor of an ADHD diagnosis (Frances, 2012). A similar study in Virginia found that 68% of children young for their grade were medicated for ADHD (Watson et al., 2013). This means that children who are simply younger than their peers are being diagnosed with ADHD. This is very worrying, because it means that clinicians are singling out children and pathologizing normal developmental immaturity (Frances, 2012). The DSM-5 criteria specifically state that symptoms must occur “to a degree that is inconsistent with developmental level” (APA, 2013), so why are clinicians struggling to maintain this criterion? It is because of the subjectivity of the diagnostic criteria: how can we define exactly what is and is not age appropriate? Again, this is a tricky situation for teachers and clinicians, and it brings into question the integrity of an ADHD diagnosis.

 

Boys are ten times more likely to receive an ADHD diagnosis than girls (Luise, 1997; Stolzer, 2007; Stolzer, 2009), with prevalence rates reaching as high as 20% (Watson et al., 2013). Are we not simply pathologizing what might be more typically masculine behaviour? Are hyperactivity and inattention just a set of behaviours associated with being male? If so, we are again pathologizing normal behaviour, and an ADHD diagnosis in these cases would be irrelevant.

 

Whether we are pathologizing typical developmental immaturity or normal masculine behaviours, people are being diagnosed with ADHD when in fact they may just exhibit behaviours that are normal or attributable to simple individual difference. If these people do not in fact have ADHD but are being diagnosed, this obviously inflates prevalence rates and brings into question the integrity of the diagnostic category.

 

The ADHD ecological niche

 

ADHD has only recently risen to epidemic proportions. A hundred years ago, nobody was singled out as having ADHD, and the DSM categorical system did not yet exist, so people were not being classified in this way. Does this mean that people have actually changed and recently become more hyperactive and less attentive, or are we just becoming better at noticing it? I would argue that society has changed so as to create an ecological niche in which behaviours associated with ADHD are more easily noticed, and that ADHD may be a transient mental illness (TMI) that has surfaced because of this niche. In his book Mad Travelers, Ian Hacking describes a TMI as an illness that appears for a period because an environmental factor (an ecological niche) facilitates its appearance (Hacking, 1998). A TMI can also favour certain genders, classes or other groups (Hacking, 1998), which is relevant for ADHD, a diagnosis that seems to favour male cohorts.

 

The advent of compulsory schooling and the abolition of child labour, at the end of the 19th century in Europe and the beginning of the 20th century in America (Van Drunen & Jansz, 2004a), saw children enter the education system, sometimes for the first time. Before this period childhood was not really an idea: children were thought of as small adults, and it was only after the publication of Ellen Key’s childcare book The Century of the Child in 1900 that the idea of childhood as a time for play and education began to take hold (Van Drunen & Jansz, 2004a). Children started being seen as young people in need of protection and guidance rather than as economic agents (Van Drunen & Jansz, 2004a). The opening of the first Child Guidance Clinic in the 1920s, which concerned itself with the mental health of children, only emphasised this (Van Drunen & Jansz, 2004a). Social management, the direction and organisation of social life at the level of both society and the individual (Van Drunen & Jansz, 2004b), intensified specifically towards children (Van Drunen & Jansz, 2004a), and education was one relevant area of this management.

 

Observability is important to an ecological niche (Hacking, 1998). Purposive parenting developed out of these new ways of understanding children. Teachers and parents were paying more attention to children, and their behaviour was more closely monitored and observed. Deviance would therefore be more easily noticed, and ADHD symptoms would be picked up quickly in settings like the school environment that demanded conformity.

 

For the first time children needed to sit quietly, concentrate for long periods on perhaps boring tasks with few breaks, and conform to classroom regulations. Individualisation, in which the individual distinguishes himself from the collective with his own idiosyncrasies, feelings, ideas about the world and distinct behaviours (Jansz, 2004), was an idea that had come to maturity by the time compulsory education reached society. Because the individual was now conceptualised as such, individual deviance was easily noticed in the classroom setting, and symptoms associated with ADHD would stand out.

 

From the 1920s to the 1960s the school career began to lengthen, because society demanded more complex education to prepare children for the outside world (Van Drunen & Jansz, 2004a). Some researchers have begun to question whether we have put too much pressure on children by expecting them to conform to behaviours that may not be natural to them at certain developmental stages, and have postulated that ADHD may be a function of modern-day society (Manne, 2001; Stolzer, 2007). Modern life and schooling may in fact elicit ADHD symptoms in children (Manne, 2001) and be a symptom of a disordered society (Stolzer, 2007) rather than of a disordered mind.

 

More recently there has been a societal shift towards emotionality and the psychologization of society (Furedi, 2004). Problems once thought of as social are now psychologised, and the problem is located in the individual rather than the collective (Furedi, 2004). Because of this, emotions and behaviour are taken much more seriously. There has also been a rise in therapeutic language and “deficit talk”, with psychological language becoming more prevalent and commonplace in the layman’s vocabulary (Furedi, 2004), so that the everyday person now has the vocabulary to talk about psychological distress, including ADHD. An ecological niche has thus arisen in which behaviours associated with ADHD are more likely to be spoken about openly and therefore identified more easily.

 

The opening of the aforementioned mental health clinic, the Child Guidance Clinic, highlights the beginnings of a psychologization of children (Van Drunen & Jansz, 2004a), meaning that the problems of children came to be thought of as psychological or mental rather than economic, physical, social or anything else. Any behavioural problems noticed in a child may therefore be automatically thought of as psychological, so when a teacher or parent notices something like ADHD symptoms, they may see it as a psychological problem with the individual (Furedi, 2004) rather than as a problem with the system, such as compulsory education, or with society more broadly. The problem with psychologization is the tendency to pathologize normal experiences and assign psychological and therapeutic labels to typical behaviour (Furedi, 2004). This is a danger with ADHD that many researchers and clinicians are now becoming concerned about: that clinicians are pathologizing normal developmental immaturity or normal masculine behaviour.

 

Evidence shows that media coverage of certain disorders is linked to increased prevalence of those disorders. The rate of Dissociative Identity Disorder, for example, was found to increase rapidly in America after the release of Flora Rheta Schreiber’s 1973 book about the disorder and the subsequent TV movie (Lilienfeld & Lynn, 2003). One only needs to scan briefly through parenting magazines to see that media coverage of ADHD has taken off recently. From 1988 to 1997, one media outlet mentioned specific DSM ADHD symptoms in articles 403 times (Schmitz et al., 2003). Certainly more people know about the disorder now than they did several decades ago, and this could be because of media coverage. In any case, more and more people, including laymen, are able to recognise the symptoms of ADHD and can notice these behaviours in their children or in themselves, making an ADHD diagnosis more likely.

 

Whether it is through the advent of compulsory schooling, the psychologization of society and the rise of therapeutic language, or an increase in media coverage resulting in heightened awareness of the disorder, an ecological niche in which ADHD can thrive has arisen. The disorder may well recede in years to come as society changes, but because of current environmental factors, ADHD has become a deviance that is easily noticed and identified.

 

Final thoughts

 

ADHD as an official classification has been under constant change since it entered the DSM system. It first entered the DSM-II in 1968 as Hyperkinetic Reaction of Childhood, and was required to be present in only one setting (Kos & Richdale, 2004). It was then amended to Attention Deficit Disorder (ADD) in the DSM-III and DSM-III-R, with distinctions drawn between hyperactivity, impulsivity and inattention (APA, 1980). The DSM-IV and DSM-IV-TR classified it as either ADD or ADHD, without a distinction between impulsivity and hyperactivity, but the disorder needed to be displayed in two or more settings (APA, 1994). What do all these changing criteria mean? A longer discussion is beyond the scope of this essay, but I think Ian Hacking’s notion of looping effects is relevant here. Perhaps while the category was changing, so were the category holders. The category changed because society did, and this evolution propelled further changes to the category, which in turn saw more changes in the people who hold the category (Hacking, 1995). Because people are constantly changing, the category needed to change constantly too.

 

The prevalence rates of ADHD are ever rising, and there are myriad reasons for this. ADHD as a desired diagnosis could explain why the disorder continues to be diagnosed by the people who have the power to do so, and an ecological niche that facilitates ADHD as a disorder only enhances its ubiquitous prevalence. The integrity of the ADHD diagnosis should therefore be kept under constant scrutiny, to ask whether assessments are really identifying disordered behaviour, rather than pathologizing normal experience or the symptoms of a disordered society.

 

References

 

American Psychiatric Association. (2013). DSM-5: Diagnostic and statistical manual of mental disorders (5th ed.). Washington, DC: Author.

American Psychiatric Association. (1980). DSM-III: Diagnostic and statistical manual of mental disorders (3rd ed.). Washington, DC: Author.

American Psychiatric Association. (1994). DSM-IV: Diagnostic and statistical manual of mental disorders (4th ed.). Washington, DC: Author.

Armstrong, T. (1996). ADD: Does it really exist? Phi Delta Kappan, 77, 424 – 428.

Baughman, F. A. (1999). The ADHD consensus conference: End of the epidemic. Brown University Child and Adolescent Behavior Letter, 15(2), 8.

Blackman, J. A. (1999). Attention-deficit/hyperactivity disorder in preschoolers: Does it exist and should we treat it? Pediatric Clinics of North America, 46, 1011 – 1025.

Bratter, T. E. (2007). The Myth of ADHD and the Scandal of Ritalin: Helping John Dewey Students Succeed In a Medicine-Free College Preparatory and Therapeutic High School. International Journal of Reality Therapy, 27(1), 4.

Frances, A. (2012). Better safe than sorry. Australian and New Zealand Journal of Psychiatry, 46(8), 695-696.

Furedi, F. (2004). Therapy culture. London: Routledge.

Graham, L. (2006, November). From ABCs to ADHD: The role of schooling in the construction of ‘behaviour disorder’ and the production of ‘disorderly objects’. Paper presented at the Australian Association for Research in Education 2006 Annual Conference, University of South Australia, Australia.

Grigg, W.N. (2003, August 23). Totalitarian medicine. The New American, 19-21.

Hacking, I. (1995). The looping effects of human kinds. In D. Sperber, D. Premack & A. James Premack (Eds.), Causal cognition: A multi-disciplinary debate (pp. 351-383). Oxford: Clarendon Press.

Hacking, I. (1998). Mad travelers: Reflections on the reality of transient mental illnesses. Cambridge, Massachusetts: Harvard University Press.

Hornstein, G. (1988). Quantifying psychological phenomena: Debates, dilemmas, and implications. In J.G. Morawski (Ed.), The rise of experimentation in American psychology (pp. 1-34). New Haven: Yale University Press.

Hunsley, J., Lee, C., & Wood, J. (2003). Controversial and questionable assessment techniques. In S.O Lilienfeld, S.J. Lynn & J.M. Lohr (Eds.), Science and pseudoscience in clinical psychology (pp. 39 -76). New York: Guilford Press.

Jansz, J. (2004). Psychology and society: An overview. In J. Jansz, & P. Van Drunen (Eds.), A social history of psychology (pp. 12-44). Oxford: Blackwell.

Kos, J. M. & Richdale, A. L. (2004). The history of attention‐deficit/hyperactivity disorder. Australian Journal of Learning Difficulties, 9, 22 – 24.

Lardizabal, A. (2012). Is financial gain to blame for the growing ADHD epidemic? Journal of Child and Adolescent Psychiatric Nursing, 25(3), 164-164.

Lilienfeld, S.O. & Lynn, S.J. (2003). Dissociative identity disorder. Multiple personalities, multiple controversies. In S.O Lilienfeld, S.J. Lynn & J.M. Lohr (Eds.), Science and pseudoscience in clinical psychology (pp. 109 -142). New York: Guilford Press.

Luise, L. (1997). ADHD: The classroom epidemic. Vegetarian Times, (241).

Manne, A. (2001, November). Setting the frame of the ADHD epidemic: Childhood under the new capitalism. In Public Seminar, Royal Children’s Hospital, Melbourne.

Rafalovich, A. (2005). Exploring clinician uncertainty in the diagnosis and treatment of attention deficit hyperactivity disorder. Sociology of Health & Illness, 27(3), 305-323.

Rowland, A. S., Lesesne, C. A., & Abramowitz, A. J. (2002). The epidemiology of attention‐deficit/hyperactivity disorder (ADHD): a public health view. Mental retardation and developmental disabilities research reviews, 8(3), 162-170.

Saddichha, S. (2010). Disease mongering in psychiatry: is it fact or fiction? World Medical & Health Policy, 2(1), 267-284.

Schmitz, M. F., Filippone, P., & Edelman, E. M. (2003). Social representations of attention deficit/hyperactivity disorder, 1988–1997. Culture & Psychology, 9(4), 383-406.

Smelter, R. W., Rasch, B. W., Fleming, J., Nazos, P., & Baranowski, S. (1996). Is attention deficit disorder becoming a desired diagnosis?. Phi Delta Kappan, 77(6), 429.

Stolzer, J. M. (2007). The ADHD epidemic in America. Ethical Human Psychology and Psychiatry, 9(2), 109-116.

Stolzer, J. M. (2009). Attention deficit hyperactivity disorder: Valid medical condition or culturally constructed myth?. Ethical Human Psychology and Psychiatry, 11(1), 5-15.

Timimi, S., & Taylor, E. (2004). ADHD is best understood as a cultural construct. The British Journal of Psychiatry, 184, 8 – 9.

Van Drunen, P. & Jansz, J. (2004a). Child-rearing and education. In J. Jansz, & P. Van Drunen (Eds.), A social history of psychology (pp. 45-89). Oxford: Blackwell.

Van Drunen, P. & Jansz, J. (2004b). Introduction. In J. Jansz, & P. Van Drunen (Eds.), A social history of psychology (pp. 1-11). Oxford: Blackwell.

Watson, G. L., Arcona, A. P., Antonuccio, D. O., & Healy, D. (2014). Shooting the messenger: The case of ADHD. Journal of contemporary psychotherapy, 44(1), 43-52.

 

Debasing Scepticism Revisited

by Changsheng Lai

University of Edinburgh

Abstract:

In this essay, I will criticise Brueckner’s objection to debasing scepticism and then provide a new solution to the debasing demon problem. I will first introduce debasing scepticism and highlight its alleged “merit”, namely that it threatens universal doubt. Some objections to debasing scepticism will be briefly analysed. After that, it will be shown that Brueckner’s objection concerning the KK principle is not satisfactory, because debasing scepticism can avoid relying on the KK principle. Finally, I will propound a new objection to debasing scepticism by rejecting an essential premise in Schaffer’s argument, namely evidential position closure. It will be revealed that this closure-like premise is not actually closed and cannot survive close analysis.

  1. Debasing Scepticism

Debasing scepticism challenges our everyday knowledge claims by casting doubt on whether our beliefs are properly based on the relevant evidence; that is, it targets the basing condition: “knowledge requires the production of belief, properly based on the evidence” (Schaffer 2010:232). Traditional deceiving scepticism is motivated by the possibility that a deceiving sceptical scenario might obtain: there seems to be no way to rule out the possibility that we are constantly deceived by an evil demon, or that we are brains in vats (see Descartes’ Meditations; Putnam (1982), etc.). Similarly, debasing scepticism typically relies on a “debasing demon” summoned by Schaffer (2010). The demon can undetectably make us believe that our daily beliefs are produced on proper bases (rational reasoning, sufficient evidence, etc.) even though they are in fact all based on improper bases (e.g., wishful thinking, guessing, superstition). The debasing demon can thus deprive us of everyday knowledge, because it is widely accepted that true beliefs formed on improper bases do not count as knowledge.

It is alleged to be a “merit” of debasing scepticism that it threatens universal doubt by imperilling our knowledge of any proposition, even including the cogito (Schaffer 2010:233). As Brueckner (2011) points out, Schaffer’s debasing demon extends the range of scepticism: traditional (deceiving) scepticism only casts a posteriori knowledge into doubt, because the hypothesis that a priori truths (e.g., 2+2=4) are false seems metaphysically impossible, as there is no possible world in which 2+2 fails to equal 4. Debasing scepticism, by contrast, is alleged to threaten both a posteriori and a priori knowledge, because according to Schaffer it is always possible to suppose that any proposition we claim to know, whether known a priori or a posteriori, is debased.

  2. Debasing Scepticism and KK Principle

Brueckner (2011) famously argues against debasing scepticism by accusing it of resorting to a controversial premise, i.e., the KK principle. He reconstructs Schaffer’s sceptical argument as follows:

  • “(1) If I know T, then my belief of T is properly based. (Premise)
  • (2) If I know T, then I can know that I know T. (Premise)
  • (3) If I can know that I know T, then I can know that my belief of T is properly based. (By (1) and a variant of the Closure Principle)
  • (4) If I know T, then I can know that my belief of T is properly based. (By (2), (3))
  • (5) I cannot know that my belief of T is properly based. (Premise)
  • (6) I do not know T. (By (4),(5))”  (Brueckner 2011:296-297)

Brueckner rejects the second premise because it amounts to the KK principle, and the KK principle, he holds, is false (for a more sophisticated formulation of the KK principle, see McHugh (2010:231)). Although there are some voices supportive of KK (e.g., McHugh 2010; Greco 2014), the KK principle is widely regarded as unacceptable because its requirement on knowledge is too demanding. The principle has been attacked especially by externalists; Dretske, for example, rejects KK because “factual knowledge, according to modest contextualism, depends for its existence on circumstances of which the knower may be entirely ignorant” (2004:176). Similarly, reliabilists have argued that one may be unaware of the reliable source from which one gains one’s knowledge that p, and thereby be ignorant of the fact that one knows that p. Moreover, the KK principle is suspected of generating an infinite regress (to know that p, one has to know that one knows that p, and to know that one knows that one knows that p, and so forth) and thus of making knowledge impossible. Brueckner argues that, given that the KK principle is false, debasing scepticism can be rejected.

However, even if we grant that the KK principle is unacceptable, Brueckner’s objection is problematic. Firstly, as Ballantyne and Evans point out, Brueckner “takes it for granted that debasing scepticism must go through a particular argument schema” (2013:552). His objection can therefore be disarmed easily if the sceptic’s argument schema turns out to be different from the one he reconstructs. For example, one can reconstruct the sceptical argument on the basis of the underdetermination principle (see Pritchard 2016, Ashton 2015, Boult 2013, etc.), so that the second premise is reformulated as: “If I know that p, then I should have better reason to believe that my belief of p is properly based than that it is not”. The sceptical conclusion can then be derived as well, in the sense that (given Schaffer’s presumptions) I do not have better reason to believe that my belief of p is properly based than that it is debased by the demon. The KK principle is thus avoided in this revised argument schema.
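
On this alternative reconstruction, the sceptical argument might be set out as follows; the formulation is mine and is intended only to indicate how an underdetermination-style premise could replace Brueckner’s (2).

  • (1′) If I know that p, then I have better reason to believe that my belief of p is properly based than to believe that it is debased.
  • (2′) I do not have better reason to believe that my belief of p is properly based than to believe that it is debased by the demon (this is what the debasing hypothesis is meant to secure).
  • (3′) So I do not know that p.

Nothing in (1′)–(3′) requires that knowing entails being able to know that one knows, so the KK principle plays no role.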

Moreover, Schaffer’s own articulation of the argument schema seems immune to Brueckner’s criticism. He sets out his argument structure as follows:

  • “(S1) If one knows that p, then one believes that p on a proper basis;
  • (S2) If one knows that p, then one is in an evidential position to know that one knows that p;
  • (S3) If one is in an evidential position to know that p, and p entails q, then one is in an evidential position to know that q;
  • (S4) So if one knows that p, then one is in an evidential position to know that one believes that p on a proper basis;
  • (S5) One is not in an evidential position to know that one believes that p on a proper basis;
  • (S6) One does not know that p” (2010:234)

Schaffer prudently avoids using “know” directly in (S2)-(S5), replacing it with the weaker and somewhat ambiguous term “be in an evidential position to know” (hereafter, “epK”), which makes his second premise more plausible than Brueckner’s. Although Schaffer does not articulate exactly what “epK” means, it should be clear that “epK” differs from “knows”: one can, for example, have good evidence to believe that p while failing to know that p because of misleading defeaters. In that case, the subject can “epK” that p without actually knowing that p. A debased victim controlled by the debasing demon is another typical case of “epK”-ing that p without knowing that p. With the distinction between “epK” and “knows” in play, it is doubtful whether the KK principle applies to Schaffer’s original argument at all. I therefore suggest a new objection to debasing scepticism.

  3. The New Objection

Now let us return to Schaffer’s original argument schema. Unlike Brueckner, who targets the second premise, I aim to examine the third premise:

  • (S3) If one is in an evidential position to know that p, and p entails q, then one is in an evidential position to know that q.

This can be formalized as:

  • [Evidential Position Closure] epKp ∧ (p→q) → epKq

I name this premise EPC as it is closure-like (cf. the standard “known entailment closure”, Kp ∧ K(p→q) → Kq; see Bernecker 2012:368). EPC is essential for Schaffer’s argument as it bridges the gap between the possibility of the debasing hypothesis and the violation of the basing condition. Ex hypothesi, the possibility of the debasing scenario means that one cannot distinguish a debasing scenario from a non-debasing scenario. One thereby fails to epK that one’s everyday beliefs are properly based (because no available evidence supports the belief that one is not in a debasing scenario any better than the belief that one is). Hence, via EPC together with (S1) and (S2), everyday knowledge is undermined: knowing that p would put one in an evidential position to know that one’s belief is properly based, and that is precisely what one lacks. Without EPC, the mere possibility of the debasing demon cannot suffice for scepticism, because one can argue that knowledge is fallible and does not require ruling out all incompatible alternatives.

However, EPC is false as the closure-like principle is not actually closed, so one’s epistemic status, namely “epK”, cannot be transmitted from the antecedent to the consequent in the way that EPC predicts. Here is a counterexample:

I am in an evidential position to know that I have won the lottery, having watched my ticket being drawn live on TV. However, unbeknownst to me, if I win the lottery then a man called Jack, of whom I have never heard, will lose £100, because he has bet £100 with his friend Mary that he himself would win the lottery.

In this case, it is obviously implausible to claim that I “epK” that Jack will lose £100, because I am not in an evidential position to know about Jack’s bet or its consequence; I do not even know who Jack or Mary is. In this counterexample, “p” refers to “I win the lottery” and “q” refers to “Jack will lose £100 to Mary”. I epK that p, and p entails q. Nonetheless, I do not epK that q, because I do not epK that “p→q”. EPC thus fails. The moral of the counterexample is this: EPC is not closed unless the subject has good evidence to believe that “p→q” (cf. the standard closure principle: Kp ∧ K(p→q) → Kq).
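
The structure of the counterexample can be displayed schematically; the formalisation is mine and simply restates the case above in the paper’s notation.

  • p: I win the lottery. q: Jack loses £100 to Mary.
  • epKp: I watched my ticket being drawn live, so I am in an evidential position to know that p.
  • p→q: the entailment holds, but only in virtue of facts about Jack’s bet of which I am entirely ignorant.
  • Not epK(p→q), and hence not epKq: I am in no evidential position to know anything about Jack, Mary or their bet.
  • So epKp ∧ (p→q) holds while epKq fails, contrary to EPC.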

Naturally, in order to avoid the aforementioned counterexample, one may try to revise EPC as follows:

  • [EPC*] epKp ∧ epK(p→q) → epKq

That is: if one is in an evidential position to know that p, and one is in an evidential position to know that p entails q, then one is in an evidential position to know that q. Admittedly, EPC* is seemingly closed. However, this revision invites inconsistency. If EPC is replaced by EPC*, then the first premise must be correspondingly strengthened to “I am in an evidential position to know that if one knows that p, then one believes that p on a proper basis”, i.e., “epK(S1)”, so that it can be fed into EPC* to derive the sceptical conclusion. However, debasing scepticism promises to threaten universal doubt, so any proposition can be imperilled, including (S1). That is to say, given (S5) (“One is not in an evidential position to know that one believes that p on a proper basis”) and the universality of the debasing threat, one cannot epK (S1), just as one cannot epK any other proposition p. After all, no evidence can sufficiently support the claim that one’s belief in (S1), any more than one’s belief that “I have both hands”, is not debased. The strengthened first premise is therefore incompatible with (S5), and the debasing sceptical argument would be inconsistent if its third premise were replaced by EPC*.
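
Schematically, the inconsistency can be put as follows. Here Kp abbreviates “one knows that p” and Bp abbreviates “one believes that p on a proper basis”; the shorthand is mine.

  • To derive (S4) via EPC*, the sceptic needs both epK(Kp) (from (S2)) and epK(Kp→Bp), i.e. an evidential position with respect to (S1) itself.
  • But debasing scepticism threatens universal doubt: by the reasoning behind (S5), no proposition, (S1) included, is one that the subject can epK, since no evidence rules out its being debased.
  • So the premise epK(S1) that the EPC*-version of the argument requires is excluded by the sceptic’s own commitments, and the argument undermines itself.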

One may attempt to save debasing scepticism by restricting “p→q” to a priori deductions, e.g., “if I am alive then I am not dead”. In the lottery counterexample, EPC fails because “if I win the lottery then Jack will lose £100” is an a posteriori deduction, one which requires a posteriori knowledge on my part about Jack and his bet. So the lottery counterexample seems unable to disprove this restricted version of EPC.

However, even if “p→q” is a priori, it does not follow that one can epK that “p→q” a priori, so EPC can still fail to be closed. For example, “(√x=3)→(x=9)” is a priori, yet a three-year-old child who epK that “√x=3” may fail to epK that “x=9”, because he lacks the relevant mathematical knowledge and so does not epK that “(√x=3)→(x=9)”. Debasing scepticism thus faces a dilemma: on one horn, EPC is not closed if the subject does not epK that “p→q”; on the other horn, if sceptics insist that the subject does epK that “p→q”, then the aforementioned problem of inconsistency recurs.

  4. Conclusion

I have demonstrated that Brueckner’s objection to debasing scepticism can be disarmed by adopting a different interpretation of Schaffer’s argument schema. Schaffer’s own argument schema evades the KK principle, and hence Brueckner’s criticism, because Schaffer uses “epK” rather than “knows”. A new objection was therefore given, which targets Schaffer’s own third premise, evidential position closure, by providing a counterexample showing that “epK” does not in fact transmit across entailment in the way EPC requires. It has been argued that debasing sceptics can neither abandon EPC, nor retain it as it stands, nor consistently revise it. Debasing scepticism is thus rejected.

The author gratefully acknowledges sponsorship from the China Scholarship Council for his current research.

Bibliography

Ashton, N. A. (2015). ‘Undercutting Underdetermination-Based Scepticism’, Theoria 81 (4), 333-354.

Ballantyne, N. & Evans, I. (2013). ‘Schaffer’s Demon’, Pacific Philosophical Quarterly 94 (4), 552-559.

Bernecker, S. (2012). ‘Sensitivity, Safety, and Closure’, Acta Analytica 27 (4), 367-381.

Boult, C. (2013). ‘Epistemic Principles and Sceptical Arguments: Closure and Underdetermination’, Philosophia 41 (4), 1125-1133.

Bondy, P. & Carter, J. A. (forthcoming). ‘The Basing Relation and the Impossibility of the Debasing Demon’. American Philosophical Quarterly.

BonJour, L. (2003). ‘The conceptualization of sensory experience and the problem of the external world’, Epistemic Justification: Internalism vs. Externalism, Foundations vs. Virtues, BonJour, L. and Sosa, E. (eds.), 77-96. Oxford: Basil Blackwell.

Brueckner, A. (2011). ‘Debasing Scepticism’, Analysis 71 (2), 295-297.

Conee, E. (2015). ‘Debasing Skepticism Refuted’, Episteme 12 (1), 1-11.

Descartes, R. (1996). Meditations on First Philosophy: With Selections From the Objections and Replies. Cambridge University Press

Dretske, F. (2004). ‘Externalism and Modest Contextualism’, Erkenntnis 61 (2-3), 173 – 186.

Evans, I. (2013). ‘The Problem of the Basing Relation’, Synthese 190 (14), 2943-2957.

Greco, D. (2014). ‘Could KK Be OK?’, Journal of Philosophy 111 (4):169-197.

Leite, A. (2005), ‘What the Basing Relation Can Teach Us About the Theory of Justification’, Online at http://www.indiana.edu/~episteme/Papers/Basing%20Relation.pdf

McHugh, C. (2010). ‘Self-knowledge and the KK principle’, Synthese 173 (3), 231 – 257.

Putnam, H. (1982). Reason, Truth and History. Cambridge: Cambridge University Press.

Pritchard, D. (2016). Epistemic Angst: Radical Skepticism and the Groundlessness of Our Believing. Princeton University Press.

Russell, B. (1912). The Problems of Philosophy. Barnes & Noble Books.

Schaffer, J. (2010). ‘The Debasing Demon’, Analysis 70 (2), 228 – 237.