Culture and Cognitive Science

First published Wed Nov 2, 2011
Within Western analytic philosophy, culture has not been a major topic of discussion. It sometimes appears as a topic in the philosophy of social science, and in continental philosophy, there is a long tradition of “Philosophical Anthropology,” which deals with culture to some degree. Within core areas of analytic philosophy, culture has most frequently appeared in discussions of moral relativism, radical translation, and discussions of perceptual plasticity, though little effort has been made to seriously investigate the impact of culture on these domains. Cognitive science has also neglected culture, but in recent years, that has started to change. There has been a sizable intensification of efforts to empirically test the impact of culture on mental processes. This entry surveys ways in which the emerging cognitive science of culture has been informing philosophical debates.

1. What is Culture?

The meaning of the term “culture” has been highly contested, especially within anthropology (Kroeber and Kluckhohn 1952; Baldwin et al. 2006). The first highly influential definition came from Edward Tylor (1871, 1), who opens his seminal anthropology text with the stipulation that culture is, “that complex whole which includes knowledge, belief, art, law, morals, custom, and any other capabilities and habits acquired by man as a member of society.” Subsequent authors have worried that Tylor's definition packs in too much, lumping together psychological items (e.g., belief) with external items (e.g., art). From a philosophical perspective, this would be especially problematic for those who hope that culture could be characterized as a natural kind, and thus as a proper subject for scientific inquiry. Other definitions often try to choose between the external and internal options in Tylor's definition.
On the external side, anthropologists have focused on both artifacts and behaviors. Herskovits (1948, 17) tells us that, “Culture is the man-made part of the environment,” and Mead (1953, 22) says culture “is the total shared, learned behavior of a society or a subgroup.” These dimensions are combined in Malinowski's (1931, 623) formulation: “Culture is a well organized unity divided into two fundamental aspects—a body of artifacts and a system of customs.”
More recently, externally focused definitions of culture have taken a semiotic turn. According to Geertz (1973, 89), culture is “an historically transmitted pattern of meanings embodied in symbols.” Culture, on such a view, is like a text—something that needs to be interpreted through the investigation of symbols. For Geertz, interpretation involves the production of “thick descriptions,” in which behavioral practices are described in sufficient detail to trace inferential associations between observed events. It's not sufficient to refer to an observed ritual as a “marriage;” one must recognize that nuptial rites have very different sequelae across social groups, and these must be described. Ideally, the anthropologist can present a culture from the point of view of its members.
Geertz's thick descriptions may seem to move from the external focus of earlier approaches into a more psychological arena, but he does not take interpretation to centrally involve psychological testing. The term “thick description” is taken over from Ryle (1971), whose approach to the mind emphasizes behavioral dispositions. An even more radical break from psychology can be found in an approach called “cultural materialism” (Harris 2001). Cultural materialists believe that thick description thwarts explanation, because the factors that determine social practices are largely unknown to practitioners. For Harris, these factors principally involve material variables, such as the ecological conditions in which a group lives and the technologies available to it. Cultural variation and change can be best explained by these factors without describing richly elaborated practices, narratives, or psychological states. Harris calls the materialistic approach “etic” and contrasts it with the “emic” approaches, which try to capture a culture from within. This differs from Tylor's external/internal distinction because even external cultural items, such as artworks, may be part of emic analyses on Harris's model, since they belong to the symbolic environment of culture rather than, say, the ecological or technological environments—variables that can be repeated across cultural contexts. Harris aims for generalizations whereas Geertz aims for (highly particular) interpretations. The debate between semioticians and materialists can be described as a debate about whether anthropology is best pursued as one of the humanities or as a science.
Aside from Tylor, the approaches that we have been surveying focus on external variables, with Harris's cultural materialism occupying one extreme. But psychological approaches to culture are also prevalent, and they have gained popularity as cognitive science has taken a cultural turn. D'Andrade (1995, 143) tells us that, since the 1950s, “Culture is often said to consist in rules… These rules are said to be implicit because ordinary people can't tell you what they are” (D'Andrade himself favors a more encompassing, processual definition, which includes both external items and the cognitive processes that interact with them). Richerson and Boyd (2005, 5) define culture as “information capable of affecting individuals' behavior that they acquire from other members of their species through teaching, imitation, and other forms of social transmission.” Sperber (1996, 33) describes culture in terms of “widely distributed, lasting mental and public representations inhabiting a given social group.”
Those who advance definitions of culture do not necessarily assume that a good analysis must be faithful to the colloquial understanding of that term. Rather, these definitions are normative, insofar as they can be used to guide research. A focus on artifacts might orient research towards manufactured objects and institutions, a focus on behavior might promote exploration of human activities, a focus on symbols might take language as a principal subject of study, a materialist orientation might shift attention toward ecology, and a focus on mental states might encourage psychological testing. Philosophically, definitions that focus on external variables tend to imply that culture is not reducible to the mental states of individuals, whereas psychological definitions may imply the opposite. This bears on debates about methodological individualism. At one extreme, there are definitions like Richerson and Boyd's (culture as information) that leave external variables out, and, at the other, there are authors such as Harris, who say psychology can be ignored.
In summary, most definitions characterize culture as something that is widely shared by members of a social group and shared in virtue of belonging to that group. As stated, this formulation is too general to be sufficient (a widespread influenza outbreak would qualify as cultural). Thus, this formulation must be refined by offering a specific account of what kind of shared items qualify as cultural, and what kind of transmission qualifies as social. The definitions reviewed here illustrate that such refinements are matters of controversy.

2. Cultural Transmission

One common thread in the definitions just surveyed is that culture is socially transmitted. That point was already emphasized in Tylor's seminal definition. Social transmission is a major area of research and various theories have been offered to explain how it works.

2.1 Memes and Cultural Epidemiology

It is a platitude that cultures change over time. Some research studies the nature of these changes. Such changes are often described under the rubric of cultural evolution. As the term suggests, cultural change may resemble biological change in various respects. As with biological traits, we can think of culture as having trait-like units that arise and then spread to varying degrees. The study of cultural evolution explores the factors that can determine which cultural traits get passed on.
Some authors push the analogy between cultural evolution and biological evolution very far. Within biology, the most celebrated evolutionary process is natural selection: traits that increase fitness are more likely than others to get passed on from one generation to the next. 20th century evolutionary theory (“the modern synthesis”) supplements this Darwinian idea with the principle that traits are transmitted genetically. Genes produce traits (or phenotypes), which impact reproductive success, and thereby impact which genes will be copied into the next generation. Richard Dawkins (1976), who helped popularize this idea, suggests that cultural traits get reproduced in an analogous way. Dawkins characterizes cultural items as “memes” – a term that echoes “gene” while emphasizing the idea that culture is passed on mimetically—that is, by imitation. Like a gene, a meme will spread if it is successful (for development and defense, see Dennett 1995; Blackmore 1999).
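To make the selectionist analogy concrete, here is a minimal simulation sketch (an illustration of the general idea, not a model drawn from Dawkins or the other cited authors): each agent carries one cultural variant, and learners copy models chosen in proportion to the payoff of the model's variant. The variant names and payoff values are hypothetical.

```python
import random

def meme_dynamics(payoffs, n_agents=100, steps=2000, seed=1):
    """Success-biased copying: variants with higher payoffs are imitated more."""
    rng = random.Random(seed)
    population = [rng.choice(list(payoffs)) for _ in range(n_agents)]
    for _ in range(steps):
        learner = rng.randrange(n_agents)
        # choose a model to copy, weighted by the success of the model's variant
        model = rng.choices(range(n_agents),
                            weights=[payoffs[population[i]] for i in range(n_agents)])[0]
        population[learner] = population[model]
    return {v: population.count(v) for v in payoffs}

# The higher-payoff variant tends to displace its rivals, mirroring the
# way fitness-enhancing genes spread under natural selection.
print(meme_dynamics({"meme_A": 1.0, "meme_B": 1.5, "meme_C": 1.0}))
```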
Some authors have resisted the analogy, arguing that there are crucial differences between genetic and cultural transmission (e.g., Atran 2001; Boyd and Richerson 2001; Sperber 2001). In natural selection, genes ordinarily spread vertically from parents to children. Cultural items, in contrast, often spread laterally across peer groups, and can even spread from children to parents, as with the rise of email and other technological innovations. Cultural traits are also spread in a way that is mediated by intentions, rather than blindly. A teacher may intend to spread a trait, and a student may recognize that the trait has some value, and innovators may come up with new traits by intending to solve problems. Intentional creation is unlike random mutation because it can happen at a more rapid rate with immediate correction if the trait doesn't succeed. Success, too, is measured differently in the cultural case. Some cultural traits are passed on because they increase biological fitness, but traits that reduce reproduction rates, such as tools of war or contraception, can also spread, and many traits, such as music trends, spread without any impact on procreation or survival. Unlike genes, cultural traits are also copied imperfectly, sometimes changing slightly with each transmission. And there is no clear distinction within culture between a genotype and a phenotype; the trait that gets reproduced is often responsible for the reproducing. For example, if someone learns to ride a bicycle, there is no clear distinction between an inner mechanism and an outward manifestation; the skill is both the mechanism and its deployment.
All these contrasts suggest to some that the notion of a meme is misleading. Cultural traits are spread in ways that differ significantly from genes. In an effort to bypass the comparison to genes, Sperber (1996) offers an epidemiology analogy. Cultural items, which for him are representations, are spread like viruses. They can be spread laterally, and they can reduce fitness. Viral transmission depends on contagion, and, like viruses, some cultural traits are catchier than others. That is to say, some traits are easier to learn—they are more psychologically compelling.
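Sperber's analogy can also be given a toy rendering. The sketch below (our illustration with made-up parameters, not Sperber's own model) treats a representation's "catchiness" as a per-contact probability of transmission, and shows that catchier representations saturate a population faster, with no appeal to reproductive fitness.

```python
import random

def spread(catchiness, n=500, contacts=10, rounds=30, seed=7):
    """Lateral, contagion-style transmission of a single representation."""
    rng = random.Random(seed)
    has_trait = [False] * n
    has_trait[0] = True  # a single initial bearer
    for _ in range(rounds):
        for person in range(n):
            if has_trait[person]:
                continue
            # each round, a person samples a few random contacts; a catchy
            # representation is likelier to be acquired from any bearer met
            for _ in range(contacts):
                if has_trait[rng.randrange(n)] and rng.random() < catchiness:
                    has_trait[person] = True
                    break
    return sum(has_trait)

for catchiness in (0.01, 0.05, 0.2):  # catchier traits reach more bearers
    print(catchiness, spread(catchiness))
```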
Boyer (2001) has applied this idea to the spread of religious beliefs. Tales of the supernatural build on existing knowledge but add variations that make them exciting, such as the idea of a person who can survive death and walk through walls. Boyer shows experimentally that such exotic variations on ordinary categories are easy to remember and spread.
The epidemiology analogy may have limitations. For example, viruses do not usually spread with intentional mediation, and they are often harmful. But it has some advantages over the analogy to genetic transmission. Ultimately, such analogies give way to actual models of how transmission works.

2.2 Imitation and Animal Culture

In cultural transmission, an acquired trait possessed by one member of a social group ends up in another member of that group. In order for this to occur, there must be some learning mechanism that eventuates in doing what another individual does. Traditional learning mechanisms, such as associative learning, trial and error, and conditioning through reinforcement, are inadequate for explaining social learning. If one individual performs a behavior in front of another, the other may associate that behavior with the model, but association will not cause it to perform the behavior itself. Likewise, witnessing a behavior cannot lead to conditioning, because observation alone does not have reinforcement value. Conditioning can be used as a tool in social transmission, of course—a teacher can reward a student—but such deployment depends on a prior achievement: the student must attempt to do what the teacher has done or instructed. Thus, transmission requires learning mechanisms that go beyond those mentioned, mechanisms that cause a learner to reproduce what a model has done.
In a word, cultural transmission seems to depend on copying. When observing a model, there are two things one might copy: the end or the means. If a model obtains fruit from a plant, an observer capable of copying ends may recognize that the plant bears fruit and try to obtain that fruit as a result of having seen what the model achieved. Tomasello (1996) calls such learning emulation. Emulation is not always successful, however, because one cannot always achieve an end without knowing the right means. Tomasello reserves the term “imitation” for cases where observers perform the actions that they observe. This is a powerful tool for social transmission, and it is something human beings are very good at. Indeed, there is evidence that we spontaneously imitate facial expressions and gestures almost immediately after birth (Meltzoff and Moore 1977). In fact, human children over-imitate: they copy complex stepwise procedures even when simpler ways of obtaining goals are conspicuously available (Horner and Whiten 2005).
The human tendency to imitate may help to explain why our capacity for social learning far exceeds other species. Apes may be more likely to emulate than to imitate (Tomasello 1996). That is not to say that apes never imitate; they just imitate less than human beings (Horner and Whiten 2005). Thus, apes do have some capacity to learn from conspecifics. If culture is defined in terms of practices or abilities that are shared within groups in virtue of the achievements of particular group members, then one can even say that apes have culture. Evidence for group-specific innovations, such as nut-cracking techniques, has been found among chimpanzees (Whiten et al. 2005) and orangutans (Van Schaik and Knott 2001). Culture and cultural transmission have also been documented in dolphins (Krützen et al. 2005).
This raises a question. If other creatures are capable of cultural transmission, why don't they show the extreme forms of cultural variation and accumulated cultural knowledge characteristic of our species? There are various possible answers. Great apes may be less innovative than humans, and this may stem from their limited capacity to understand causal relations (Povinelli 2000), or to plan for the distant future. Apes may also have limitations on memory that prevent them from building on prior innovations to create cultural products of ever-increasing complexity. In addition, apes have less highly developed skills for mental state attribution (Povinelli 2000), and that may further reduce their capacity for imitative learning. Human infants do not just copy what adult models do; they copy what those models are trying to do (Meltzoff, 1995). Warneken and Tomasello (2006) have shown that young chimps understand intended actions to some degree, but less robustly than their human counterparts. Finally, the human capacity to build on prior innovations and transmit cultural knowledge is often linguistically mediated, and apes and dolphins may have communication systems with far more limited expressive potential, making it impossible to move beyond simple copying and adopt the deferred form of imitation that we call instruction.

2.3 Biases in Cultural Transmission

It is widely agreed that human cultural transmission often involves imitation, but there is also evidence that we do not imitate every behavior we see. We imitate some observed behaviors more than others. Much research explores the biases that we and other creatures use when determining whom and when to imitate.
Biases divide into two categories. Sometimes imitation depends on content. We are more likely to pass on a story if it is exciting (recall Boyer), we may be more likely to repeat a recipe if it is tasty, and we are more likely to reproduce a tool if it is effective. In other cases, imitation depends more on context than content. The term “context bias” refers to our tendency to acquire socially transmitted traits as a function of who is transmitting them rather than what is getting transmitted (Henrich and McElreath 2003). There are two basic kinds of context biases: those based on frequency and those based on who is modeling the trait. Let's consider these in turn.
The most important frequency-dependent bias is conformity. Social psychologists have known for decades that people often copy the behavior of the majority in a social group (e.g., Asch 1956). Copying the majority may help in creating cultural cohesion and communication, and it may also allow for group selection, a process in which a group's prospects for survival increase relative to other groups based on its overall fitness. Group selection is hard to explain by appeal to biological evolution, because genetic mutations are localized to individuals, and are thus unlikely to result in whole groups having different traits, but conformity allows for spread within a group, and thus overcomes this limitation of genes. This story still depends on the possibility that an innovation that has not yet become widely practiced can get off the ground. If people only copied the majority, that would never happen. One solution is to suppose that conformity biases work in concert with an opposing trend: nonconformity. If we sometimes copy rare behaviors, then new innovations can initially spread because of their novelty and then spread because of their high frequency. One example of these complementary processes is fashion. New fashions (such as street clothing coming from a small subculture, or the seasonal innovations of fashion designers) may initially appeal because of their novelty, and then spread through conformity.
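The interplay of novelty and conformity can be sketched with a standard frequency-dependent copying rule (again our illustration, not a model from the cited literature): copy weights are variant frequencies raised to an exponent, so an exponent below 1 favors rare variants beyond their frequency, while an exponent above 1 amplifies the majority.

```python
import random

def next_generation(pop, exponent, rng):
    """Frequency-dependent copying: copy weight = frequency ** exponent."""
    variants = sorted(set(pop))
    weights = [(pop.count(v) / len(pop)) ** exponent for v in variants]
    return rng.choices(variants, weights=weights, k=len(pop))

rng = random.Random(0)
pop = ["established"] * 95 + ["innovation"] * 5
for gen in range(15):
    # a novelty (anti-conformist) phase lets the rare variant gain ground;
    # a conformist phase then drives the more common variant toward fixation
    pop = next_generation(pop, exponent=0.3 if gen < 5 else 2.0, rng=rng)
    print(gen, pop.count("innovation"))
```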
The nonconformist bias is postulated to explain the observation that people sometimes prefer to copy cultural forms simply because they are rare. Model-dependent biases (the second class of context biases mentioned above) also promote the imitation of rare forms. In these biases, people selectively copy specific members of a social group. We tend to copy those who are skilled, those who are successful, and those who hold high prestige. The prestige bias is the most surprising, because instrumental reasoning alone could lead us to copy people who are skillful or successful. Prestige is not synonymous with dominance. We do not necessarily hold those who dominate us in high regard, and we do not seek to look at them, be near them, or be like them. We do all of these things with high prestige individuals, and this tendency goes beyond our bias to copy people who are skilled in domains that we are trying to master. Henrich and Gil-White (2001) review a large body of empirical evidence in support of this conclusion. For example, many people will shift attitudes towards experts, even when the experts have no expertise on the topic under consideration; people will copy the task-performance style of a professionally attired individual more often than they copy the style of a college student; and groups of high-status individuals exert more influence on dialect changes over time. Within the anthropological literature, it has often been noted that high prestige individuals in small-scale societies are listened to more than others, even on topics that have little to do with the domain in which their prestige was earned. Imitating prestigious individuals may nevertheless confer advantages similar to imitating people who are skillful or successful: doing so may increase the likelihood of acquiring prestige-enhancing traits.
Given the wide variety of biases, it may seem like a difficult task to figure out whom to imitate on any given occasion. This is especially daunting in cases where two biases conflict, as with conformity and prestige. To solve this problem, McElreath et al. (2008) have proposed that imitation biases are hierarchically organized and context-sensitive. For example, conformity may be the default choice when payoffs in a group of models are similar, but prestige bias kicks in when the payoff differential increases. McElreath et al. use computational models to show that such payoff sensitivity produces behavioral patterns that fit with empirical evidence.
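The structure of such a hierarchical, payoff-sensitive rule can be illustrated with a toy decision procedure (ours, not McElreath et al.'s actual model; the threshold and payoff values are hypothetical): the learner defaults to conformity when observed payoffs are similar, and switches to copying the most successful model when the payoff spread grows.

```python
from collections import Counter

def choose_variant(models, threshold=0.5):
    """models: list of (variant, observed_payoff) pairs from observed individuals."""
    payoffs = [payoff for _, payoff in models]
    if max(payoffs) - min(payoffs) < threshold:
        # payoffs look alike: fall back on conformity and copy the majority
        return Counter(variant for variant, _ in models).most_common(1)[0][0]
    # payoffs differ markedly: copy the most successful model instead
    return max(models, key=lambda m: m[1])[0]

# similar payoffs -> conformist choice of the common variant "A"
print(choose_variant([("A", 1.0), ("A", 1.1), ("B", 1.2)]))
# a large payoff spread -> payoff-biased choice of the successful variant "B"
print(choose_variant([("A", 1.0), ("A", 1.1), ("B", 2.5)]))
```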

2.4 Bio-cultural Interaction

Cultural transmission of traits is often contrasted with biological transmission. It is said to involve nurture rather than nature. Anthropologists emphasize the wide-ranging flexibility of human behavior and regard cultural transmission as evidence for that. This might suggest that cultural transmission operates in a way that is independent of biology. But this idea has been challenged.
One challenge comes from evolutionary psychology. Evolutionary psychologists place greater emphasis on innate capacities. Cultural variation may appear to be inconsistent with nativism, but evolutionary psychologists believe that some variation can be explained within a nativist framework. They admit that human groups differ in both their psychological states and customs, but deny that such variation requires a social explanation. The term “evoked culture” has been introduced to label the idea that differences in the physical environment may cause differences in how social groups think and act (Tooby and Cosmides 1992). We may have evolved inner toggles that make us act in ways that are adaptive to different settings. For example, cultures that struggle with resource scarcity may be more belligerent than those that live in places of abundance, and it is possible that this personality difference hinges on an innate switch that changes position in an environment-sensitive way.
The idea of evoked culture challenges the dichotomy between environmental and evolved causes of behavior, by proposing that some ontogenetically acquired traits result from natural selection. But critics of evolutionary psychology note that evoked culture cannot explain the relatively open-ended nature of human innovation. Scarcity may trigger a biological disposition for belligerence, but does not cause us to invent cannons, peace treaties, or agriculture. Those specific tools for coping with scarcity depend on insight and toil, rather than innate knowledge.
That dichotomy between biology and culture has been challenged in ways that are less radical than the idea of evoked culture. Indeed, one challenge pushes in the opposite direction; rather than saying cultural traits are innate, some say that innate traits depend on culture. Some species change their environment in a way that alters evolutionary trajectories (Day et al. 2003). This phenomenon is called “niche construction.” Niche construction does not always involve culture: innate traits, such as dam building in beavers, can alter the environment in a way that introduces selection pressures. But some niche construction is cultural. New inventions can lead to new environments that have biological impact. For example, Simoons (1969) argues that adult humans were all initially lactose intolerant, but acquired the ability to digest lactose as a consequence of technologies of dairy production. If so, culture can drive genetic change.
A more controversial example is language. Some have argued that language began as an invention, using domain general cognitive resources, but introduced a selective advantage for mutations that facilitate rapid language learning and increasingly sophisticated constructions – an example of what biologists call a “Baldwin effect,” named after the philosopher J.M. Baldwin (Deacon 1997). Language is socially transmitted and may have been invented, securing its status as a cultural item, but, if nativists are right, it is now transmitted by specialized innate machinery, which makes it bio-cultural. The idea that we can acquire traits from biology and culture, and that these two interact, has been called dual-inheritance theory by Boyd and Richerson (1985).
Dual-inheritance theory suggests that cultural evolution need not be an alternative to biological evolution, but rather, can interact with it. In some cases, cultural changes may actually exert a biological force. On the other hand, cultural evolution may tend to reduce the impact of biology. Consider niche construction again. If human beings can alter their environments through technology, they can mitigate the effects of external variables that might otherwise drive natural selection (Laland et al. 2001). Thus, the capacity for cultural learning may render biological transformations unnecessary. Cultural change is faster, more flexible, and driven by forethought. The extent to which biology contributes to human variation across cultures is, therefore, a matter of controversy. Evolutionary psychologists emphasize the biological contributions to variation, dual-inheritance theorists emphasize bio-cultural interactions, and their critics suggest that the human capacity for cultural transmission reduces the import of biology. The latter perspective gains some support from the fact that many dramatic cultural differences have no known biological causes or effects.

3. Examples of Cultural Influence

Philosophers have long speculated about cultural variation, raising questions about whether people in different cultures differ psychologically. Clearly people in different cultures know different things, believe different things, and have different tastes. But one might also wonder whether culture can influence the way we think and experience the world. And one might wonder whether differences in taste are a superficial veneer over underlying normative universals, or whether, instead, culture plays a role in shaping normative facts. Cognitive science offers empirical insights into cultural differences that have been taken to bear on these enduring questions. What follows is a survey of some areas in which empirical investigation has been very active.

3.1 Language

20th century linguistics was born out of anthropology, and anthropological studies of language built on the efforts of European missionaries to understand the languages of human societies that had been isolated from European contact. Within this context, the study of language principally involved radical translation—attempting to translate the vocabulary of another language when there is no bilingual interpreter to tell you what words mean. Anthropologists observing this practice, such as Franz Boas, were struck by how different the world's languages can be, and they began to wonder whether these differences pointed toward differences in how cultural groups understand the world.
Philosophers entered into such speculation too. Quine (1960) famously used the activity of radical translation as a springboard to present his theses about limits on a theory of meaning. When trying to construct a translation manual for a foreign language based on verbal behavior, there is a problem of underdetermination. If the language users say “gavagai” when and only when a rabbit is present, they may be referring to rabbits, but they may also be referring to rabbit time slices or undetached rabbit parts. Absent any resolution of this underdetermination, there would always be a degree of indeterminacy in our theories of what other language users mean. Quine's behaviorism led him to think that these indeterminacies are not merely epistemic; linguistic behavior is not just evidence for what people mean, but the source of meaning, so there is no further fact that can settle what people mean by their words. This led Quine to be skeptical about the role of reference in his semantic theory, but he didn't become a meaning nihilist. Without determinate reference, the meaning of words can be understood in terms of inferential roles. But Quine (1953) had earlier argued that there is no principled distinction between those inferences that are constitutive of meaning, and those that merely reflect beliefs about the world (the analytic/synthetic distinction). Thus, the meaning of a word depends, for Quine, on the total role of that word in its language; Quine is a meaning holist. In the context of radical translation, this raises a striking philosophical possibility. When we encounter a word in another language, we cannot determine what it refers to, so we must specify its meaning in terms of its total inferential role; but inferential roles vary widely across cultural groups, because beliefs diverge; thus, the meaning of a word in a language spoken by one cultural group is unlikely to have an exact analogue in other languages. Meanings vary across cultures. In this sense, radical translation is actually impossible. One cannot translate a sentence in another language, because one cannot find a synonymous sentence in one's own. At best, one can write a paragraph-, chapter-, or book-length gloss on the inferential links that help convey what foreign speakers mean by their words.
This conjecture leads quickly to another that relates even more directly to psychology. Many philosophers have assumed a close relationship between language and concepts. Words are sometimes said to constitute concepts and, more often, to express them. Corresponding to the linguistic inferential roles that constitute meanings for Quine, one might posit isomorphic conceptual roles, and, if meanings are not shared, then it might follow that concepts are not either: people in different groups might conceptualize the world differently. The idea that languages may not be intertranslatable suggests that there may also be incommensurable conceptual schemes.
This idea is challenged by Davidson (1974), who offers a kind of dilemma. Suppose we encounter a group whose beliefs and linguistic behaviors differ from ours but can nevertheless be accurately characterized with patience and time. If we can understand these other people, then their concepts must be shared with ours. Suppose, however, that we cannot ever understand what they mean by their words because what they say cannot be given any coherent translation. Then it's best to assume they are not really saying anything at all; their words are meaningless noises. Either way, there is no proliferation of conceptual schemes. Davidson's argument, which is only roughly presented here, controversially presupposes a principle of charity, according to which we should not attribute irrational (e.g., inconsistent) beliefs. Davidson may also be overly demanding in requiring accurate translation between languages as opposed to some weaker criterion of comprehension (see Bar-On 1994; Henderson 1994).
Well before Quine and Davidson were debating the incommensurability of meanings, linguists had been exploring similar ideas. Edward Sapir (1929), a student of Boas, had proposed two interrelated theses: linguistic determinism, according to which language influences the way people think, and linguistic variation, according to which languages have profound differences in syntax and semantics (these terms are not Sapir's, but exist in the literature). Together, these two theses entail linguistic relativity: the thesis that speakers of different languages differ in how they perceive and think in virtue of speaking different languages. Sapir's student, Benjamin Whorf (1956), speculated that languages encode fundamentally different “logics,” which become so habitual to language users that they seem natural, resulting in fundamentally different ways of understanding the world. For example, Whorf speculates that speakers of Hopi are anti-realists about time, since tense in that language is expressed using epistemic modals, which describe events as recalled, reported, or anticipated, in lieu of past, present, or future. Sapir and Whorf's relativism about language has come to be known as the Sapir-Whorf hypothesis. The two have been criticized for offering insufficient support: they had limited knowledge of the languages they discuss, and throughout their discussions, they infer cognitive differences directly from linguistic differences rather than testing whether language causes (or even correlates with) differences in thought.
The Sapir-Whorf hypothesis went out of fashion with the advent of Chomskyan linguistics. Chomsky argued that linguistic differences are superficial and scientifically uninteresting. Languages are united by a universal grammar, and differences simply reflect different settings in universally shared rules. A further setback for the Sapir-Whorf hypothesis came with early testing. Heider (1972) set out to see whether color vocabulary influenced color perception. She investigated the Dani of New Guinea, who have only two color terms (“mili,” for dark, cool colors, and “mola,” for light, warm colors). Heider found that the Dani divide color space in much the same way as English speakers, and performed like English speakers on color memory tests. There was also a failed effort to show that Chinese speakers, who lack a counterfactual construction, have difficulty with subjunctive thought (Bloom 1981; Au 1983). Evidence for psychological differences across speakers of distinct languages was hard to come by.
More recently, however, some researchers have claimed to find such differences. Lucy (1992), for example, found that speakers of Yucatec Mayan, a language that lacks count nouns, made errors on memory tasks that required keeping track of specific quantities of items. English speakers are more likely to notice when two pictures differ in the number of chickens as compared to the amount of grain, because “chicken” is encoded with a count noun, and “grain” is encoded with a mass noun.
Pederson et al. (1998) investigated speakers of Tzeltal, a language that expresses absolute frames of reference (like “north” and “south”) but lacks terms for relative frames of reference (such as “left” and “right”). When performing spatial tasks, such as replicating a sequence of objects in two different locations, Tzeltal speakers preserved the arrangement relative to absolute coordinates even in conditions where English speakers would use relative coordinates.
Gordon (2004) studied numerical cognition among the Pirahã, whose language has a number vocabulary limited to words that mean, roughly, “one,” “two,” and “many.” He found that the Pirahã made frequent numerical errors when copying drawings of groups of lines, dropping nuts into a can, and reproducing a series of beats.
Even color perception, which was once regarded as immune to Sapir-Whorf effects, may be influenced by language. Kay and Kempton (1984) found that speakers of Tarahumara, a language that does not distinguish green and blue, were more accurate than English speakers at rating the similarity of color pairs within the blue-green range (see also Roberson et al. 2000). Winawer et al. (2007) found that speakers of Russian, which has separate lexemes for light blue and dark blue, show categorical perception effects for light blue that are not found in English speakers. In categorical perception, differences between stimuli that cross a categorical boundary are perceived as greater than equal differences within a category. For Russian speakers, a light and a medium blue may look more different than a light and a dark blue, even if the two pairs are equidistant in color space.
Boroditsky et al. (2003) also found that grammatical gender has an influence on conceptualization. Speakers of Spanish and German associate stereotypically gendered adjectives with common nouns as a function of the gender of those nouns in their languages, even when they are tested in English. For example, the German word for key is Schlüssel, which is masculine, and the Spanish word, llave, is feminine. German speakers may describe keys as hard, heavy, and useful, while Spanish speakers describe them as lovely, little, and intricate.
These kinds of findings are now plentiful, but the Sapir-Whorf Hypothesis has not gone unchallenged (for a review, see Bloom and Keil 2001). For example, Li and Gleitman (2002) showed that Tzeltal speakers can reproduce object arrays using relative reference frames in a simplified version of the experiments performed by Pederson et al. (1998) (see Levinson et al. 2002 for a reply). Frank et al. (2008) found that the Pirahã could match large quantities with accuracy, but failed to do so when they relied on memory. Such experimental critiques suggest that Sapir-Whorf effects are fragile, and may be hard to show under certain conditions, but they also confirm that language plays a role in encoding information, and cognitive differences arise when memory is involved. Studies on color perception and color comparison suggest that the effects are not limited to memory, and Boroditsky's study of grammatical gender suggests that language can have an enduring impact on how we think about familiar categories.
In summary, it might be said that cognitive science has found evidence in support of the hypothesis that language can influence thought. Because language is a cultural item, linguistic effects on thought can be characterized as cultural effects. But the interest of such effects is open to debate. Neo-Whorfians will say that language can establish modes of thinking that distinguish one group from another, while critics say these differences are modest and don't imply the radically incommensurable worldviews advertised by Whorf.

3.2 Perceiving and Thinking

Research on the Sapir-Whorf hypothesis looks for ways in which language influences perception and thought. But language is not the only way that a culture can influence cognition. Other research looks for cultural differences in thought and perception that are not necessarily mediated by language. For example, there is research suggesting that cognition can be affected by methods of subsistence or social values.
In the decades after World War II, psychologists began to do research on “cognitive styles.” Witkin (1950) introduced a distinction between field-dependent psychological processing and field-independent psychological processing. Field-dependent thinkers tend to notice context and the relationship between things, whereas field-independent thinkers tend to abstract away from context and experience objects in a way that is less affected by their relationships to other things. For example, field-independent thinkers do better on what Witkin called the embedded figure task, in which a simple shape must be found embedded within a larger, more complex figure.
Witkin's test was designed to study individual differences within his own culture, but Berry (1966) realized that it could also be used to investigate cultural variation. He was interested in how different forms of subsistence might influence cognition. One hypothesis is that hunters and gatherers must be good at differentiating objects (plants or prey) from complex scenery. Horticulturalists, on the other hand, must pay close attention to the relationship between the many environmental factors that can influence growth of a crop. To test this, Berry studied Inuit hunters in the Canadian Arctic and Temne horticulturalists in Sierra Leone, and found that the latter are more field-dependent than the former. (See also Segall et al. 1966, who found that hunter-gatherers are less susceptible to the Mueller-Lyer illusion, because—the authors argue—they don't live in a “carpentered world, full of right-angled buildings.”)
Berry was interested in isolated, small-scale societies, but the same research methods and principles have also been applied to much larger cultural groups. Cultures of every size differ on a number of dimensions. One distinction that has been extremely valuable in cross-cultural research is the contrast between individualist cultures and collectivist cultures (see Triandis, 1995). Individualists place emphasis on individual achievements and goals; they value autonomy and disvalue dependency on others. Collectivists place emphasis on group membership and often value group cohesion and success above personal achievement. Following Triandis, we can define these terms more precisely as follows:
Collectivism: a social pattern in which individuals construe themselves as parts of collectives and are primarily motivated by duties to those collectives
Individualism: a social pattern in which individuals see themselves as independent of collectives and are primarily motivated by their own preferences and needs
The difference can be brought out experimentally by giving people in different cultures tasks that assess how much they value autonomy and how much they value inter-dependence. For example, when asked to pick a colored pen from an array of pens, individualists tend to pick the most unusual color, and collectivists tend to pick the most common.
Individualist and collectivist cultures are distributed widely across the globe. Countries in Western Europe, North America, and Anglophone Australasia score high in individualism. Collectivism is more common in East Asia, South Asia, the Middle East, the Mediterranean, and South America. It should be obvious that these are vast regions of the globe, each highly diverse, culturally speaking. Any large nation, such as India or America, will have scores of subcultures, each of which might vary along these dimensions. The point is not that all collectivist cultures are alike. Differences between collectivist cultures and within collectivist cultures are often greater than between collectivist and individualist cultures. The point is simply that collectivist cultures share this one dimension of similarity, and that dimension, as we will see, has an impact on cognitive style. Likewise for individualists. Future research will offer more finely grained distinctions, but at present, research on the cognitive effects of individualism and collectivism offers some of the strongest evidence for cultural differences in thought.
Some researchers trace individualism and collectivism to material conditions. For example, many Western cultures are individualistic and trace their seminal cultural influence to ancient Greece, which had an economy based on fishing and herding. Far Eastern countries trace their seminal cultural influence to China, which had intensive agriculture. In the West, free mercantilism and capitalism emerged long ago, emphasizing individual achievement. In the East, capitalism and free trade are comparatively new. So the East/West contrast in collectivism and individualism may have its origins in how people made their livelihood in past centuries. Once these differences are in place, they tend to be reflected in many other aspects of culture. Far Eastern languages use characters that require a fine sensitivity to relationships between parts; Eastern religion often focuses on relationships between human beings and nature; Eastern ethical systems often emphasize responsibilities to the family (Nisbett, 2003). These cultural differences can be used to transmit and preserve psychological differences from generation to generation.
Nisbett et al. (2001) present a large body of research, which suggests that members of individualist and collectivist cultures tend to have measurably different cognitive styles. Nisbett and his collaborators (mostly East Asian psychologists) talk about field-dependence and field-independence, but also introduce the closely related terms: holistic and analytic cognitive styles. They postulate that, as collectivists, East Asians will process information more holistically, seeing the relations between things, and that Westerners, as individualists, will process information more analytically, focusing on individual agents and objects. They show that these differences come out in a wide variety of psychological tasks. Here are some examples reviewed by Nisbett.
Westerners are more likely than Easterners to attribute a person's behavior to an internal trait rather than an environmental circumstance. In many cases, such attributions are mistaken (social psychologists call this the Fundamental Attribution Error).
Easterners are more likely to see both sides of a conflict when faced with counter-arguments in a debate; Westerners dig in their heels. The Eastern responses are more dialectical, whereas Westerners are guided by the principle of Non-Contradiction. This principle, central to modern logic in the West, asserts that a claim and its negation can't both be true.
Westerners tend to categorize objects based on shared features (cows go with chickens because they are both animals), whereas Easterners focus more on relationships between objects (cows go with grass, because cows eat grass).
When looking at a fish tank, Westerners first notice the biggest, fastest fish and ignore the background. Easterners are more likely to notice background features and relational events (a fish swimming past some seaweed), and they are less likely to recall individual fish on a memory test. In studies of expectations, Westerners tend to expect things to remain the same, whereas Easterners are more likely to expect change.
In assessing the import of these differences, it is important to realize that they are often subtle. In some cases, it is possible to get a Westerner to respond like an Easterner, and vice versa, if subjects are properly instructed or primed (Oyserman & Lee, 2008). But the results show that there are predictable and replicable differences in default cognitive styles as a function of culture.
Several philosophical ramifications deserve note. First, variation in cognitive styles can be used to challenge the idea that the rules used in thought are fixed by a hard-wired mental logic. This idea was promulgated by Boole (1854) in his work on formal logic, and it helped pave the way for the advent of computing and, ultimately, for the computational theory of mind. If there is no fixed mental logic, then the study of reasoning may owe more to nurture than has often been assumed, and the traditional computational theory of mind might even need a re-examination. Cultural differences do not refute computational approaches, but they raise a question: if some cultures tend to rely on formal principles and others rely on stochastic approaches to reasoning, then we should not assume by default that the mind naturally functions like a classical computer as opposed to, say, a connectionist computer.
Second, variation in reasoning can also be used to raise questions about whether certain cognitive norms (such as a preference for the principle of non-contradiction) are culturally inculcated and contestable. This issue is related to contemporary debates about whether classical logic is privileged. It was also the subject of a provocative paper by Winch (1964), who, following ethnographic work by Evans-Pritchard on the logic of witchcraft among the Azande, argued that the Western allegiance to bivalence is culturally contingent, rather than normatively compulsory.
Third, variation in perception raises questions about modularity; if values can influence how we see, then seeing may be more amenable to top-down influences than defenders of modularity have supposed. Citing work on the Mueller-Lyer illusion, Fodor (1983) argues that modularity is consistent with the possibility that cultural settings can, over protracted time periods, alter how information is processed. But this concession may be inadequate: perceptual processing styles can be altered very quickly by priming cultural values such as individualism and collectivism. Moreover, unlike the Mueller-Lyer illusion, which may involve bottom-up perceptual learning, research on individualism and collectivism suggests that values can influence how we see. That's close in spirit to the idea that perception is theory-laden, which was the central thesis of New Look psychology—the theory that the modularity hypothesis is supposed to challenge (Bruner, 1957; Hanson, 1958).

3.3. Emotions

Emotions are a fundamental feature of human psychology. They are found in all cultures, and arguably, in all mammals. Indeed, we seem to share many emotions with other animals. Dogs, for example, show signs of fear (they cower), sadness (they cry), and delight (they wag their tails giddily). This suggests that emotions are evolved responses. There is a good explanation for why emotions would be selected for: they help us cope with challenges that have a tremendous impact on life and well-being. Fear protects us from dangers, sadness motivates us to withdraw when resources or kin are lost, and joy registers accomplishments and motivates us to take on new challenges. Thus, it seems highly likely that emotions are part of human nature. But emotions can also be influenced by nurture. Some researchers even suggest that emotions can be socially constructed—they say some emotions come into existence through social learning. The thesis is controversial, of course, but the claim that culture has an impact on emotional states is hard to deny (for a review, see Mesquita and Frijda, 1992).
To see how culture might impact emotions, consider various things that normally occur when people have emotional responses. There is some elicitor of the emotion; there is characteristically some appraisal of that elicitor; the appraisal occurs along with feelings; these are associated with motivational states as the body prepares to react; the emotion is also expressed; and it can lead to a decision about what actions to carry out, including complex strategic actions extended over time. Each of these things can come under cultural influence.
Begin with elicitors. Culture can clearly influence what arouses our emotions. In Bali, crawling babies are said to arouse disgust (Geertz, 1973: 420), and in Japan, disgust can be caused by failing an exam (Haidt et al., 1997). In Sumatra, an encounter with a high status individual can cause shame (Fessler, 2004). In Iran, a woman without a headscarf might cause anger, and in France, a woman with a headscarf might cause the same reaction.
Feelings can differ cross-culturally, as well. For example, it has been reported that, while anger is typically associated with high arousal in the West, in Malay, anger (or marah) is more strongly associated with sullen brooding (Goddard, 1996). There are corresponding differences in motivational states. Anger might instill a disposition to aggress in the West, whereas sulking behavior may be more typical in Malaysia. In Malay, aggression is associated with amok, which refers (as its English loanword does) to a frenzied state. Thus, there seems to be no exact synonym for anger: a state that is prototypically aggressive but not frenzied.
Culture can also impact expressions of emotions. This is sometimes done through active suppression. Ekman and Friesen (1971) present evidence that public expression of negative emotions is discouraged in Japan. New expressions may also be cultivated culturally. There is evidence that tongue biting is used by women to express shame in parts of India (Menon and Shweder, 1994). There are also cultural differences in gestures used to express anger, such as the middle finger in North America or the double finger salute in Britain. What North Americans interpret as an “okay” sign would be interpreted as a sexual insult in Russia or Brazil. As these gestures become habitual, they may become incorporated into automatic ways of expressing emotions in some contexts.
In addition to emotional expressions, cultures can promote highly complex behavioral responses. Love is sometimes taken to be grounds for marriage, but less so in cultures where marriage is arranged. Grief in Biblical contexts might have been expressed by tearing one's clothes or covering oneself with dirt. Shame can require culturally specific behaviors of self-abasement, such as bowing low. Hope may promote the use of lucky charms or prayers, depending on one's cultural beliefs.
These examples suggest that culture can impact emotional response in a wide variety of ways. As a consequence, emotions that are widely recognized in one culture may go unnoticed or uninstantiated in another. One example is amae, a Japanese emotion construct, which is characterized as a positive feeling of dependency on another person, group, or institution (Doi, 1973). Another example is the Samoan emotion of musu, which expresses a person's reluctance to do what is required of him or her. In more isolated societies, it has even been argued that none of the named emotions correspond exactly to emotions that we would recognize here. This may be the case among the Ifaluk, a small group in Micronesia (Lutz, 1988).
In arguing for cultural variation in emotions, researchers often cite differences in emotional vocabulary. Such differences would not be especially powerful evidence were it not for independent evidence (just discussed) that culture can exert a causal impact. Vocabulary differences may also be evidential in another way. The very fact that a label exists in a language may have a causal impact on the frequency or manifestation of a psychological state. This is what Hacking (1999) calls a “looping effect.” This can sometimes be seen in the case of pathological emotions. For example, incidence and symptoms of depression may increase as a consequence of public discourse about depression (Ryder et al., 2008; see also Murphy, 2006). Depression as we know it may be culturally specific in the way it presents, even if there are related disorders in other cultures, such as melancholia and acedia in medieval Europe (Jackson, 1981). Some emotional disorders may be common in one society and virtually unheard of elsewhere. One example is latah, a disorder found among women in parts of South East Asia, in which victims enter a trance-like state, shout obscenities, repeat what others say to them, and exhibit an extremely strong and sensitive startle response (Simons, 1996).
In light of such cultural variation, some argue that emotions are socially constructed (Averill, 1980; Harré, 1986; Armon-Jones, 1989; see also the entry on naturalistic approaches to social construction). Others resist this idea, arguing that emotions are innate biological programs, shared across the species despite differences in emotion vocabulary. The latter position has been associated with evolutionary approaches to emotion (Plutchik, 2001), and research on universal recognition of emotional facial expressions (Ekman et al. 1969).
Ekman and his collaborators studied an isolated culture, the Fore, in Papua New Guinea. These people had little contact with the West, and Ekman wondered whether they assign the same significance to emotional expressions as we do. He identified six emotions that are very reliably identified in Western nations (joy, sadness, anger, fear, surprise, and disgust), and found corresponding words in Fore. He asked his respondents to look at photos of expressions and identify which faces go with which words. He also described various scenarios (such as seeing an old friend or smelling something bad) and asked them to choose the face that best expressed how someone in those situations would feel. Using these methods, he was able to show that the Fore give responses that are very similar to the responses we give in the West. Ekman concluded that emotional expressions are not cultural inventions, but rather, are biologically determined.
A close look at Ekman's data suggests that he may exaggerate the degree of universality. The Fore do indeed respond similarly to their Western counterparts, but not identically. For example, they are more likely to label as fear the faces that we identify as surprise, and they also associate sadness with the faces we label angry. So the dominant response among the Fore differs from ours in two of six cases. And even where they agree with our labeling, the level of agreement is often surprisingly low, with less than 50% giving the expected response. Moreover, the Fore who had more exposure to outsiders also gave answers that were more like outsiders', suggesting some cultural influence (see Russell, 1994, for more discussion).
It doesn't follow that emotions are mere social constructions. Rather, it seems that we have biologically basic emotions that can be altered by culture. Whether these alterations qualify as different emotions or simply as different manifestations of the same emotion depends on what one takes emotions to be. The nature of emotions is a matter of considerable debate (Prinz, 2004). For those who take emotions to essentially involve judgments, constructivist theories of emotion are attractive, because culture can influence how people construe situations (Solomon, 2002). Constructivism is also appealing to those who think of emotions as analogous to scripts, which include everything from canonical eliciting conditions to complex behavioral sequelae (Russell, 1991; Goddard, 1996; Goldie, 2000). Those who see emotions as automatic behavioral programs or patterned bodily changes have been less inclined towards constructivism (James, 1884; Darwin, 1872; Ekman, 1999; though see Prinz, 2002). Griffiths (1997) has argued that emotions are not a natural kind: some are culturally constructed scripts, others are automatic behavioral programs, and still others are evolved strategic responses that unfold over longer timescales.
It might seem that we cannot settle the question of whether culture shapes emotions without deciding between these theories of what emotions are. On the other hand, the evidence suggests that culture can influence every aspect of our emotional responses, and this suggests that, whatever emotions really are, culture can have an impact. It remains open to debate whether that impact is significant enough to warrant the conclusion that some emotions are social constructs.

3.4 Morality

Few deny that biology makes some contribution to morality. There is a vast literature on prosocial behavior in primates, moral behavior in early childhood, and universal dispositions to empathy and altruism (e.g., Warneken and Tomasello, 2009). But no account of moral psychology can stop with biology. Morality is also influenced by culture, and this raises traditional philosophical questions about moral relativism.
Evidence for cultural variation in values is easy to come by (see Prinz, 2007). Consider, for example, attitudes towards various forms of violence. Cannibalism, slavery, honor killing, headhunting, public executions, and torture have been widely practiced by a range of societies, but are reviled in the contemporary West. There is also considerable diversity in the sexual domain: polygamy, cousin marriage, masturbation, bestiality, pre-marital sex, prostitution, concubinage, homosexuality, and other practices are accepted in some places and morally condemned elsewhere. The anthropological record suggests that just about every behavior we consider immoral has been an accepted cultural practice somewhere. Of course, a society wouldn't survive very long if it encouraged the random killing of next-door neighbors, but societies that encourage the murder of people in the next village can endure indefinitely (see Chagnon, 1988, on the Yanomamö).
One can find further support for moral diversity by conducting psychological experiments on members of different cultures and subcultures. Nisbett and Cohen (1996) compared Americans from Southern states with Americans from the North and found that Southerners were much more likely to endorse violence of various forms in response to moral transgressions (killing to defend property, corporal punishment, gun possession, and so on). They explain this by noting that many Southerners are descendants of Scots-Irish immigrants who had to develop a "culture of honor" to survive under harsh, comparatively lawless conditions in Northern Ireland before coming to the United States.
Cultural differences in morality have also been tested using economic games (Henrich et al., 2005). One example is the ultimatum game, in which one person must propose a division of a sum of money (say $100) with a stranger. If the stranger rejects the division, no one gets any of the money. In the U.S., most people offer relatively equal splits; if they offer too little, the other person typically rejects the split out of spite, and both players go home empty-handed. The game thus measures moral attitudes towards fairness, and there are notable differences across cultures. The Machiguenga of Peru, whose economic system does not depend much on cooperation, make lower offers on average than Americans, and they accept lower offers. Among the Au of New Guinea, people sometimes reject "hyper-fair" offers, that is, offers of more than 50%. In the U.S., a hyper-fair offer would be happily accepted, but the Au routinely reject such generosity; a similar pattern has been found in Russia and other former Soviet states (Herrmann et al., 2008). Hyper-fair offers may be regarded as ostentatious, or as attempts to achieve some kind of dominance by making the recipient feel indebted.
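The payoff structure of the ultimatum game is simple enough to state as a short program. The following sketch, in Python, is purely illustrative: the function name, the responder rules, and the acceptance thresholds are hypothetical devices for exposition, not the procedures or parameter values used by Henrich et al.

def ultimatum_game(total, offer, responder_accepts):
    # The proposer keeps total - offer and the responder receives offer,
    # but only if the responder accepts; a rejection leaves both with nothing.
    if responder_accepts(offer, total):
        return total - offer, offer
    return 0, 0

# Hypothetical responder rules (thresholds are illustrative, not estimates):
spiteful = lambda offer, total: offer >= 0.3 * total                # rejects low offers
accepts_anything = lambda offer, total: offer > 0                   # takes any positive offer
rejects_hyperfair = lambda offer, total: 0 < offer <= 0.5 * total   # refuses offers over 50%

print(ultimatum_game(100, 20, spiteful))           # (0, 0): low offer rejected out of spite
print(ultimatum_game(100, 20, accepts_anything))   # (80, 20)
print(ultimatum_game(100, 60, rejects_hyperfair))  # (0, 0): hyper-fair offer refused

The rejection option is what gives the game its moral dimension: a purely self-interested responder would accept any positive offer, so systematic rejections, whether of low offers or of hyper-fair ones, are taken to reveal culturally variable norms of fairness.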
Some philosophers have resisted the claim that there is cultural variation in morality. Rachels (2003: chap. 2), for example, argues that some differences are merely apparent. The Inuit tolerate infanticide, but so would we if we lived on the Arctic tundra, where resources are scarce. Against this kind of reply, one might argue that values do not, in fact, tend to change right away when we change environments (the U.S. Southern culture of honor may be a hold-over from hard times in Northern Ireland prior to U.S. immigration; Nisbett and Cohen, 1996). Moreover, the fact that our attitudes toward infanticide might shift in the tundra might be taken as evidence for relativism rather than against it: morality is highly sensitive to environmental variables.
Other critics have argued that we cannot adequately assess whether cultures differ in values. Moody-Adams (1997) argues that, absent a complete understanding of another culture's beliefs, we might mistake differences in factual beliefs for moral differences. For example, did the Aztecs really think cannibalism was acceptable, or were they driven to the practice by a cosmology on which it was the only way to appease the gods? We may never know. On the other hand, anyone willing to concede that culture can alter people's non-moral beliefs might also concede that values can be altered.
The most enduring philosophical debate about moral variation concerns metaethical relativism. Does moral diversity imply that there is no single true morality? By itself, it does not. But some relativists argue that there is no source of morality other than our attitudes (e.g., they argue for subjectivism), so cultural variation implies that morality is relative (Prinz, 2007). Others argue that appeals to cultural history adequately explain why we have moral values, so there is no pressure to posit a further domain of values that transcends culture (Harman, 1977). These views do not entail that any morality whatsoever is possible: there may be only a plurality of acceptable value systems, given human nature and the situations we find ourselves in (Wong, 2006). Opponents of relativism think even such pluralism is too generous. Demands of reason (Kant), intrinsic goods (consequentialism), natural conditions for flourishing (Aristotle), ideal observers (Smith), and divine commands have all been explored as sources of absolute values.

4. Philosophical Intuitions and Culture

Cultural variation bears on traditional philosophical questions, such as questions about moral relativism, the modularity of perception, and incommensurability of meaning. Cultural variation also bears on the practice of philosophy itself. Some have argued that philosophical theories are culturally informed, and that, therefore, philosophers who take themselves to be seeking universal truths must either revise their aspirations or alter their methodology.
One place where this issue has been confronted is comparative philosophy. For example, scholars of philosophical traditions in East Asia sometimes wonder to what extent these are related to traditions in the West. A skeptical view holds that the starting assumptions, guiding questions, and dominant methods are so different that comparison is of limited value. At the other extreme, one might treat Eastern and Western philosophers as if they were part of a single domain, comparing them as readily as one might compare two figures from the same cultural heritage.
The idea that philosophical ideas are culturally informed has also been investigated empirically. Experimental philosophers have converted standard philosophical thought experiments into survey studies in an effort to see whether untutored intuitions align with those endorsed by professional philosophers. Some experimental philosophers have used the survey method to make cross-cultural comparisons, most often comparing philosophical intuitions in the United States with those in China and other East Asian countries. The results suggest that there is cultural variation.
In one pioneering study, Weinberg et al. (2001) looked at epistemic intuitions. Within recent Western epistemology, the most influential thought experiments are due to Edmund Gettier (1963), who devised them in an effort to argue against the prevailing view that knowledge is justified true belief. These cases are supposed to show that a belief can be justified and true without being an intuitive case of knowledge. For example, it is a majority view within Western philosophy that the following Gettier-inspired case does not qualify as knowledge:
Bob has a friend, Jill, who has driven a Buick for many years. Bob therefore thinks that Jill drives an American car. He is not aware, however, that her Buick has recently been stolen, and he is also not aware that Jill has replaced it with a Pontiac, which is a different kind of American car. Does Bob really know that Jill drives an American car, or does he only believe it?
Weinberg et al. gave this vignette to college students of European, East Asian, and South Asian descent. Most European Americans shared the intuition that Bob does not know that Jill drives an American car, but the majority of East and South Asians had the opposite intuition.
Another cross-cultural study of philosophical intuitions is reported by Machery et al. (2004). They turned from epistemology to semantics and found that one of the most influential thought experiments in the philosophy of language elicits different intuitions across cultural groups. The thought experiment is due to Kripke (1980), who was arguing against descriptive theories of reference. According to descriptive theories, a proper name refers to the individual who satisfies the descriptions most associated with that name. For example, descriptivists would say that "Gödel" refers to the person who proved the incompleteness of arithmetic. Kripke objects by constructing an imaginary case in which someone else came up with the proof and the person we know as Gödel merely took credit for it. Kripke's intuition is that, even if this were so, "Gödel" would continue to refer to the same person, not to the man who actually discovered the proof. That intuition counts against descriptivism and in favor of a causal-historical theory of reference. Machery et al. show that American college students with a Western cultural background were much more likely to share Kripke's intuitions than students in Hong Kong in cases of this kind (for objections, see Martí, 2009, and replies in Machery et al., 2009).
In another study, Huebner et al. (2010) used cross-cultural methods to test an intuition that has been important in consciousness studies. Block (1979) argued against functionalism using thought experiments in which the functional organization of a human mind is realized by a population of people rather than by a biological brain. Block's intuition is that this collective would not be conscious, and therefore that functional organization is not sufficient for consciousness. Huebner et al. show that students in Hong Kong are significantly more likely than American students to ascribe consciousness to collectives. They conclude that the intuitions underlying Block's argument are not shared across cultures.
The authors of these studies emphasize two points. First, the standard method of drawing philosophical conclusions by consulting intuitions may be problematic, because those intuitions are not consistently held across cultures. If philosophers seek to discover the nature of knowledge, reference, or consciousness by analyzing the corresponding concepts, they must reckon with the fact that these concepts vary, and no single analysis is likely to emerge. Second, some of the variance may be accounted for by cultural variables. This suggests that concepts are culturally influenced, and that philosophical theories based on concepts may reflect the attitudes of a cultural group rather than a universally shared understanding of the target domain. From this perspective, philosophy based on intuitive judgments begins to look more like auto-anthropology than a window onto absolute truths.
Opponents of experimental philosophy argue that surveys of college students reveal less about concepts than do the intuitions of professional philosophers. These critics suggest that the intuitions of professionals engaged in careful discussion and argumentation are more likely to converge and are more reliable. But this prognosis may be overly optimistic. Professional philosophers within the same culture do not converge, so there is little prima facie reason to expect cross-cultural convergence. Moreover, it is important to bear in mind that intuitions tap into semantic knowledge, and semantic knowledge is not based on the recollection of perfect forms in Plato's heaven. Rather, it is informed by everything from explicit instruction to patterns of language use within a community and salient exemplars. These sources of semantic knowledge may vary cross-culturally. Thus, it remains possible that cherished philosophical theories are more parochial than we assumed. If so, research on the cognitive science of culture has important implications for philosophical practice.