Neuroscientific Evidence for the Influence of Language on Color Perception
Category: Cognitive Neuroscience • Cognitive Psychology
Posted on: March 20, 2008 4:03 PM, by Chris
You know, just the other day, on this very blog, I swore I would never read another (cognitive) imaging paper again, but between then and now, I've read 5 or 6, so apparently my oath didn't take. It's sort of like my constantly telling myself, as I ride the bus to campus in the morning, that I'm going to stop drinking coffee. As soon as I get off the bus, I walk 30 or so feet to the little coffee stand where they have my 16 oz. coffee waiting for me, 'cause they know as well as I do that I ain't quittin'. Cognitive neuroscience is like coffee.
Anyway, one of the imaging papers I've read since swearing off cognitive neuroscience altogether was published just last week in the Proceedings of the National Academy of Sciences (PNAS, pronounced like... well, you can guess what it's pronounced like), and is an imaging study on linguistic relativity. For blogging purposes, such a paper is doubly awesome, because it gives me an opportunity to blog about two of my favorite topics: 1.) The influence of language on thought and perception, and 2.) How much cognitive neuroscience sucks. And I can do both by presenting previous studies in contrast to last week's PNAS (pronounce it as you read, it makes this post funnier) paper. So I'll start with research published way back in the year 2006.
I've written a lot about linguistic relativity (a soft version of the Sapir-Whorf hypothesis) in the past (see here, here, and here), so I won't go into it in too much detail here. For now it will do simply to say that linguistic relativity has been a hot topic off and on since the first half of the 20th century, and each time it's become hot again, one of the main focuses has been on the influence of language on color perception. If you can show the influence of language on, say, temporal reasoning, that's interesting, but it's conceptual, and we know that words and concepts are pretty intertwined. However, if you can show that language influences low-level perception, like color perception, then you will have demonstrated something exciting. In the 1960s, there was a bunch of research suggesting that color words do influence color perception, but in the late 60s and early 70s, further research suggested this was not the case. Then, in the 2000s, researchers revisited the question, and again found evidence that color words influence color perception in a variety of different tasks.
At this point, at least until another Eleanor Rosch comes around, the evidence for some sort of interaction between language and color perception is pretty strong. The main problem in interpreting this evidence, and most of the evidence related to linguistic relativity more generally, is that it is difficult, if not impossible, to tease apart linguistic and cultural influences. The key to doing so would be to make some sort of prediction about the interaction of color terms and color perception that relies on our knowledge of the unique properties of language processing. If you can provide support for predictions like that, then you can make a pretty good case that the influence of language is direct, rather than mediated by cultural differences that are correlated with linguistic differences.
This brings us to the neuroscience. The one part of the brain that we know a whole hell of a lot about is the visual system, and the early visual system in particular. Neuroscientists can basically tell you exactly what happens to visual information from the time a photon hits a photoreceptor in the back of the retina to the time it reaches the visual cortex, and beyond (notable exceptions are the amacrine cells, the functions of which are a bit of a mystery). For example, we know that information from the right half of the visual field, regardless of which eye it enters, ends up in the left hemisphere: fibers from the nasal half of each retina cross over at the optic chiasm, while fibers from the temporal half stay on the same side, with the net result that each hemisphere processes the opposite visual field. Information from the left visual field goes to the right hemisphere.

When it comes to things outside of the visual system, we know considerably less. However, if there's one area that we know more than a little bit about, it's language processing. Most importantly, for our purposes, we know that for right-handers, the left hemisphere does the bulk of the language processing work. Knowing this, combined with our knowledge of where information from each visual field gets processed, we can make a prediction about how language will affect perception. That is, we can predict that, because information from the right visual field ends up being processed on the left side of the brain, and language is, for the most part, processed on the left side, we should see stronger effects of language on perception for information presented in the right visual field. And over the last couple of years, a series of papers has been published presenting studies that test this prediction.
The first paper, by Gilbert et al.(1), used a simple visual search paradigm. This involves putting a target stimulus in an array with a bunch of distractors. In this case, the targets were squares of a particular color, and the distractors were squares of a different color. In some cases, the distractors and target shared the same color label (e.g., "blue"), while in others they had different labels (e.g., "blue" and "green"). Research in a bunch of different domains has shown that it's easier to discriminate members of different categories than members of the same category, even when the perceptual distance between them is the same, a phenomenon usually called categorical perception. In this case, it should be easier to discriminate "blue" from "green" than "blue" from "blue," even when the difference between the two shades of blue is the same as the difference between the "blue" and "green" shades. Previous research using the visual search paradigm has shown that people are faster at finding targets among perceptually similar distractors when the target and distractors come from different color categories than when they come from the same category(2). The twist in Gilbert et al.'s study is that half the time the target appeared in the right visual field (and was thus initially processed by the left, language-dominant hemisphere), and half the time it appeared in the left. If the labels really are affecting color perception, then we'd expect to find the categorical perception effect much more strongly for targets presented in the right visual field than for those presented in the left.
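To make the design concrete, here's a minimal sketch, in Python, of how a trial like this might be assembled. The hue values, the half-ring mapping to visual field, and all the names are my own placeholders for illustration, not Gilbert et al.'s actual stimuli or code. The prediction, again: the between-category advantage should show up mainly for targets landing in the right visual field.

```python
import random

# Four shades along a green-blue continuum: A and B fall under the label
# "green," C and D under "blue." Adjacent shades are (by design) equally
# far apart perceptually, so B-C is a between-category pair while A-B and
# C-D are within-category pairs. Hue values are hypothetical placeholders.
SHADES = {"A": 140, "B": 155, "C": 170, "D": 185}   # made-up hue angles
LABELS = {"A": "green", "B": "green", "C": "blue", "D": "blue"}

def make_trial(target: str, distractor: str, n_items: int = 12) -> dict:
    """Build one search display: a ring of n_items squares, one of them the target."""
    position = random.randrange(n_items)                 # slot in the ring
    field = "RVF" if position < n_items // 2 else "LVF"  # toy half-ring split
    condition = "between" if LABELS[target] != LABELS[distractor] else "within"
    hues = [SHADES[distractor]] * n_items
    hues[position] = SHADES[target]
    return {"hues": hues, "target_pos": position,
            "visual_field": field, "condition": condition}

trial = make_trial(target="C", distractor="B")  # a "blue" target among "green"s
print(trial["condition"], trial["visual_field"])
```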
Of course, that's what they found. Participants' reaction times were significantly faster for between-category target-distractor searches than for within-category searches when the targets were in the right visual field, but there was no difference between between- and within-category searches for targets presented in the left visual field.
In their second study, Gilbert et al. gave participants a verbal interference task (silently repeating an eight-digit number), and the effect for the right visual field reversed: between-category searches took longer than within-category searches. The opposite pattern appeared in the left visual field, though there the difference between within- and between-category searches was not significant. This suggests that it really is the category label that is causing the categorical perception effect, because the verbal interference task does just what it says: it interferes with language processing. Since this processing takes place primarily in the left hemisphere, it should only affect targets presented in the right visual field, as it did in Gilbert et al.'s study.
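If it helps to see the logic in one place, the lateralization story from these two experiments reduces to a simple predicted pattern. Here's a toy summary function; it's my own gloss on the predictions, not anything from the papers:

```python
def predicted_category_advantage(visual_field: str, verbal_interference: bool) -> str:
    """Qualitative prediction: when should between-category search beat within-category?

    The logic: color labels are computed in the left hemisphere, which
    receives the right visual field, so labels can only speed up RVF
    searches, and tying up language processing should take that speed-up away.
    """
    if visual_field == "RVF":
        if verbal_interference:
            return "advantage gone (actually reversed in Gilbert et al.'s data)"
        return "between-category faster than within-category"
    return "no reliable between/within difference"  # LVF, interference or not

for vf in ("RVF", "LVF"):
    for interference in (False, True):
        print(vf, "interference:", interference, "->",
              predicted_category_advantage(vf, interference))
```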
Similar studies by Drivonikou et al.(3), one using a visual search task with more color categories and more distractors, and one asking participants to indicate whether a colored dot differed from a colored background, showed the same effects. Below is a graph from one of their studies (from their Figure 2, p. 1099), which clearly illustrates the effect of visual field (RVF = right visual field, LVF = left visual field).
[Figure 2 from Drivonikou et al., p. 1099, illustrating the visual field effect (RVF vs. LVF).]
In perhaps the coolest of the papers in this line of research, Gilbert et al.(4) conducted another visual search study, but this time they used non-color categories, like animals (e.g., dogs and cats). In this case, there'd be a bunch of cats in a circle and one dog (see below, from their Figure 2, p. 3), and the task was to indicate which side of the circle the dog was on. As in the previous studies, the dog appeared in either the right or left visual field, and we would expect the effect of the label (i.e., the faster times for between-category searches) to be stronger in the right visual field than in the left.
[Figure 2 from Gilbert et al., p. 3: a circle of cats with a single dog target.]
As in the color perception studies, the categorical perception effect was significantly stronger in the right visual field than in the left, and it disappeared when participants were given a verbal interference task.
Now, for me, those studies are pretty convincing. In each case, the effect was stronger when perceptual processing took place in the same hemisphere where language is processed, and the effects disappeared when you interfered with language processing. That seems like pretty direct evidence that language is influencing categorical perception in color and other domains. But why be satisfied with convincing evidence when you've got an fMRI machine and twenty thousand dollars, right? Enter Tan et al.(5).
Tan et al.'s task was much simpler than the tasks in the Gilbert et al. and Drivonikou et al. studies. All their participants had to do was decide whether two color squares were the same color or different colors. Granted, the squares were only presented for 100 ms, but still. They used colors with six different names in Mandarin, three of which were easy to name and three of which were difficult to name (based on data from a pilot study). Given that the colors were only presented for a brief moment, the effects of language should only show up for the easily (i.e., quickly) accessed color labels.
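For concreteness, the trial structure might look something like the sketch below. Only the 100 ms exposure comes from the paper; the fixation duration, response window, and the display/response stand-ins are all my own inventions for illustration:

```python
import random
import time

def show(*stimuli):
    """Stand-in for drawing to the screen (a real experiment would use
    presentation software, not print)."""
    print("display:", *stimuli)

def wait_for_keypress(timeout: float):
    """Stand-in for response collection; returns a fake key and reaction time."""
    rt = random.uniform(0.3, timeout)
    return random.choice(["same", "different"]), rt

def run_trial(color_a: str, color_b: str, easy_to_name: bool) -> dict:
    """One schematic same/different trial in the style of Tan et al."""
    show("fixation"); time.sleep(0.5)        # hypothetical 500 ms fixation
    show(color_a, color_b); time.sleep(0.1)  # 100 ms exposure (from the paper)
    show("blank")
    response, rt = wait_for_keypress(timeout=2.0)  # hypothetical response window
    return {"same_trial": color_a == color_b, "response": response,
            "rt": rt, "naming": "easy" if easy_to_name else "hard"}

print(run_trial("red", "red", easy_to_name=True))
```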
Now, they didn't find any behavioral differences between the easy- and hard-to-name conditions. That is, people were equally fast at making the same/different judgment in both conditions. But they did find differences in brain activation. Both conditions produced activation in areas associated with color vision (medial frontal gyrus, mid-inferior prefrontal cortex, insula, right superior temporal cortex, thalamus, and cerebellum). The left superior temporal gyrus, left precuneus, and left postcentral gyrus, all areas associated with language processing, showed more activation in the easy-name condition than in the hard-name condition.
Aside from pretty pictures of the brain, what has the Tan et al. study taught us that the previous studies hadn't? Well, considering the fact that there were no behavioral differences observed, it's hard to know exactly what was going on, but at most, all these data suggest is that when presented quickly, easy-to-name colors prime their labels, while hard-to-name colors do not. Not only is this not interesting in itself, but in the context of linguistic relativity, it doesn't even suggest the right direction of influence. That is, without behavioral differences, the imaging data doesn't suggest that language processing is influencing perception, but instead that perception is priming particular lexical items. That's just, well, boring. I mean, duh. But again, cool brain pictures. Coffee.
Are you starting to see why I find cognitive neuroscience so frustrating? The first series of studies -- those by Gilbert et al. and Drivonikou et al. -- are excellent lessons in using neuroscience to test hypotheses. They took things we know about the brain (things we knew long before imaging technology existed), came up with hypotheses based on them, and then developed behavioral predictions from those hypotheses. The Tan et al. study, on the other hand, doesn't really test any hypothesis directly relevant to linguistic relativity. We can't, from their data, make any behavioral predictions, and we can't infer that the increased processing they observed in the language areas of the brain had any influence on the processing in the visual areas that were active. And I guarantee you the Tan et al. study cost more; in all likelihood, that single study cost more than the eight studies presented in the other three papers combined! A simple cost-benefit analysis of the Tan et al. study therefore gives us a benefit-to-cost ratio of 0: it costs a bunch, and we've learned jack.
I'm never reading another imaging study again, or drinking any more coffee.
(1) Gilbert, A.L., Regier, T., Kay, P., & Ivry, R.B. (2006). Whorf hypothesis is supported in the right visual field but not the left. Proceedings of the National Academy of Sciences, 103(2), 489-494.
(2) Roberson, D., & Davidoff, J. (2000). The categorical perception of colours and facial expressions: The effect of verbal interference. Memory & Cognition, 28, 977-986.
(3) Drivonikou, G.V., Kay, P., Regier, T., Ivry, R.B., Gilbert, A.L., Franklin, A., & Davies, I.R.L. (2007). Further evidence that Whorfian effects are stronger in the right visual field than the left. Proceedings of the National Academy of Sciences, 104(3), 1097-1102.
(4) Gilbert, A.L., Regier, T., Kay, P., & Ivry, R.B. (in press). Support for lateralization of the Whorf effect beyond the realm of color discrimination. Brain and Language.
(5) Tan, L.H., Chan, A.H.D., Khong, P.L., Yip, L.K.C., & Luke, K.K. (2008). Language affects patterns of brain activation associated with perceptual decision. Proceedings of the National Academy of Sciences, 105(10), 4004-4009.
http://scienceblogs.com/mixingmemory/20 ... _for_t.php