Culture in music cognition
Culture in music cognition refers to the influence a person's culture has on their music cognition, including preferences, emotion recognition, and musical memory. Variables such as age, language, and personality type also mediate the impact of familiarity on music preferences. Both culturally specific and universal structural features of music influence how people classify the emotion of a musical piece. Enculturation also positively affects a person's musical memory; this, together with patterns of neural activity in the brain, supports the idea of a separate memory system for music.
Preferences
Effect of culture
Culturally bound preferences for and familiarity with music begin in infancy and continue through adolescence and adulthood. People tend to prefer and remember music from their own culture. Soley and Hannon conducted a study with Western and Turkish infants, in which 4- to 8-month-old infants listened to samples of music in Western, Turkish, or arbitrary meters.[1] Looking times indicated that Western infants preferred Western meter, while Turkish infants preferred both Turkish and Western meters, Western meters not being completely unfamiliar in Turkish culture. Both groups preferred either meter over arbitrary meter. These results suggest that familiarity with culturally typical meters is in place within the first months of life.
In addition to influencing preference for meter, culture affects people's ability to correctly identify music styles. In a study conducted by Teo, Hargreaves, and Lee, adolescents from Singapore and the UK heard examples of Chinese, Malay, and Indian music styles.[2] Neither group demonstrated a preference for the Indian samples, although the Singaporean teenagers were able to recognize them. Participants from Singapore showed higher preference for, and ability to recognize, the Chinese and Malay samples; UK participants showed little preference or recognition for any of the samples, as those styles are not present in their native culture.
Bilingualism also shapes preferences for the language of a song's lyrics. Abril and Flowers presented monolingual (English-speaking) and bilingual (Spanish- and English-speaking) sixth graders with the same song in instrumental, English, and Spanish versions.[3] Preference ratings showed that bilingual students favored the Spanish version, while monolingual students more often preferred the instrumental version; the children's self-reported distraction was the same for all excerpts. The bilingual students also identified most closely with the Spanish song. Thus, the language of lyrics interacts with a listener's culture and language abilities to affect preferences.
Personality
Preferences for unfamiliar music are strongly shaped by a person's personality type and their emotional responses to the music. Ladinig and Schellenberg used the Big Five personality traits to examine how differences in personality might affect music preferences,[4] and also investigated how stylistic elements of the music would affect emotional responses. Ratings of happiness were higher for fast tempi than for slow tempi, and mixed happy and sad feelings arose most often with minor rather than major modes. Regarding personality, participants who scored high in agreeableness, a trait characterized by a capacity for empathy and compassion, had relatively intense emotional responses to the music. Introversion and openness to experience were linked with preferences for music that evoked sadness. Finally, participants with more years of music training were more likely to prefer music that evoked mixed feelings of happiness and sadness.
Bimusicalism
Bimusicalism is a phenomenon in which people familiar with music from two different cultures exhibit dual sensitivity to both genres. In a study with participants familiar with Western music, Indian music, or both, the bimusical participants showed no bias toward either style in recognition tasks and did not judge one style as more tense than the other. In contrast, the Western-only and Indian-only participants more successfully recognized music from their own culture and felt the other culture's music was more tense overall.[5] These results indicate that everyday exposure to music from two cultures can produce mental sensitivity to both styles.
Emotion
The perception of emotion in music is facilitated in part by cultural familiarity, but universal, psychophysical structural features are also important.
Musical cues
The cue-redundancy theory of emotion recognition differentiates between universal, structural auditory cues and culturally bound, learned auditory cues. Balkwill, Thompson, and Matsunaga (2004)[6] assert that listeners exposed to familiar music (i.e., music from their own cultural tradition) use both types of acoustic cues to identify emotionality, whereas perception of intended emotion in unfamiliar music relies solely on universal, psychophysical properties (e.g., tempo). Japanese participants demonstrated sensitivity to intended emotion in music samples outside their cultural convention: they accurately categorized joyful, sad, and angry excerpts from more familiar traditions (Japanese and Western samples), as well as from a relatively unfamiliar tradition (Hindustani). Tellingly, structural properties of the samples predicted participants' emotion judgments. Simple, fast melodies received joyful ratings; simple, slow samples received sad ratings; and loud, complex excerpts tended to be perceived as angry. The strong relationships between emotional judgments and structural acoustic cues suggest the importance of universal musical properties in categorizing unfamiliar music.
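The cue-to-emotion profiles described above can be illustrated with a toy rule-based sketch. The thresholds, feature names, and function are invented for illustration only; the study itself reports correlations between cues and listener judgments, not a classifier.

```python
# Toy illustration of the structural cue profiles reported above:
# fast + simple -> joy, slow + simple -> sadness, loud + complex -> anger.
# All thresholds and feature names are hypothetical.

def judge_emotion(tempo_bpm: float, loudness_db: float, complexity: float) -> str:
    """Map coarse psychophysical cues to an emotion label.

    complexity is a made-up 0-1 melodic-complexity score.
    """
    if loudness_db > 75 and complexity > 0.6:
        return "angry"      # loud, complex excerpts
    if tempo_bpm >= 120 and complexity <= 0.6:
        return "joyful"     # simple, fast melodies
    if tempo_bpm < 80 and complexity <= 0.6:
        return "sad"        # simple, slow samples
    return "ambiguous"      # conflicting cues

print(judge_emotion(140, 65, 0.3))  # joyful
print(judge_emotion(60, 60, 0.2))   # sad
print(judge_emotion(130, 80, 0.8))  # angry
```

A rule table like this only captures the direction of the reported relationships; real listeners weigh these cues continuously and in combination.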
Kwoun (2009)[7] replicated and extended Balkwill et al.'s work with American and Korean listeners, both of whom judged the intended emotion of Korean folksongs. The American group's identification of happy and sad songs did not differ statistically from the Korean listeners'. More surprisingly, despite their lesser familiarity with Korean folksongs, Americans were more accurate in their anger judgments than the Korean group. The latter result implies that cultural differences in anger perception occur independently of familiarity, while the similarity of American and Korean happy and sad judgments indicates the role of universal auditory cues in emotion perception.
Tempo vs. timbre
Categorization of unfamiliar music, utilizing specific psychophysical or universal cues, appears to vary by the emotion represented in the music. Recall that Balkwill et al. (2004) identified different structural profiles associated with happy, sad, and angry percepts. In a separate study, Balkwill and Thompson (1999)[8] showed that timbre mediates Western listeners’ recognition of angry and peaceful Hindustani songs; flute timbre supported the detection of peace, whereas string timbre aided anger identification. Happy and sad assessments, on the other hand, relied primarily on relatively “low-level” structural information such as tempo. Notably, both low-level cues (e.g., slow tempo) and timbre aided in the detection of peaceful music, but only timbre cued anger recognition. Communication of peace, therefore, takes place at multiple structural levels, while anger seems to be conveyed nearly exclusively by timbre. The exclusive dependence of anger recognition on timbre may reflect human evolutionary history. Growl-like timbres, typically rough and low in pitch, convey anger and aggression in social contexts. Similarities between aggressive vocalizations and angry music (e.g., roughness) may contribute to the salience of timbre in anger assessments.[9]
Methodological limitations of previous studies preclude a complete understanding of the roles of psychophysical cues in emotion recognition. Hunter, Schellenberg, and Schimmack (2008),[10] for example, showed that conflicting mode and tempo cues elicit “mixed affect,” demonstrating the potential for mixed emotional percepts. Dichotomous scales (e.g., simple happy/sad ratings) may mask this phenomenon, as they require participants to report a single component of a multidimensional affective experience.
Memory
Neuroscience
The appreciation and understanding of music depends on both long-term and working memory systems. Long-term memory enables a listener to develop musical expectations based on previous experience, and, because music develops over time, working memory is necessary to relate pitches in a phrase, between phrases, and throughout a piece to one another. In fact, areas of the brain involved in working memory, such as the orbital area of the left inferior frontal gyrus, are more activated when listening to phrased than unphrased music.[11]
Evidence suggests that memory for music is, at least in part, special and distinct from other forms of memory.[12] The neural processes of music memory retrieval share much with those of verbal memory retrieval, including activation of the left inferior frontal cortex and the posterior middle temporal cortex.[13] The left inferior frontal gyrus is generally thought to be involved in executive function and has been shown to be active specifically in executive control of verbal retrieval,[14] while the posterior middle temporal cortex is activated during semantic retrieval.[15] Musical semantic retrieval, however, also bilaterally activates the superior temporal gyri, which contain the primary auditory cortex.[13]
Effect of culture
Despite the universality of music, enculturation has a pronounced effect on individuals' musical memory. A study by Demorest et al. (2008)[16] demonstrated that people are best at recognizing and remembering music in the style of their native culture. Furthermore, they are also better at recognizing and remembering music from a familiar but nonnative culture than they are for music from an unfamiliar culture. This evidence suggests that people develop their cognitive understanding of music from their culture. Part of the difficulty in remembering culturally unfamiliar music may arise from the use of different neural processes when listening to familiar and unfamiliar music. For instance, brain areas involved in attention, including the right angular gyrus and middle frontal gyrus, show increased activity when listening to culturally unfamiliar music compared to novel but culturally familiar music.[11]
Morrison et al. (2008)[17] extended Demorest’s findings by demonstrating that musical enculturation occurs in early childhood. Like the adults in Demorest’s study, children better remembered novel music from their native culture than from an unfamiliar one. Although children were not as proficient as adults at remembering complex examples of music from their own culture, they were as good as adults at recognizing simple excerpts from it, suggesting that this enculturation effect on music memory occurs before an individual’s cognitive schemata for music fully develop. Some evidence suggests that music enculturation may begin in children as young as one year old.[18] One way culture is thought to affect children’s developing music cognition is through language. Trehub et al. (2008)[19] showed that Canadian children develop the ability to identify pitches from familiar songs at 9 or 10 years of age, while the same ability develops in Japanese children as early as 5 or 6. The authors suggest that the Japanese language’s use of pitch accents encourages better pitch discrimination at an early age, whereas such fine pitch discrimination is not necessary for learning English, which uses stress accents.
Enculturation also biases a listener’s musical expectations. Curtis and Bharucha (2009)[20] presented American college students with a series of pitches from either a culturally familiar Western scale or a culturally unfamiliar Indian scale, then played a test tone and asked whether it had been included in the preceding series. Although the experimenters found no significant difference in reaction times, they did find a significant difference in false alarm rates, the frequency with which participants indicated that a pitch had been presented when it had not. False alarm rates were higher for test tones from the Western scale than for test tones from the Indian scale: participants were more likely to report having heard a culturally familiar tone that was never presented, and less likely to make the same error for a culturally unfamiliar tone. The experimenters argue that this difference results from music enculturation, which biases listeners to expect tones that correspond to culturally familiar modal traditions.
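The false-alarm measure in a recognition task of this kind can be sketched as follows. The helper function and the response data are hypothetical, constructed only to show how the reported bias would surface in the numbers:

```python
# Sketch of a false-alarm rate computation for a tone-recognition task.
# Each response is an "old"/"new" judgment about a probe tone that was
# in fact NOT presented; saying "old" to such a probe is a false alarm.
# The response lists below are invented for illustration.

def false_alarm_rate(responses: list) -> float:
    """Proportion of 'old' responses among probes that were actually new."""
    return responses.count("old") / len(responses)

# Hypothetical judgments of one listener for absent probe tones
# drawn from a familiar (Western) vs. unfamiliar (Indian) scale.
western_probes = ["old", "old", "new", "old", "new", "old", "new", "old"]
indian_probes = ["new", "new", "old", "new", "new", "new", "old", "new"]

fa_western = false_alarm_rate(western_probes)  # 0.625
fa_indian = false_alarm_rate(indian_probes)    # 0.25

# The enculturation bias Curtis and Bharucha describe would appear as
# fa_western > fa_indian for Western-enculturated listeners.
print(fa_western, fa_indian)
```

Comparing false alarm rates rather than raw accuracy is what lets the bias show up: both groups of probes were absent, so any asymmetry reflects expectation rather than memory strength.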
Limits of enculturation
Despite the powerful effects of music enculturation, evidence indicates that cognitive understanding of and affinity for different cultural modalities is somewhat plastic. Although listeners’ music cognition may be biased towards culturally familiar music, Wong et al. (2009)[5] provide evidence for bimusicalism, a musical phenomenon akin to bilingualism. While they found that participants who listened only to either Western or Indian music displayed biases in recognition memory and perceptual judgments of tension, those who listened to both Indian and Western music regularly showed no such biases.
Other evidence suggests that changes in the appreciation and understanding of unfamiliar modalities can occur over a short period of time. Loui et al. (2010)[21] created original melodies from the Bohlen-Pierce scale, a non-diatonic scale not used in traditional Western music, and exposed passive listeners to the melodies for half an hour. Afterward, participants demonstrated increased recognition memory for melodies generated from this scale, as well as a greater affinity for them. Because the Bohlen-Pierce scale divides a 3:1 frequency ratio (the "tritave") into thirteen steps, rather than the 2:1 octave into the twelve chromatic pitches of Western music, it presented listeners with an unfamiliar musical grammar. These findings suggest that, in addition to the long-term effects of music enculturation, short-term exposure to unfamiliar musical grammars can rapidly affect music perception and memory.
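The construction of the equal-tempered Bohlen-Pierce scale can be sketched numerically; the 220 Hz base frequency below is an arbitrary choice for illustration.

```python
# The equal-tempered Bohlen-Pierce scale divides a 3:1 frequency ratio
# ("tritave") into 13 equal logarithmic steps, instead of the Western
# octave's 12 divisions of a 2:1 ratio. Each step multiplies the
# frequency by 3**(1/13).

def bp_frequency(base_hz: float, step: int) -> float:
    """Frequency of the nth Bohlen-Pierce step above base_hz."""
    return base_hz * 3 ** (step / 13)

base = 220.0  # arbitrary reference pitch
scale = [round(bp_frequency(base, n), 1) for n in range(14)]
print(scale)  # step 13 lands exactly on 3 * base = 660.0
```

Because no step of this scale coincides with a familiar Western interval, melodies built from it carry none of the tonal expectations an enculturated Western listener brings to diatonic music.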
See also
- Music cognition
- Cognitive musicology
- Embodied music cognition
- Music and the brain
- Cognitive neuroscience of music
- Music therapy
References
- ^ Soley, G.; Hannon, E. E. (2010). "Infants prefer the musical meter of their own culture: A cross-cultural comparison". Developmental Psychology 46: 286-292. doi:10.1037/a0017555.
- ^ Teo, T.; Hargreaves, D. J.; Lee, J. (2008). "Musical preferences, identification and familiarity: A multicultural comparison of secondary students from Singapore and the United Kingdom". Journal of Research in Music Education 56: 18-32. doi:10.1177/0022429408322953.
- ^ Abril, C.; Flowers, P. (2007). "Attention, preference, and identity in music listening by middle school students of different linguistic backgrounds". Journal of Research in Music Education 55: 204-219. doi:10.1177/002242940705500303.
- ^ Ladinig, O.; Schellenberg, E. G. (2011). "Liking unfamiliar music: Effects of felt emotion and individual differences". Psychology of Aesthetics, Creativity, and the Arts. doi:10.1037/a0024671.
- ^ a b Wong, P. C. M.; Roy, A. K.; Margulis, E. H. (2009). "Bimusicalism: The Implicit Dual Enculturation of Cognitive and Affective Systems". Music Perception: An Interdisciplinary Journal 27. doi:10.1525/MP.2009.27.2.81.
- ^ Balkwill, L.; Thompson, W. F.; Matsunaga, R. (2004). "Recognition of emotion in Japanese, Western, and Hindustani music by Japanese listeners". Japanese Psychological Research 46: 337-349. doi:10.1111/j.1468-5584.2004.00265.x.
- ^ Kwoun, S. (2009). "An examination of cue redundancy theory in cross-cultural decoding of emotions in music". Journal of Music Therapy 46: 217-237.
- ^ Balkwill, L.; Thompson, W. F. (1999). "A cross-cultural investigation of the perception of emotion in music: Psychophysical and cultural cues". Music Perception: An Interdisciplinary Journal 17: 43-64.
- ^ Tsai, C.; et al. (2010). "Aggressiveness of the growl-like timbre: Acoustic characteristics, musical implications, and biomechanical mechanisms". Music Perception: An Interdisciplinary Journal 27: 209-222.
- ^ Hunter, P. G.; Schellenberg, E. G.; Schimmack, U. (2008). "Mixed affective responses to music with conflicting cues". Cognition and Emotion 22: 327-352.
- ^ a b Nan, Y.; Knosche, T. R.; Zysset, S.; Friederici, A. D. (2008). "Cross-cultural music phrase processing: An fMRI study". Human Brain Mapping 29: 312-328. doi:10.1002/hbm.20390.
- ^ Schulkind, M. D.; DallaBella, S.; Kraus, N.; Overy, K.; Pantev, C.; Snyder, J. S.; Tervaniemi, M.; Tillmann, B.; Schlaug, G. (2009). "Is Memory for Music Special?". Annals of the New York Academy of Sciences 1169: 216-224. doi:10.1111/j.1749-6632.2009.04546.x.
- ^ a b Groussard, M.; Rauchs, G.; Landeau, B.; Viader, F.; Desgranges, B.; Eustache, F.; Platel, H. (2010). "The neural substrates of musical memory revealed by fMRI and two semantic tasks". NeuroImage 53: 1301-1309. doi:10.1016/j.neuroimage.2010.07.013.
- ^ Hirshorn, E. A.; Thompson-Schill, S. L. (2006). "Role of the left inferior frontal gyrus in covert word retrieval: Neural correlates of switching during verbal fluency". Neuropsychologia 44 (12): 2547-2557. doi:10.1016/j.neuropsychologia.2006.03.035.
- ^ Martin, A.; Chao, L. L. (2001). "Semantic memory and the brain: structure and processes". Current Opinion in Neurobiology 11 (2): 194-201. doi:10.1016/S0959-4388(00)00196-3.
- ^ Demorest, S. M.; Morrison, S. J.; Beken, M. N.; Jungbluth, D. (2008). "Lost in translation: An enculturation effect in music memory performance". Music Perception 25 (3): 213-223. doi:10.1525/mp.2008.25.3.213.
- ^ Morrison, S. J.; Demorest, S. M.; Stambaugh, L. A. (2008). "Enculturation effects in music cognition: The role of age and music complexity". Journal of Research in Music Education 56: 118-129. doi:10.1177/0022429408322854.
- ^ Morrison, S. J.; Demorest, S. M.; Chiao, J. Y. (2009). "Cultural constraints on music perception and cognition". Progress in Brain Research 178: 67-77. doi:10.1016/S0079-6123(09)17805-6.
- ^ Trehub, S. E.; Schellenberg, E. G.; Nakata, T. (2008). "Cross-cultural perspectives on pitch memory". Journal of Experimental Child Psychology 100 (1): 40-52. doi:10.1016/j.jecp.2008.01.007.
- ^ Curtis, M. E.; Bharucha, J. J. (2009). "Memory and musical expectation for tones in cultural context". Music Perception 26 (4): 365-375. doi:10.1525/MP.2009.26.4.365.
- ^ Loui, P.; Wessel, D. L.; Kam, C. L. H. (2010). "Humans rapidly learn grammatical structure in a new musical scale". Music Perception 27 (5): 377-388. doi:10.1525/MP.2010.27.5.377.