
Salud Mental

ISSN 0185-3325 (Print)
Órgano Oficial del Instituto Nacional de Psiquiatría Ramón de la Fuente Muñiz

2009, Number 1


Salud Mental 2009; 32 (1)

La respuesta emocional a la música: atribución de términos de la emoción a segmentos musicales [The emotional response to music: attribution of emotion terms to musical segments]

Flores-Gutiérrez E, Díaz JL

Language: Spanish
References: 33
Page: 21-34
PDF size: 190.49 Kb.


Key words:

Music, emotion, emotion terms, attribution, inter-evaluator agreement.

ABSTRACT

Even though music is usually considered a source of intense, diverse, and specific affective states, at present there is no standardized scientific procedure that reliably reveals the emotional processes and events evoked by music. Progress in understanding musical emotion depends crucially on the development of reasonably secure methods to record and analyze such a peculiar and universally sought affective process. In 1936, Kate Hevner published a pioneering study in which she used a list of 66 adjectives commonly applied to musical compositions, arranged in a circle of eight groups of similar emotions. Volunteers selected the terms that seemed appropriate to categorize their emotional experience while listening to masterpieces by Debussy, Mendelssohn, Paganini, Tchaikovsky, and Wagner. The results were presented as histograms showing a different profile for each piece. Subsequent studies have refined the methods and techniques for assessing the emotions produced by music, but many difficulties remain unresolved concerning the criteria for choosing the musical pieces, the emotion terms, the experimental design, the proper controls, and the statistical tools for analyzing the results. The present study was undertaken to test and advance an experimental technique designed to evaluate and study the human emotions evoked by music. Specifically, the study aims to test whether different musical excerpts evoke significant agreement in the selection of previously organized emotion terms within a relatively homogeneous population of human subjects.
Since music constitutes a form of acoustic language selected and developed through millennia of human cultural evolution for the expression and communication of emotional states, it was hypothesized that there would be significant agreement in the attribution of emotion terms to musical segments among evaluators belonging to a relatively homogeneous population. The attribution system made it possible both to obtain objective responses derived from introspection and to analyze the data through appropriate statistical processing of results obtained from groups of subjects exposed to carefully selected musical stimuli. The volunteers were 108 college-level students of both sexes, with a mean age of 22 years, from schools and universities in central Mexico. The audition and attribution sessions lasted 90 minutes and were conducted in a specially adapted classroom at each institution. Four criteria were established for selecting the musical excerpts: instrumental music, homogeneous melody and musical theme, clear and distinct affective tone, and samples from different cultures. The ten selected pieces were: 1. Mozart's Piano Concerto No. 17, K 453, third movement; 2. a recording of the magnetic spectra of an aurora borealis, a natural event; 3. Mussorgsky's Gnome, from Pictures at an Exhibition, orchestrated by Ravel; 4. Andean folk music; 5. Tchaikovsky's Fifth Symphony, second movement; 6. «Through the Never», heavy metal music by Metallica; 7. Japanese Usagi folk music played on koto and shakuhachi; 8. Mahler's Fifth Symphony, second movement; 9. Taqsim Sigah, Arab folk music played on kamandja; and 10. Bach's Invention in three parts for piano, BWV 797. The selected fragments and their replicas were divided into two to five musically homogeneous segments (mean segment duration: 24 seconds) and were played in a different order on each occasion. The segments were played twice during the test.
During the first audition, the complete piece was played so the subjects could become familiar with the composition and freely describe their reaction in writing. During the second hearing, the same piece was played in the separate selected segments, and the volunteers were asked to choose the emotion-referring terms that most accurately identified their music-evoked feelings from an attached chart derived and arranged from an original list of 328 Spanish words designating particular emotions. The terms had been previously arranged in 28 sets of semantically related terms located on 14 bipolar axes of opposing affective polarity in a circumplex model of the affective system. The recorded attributions from all the subjects were captured and transformed into ranks. The non-parametric Friedman two-way analysis of variance by ranks for k related samples was selected for the statistical analysis of agreement. All the data were gathered into the 28 emotion categories or sets obtained in the previous taxonomy of emotion terms, and the difference among the musical segments was tested. The difference was significant for 24 of the 28 emotional categories at α = 0.05 with 33 degrees of freedom (Fr ≥ 43.88). To establish in which segments the main significant differences lay, the extension of the Friedman test for comparing groups against a control was applied. After applying the appropriate formula, a critical value of the difference |R1 − Ru| ≥ 18.59 was established. In this way it was possible to plot the significance level of all 28 emotion categories for each musical segment and thereby obtain the emotion profile of each selected fragment. The differences among the musical pieces were established for the significant response of individual emotions, for groups of emotions, and for the global profile of the response. In all the pieces used, one or more terms showed significance.
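The analysis described above (a Friedman two-way ANOVA by ranks across musical segments, followed by a comparison of each segment against a control) can be sketched in Python with SciPy. This is a minimal illustration on simulated data, not the authors' code: the subject count, segment count, effect sizes, and random scores are assumptions, and the post-hoc threshold is not derived from the paper's formula.

```python
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

# Simulated stand-in for the study's attribution data: n subjects each
# rate one emotion category across k related musical segments.
# 34 segments give the 33 degrees of freedom reported in the abstract;
# all values below are illustrative only.
rng = np.random.default_rng(42)
n_subjects, n_segments = 108, 34
scores = rng.normal(size=(n_subjects, n_segments))
scores[:, :3] += 2.0  # make three segments stand out systematically

# Omnibus Friedman test: each column (segment) is one related sample.
fr_stat, p_value = friedmanchisquare(*scores.T)
print(f"Fr = {fr_stat:.2f}, p = {p_value:.3g}")

# Post-hoc sketch in the spirit of the Friedman extension for comparing
# groups to a control: rank segments within each subject, sum the ranks
# per segment, and examine |rank-sum differences| against a control
# segment (the paper's critical value of 18.59 applies to its own scale
# and formula, not to these simulated rank sums).
within_subject_ranks = np.apply_along_axis(rankdata, 1, scores)
rank_sums = within_subject_ranks.sum(axis=0)
control = rank_sums[-1]          # arbitrarily treat the last segment as control
abs_diffs = np.abs(rank_sums - control)
```

Segments whose absolute rank-sum difference from the control exceeds the critical value would be flagged as significantly different, which is how the per-segment emotion profiles described above can be built category by category.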
Sometimes as many as seven terms appeared predominant (Mahler, Mozart); in contrast, other segments produced only one or two responses (aurora borealis, Arab music). In most musical segments there were null responses, implying agreement not only about the emotions that were present but also about those that did not occur. Concerning the global response, several profiles were recognizable across pieces. The histogram is slanted to the left when positive and vigorous emotions are reported (Tchaikovsky, Bach). A predominance of emotions in the center-right sector corresponds to negative and quiet emotions (Arab music), and in the fourth sector to negative and agitated emotions (Mahler). A «U»-shaped profile was obtained when vigorous emotions predominated (Mahler, Metallica), and a bell-shaped response when calm emotions, both pleasant and unpleasant, were reported (Japanese music). There is also music that globally stimulates one of the four quadrants defined in the affective circle: pleasant (Mozart), unpleasant (Mussorgsky), exciting (Metallica), or relaxing emotions (Japanese music). The only segment that produced responses scattered across the four sectors of emotion was the aurora borealis. Very similar profiles were obtained with very different pieces, such as the nearly identical responses to Mozart and Andean music; the individual emotion terms must be analyzed to distinguish them. Several common characteristics can be detected in these two pieces, such as a fast allegro tempo, binary rhythm, counterpoint figures, and ascending melody, all well-known features in music composition. In contrast, other segments evoked unpleasant responses (Mussorgsky), in which fear, tension, doubt, or pain was reported. The listener probably assigns high value to a piece that evokes emotions he or she would normally avoid, within the context of a controlled artistic experience.
The cultural factor seems to have influenced the results, since the responses were more defined and robust for familiar types of music: Japanese and Arab music evoked the least distinctive and robust responses. It would be of interest to perform a similar experiment with Arab and Japanese subjects to compare the responses. If the emotion terms selected by the volunteers actually corresponded to relatively discrete emotional states, most musical segments appear to evoke an emotional response that is similar among subjects, owing to the segments' composition and interpretation characteristics. Musical segments selected for their expressive characteristics seem to generate an emotional response that is similar among listeners with comparable cultural and educational backgrounds. This technique may be useful for generating and analyzing specific emotional states in experimentally controlled situations of musical audition.


REFERENCES

  1. Díaz JL, Flores E. La estructura de la emoción humana: un modelo cromático del sistema afectivo. Salud Mental 2001;24(4):20-35.

  2. Dissanayake E. Antecedents of the temporal arts in early mother-infant interaction. In: The origins of music. Cambridge, Mass.: MIT Press; 1999.

  3. Baumgartner T, Lutz K, Schmidt CF, Jäncke L. The emotional power of music: How music enhances the feeling of affective pictures. Brain Res 2006;1075(1):151-164.

  4. Huron D. Is music an evolutionary adaptation? Ann NY Acad Sci 2001;930:43-61.

  5. Fukui H. Music and testosterone: a new hypothesis for the origin and function of music. Ann NY Acad Sci 2001;930:448-451.

  6. Wallin NL, Merker B, Brown S (eds). The origins of music. Cambridge, Mass.: The MIT Press; 2001.

  7. Besson M, Schön D. Comparison between language and music. Ann NY Acad Sci 2001;930:232-258.

  8. Brown S. Are music and language homologues? Ann NY Acad Sci 2001;930:372-374.

  9. Ashby F, Isen A, Turken U. A neuropsychological theory of positive affect and its influence on cognition. Psychological Review 1999;106(3):529-550.

  10. Allman JM, Hakeem A, Erwin JM, Nimchinsky E, Hof P. The anterior cingulate cortex: the evolution of an interface between emotion and cognition. Ann NY Acad Sci 2001;935:107-117.

  11. Platel H, Baron JC, Desgranges B, Bernard F, Eustache F. Semantic and episodic memory of music are subserved by distinct neural networks. Neuroimage 2003;20:244-256.

  12. Peretz I, Zatorre R. Brain organization for music processing. Annu Rev Psychol 2005;56:89-114.

  13. Bhattacharya J, Petsche H, Pereda E. Long-range synchrony in the gamma band: role in music perception. J Neurosci 2001;21:6329-6337.

  14. Castro-Sierra E. Development of pitch perception in children (doctoral dissertation). Stanford: Stanford University; 1989.

  15. Peretz I, Gagnon L, Bouchard B. Music and emotion: perceptual determinants, immediacy, and isolation after brain damage. Cognition 1998;68(2):111-41.

  16. Bard P, Mountcastle VB. Some forebrain mechanisms involved in expression of rage with special reference to suppression of angry behavior. J Nervous Mental Disease 1948;27:362-404.

  17. Blood AJ, Zatorre RJ, Bermudez P, Evans AC. Emotional responses to pleasant and unpleasant music correlate with activity in paralimbic brain regions. Nat Neurosci 1999;2(4):382-387.

  18. Parsons LW. Exploring the functional neuroanatomy of music performance, perception, and comprehension. Ann NY Acad Sci 2001;930:211-230.

  19. Janata P, Birk JL, Van Horn JD, Leman M, Tillmann B et al. The cortical topography of tonal structures underlying western music. Science 2002;298:2167-2170.

  20. Halpern AR, Zatorre RJ, Bouffard M, Johnson A. Behavioral and neural correlates of perceived and imagined musical timbre. Neuropsychologia 2004;42(9):1281-1292.

  21. Blood AJ, Zatorre RJ. Intensely pleasurable responses to music correlate with activity in brain regions implicated in reward and emotion. Proc Natl Acad Sci USA 2001;98:11818-11823.

  22. Khalfa S, Schon D, Anton JL, Liegeois-Chauvel C. Brain regions involved in the recognition of happiness and sadness in music. Neuroreport 2005;16:1981-1984.

  23. Koelsch S, Fritz T, von Cramon DY, Muller K, Friederici AD. Investigating emotion with music: An fMRI study. Hum Brain Mapp 2006;27:239-250.

  24. Flores-Gutiérrez EO, Díaz JL, Barrios FA, Favila-Humara R, Guevara MA et al. Metabolic and electric brain patterns during pleasant and unpleasant emotions induced by music masterpieces. Int J Psychophysiol 2007;65(1):69-84.

  25. Hevner K. Experimental studies of the elements of expression in music. American J Psychology 1936;48:246-268.

  26. Ramos J, Corsi-Cabrera M. Does brain electrical activity react to music? Int J Neurosci 1989;47(3-4):351-357.

  27. North A, Hargreaves D. Liking, arousal potential, and the emotions expressed by music. Scand J Psychol 1997;38(1):45-53.

  28. McFarland RA, Kadish R. Sex differences in finger temperature response to music. Int J Psychophysiol 1991;3:295-298.

  29. Siegel S. Estadística no paramétrica. México: Trillas; 1985.

  30. Ramos J, Guevara MA, Martínez A, Arce C, Del Río Y et al. Evaluación de los estados afectivos provocados por la música. Revista Mexicana de Psicología 1996;13(2):131-145.

  31. Russell JA, Carroll JM. On the bipolarity of positive and negative affect. Psychol Bull 1999;125(1):3-30.

  32. Watson D, Tellegen A. Toward a consensual structure of mood. Psychol Bull 1985;98:219-235.

  33. Fischer B, Krehbiel S. Dynamic rating of emotions elicited by music. Proceedings of the National Conference on Undergraduate Research (NCUR). University of Kentucky, March 15-17. Lexington, Kentucky; 2001.



