Michael Arbib, University of Southern California, Los Angeles, CA 90089–2520, U.S.A.
Tecumseh Fitch, University of Vienna, Austria
Peter Hagoort, Max Planck Institute for Psycholinguistics, Nijmegen, NL
Larry Parsons, Dept. of Psychology, University of Sheffield, U.K.
Uwe Seifert, Musikwissenschaftliches Institut, 50923 Köln, Germany
Paul Verschure, Laboratory for the Synthetic, Perceptive, Emotive and Cognitive Systems - SPECS - Universitat Pompeu Fabra, Roc Boronat, 138. 08018 Barcelona
How do music and language convey meaning about the practical and social world, including the interplay of the emotions? And how do brain mechanisms supporting music and language in humans relate to mechanisms for communicative and song-like behavior in other species? This Forum will explore harmonies, the processes shared by language and music (and by songs with words and dance), as well as dissonances, the mechanisms of brain and body that music seems not to share with language, and vice versa.
If an archeologist from another era were to come across several musical scores, how might he determine he was not seeing fragments of a written language? And if he were to find samples of written language, how would he decide they were not a musical notation? In today's world, music can become abstract and language knows many variations, yet a good basic criterion might be that musical texts are characterized in great part by variations on a theme, whereas written texts in general are not.
In terms of our own experience, music can stir the emotions yet cannot convey unambiguous information about actions, agents and events, whereas language can represent such items clearly or with calculated ambiguity and lead us to an emotional response that builds upon the more or less explicit knowledge it conveys.
And when we turn to songs with words, we find music and language intertwining, as the language inherits more of the themes and variations of the music and each reinforces the emotional power of the other. Moreover, memory for words and memory for music may each support the other, as we remember a word because it fits both the semantic envelope of other words and the rhythmic structure of the music. In opera, we see the powerful integration of words, music, scenery and action to engage us in drama and comedy with a heightening of emotions, though perhaps at the expense of some level of narrative subtlety.
And finally we note that when we speak, our vocal gestures are enriched by prosody, which makes the words say more than they could alone, and by facial and manual gestures that can both enrich what we say and add emotional shading. On the other hand, music can engage our body in many ways, whether in tapping out the rhythm or swaying to the dance. As such, music has a strong social component, extending beyond the dyadic nature of face-to-face conversation.
The proposed Forum would focus investigation on the scientific challenges that these comparisons and differences present for studies of the evolution and function of the modern human brain. This Forum, while self-contained in its interlocking themes, would also serve to extend discussion from the Forum on Biological Foundations and Origin of Syntax, chaired by Derek Bickerton and Eörs Szathmáry (July 13–18, 2008), where the underlying assumption was that syntax was the key to language and a main focus was on the evolution of syntax in abstracto. The present proposal is based on the view that this perspective is too limiting, both because it gives too restrictive a view of language, and because it limits our insights into brain mechanisms that are not specific to language and yet are crucial to the processes of language performance and understanding, with all their social and emotional underpinnings. This Forum will consider syntax as only one aspect of a broader investigation: how do music, including song and dance, and language convey meaning about the practical and social world, including the interplay of the emotions? For just one example of the neurological challenge here, we note that human brain imaging and lesion studies have implicated different coalitions of brain regions in syntax, semantics and phonology, and yet the role of Broca's area in relation to musical structure may well overlap its role in language, a role that further implicates other cortical areas and the basal ganglia. What further commonalities and differences will the discussion reveal?
The Forum would bring together experts in the study of action, language, emotion, music and dance who are intrigued by the challenge of confronting their own studies with the search for harmonies and dissonances with at least one of the other areas, all within the over-arching theme of the mysterious relationship of language, music and the brain. In each particular domain, experts from outside neuroscience would be challenged to assess core capabilities versus exceptional skill in the domain, while neuroscientists would bring to the discussion current knowledge of aspects of the domain for which neural data are already available. Cutting across the particular subgroups, discussion at the Forum would assess what is shared across domains, and how specifics of a domain may build on these shared mechanisms. For example, mirror system studies would underwrite exploration of the relations among action, language, emotion and empathy, while studies of birdsong could assess its relation to phonology and syntax, noting the uniqueness of the human linkage between syntax and semantics.
There has been much attention within the overarching perspective of neuroscience to the linkage of action and language, and some attention to the linkage of music and language. Recent workshops on gesture (see, e.g., the one introduced by Liebal, Müller and Pika, 2005) delimit its similarities to and differences from language. Studies have linked emotion to brain mechanisms for empathy and for the facial expression of emotion (e.g., Decety and Jackson, 2004; Lee and Siegle, 2009). However, those studying syntax and semantics tend to do so in a symbolic framework that is seldom integrated with the study of prosody and the emotional "meaning" of language, and the latter link back to the social power of music. Many of the "pairwise connections" are in place. The aim of this Forum would be to carefully delimit what is needed to integrate these diverse efforts, to gain a fuller understanding of how music and language, and their integration in song, convey meaning about the practical and social world and the interplay of the emotions, while exploring how the brain mechanisms that serve these various functions overlap and differ.
The aim of these discussions will be to better understand brain function in terms of networks of competing and cooperating structures that integrate internal state information with goal-biased perceptual data to yield patterns of performance. This raises an important general question that will run through all four themes: "Which are the basic brain mechanisms, and which instead reflect a strong cultural influence on the individual's development?" For example, dancing to the beat of a drum is a far cry from being part of an appreciative audience for the Alvin Ailey Dancers (where one may at least tap one's foot in some relation to the music), let alone the more cerebral appreciation of a Merce Cunningham dance linked to the soundscapes of John Cage. Similarly, for language we know that those who grow up literate have different brain structures from those who grow up illiterate, responding to the "recent" invention of writing. But what then of the debate over whether language itself is a cultural invention that builds atop and adaptively changes brain mechanisms evolved for praxis and protolanguage rather than for language in its rich modern complexities?
For humans, a song will often combine music with words, and understanding the interplay of these components will help us better approach that "mysterious relationship" between music and language. Such songs often have a strong social component, as in the singing of hymns where the individual may "lose himself" in shared expression within a larger group.
Yet what has been labeled as "song" has been described in multiple non-human species including songbirds, hummingbirds, parrots and cetaceans. Only one nonhuman primate, the gibbon, exhibits song-like behavior. Yet the nature of these "songs" is vastly different from much of human music and certainly lacks the specificity of semantics afforded by the combination of words. On the other hand, we may chart species-specific calls in monkeys and group-specific gestures in the great apes. This raises two complementary issues: Which mechanisms in nonhuman primates are homologous to the language-related structures of the human brain, and to what extent are they best correlated with communicative or praxic actions? And to what extent do the (largely non-homologous) brain mechanisms supporting song in other species provide clues as to the nature of the human brain mechanisms supporting music and language?
To what extent do humans have a shared neural framework for song and dance? How do these relate to the distinction between rhythmic and discrete actions? How does music relate to mechanisms of social solidarity versus individual motivation? How are mechanisms for producing and perceiving music related to emotion? And how do mechanisms supporting rhythm in song and dance relate to the emotional and prosodic function of language?
Various attempts have been made to assess the syntactic structure of music in parallel with the syntax of language, while other efforts have related language structure to the structure of compound actions. We need to assess what is common and what is different in the brain mechanisms, such as those in Broca's area and the basal ganglia, that support these various "syntaxes". And what do the insights of Group 1 add to this discussion? It may be that non-human song is closer to human phonology than to syntax or semantics.
How does music stir emotion? How do tone poems more explicitly link music to meaning? To what extent do these evocations overlap with the basic levels of meaning that can be expressed in language? How does compositionality in scenes and actions relate to the compositionality of meaning? How are goals and subgoals established? How does the integration of perception and goal structures provide a flexible hierarchical structure that unfolds into adaptable courses of action? How does observing the actions of others affect one's own course of action? How are the internal functions of emotion (goal-setting, etc.) related to the external functions of emotion (affecting social coordination)?
The evolution of language poses various challenges that will enrich the proposed assessment of the relation between language and music in the brain. One approach suggests that language has gestural origins, with pantomime providing a key bridge from praxic action to a "protosign" system that could support an open-ended semantics. On this view, syntax and phonology are secondary to the expression of complex meaning. This theory has placed special emphasis on the role of mirror neurons in the recognition of praxic and communicative actions, yet such neurons have also been implicated in the sharing of emotion. Another theory notes the emergence of song across many species, and suggests that it was the evolution of a protomusical ability that provided the necessary substrate for the evolution of speech. In either case (or under other scenarios) the challenge will be to relate these proposals to modern studies of the neural underpinning of phonology, syntax and semantics, a task which will clearly rest on thoughtful integration of the findings of the other three groups.