
Sounds and Structures: A workshop on relations between language and music

11.12.2009 - 12.12.2009

 

Since Leonard Bernstein’s The Unanswered Question (1976), many scholars have been intrigued by the idea of modeling musical structures within a linguistic framework. Several formal approaches originally designed to capture generalizations about the syntax of language have been successfully re-applied to capture a broad range of constituency and dominance relations in musical structure. Despite sharing many of the same theoretical goals, these pioneering works differ considerably with regard to which aspects of musical form they seek to describe: phrasing, rhythm, melody, harmony, or macrostructural configurations such as opening, closure, tension, release and the like. No consensus has yet been achieved on which aspects of musical syntax can adequately be captured in linguistic terms. This lack of scholarly agreement may be due, at least in part, to the fact that the variety of syntactic theories employed reflects the diversity of theoretical persuasions held by linguists about the architecture of grammar. Lerdahl & Jackendoff (1983), arguably the most influential work in the field, develop their theory by adopting, albeit with some modifications, theoretical developments which figure prominently in the Government and Binding approach to syntax of the 1980s. Steedman (1984) analyzes jazz chord sequences using categorial grammar, while Tojo et al. (2006) describe melodic sequences within Head-Driven Phrase Structure Grammar. More recently, Pesetsky (2007) analyzes musical form in terms of current Minimalist syntax, whereas Rohrmeier (2007) chooses Lexical-Functional Grammar. It remains open to argument whether these different formal treatments should be regarded as alternative ways of accounting for the same object of inquiry or as complementary approaches modeling different aspects of musical form.

Opinions differ not only about which linguistic framework should be adopted for which specific aspect of musicological analysis, but also about how abstract similarities should be interpreted within a general theory of human cognition. Some scholars claim that the syntax of music is identical to the syntax of language (Pesetsky 2007), while others hold more skeptical views about such unifying theories (Lerdahl & Jackendoff 1983). It should be clear that we are not merely dealing with an issue of theoretical fashion here. In fact, this question is of paramount importance for the interface between syntax and cognition, with consequences for the architecture of grammar as a whole. Many influential linguistic theories hold that syntax is autonomous, i.e. specific to the faculty of language. To the extent that music has ways of structure building that are homomorphic to the organization of sentence form, a natural question to ask is whether language and music are really subject to disjoint sets of domain-specific principles. Proponents of an autonomous syntax should strive to identify organizational differences between the two systems, whereas those who tend towards more general, domain-independent explanations for cognitive systems are faced with the task of developing an abstract generative system which combines tones into melodies just as it combines words into phrases. The obvious differences between language and music would then turn out to be reducible to different primitives and interactional purposes (see Jackendoff & Lerdahl 2006, Patel 2008). The task of linguistic syntax is to encode propositional content on the basis of lexical content, in ways typically amenable to a compositional analysis. Musical syntax, by contrast, does not need to encode content, since tones and tunes do not carry atomic or propositional meanings, respectively.
The big question, then, is how shared syntactic principles in music and language should be accounted for in linguistics, in musicology, and in more general theories of human cognition.

In recent years, the hypothesis of domain-independent syntactic principles has also been investigated in cognitive neuroscience. Within this line of research, experiments have been designed to answer the question of whether musical syntax is neurally independent of linguistic syntax. According to Patel (2008: 267-298), the body of empirical results available at present has led to the shared syntactic integration resource hypothesis, which states that there are both domain-specific syntactic representations and shared neural resources. Neuroscientists are currently seeking to identify exactly which syntactic relations are shared by music and language, and designing experiments to refine the hypothesis of a two-level syntax in which domain-independent and domain-specific principles are disentangled.

Yet another intriguing research topic is the universality of syntactic principles across different languages and musical traditions. Many scholars reject linguistic analyses of music, claiming that some genres of music, in particular non-western genres – such as Javanese gamelan or Indian raga – cannot successfully be analyzed in terms of constituency or dominance. To linguists, this discussion sounds very familiar, since Chomskyan grammar was likewise criticized on more than one occasion for being anglocentric, or at least for being inspired too strongly by specifically Standard Average European linguistic features. In the end, the definition of syntactic principles of musical form that can be applied not only to Beethoven, but also to Charlie Parker and gamelan, may well turn out to be a daunting task.

The Freie Universität Berlin houses two major research groups with strong interests in the relations between music and language. First, the musical encoding of emotions is a central topic of the Cluster of Excellence Languages of Emotion. Within this line of inquiry, special attention has been paid to the psychological and neural bases of emotions, language and music. Research on emotions – which for some is the counterpart of semantics in music – brings a new and promising perspective on the complex relationship between music and language. Second, syntacticians and phonologists from Berlin and Potsdam are currently investigating a fundamental dichotomy of form-building principles. The overarching question is which aspects of linguistic form can best be described by means of patterns or templates (also known as constructions in the sense of Goldberg 1995) and which lend themselves more naturally to an analysis in terms of algorithms or rules (in the sense of derivational operations like merge and move, cf. Chomsky 1995). Apart from genuinely linguistic research, the comparison of structures in music and language constitutes a major interdisciplinary project of this research group.

At our workshop, we will hear contributions addressing some of the following questions:

 

  • What is the optimal syntactic theory to describe the syntax of music?
  • Which aspects of musical structure (rhythm, harmony, melody, voice-leading, etc.) can be analyzed in a linguistic framework?
  • How far does the linguistic analysis of musical structures reach? Is it possible to extend the analysis to all musical genres and traditions?
  • How should we account for domain-independent structural principles in a theory of the architecture of the human cognitive system?
  • Is there recent empirical research in neuroscience which can help to differentiate between musical and linguistic syntax?
  • Can we discover similarities in the encoding and modeling of emotions in music and language?

 

 

Program:

 

Friday, 11th of December 2009

10:00-10:40
GISELA KLANN-DELIUS, GUIDO MENSCHING & ULI REICH (FU BERLIN)
Introductions

11:00-12:00
MANFRED BIERWISCH (ZAS BERLIN)
Language and Music – Types of Signs and their Consequences

12:00-12:15
Coffee break

12:15-13:15
IAN CROSS (CAMBRIDGE)
Music as Pragmatic Primitive

13:15-14:15
Lunch

14:15-15:15
STEFAN MÜLLER (FU BERLIN)
Syntactic Theories: Commonalities and Differences

15:15-16:15
MARTIN ROHRMEIER (CAMBRIDGE)
It don’t mean a thing: Forms and functions of musical syntax

16:15-16:30
Coffee break

16:45-18:15
DAVID PESETSKY & JONAH KATZ (MIT)
Identity Thesis for Language and Music

At night
Berlin is music! Explorations into the musical landscape of the city

Saturday, 12th of December 2009

10:00-11:00
ALBRECHT RIETHMÜLLER (FU BERLIN)
Pitch, Sound, Gesture in the Animated Cartoon

11:15-12:15
GERAINT WIGGINS (LONDON)
A statistical model of musical melody which also predicts phonetic segmentation in language

12:15-12:30
Coffee break

12:30-13:30
STEFAN KÖLSCH (FU BERLIN)
Towards a neural basis of processing musical syntax and semantics

13:30-14:30
Lunch

14:30-15:30
SONIA KOTZ (MPI LEIPZIG)
Neural substrates of rhythm, timing and language

15:30-16:30
ULI REICH (FU BERLIN)
Musical beats and linguistic accents in the prominence contour of the speech signal

16:30-16:45
Coffee break

16:15-17:15
GERT-MATTHIAS WEGNER & ULRIKE KÖLVER (FU BERLIN)
Newar Drum Languages

17:15-17:45
PLENUM
Perspectives

 

Organisation:

Interdisziplinäres Zentrum Europäische Sprachen

Strukturen – Entwicklung – Vergleich

Prof. Dr. Uli Reich

 

Cluster of Excellence

Languages of Emotion

Prof. Dr. Gisela Klann-Delius

Time & Venue

11.12.2009 - 12.12.2009

FU Berlin, Habelschwerdter Allee 45, Hörsaal 2