Josélia Neves
Professor
Music to my eyes… Conveying music in Subtitling for the Deaf and the hard of hearing
Published in 2010
In: Bogucki, Łukasz & Krzysztof Kredens (eds). Perspectives in Audiovisual Translation. Łódź Studies in Language, Vol. 20. Frankfurt am Main: Peter Lang GmbH, pp. 123-145.
Abstract

Deaf people's capacity to process music is greater than their capacity to process speech, because music is made up of several components – melody, rhythm and often words – which can be perceived even by those who cannot hear, provided that adequate environmental conditions are in place. Many people who cannot hear the human voice are able to perceive rhythm and vibration, as well as lower-frequency tones. This auditory experience takes place through the whole body: it is felt on the skin, in the muscles and in the bones, and it is processed in the brain and in the soul. And music offers more than an aesthetic or recreational experience… it carries therapeutic power, memories, and the personal histories of people, places and landmark moments in the lives of individuals and communities. With it, history is made and stories are told – a heritage to which EVERYONE is entitled – even those who cannot hear through the auditory apparatus. These people will still have a “cultural hearing” and an auditory memory that is not necessarily sensory. This article reflects in particular on the role of music in the filmic construct and on ways of promoting the “hearing” of the soundtrack through the reading of subtitles. This effort is put into practice through an analysis of the subtitles of Tony Scott's film “Déjà Vu” (2006).

Anybody watching the opening and closing ceremonies of the 2008 Beijing Paralympic Games or a concert by the percussionist Evelyn Glennie, the Japanese pop star Ayumi Hamasaki or the opera singer Janine Roebuck will agree that music can be and is part of the lives of many deaf people around the world. A brief incursion into the Deaf world will show that music is far more important in the lives of deaf people than it is given credit for. Music can be “heard” in numerous ways, and medical and sociological reports (Sandberg 1954, Gouge 1990, Darrow 1993, Sacks 1990) prove that the world of the deaf is anything but silent. It is far more vibrant than that of many hearing people because sound is perceived in intensity through all the senses, in amplified versions of what hearers take in mainly through their auditory apparatus. Laborit (1998: 17) describes her kinaesthetic view of music in the following way: “Music is a rainbow of vibrant colours. It’s a language beyond words. It’s universal. The most beautiful form of art that exists. It’s capable of making the human body physically vibrate.” The “auditory nature” of such vibrations has been scientifically proven by Dr. Dean Shibata, assistant professor of radiology at the University of Washington, who testifies (in University of Washington 2001) that:

Deaf people sense vibration in the part of the brain that other people use for hearing – which helps explain how deaf musicians can sense music, and how deaf people can enjoy concerts and other musical events. These findings suggest that the experience deaf people have when “feeling” music is similar to the experience other people have when hearing music. The perception of the musical vibrations by the deaf is likely every bit as real as the equivalent sounds, since they are ultimately processed in the same part of the brain.

Deaf people’s ability to process music places it at an advantage in relation to speech. It is often found that people who cannot hear speech can hear music, because most musical instruments fall into the lower tone frequencies, which are easier to pick up than the higher tone frequencies used in speech. Petit (2003), who works with music in deaf education, carries this forward by asserting that “of the two main elements of music, melody and rhythm, they [deaf children] could perfectly understand the second, in other words, they could have access to 50% of music and, if we consider that in some cultures rhythm in itself is an artistic expression, there is a large field for doing music work with deaf students”.

This ability that deaf people have to pick up the sound of music physically would be, in itself, reason enough for all those working in the sphere of accessibility services for deaf people, and subtitlers in particular, to pay special attention to the conveyance of music.

There are, however, other strong reasons to push this issue further: music is more than something you process at a given moment. Music plays an important role in landmarking significant experiences and spaces in people’s lives. Songs and melodies have a shared life which goes beyond their compositional existence. Like other people, deaf and particularly late-deafened people continue to “hear” music, relating it to previous experiences. Burke (2008) testifies to just this:

When I am much older, if I ever hear music in my ear, it probably will be “Do, re, mi.” That's the song that stuck with me throughout childhood, from Sound of Music. Older people who lose their hearing (as opposed to those of us who have been deaf our whole life), commonly experience the sensation of hearing music in their ears, usually songs/music that they remember from earlier in life.

Recalling music may mean remembering its lyrics, its tempo or melody, or simply the context in which it was experienced. Film is frequently the context that carries memorable music, as happens in the case above. Classics such as Doctor Zhivago (1965), 2001: A Space Odyssey (1968) or Chariots of Fire (1981), only to mention a few, have lived on in time largely thanks to their ear-catching musical scores, making it obvious that sound, and music in particular, is one of the most important elements in filmmaking.

According to Blandford et al. (2001: 156) the primary function of music in films is to provide emotional support to the story, a notion that is shared by Kivy (1997: 322), who underlines that “music warms the emotional climate.” Taking a different perspective, Gorbman (1987: 2-3) considers that “music taps deeply in cultural codes, giving it rich cultural associations and potential meaning, a ‘veritable language’ that can contribute significantly to a film’s overall meaning”. Monaco (1977: 182) sums up the issue by addressing the whole of the sound track as one and talks of the compositional interplay of sound in movies by saying that “it makes no difference whether we are dealing with speech, music, or environmental sound: all three are at times variously parallel or contrapuntal, actual or commentative, synchronous or asynchronous”, and it is in this interplay that filmic meaning grows beyond the images and the whole becomes artistically expressive.

One might thus place music among the richest of filmic codes and among those that deserve special interpretative skills both on the part of movie watchers and particularly on that of linguists working in the field. In the context of subtitling, and in order to proceed with linguistic transfer of acoustic messages, translators will need to be sensitive to this interplay between images, speech, sound and music to be able to decode their inherent messages and to find adequate and expressive solutions to convey such sensations verbally. In so doing they will be aiming at the best translation possible, which, to quote Forster (1958: 6), “fulfils the same purpose in the new language as the original did in the language in which it was written”. If we are to transpose this notion to the context of sound, and to that of music in particular, subtitles will need to serve the purpose of the acoustic component of the audiovisual text in all its effects.

Although it may be difficult to find words that fully convey the expressive force of sound, the translator working on SDH should try to produce equivalent narrative and aesthetic effects that will be meaningful to people who might have never perceived sound before. But most important of all, translators will need to see how these elements interact with the rest of the cinematic codes, explicitly or implicitly modifying the whole that is more than the sum of speech, images and sound. They must listen to every nuance and decode every message so that intentional effects may be conveyed as fully as possible.

While finding different, yet equivalent, solutions to render the acoustic messages in the original text, translators will also need to find a way to make such information blend in naturally with the visual component of the still present original text, whilst guaranteeing that all that is written in the subtitles makes sense, and is thus relevant, to their receivers. In SDH this balance is hard to achieve. Translating contextually occurring sound and music into written language demands transcoding expertise that pulls the translator between the intended meaning of the acoustic messages, their function in the text and the effect any rendering may produce on the deaf viewer. The achievement of what Nida calls a “natural rendering” (2000 [1964]: 136) will be a difficult aim, particularly because relevance is receptor-bound, and most translators doing SDH seldom truly understand their receivers’ socio-cultural context. Nida (ibid.) clarifies what is expected of such “natural” renderings by saying that they “must fit (1) the receptor language and culture as a whole, (2) the context of the particular message, and (3) the receptor-language audience”. The whole focus is definitely on the way the receivers perceive the message and much less on the way that message resembles the original.

Taking into account all that has been said above about the importance of music in people’s lives and in the making of audiovisual texts, and of films in particular, it is clear that music deserves special attention when addressing subtitling for the deaf and the hard of hearing (SDH). All scholars and professionals in the field justify the inclusion of information about sound effects and music in SDH on the premise that, otherwise, deaf people would miss out on important aural information. All guidelines and codes of good practice contain recommendations to this effect. However, careful analyses of actual subtitles on screen show that the issue is complex and only partially tackled. Theoretical works on SDH echo such complexity by only touching upon the matter without going into specificities. On the whole, what might be inferred is that translators and scholars of translation are more aware of the transfer that occurs at a linguistic level than at a semiotic level. They also know that the transcoding of acoustic signs, and music in particular, into visual (verbal or iconic) signs calls for specific interpretative skills that need to be developed. Beyond translating words, translators will need to interpret what goes unsaid and can only be perceived. Like poetry, sound and music need to be felt in order to be translated and, once felt, they need to be understood as distinct signs that must be re-codified into visual signs suggesting equivalent effects. Given that translators working on SDH need to spell out what they perceive in subliminal ways, the translation process that takes sound effects and music into subtitles might be seen as an instance of exegetic translation, which Hervey and Higgins (1992: 50) consider to be a “style of translation in which the TT expresses and explains additional details that are not explicitly conveyed in the ST”, or in other words one in which “the TT is, at the same time, an expansion and explanation of the contents of the ST”.

As happens with the rest of sound effects, choices need to be made on how to convey music in the form of subtitles. Here too, guidelines and actual practice show that even when the topic is explicitly addressed on paper, the actual presentation of information on screen is less systematic.

1. Codes and conventions

An analysis of 15 in-house guidelines used in various countries and for different media (cinema, TV and DVD markets) (Neves 2005) shows that music is systematically included as one of the elements to be conveyed in subtitles. Most of the indications, however, pertain to technical parameters such as colour, font type, positioning or format. A parallel analysis of the conventions used in different media and in different European countries also shows that, in practice, music is only hinted at in subtitles.

One might summarise common practice in half a dozen conventions. Most of the time, information about music is reduced to a minimum and most of the labels provided are simple references to the presence of music.

At times, thematic pieces are identified as such and it is sometimes found that lyrics are included, particularly in the case of musicals or when explicitly relevant to the plot.

One of the recommendations recurrent in guidelines is the inclusion of a symbol to indicate the existence of relevant music. Many subtitling systems do not allow for the inclusion of musical notes [♫] or [♪] as suggested in guidelines, so it is common to find a sharp sign [♯] introducing information about music or song lyrics.
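This constraint largely comes down to character repertoires: whether the note symbol survives depends on what the character set used in the subtitling workflow can actually encode. The sketch below is a minimal illustration of that fallback logic, not a description of any particular system; the music_label helper and the use of Latin-1 as a stand-in for a legacy subtitle character set are assumptions made for the example.

```python
# Minimal sketch (illustrative only): pick a music marker for a subtitle line
# depending on whether the target character set can encode the note symbol.
NOTE = "\u266a"    # ♪ (eighth note), the symbol suggested in several guidelines
FALLBACK = "#"     # hash/sharp sign commonly used when ♪ is unavailable


def music_label(text: str, target_encoding: str = "latin-1") -> str:
    """Prefix a music description or lyric with ♪ if the target encoding
    (here Latin-1, standing in for a legacy subtitle charset) supports it,
    otherwise fall back to the # sign."""
    try:
        NOTE.encode(target_encoding)
        marker = NOTE
    except UnicodeEncodeError:
        marker = FALLBACK
    return f"{marker} {text}"


if __name__ == "__main__":
    print(music_label("BAND PLAYING 'WHEN THE SAINTS GO MARCHING IN'"))   # Latin-1 -> "# ..."
    print(music_label("sad piano with woman's vocals", "utf-8"))          # UTF-8  -> "♪ ..."
```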

On less frequent occasions, full identification or short explanatory notes are to be found.

The question remains: “how much is, in fact, being given in these almost telegraphic subtitles, and what else might be done to convey music to deaf viewers?”

2. A tentative proposal for a new approach to subtitling music

A utopian (even if technically possible) approach to the matter might envisage offering multi-sensorial solutions for the conveyance of film sound and music to cinema-goers, among whom deaf viewers would naturally be included. 4D cinema will certainly have the means to transfer acoustic messages into multisensory codes, banking on technical devices to produce multiple sensations (through seat movement, vibration, the manipulation of seat temperature and overall atmospheric conditions, among others). Such solutions, however promising, are well beyond the possibilities available in democratic media, such as television and the DVD, even in their most advanced formats.

It may be true that the new technologies used in advanced iDTV or Blu-ray discs offer a new array of technical solutions that can be used to advantage to improve present subtitling standards; however, these will be of little use if those providing accessibility services to deaf viewers do not grasp the full significance of music in order to transfer it into whatever technical means might be available. Technology alone will not suffice if those providing the service do not have a clear understanding of the role(s) that sound and music play in the audiovisual text and/or do not master intersemiotic translation techniques that can be adequately activated in the non-acoustic conveyance of acoustic messages. Building on what was posited about the need for subtitlers working in SDH to be trained to understand the meaning of sound in film (Neves 2008), and if we are to concentrate on the conveyance of music through subtitles, it might be useful to return to what filmmakers make of music.

Sonnenschein (2001: 155) neatly categorises music in film into four functions:

  • Emotional Signifier – takes us into the make-believe world of film. It allows us to sense the invisible and inaudible, the spiritual and the emotional processes of the characters portrayed.
  • Continuity – a sense of continuity is maintained when music is played over spatially discontinuous shots.
  • Narrative cueing – music helps the audience orient to the setting, characters, and narrative events, providing a particular point of view.
  • Narrative unity – music can aid in the formal unity of the film by employing repetition, variation, and counterpoint, thus supporting the narrative as well.

If subtitlers keep these aspects – emotional significance, role in continuity, narrative cueing (time, space, viewpoint) and narrative unity – in mind whilst analysing film and writing up subtitles, perhaps they will come up with richer subtitles that will help deaf viewers “see” music and direct their attention to specific features or even enhance their selective/residual hearing ability(ies). If they also take into account that music can create a convincing atmosphere of time and place, and if they find a way to convey those explicitly or implicitly suggested landmarks, they may be contributing towards promoting a different kind of “hearing”, one embedded in knowledge and culture. This is by no means an easy task. It implies that subtitlers have acquired highly proficient listening skills and have also developed music-decoding knowledge, competences that are still not seen as vital in subtitler training programmes and that only come naturally to the few subtitlers who happen to have personal musical skills or education.

Music plays such an important role in film that one might wonder why it hasn’t gained special interest among more translators and scholars working in the field. The answer might be found in the belief that music falls outside the sphere of language, or that specialised knowledge or skills are needed to approach it. Neither belief is quite true. All translators really need to do is put some effort into understanding the role that music plays in the work they are subtitling and, while other multi-sensorial solutions are not widely available, find words to convey it whenever it is obviously relevant to the plot.

3. Déjà Vu’s 8-minute opening sequence

A practical example of what has just been said might be had in the analysis of the 8-minute-long opening sequence of Tony Scott’s crime thriller Déjà Vu (2006), in which Denzel Washington plays the part of Doug Carlin, an ATF agent who is brought in to investigate an explosion on a ferry boat that took the lives of hundreds of soldiers and their families. The film is built around Doug Carlin’s attempt to unravel the crime and avert the tragedy by going back and forward in time thanks to a secret intelligence device. This complex time paradox gives the film a science fiction dimension, intertwining four different timelines in what is a rather complex and unnerving narrative. Despite divergent reviews, which range from qualifying the film as very good to very poor, it is certainly an example of how music and sound effects have contributed towards building this complex narrative.

The opening sequence is minutely orchestrated to provide (hearing) viewers with a number of narratives which are largely built on sound, and on music in particular, which plays an interesting game with the visual cues. A close analysis of this sequence, as it comes in the DVD format, using English SDH, allows one to bring all the theoretical formulations put forward in this paper into focus.

If we watch the sequence with the soundtrack off, we start by seeing the opening credits, first an animated presentation of the logo of Touchstone Pictures, then that of Bruckheimer Films, which the audio describer describes as “a view of the road ahead of my vehicle travelling at speed down a desert road. The scene goes into reverse then repeats itself. Lightning strikes a tree by the roadside and the tree comes into leaf.” The film itself then begins, with the opening credits intertwining with the opening scenes. The first 4 minutes are a succession of images depicting what appears to be a joyous moment, with young people celebrating and soldiers going on board the Algiers Ferry, either as groups of merry young lads or with their girlfriends or families. The atmosphere appears to be light-hearted, energetic and quite celebratory. Even the images in slow motion seem to help us dwell on the joy of those boarding the boat. We are given views of the harbour and the boat from various angles and, even though the two parts of the boat (the upper deck with the people and the lower deck with the cars) are shown, everything seems to converge towards the excitement of starting off on a regular trip. A first hint of disruption comes when a little girl lets her doll fall into the agitated waters, but even that is played down by the rest of the images showing the excitement of all those on board.

The images create a contrapuntal effect when the camera cuts between the upper and the lower deck. However, only the ferry hand’s facial expression shows some signs of concern. As with the doll episode minutes before, this too is played down by the joyous atmosphere that comes with the brass band playing and the overall playful and relaxed mood conveyed from the deck above. That something is wrong is hinted at when we are given close-ups of the key hanging in the truck to which the ferry hand had been attracted and of a missing number plate. The interval between the close-up of explosives and the explosion itself is just enough for the viewer not to be caught by surprise when the boat is blown to pieces. Events are shown sequentially and there is no real sense of crescendo, even if tension does build up a little as we follow the movements on the deck below. The minute that follows the explosion is filled with visual tension, which seems to be broken by the arrival of the ATF agent, who is shown in close camera shots in opposition to the scenes around him, mainly shown in medium to long shots. The camera soon takes on the agent’s viewpoint as it scrutinizes the disaster area. Emotional intensity is given through a number of slow-motion sequences that seem to be driven by the main character’s thoughts.

If we now go through the same initial 8 minutes using sound alone, we are given quite a different story. The opening sequence of Déjà Vu is strongly held together by two pieces of original score, “Algiers Ferry” (3:06) and “The Aftermath” (4:30), composed and conducted by Harry Gregson-Williams, which are intersected, at times, by another two pieces, the traditional “When the Saints Go Marching In” and The Beach Boys’ “Don’t Worry Baby”. The quality of the original pieces is sad and romantic, with a slightly noirish edge. Initially a gentle piano sound breaks the silence and a few high-pitched female vocals interact with it, while almost indistinct crowd noises make their way into the soundscape. The rhythm picks up to give way to the sound of a foghorn, which is followed by a rhythmic even if unsteady thumping, resembling a subdued heartbeat. Various sound effects interact with the music, suggesting a geographical setting that involves the hustle and bustle of a busy place where people, cars, seagulls and a ship converge. There aren’t many speech lines in the first 3 minutes (initially a man’s voice saying “let’s get these boys to the party” and later a child screaming “mama”). The sudden introduction of a jazzy “When the Saints Go Marching In” breaks the downbeat atmosphere and gives it a hearty lift that is then taken up by a radio broadcast introducing the 1964 pop song “Don’t Worry Baby” by The Beach Boys. The next minute is a coming and going between these two songs, with the odd line of speech. An explosion comes in after an insistent ticking noise and silences all music. The minute that follows is inhabited by sound effects that speak of peril and anguish, and by sounds of sea and rain. The next theme, “The Aftermath”, creeps in, underlining the sounds of a disaster zone written with screams, sirens, choppers and people’s cries. The music picks up the previously heard drumming heartbeat sound, and languid strings produce a mournful atmosphere. The music is sombre even if rhythmic and blends in with the sound effects (e.g. a cell phone ringing) as if underlining them with a tragic tone.

If yet another exercise is done, where the images are followed with SDH alone, little is added to the visual cues by the 17 subtitles that appear during these initial 8 minutes. Deaf people watching will basically receive the additional information of the speech lines, which are all in tone with the images, and the odd subtitle on major sound effects. All that is given about the music is the titles of the two well-known songs, “When the Saints Go Marching In” and “Don’t Worry Baby”.

By addressing the film through image or through sound alone, one might think these are two totally different stories. At least as far as the initial 4 minutes are concerned, image and sound tell different tales. If we bring all the parts together and read them in their interplay, we come up with a somewhat new tale. Sound effects and image seem to play a “telltale” game in which one underlines the other and both, synchronously, speak of the same thing. Where the equation changes is when we include the musical score(s) and match them against both the images and the sound effects. There we might find a case of what Sonnenschein (2001: 156) calls “anempathetic music”, in which the music almost ironically contradicts what is seen on screen.

Quite a lot has been written about Gregson-Williams’ scores in Déjà Vu. A review on iTunes (http://ax.itunes.apple.com) reads:

His implementation of great ambient violins and synth sounds, and combining it with his signature percussions and catchy sound effects make him an artist who stands out on his own. You can feel the tension of each song, and if you have seen the film, remember the intensity and moods that each seen [sic] brings.

This comment encapsulates exactly what Gregson-Williams’ music brings to the film, “tension, intensity and mood”, which are heightened by the contrapuntal effect the well-known songs (“When the Saints Go Marching In” and “Don’t Worry Baby”) bring to the movie. In other words, it might make sense to see how each of the music pieces relates to the image and sound effects at each point and how they interact among themselves.

A detailed look at the way music is used in this film in relation to image (described in the audio description) and sound effects, and even to the SDH subtitles provided in the DVD version (see appendix), shows how little was taken into account when making the film accessible to deaf viewers and how much more could be done to convey a little more of the narrative value its music has. The references to music we are given in subtitles 3, 8 and 10 seem rather insufficient if we take the following into account:

Music is present in the film even before the first image appears. In fact, the theme music interacts with the short animation which leads to the logo of Bruckheimer Films even before the narrative begins. The music draws the animation into the film as a visual metaphor encapsulating the time-bending experience the film is all about. The first piano notes to be heard are light but sad and set the film off on a sombre note. The mood is dark and tragic right from the start and seems totally out of character when heard against the bright, happy images. From the very beginning, hearing viewers know that there is something not quite right in the whole visual setting. This, deaf viewers cannot pick up, because all the visual cues are telling a happy story. Here and in other places in the film, it could be useful to drop in a subtitle or two hinting at the mood the contrapuntal music brings to the film. Every time the music varies to change the mood, this should equally be given, as is the case with the thumping, heartbeat-like rhythm that comes in as the visuals move in a crescendo highlighting people’s joy. Not mentioning the initial score at all leaves deaf viewers with yet another vacuum: the introduction of “When the Saints Go Marching In” has no real impact. It is in synch with the image and only adds the cultural element, which is not that strong. Even though it underlines the American spirit that is obviously present in the soldiers and sailors we see in the pictures, it is in tone with the context, for most brass bands will play this song. The subtitler does well in introducing the subtitle when the song comes on, but then there is no other reference to it when it comes back again and interacts with the Beach Boys’ song in the cat-and-mouse chase between bliss on the upper deck and near disaster on the lower deck.

The game that Scott plays with these two popular songs also goes unnoticed by deaf viewers. Initially, “When the Saints Go Marching In” is the top-deck song. It is merry and patriotic in its jazzy gait. Initially, too, “Don’t Worry Baby” is the bottom-deck song. Cutting between them means going above and below in the parallel narratives that are taking place on the ferry. But slowly the music from below takes over and keeps on playing while the images shown are from above. At that stage, the lyrics become perfectly audible and even front-staged, to the point where they ironically match the picture of a baby yawning in the arms of a veteran soldier. It is clear to hearers that danger lurks from below, even if the song speaks of “fun, sun and auto parts”, as Panfile (n/d) puts it in his analysis of “Don’t Worry Baby”. The two narratives are about to become one and hearers know it, mainly because the music is telling them so. Here again, with the very few subtitles that are given, deaf viewers cannot perceive this implicit narrative. Perhaps, in this case, it would help to have subtitles with the lyrics of the Beach Boys’ song.

The third part of this introductory sequence picks up anew on the film’s original theme score. “The Aftermath” picks up from where “Algiers Ferry” left off. It may have lost the sweet tones of the piano and the woman’s vocals found in the first piece, but it has grown in tension and in insight. Particularly after Doug comes on screen, the music highlights the main character’s feelings. In this part of the film it underlines emotions. Again, it establishes mood. Here, however, it has lost the contrapuntal effect it had at the beginning. It is in synch with the visual cues in that it speaks of the same tragedy, only it displaces it from its reality and imbues it with a personal and emotional atmosphere. Even when the mobile phone in a body bag rings, Doug has difficulty breaking free from his emotional reverie, and we know this because the music is telling us so.

Having said all this, and in an attempt to offer a fuller account of the musical force in the opening sequence of Scott’s film, I now propose a new set of subtitles, without touching the ones offered on the DVD that served as the basis for this reflection (see appendix for cross-referencing).

( ♪ soft piano notes)
( ♪ sad piano with woman’s vocals)
( ♪ tension mounts)
(LAUGHING AND CHEERING)
( ♪ drums mark heartbeat tempo)
(FOGHORN SOUNDS)
( ♪ rhythm accelerates)
( ♪ music grows in intensity)
Can’t believe it.
They’re right on time.
Let’s get these boys to their party.
( ♪ sad music covers voices)
Mama!
(BAND PLAYING WHEN THE SAINTS GO MARCHING IN)
MAN ON RADIO: It’s 10:48 on Fat Tuesday, Mardi Gras.
Now let’s go back in time to 1964.
The Beach Boys on 105.3FM,
the heart of New Orleans.
( ♪ Back to THE SAINTS)
(DON’T WORRY BABY PLAYING ON RADIO)
( ♪ Back to THE SAINTS)
Okay. Take it out wide.
Give the pig some room.
(DON’T WORRY BABY PLAYING ON RADIO)
( ♪ Back to THE SAINTS)
[ ♪ ] Well it's been building up inside of me
[ ♪ ] For oh I don't know how long
[ ♪ ] I don't know why but I keep thinking
[ ♪ ] Something's bound to go wrong
[ ♪ ] Don’t worry Baby
[ ♪ ] Everything will turn out all right
(BEEPING)
(SCREAMING)
( ♪ sad mood)
( ♪ moaning strings and grave drums)
(SIRENS WAILING)
My daughter’s on that ferry.
Please. Please.
Oh, God, my daughter!
( ♪ drums in a heartbeat rhythm)
(CELL PHONE RINGS)
( ♪ sad emotional atmosphere)
(music breaks)
Do the other side?
( ♪ music returns, energetic yet grave)

This is no more than an academic exercise. The solutions proposed may prove to be less adequate when actually placed on film. Conventions such as colour codes, font types and positioning would necessarily need to be revised and subtitle cueing would also need to be addressed with care. However, I truly believe it is worth trying to capture and to put into words all that music has to say through its multiple codes.
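As a purely illustrative aside, and not part of the original proposal, the sketch below shows how a few of the cues above might be laid out in the widely used SubRip (.srt) text format. The timecodes are hypothetical placeholders, since real cueing would have to follow the film's actual soundtrack.

```python
# Minimal sketch: writing a handful of the proposed music cues as SubRip (.srt)
# entries. The in/out times below are invented for illustration only.

def srt_time(seconds: float) -> str:
    """Format a time in seconds as an SRT timestamp (HH:MM:SS,mmm)."""
    total_ms = int(round(seconds * 1000))
    h, rest = divmod(total_ms, 3_600_000)
    m, rest = divmod(rest, 60_000)
    s, ms = divmod(rest, 1000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"


# (start, end, text) triples; hypothetical timing for four of the proposed cues
cues = [
    (12.0, 16.0, "( ♪ soft piano notes)"),
    (16.5, 21.0, "( ♪ sad piano with woman's vocals)"),
    (34.0, 38.0, "( ♪ tension mounts)"),
    (55.0, 59.0, "( ♪ drums mark heartbeat tempo)"),
]


def to_srt(cue_list) -> str:
    """Build the body of an .srt file from (start, end, text) triples."""
    blocks = []
    for i, (start, end, text) in enumerate(cue_list, start=1):
        blocks.append(f"{i}\n{srt_time(start)} --> {srt_time(end)}\n{text}\n")
    return "\n".join(blocks)


if __name__ == "__main__":
    print(to_srt(cues))
```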

It is no easy chore to convey all these subtleties through written subtitles. They always seem out of character and bulky, particularly when they appear in upper-case letters sprawling across the top of the screen. Perhaps SDH should consider refining its conventions when it comes to conveying music. As we have them today, subtitles are still mainly verbal. Visual codes such as font and colour changes, positioning or the inclusion of odd signs are certainly poor solutions to convey the richness of film music. Technology and software developers have still not exploited synesthesia to the full to come up with easy-to-use solutions that might convey sound through non-acoustic codes. It has long been understood that humans have the ability to call upon different senses to gain access to the world around them and, as Sacks (2007: 193) puts it, “nowadays synesthesia is understood to be a sensory phenomenon as well as a conceptual reality, ‘a union of ideas rather than sensations’”. If we are to take this further and draw from all that has been said and done about conveying sound and music through other media (Isaac Newton’s Opticks, dating back to 1704, already related sound frequencies to light refraction in his metaphorical colour music wheel, for instance), perhaps we can come up with a solution that might conjure up deaf people’s synesthetic abilities and bring music to them in just the same way that “the look, the feel, the taste, and the crunch of a Granny Smith apple all go together” (Sacks ibid.: 194). After all, as Sacks (ibid.: 177) also points out, “there are no less than eighteen densely packed columns on ‘Colour and Music’ in The Oxford Companion to Music”, and there are a number of interesting studies on how music is synesthetically related to colour and even to taste (e.g. research by Gian Beeli, Michaela Esslen and Lutz Jäncke) or even to light, shape and position (e.g. research by Sue B.)1.
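To make the idea of a non-acoustic code slightly more concrete, here is a toy sketch, offered as an assumption of my own rather than a proposal from the literature cited, of the kind of frequency-to-colour mapping Newton's colour-music wheel hints at: a pitch's position within the octave is mapped onto a hue on a colour wheel.

```python
import colorsys
import math

# Toy sketch (illustration only): fold a pitch into a single octave above
# middle C and map its position in that octave onto a hue on the HSV colour
# wheel, loosely in the spirit of Newton's colour-music wheel.

def pitch_to_rgb(freq_hz: float, reference_hz: float = 261.63) -> tuple:
    """Return an (R, G, B) colour, 0-255, for a given frequency in hertz."""
    # Position within the octave, in [0, 1): 0 corresponds to the reference pitch.
    octave_position = math.log2(freq_hz / reference_hz) % 1.0
    r, g, b = colorsys.hsv_to_rgb(octave_position, 1.0, 1.0)
    return tuple(round(channel * 255) for channel in (r, g, b))

if __name__ == "__main__":
    for name, freq in [("C4", 261.63), ("E4", 329.63), ("G4", 392.00), ("A4", 440.00)]:
        print(name, pitch_to_rgb(freq))   # e.g. C4 maps to red, E4 to green
```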

Even if it still reads as utopian, who knows whether, some time in the future, subtitles will take on completely new shapes and find a way to convey the wealth of meaning(s) music brings to life in general and to films in particular. While this is still not available to all, deaf viewers will certainly benefit from subtitles that carry speech, sound effects and music, even if only in the form of descriptive, explanatory or interpretative tags. Deaf viewers may continue to demand verbatim or near-verbatim subtitles of the dialogue exchanges in films, but they will necessarily need to trust the subtitlers’ expertise when it comes to decoding music and finding the means to convey all the hidden meanings that are often only felt by hearers. Making subliminal messages explicit is no easy task, but it is definitely worth the effort, because all music is there for a reason.

Notes

1 See Paul Harrison’s (http://tinyurl.com/muj9gl) and Angela Meder and Andreas Mengel’s (http://tinyurl.com/n8dnpp) webpages for numerous links on the issue.

Bibliography

Blandford, S. et al. (2001). The Film Studies Dictionary. London: Arnold.

Burke, J. (2008). “Music in the ear”. Retrieved from http://tinyurl.com/myhyrh [7-12-2008]

Cytowic, R. (n/d). Synesthesia rhymes with “anesthesia”. Retrieved from http://tinyurl.com/lrmjpr [7-12-2008]

Darrow, A. A. (1993). “The role of music in deaf culture: Implications for music education”. In Journal of Research in Music Education, 41(2), 93-110.

Forster, L. (1958). “Translation: An introduction”. In: A. H. Smith (ed). Aspects of Translation: Studies in Communication 2. London: Secker and Warburg, 1-28.

Gorbman, C. (1987). Unheard Melodies: Narrative Film Music. Bloomington: Indiana University Press; London: British Film Institute.

Gouge, P. (1990). “Music and profoundly deaf students”. British Journal of Music Education, 7(3), 279-281.

Hervey, S. and I. Higgins (1992). A Course in Translation Method: French to English. London: Routledge.

Kivy, P. (1997). “Music in the movies: A philosophical enquiry”. In: Allen, Richard and Murray Smith (eds). Film Theory and Philosophy. Oxford: Clarendon Press, 308-328.

Laborit, E. (1998). The Cry of the Gull. Washington: Gallaudet University Press.

Monaco, J. (1981). How to Read a Film. The Art, Technology, Language, History, and Theory of Film and Media. New York and Oxford: Oxford University Press.

Neves, J. (2005). Audiovisual Translation: Subtitling for the Deaf and Hard-of-Hearing. PhD Dissertation. University of Surrey-Roehampton. Retrieved from http://tinyurl.com/na33eo

Neves, J. (2008). “Training in subtitling for the d/Deaf and the hard of hearing”. In: Díaz Cintas, J. (ed). The Didactics of Audiovisual Translation. Philadelphia and Amsterdam: John Benjamins, pp. 171-189.

Nida, E. (2000 [1964]). “Principles of correspondence”. In: Venuti, L. (ed). The Translation Studies Reader. London and New York: Routledge, 126-140.

Panfile, G. (n/d). Mind of Brian 9: Don't Worry Baby. Retrieved from http://tinyurl.com/m9cmln [8-12-2008]

Petit, B. C. (2003). “Music for Deaf persons”. Disability World, 20, September-October. Retrieved from http://tinyurl.com/me3kyh [8-12-2008]

Sacks, O. (1990). Seeing Voices: A Journey into the World of the Deaf. New York: Harper Perennial.

Sacks, O. (2007). Musicophilia: Tales of Music and the Brain. New York: Knopf.

Sandberg, M. W. (1954). Rhythms and music for the deaf and hard of hearing. Volta Review, 56(6), 255-256.

University of Washington (2001). “Brains of Deaf People Rewire to ‘Hear’ Music”. ScienceDaily (November 28). Retrieved from http://tinyurl.com/m5undg [8-12-2008]

Filmography

Chariots of Fire (1981)
Dir. Hugh Hudson
UK

Déjà Vu (2006)
Dir. Tony Scott
USA

Doctor Zhivago (1965)
Dir. David Lean
USA

2001: A Space Odyssey (1968)
Dir. Stanley Kubrick
USA
