HITHER GATE MUSIC | with additional funding from North Wiltshire District Council |
TOPIC | WHICH IS ABOUT
---|---
PURPOSE | the role of the sound designer in films etc. & music
RECORDING TECHNIQUES | recording sounds and transferring to computer
LISTENING | sounds: hearing, listening, imagination
AUDIO & VISUAL IMAGES | how sounds and images can interact
SOUND TRANSFORMATIONS | how sounds can be altered and developed, using computer technology
MUSICALITY | the way basic form archetypes can help make sounds more emotive
The Purpose & Context of 'Sound Design'
Introduction
When we are absorbed in a radio programme, TV show, film, animation or video, we are so taken up by the images and action that we tend not even to notice the music that often accompanies the story. And woven into the music, there are just as often many sounds, both familiar and strange. These sounds are seldom just recordings of real-life action.
Thus the music and sounds weave an emotional fabric which enhances the experience of what we see, and often guides how we interpret it, e.g., as benevolent or as sinister. These sounds are in fact carefully chosen and developed in order to support and increase the emotional effect of the action. This work is done by a person known as the 'sound designer', who may be the composer of the music or a separate individual. The role of the sound designer is now an accepted and standard part of the creation of soundtracks.
Another important new role is that of foley artist. 'Foley' sounds are those which accompany the noise-making movements of the actors, such as footsteps or a creaking chair. These are normally not the natural sounds made by the actors themselves. Usually they are specially created by foley artists and held on a separate audio track. It takes tremendous ingenuity and skill to work out how to duplicate natural sounds and then produce them in sync with the action. For example, the sound of all those punches in films is created not by knuckle on bone, but by hitting thick beef steaks or crushing melons. [based on Sonnenschein, p.41]
Having said this, it is also important to realise that composers today are using sounds to create large-scale pieces of music. This usually (but not always) involves extensive sound generation and processing techniques. Both for composer and for listener, this involves coming to terms with a whole new musical aesthetic.
Exploring sound transformation techniques will be the main focus of these Sound Design Workshops. Fantastic things can now be done to transform and develop sounds. These processing techniques can be used to create and hone sound effects as well as to create passages and whole pieces of music using sounds, or a mixture of sounds and musical instruments. They therefore form the core toolset and primary innovative area of sound work today. The purpose of the Workshops is to introduce participants to some of the amazing possibilities available, and in the process point towards the various career opportunities for sonic artists.
Data
Here are a few of the top sound designers of our time and a little bit about their work and approach:
- Gary Rydstrom: Toy Story 2. In this film, the sounds are so important that they were identified before the animations were created. Slinky Dog was identified by the sound of a slinky, Ham the piggy bank by clinking coins etc. Some characters didn't really make a sound, so one had to be invented for them. For example, Etch-A-Sketch's sound was made by a razor blade drawing on glass covered with sand, or swishing through it. All the sounds were meant to be believable, but funny, larger than life, or even different from real life. There is an interesting dynamic in the Toy Story films: on the one hand, from the point of view of the toys, the world they inhabit is larger than life. Thus, for example, the sound of an aeroplane can be substituted for the sound of a car. On the other hand, from the point of view of us the viewers, they are always toys. Thus their own sounds tend to be "funny little squeaks and plastic sounds". An example of this is Al's burp. We hear the sound (made from a sea lion mixed with bubbling drain sounds) start in his stomach and work its way up to his mouth – but only a puff of air comes out. Foley artists made the movement sounds for the characters. Without these movement sounds, the characters tend to go dead on the screen. The sound track, then, is a combination of orchestrated music, foley and sound design. [AM March 2000]
- David Farmer: Lord of the Rings. At one point the Ringwraiths enter the hobbits' bedroom. The composer of the music accompanies this with a strange choir. But then the music stops and we only hear the subtle sounds of footsteps and wraith armour scraping on things – making it intimate and close, as if they were in the same room with us (the audience). [AM Jan 2002]
- Per Hallberg & Wylie Stateman (sound supervisors), Greg Russell (re-recording mixer), Scott Gershin, Martin Lopez, Mike Reger, Alan Rankin and Tony Lamberti (sound designers): Godzilla. Besides creating a sense of size (400 foot lizard!), the designers also wanted to give him a personality which could evoke a bit of affection. Low grunting and breathing sounds, some almost whimpering, were developed for this. They listened to the way birds and other animals reacted to help capture the 'baby' stage. The 'classic roar' was extensively researched. More than 2000 sounds were gathered, such as baby elephants, hippos, badgers, trumpets being blown in the stairwell and metal being scraped, as well as vocalisations by Gary Hecker. Various combinations of these sounds created a variety of Godzilla's vocalisations and roars. Sounds of trumpets were distanced from the original by using multiple harmonizers to alter the pitch (dividing the pitch to create a more dissonant, complex sound) and multiband distortion to diffuse the strong fundamental pitch and give it more grit.
Other sounds included helicopters, shooting missiles, cannons, guns, explosions, glass, Godzilla running and screaming. The sound of a jet booming by would sometimes be 'sweetened' by mixing in a cannon blast or a thunder hit. Giant footsteps, explosions, crashing debris and smashing noises form huge sound collages. The massive footsteps used multiple explosions, slowed down and layered, always an assemblage of sonic elements. Reverbs of various lengths varied the footsteps, and multiple delays captured the reports off the buildings. Other sounds included recordings with metal on the Foley stage, and the scraping sounds of dry ice.
At the same time, in this sound design extravaganza, there was constant rain, wind machines, cars blowing over, debris flying everywhere and, somehow, conversations. Instead of overcomplicating the sound track by putting too many of these sounds together at the same time, they focused on the sounds which went with what the viewer was seeing at each moment. [AM June 1998 and SS June 1998]
- Graham Headicar: Chicken Run. A dog bites a gnome in the film. This was done by combining the sounds of a dog chewing an apple and eating crunchy dog food. Other nasty dog growling sounds were unaltered real-life sounds taken from a particularly vicious guard dog. [AM July 2000]
- Dane Davis: Red Planet. When the astronauts walk on the surface of Mars for the first time, "all the foley sounds, the movement of the suits are recorded as if inside the suit....We had a helmet which we put into a little baffle box and every voice, all the breaths and all the talking space suits, all of that was played back through a speaker inside the helmet and re-recorded there." "the sound-in-space concept was an anathema to Dane, no sound in a vacuum. But this is only a film after all and there has to be some kind of respect for the dramatic focus. 'My proposal for all the exterior space shots was to create the sound of magnetic attraction of the mass of objects, instead of making it about sound. I created this magnetic sound...'". Also outstanding in this film are the sounds of Amy, the robot, an increasingly threatening series of strange clicks, whizzes and bangs. [AM December 2000]
- Jon Johnson (sound editor): U-571. "'I researched what sort of power plant they used on submarines like these – both German and American. I also learned about batteries, diesel engines, gear shafts, plus the pneumatic and hydraulic systems used for the periscopes. I then drew up a list of the sounds we needed to record specifically for the film.'" "He also used dry ice to create the sounds of underwater bubbles. 'Melted dry ice in water sounds far more realistic than compressed air.'" [AM July 2000]
- Gary Rydstrom: A.I. "'I wanted the sound effects a lot of the time to be ethereal like the music and fit in with it....Simple changes in pitch, or playing backwards, or layering things is what I try to use as my processing.'" Gary doesn't like to do too much processing because he feels it starts to sound artificial and less organic. "'I just try to find good, interesting, original source sounds and use those as raw as I can.'" [AM September 2001]
- Elliott Koretz & Richard Anderson: Antz. This film is set in two very different environments: the safe, underground home of the ants and the dangerous world outside. The sound design work focused on reinforcing the nature of these two contrasting places. In the underground home the sound of digging is constantly heard in the background as a natural everyday part of ant life. This is a pre-metal world, and many of the sounds were made by digging with a large bone in earth and gravel. Above ground, events can be cataclysmic, and strong low rumbling sounds along with explosions and cracks are used extensively to give the impression that the viewer is seeing things from the point of view of the ants, at their size and level. Sounds were mixed and layered. For example, the wing-beating sounds of various flying insects were mixed with pitched down aeroplane sounds. "There is very little in Antz that was a straight pull from our library, or that had not been processed in some way or another. This was a key element in making sure that the sounds you hear are sounds which you probably won't have heard before." [AM November 1998].
- Dane Davis: The Matrix. Work on the sound design for this amazing film filled with "digital transformations, flying killing machines, gun battle ballets, flexible time scales catching up with bullets, hundreds of whirring and clicking future machines" began with Dane going through his 94,000 strong library of sounds, then collecting additional raw material as needed, extracting "little bits and pieces into the software. 'I virtually always originate sounds with something real.'" For example, he hooked up 60,000 volts to a Tesla coil and recorded it to get a sizzling, shocking sound for the power plant, a "giant rhythmic cycle of electricity that is building and popping every three seconds." [AM June 1999]
- Greg Hedgepath (sound designer) and Paula Fairfield (specialised 'Gun Editor'): Universal Soldier: The Return. Paula used the sound of arrows whizzing by as part of the gun's ricochet (instead of the more common buzzing sounds), plus firecrackers, shotgun blasts and other guns, with layering and splicing to create "huge gunshots". Putting a voice through a pitch shifter several times with extreme settings created a human voice with a "non-human metallic flavour". [AM August 1999]
- Anthony Faust and Kenny Clark: The Human Body (an IMAX film). The film is physically huge, and the sound has to be similarly large, usually with low and high frequencies added so that it doesn't sound too thin. The appropriate low and high frequencies are often achieved by mixing together (layering) different sounds. There is a passage in the film showing brain cells dying (we lose about 10,000 a day!). They used a popping sound and found that they had to add more and more layers until it was big enough – 60 tracks by the time they had finished. A breathing sequence used a recording of a heavy smoker, digestion sounds used the squishing of a mixture of pasta and wallpaper paste. Stomach sounds were achieved by Anthony recording his own stomach after starving himself, using a microphone within a stethoscope. The IMAX cameras made a lot of noise, and notch filters were used to remove it, plus some extra sonic camouflage where needed. [AM August 2002]
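Several of the entries above mention altering pitch to distance a sound from its source (the Godzilla trumpets, the Universal Soldier voice). The simplest software version of this is resampling: playing the samples back at a different rate, which shifts pitch and duration together. Here is a minimal sketch in Python with NumPy; the function name and parameters are our own, purely for illustration, and this is not any specific tool these designers used:

```python
import numpy as np

def resample_pitch_shift(samples, semitones):
    """Crude pitch shift by resampling: pitch and duration change together."""
    ratio = 2 ** (semitones / 12.0)              # frequency ratio for the shift
    new_len = int(len(samples) / ratio)          # shifting up shortens the sound
    old_idx = np.arange(len(samples))
    new_idx = np.linspace(0, len(samples) - 1, new_len)
    return np.interp(new_idx, old_idx, samples)  # linear interpolation between 'beads'

sr = 44100
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 220 * t)               # one second of A below middle C
down = resample_pitch_shift(tone, -12)           # an octave down, twice as long
```

A dedicated harmonizer avoids the duration change by more elaborate means (granulation or phase vocoding), but the resampling trick above is the classic 'tape speed' effect.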
Observations
These few examples tell us a number of things:
- The role of the sound designer is essential and usually complementary to that of the composer of the music. The two intermingle and often become fused into one sonic perception. Getting the right amplitude balance between the musical and sound design components is very important.
- Many different sound sources are needed. Many of these are real-life recordings, sometimes used without alteration, but more often developed in some way in order to fit in with and enhance the emotional dimension of the action.
- Sounds are used to build landscapes. They are an essential part of what makes the visual images seem real. They also add emotional depth and, with it, some degree of built-in interpretation of what is being seen. The music, the sound effects (natural and processed as required) and the foley art all combine to create the full soundtrack. Sometimes 100-200 tracks are being mixed! The concept of a (sonic) landscape is very important and useful when composing with sounds: placement left-center-right, surround sound, panning (moving through space), depth achieved with degrees of reverberation and filtering, relative volumes – these and other techniques may be employed when creating a landscape.
- Sounds are often used as part of the 'signature' that identifies a character.
- Judging what qualities the sound should have is a crucial aspect of sound design: a recording of a real natural sound? a composite sound which sounds more real than real? a sound effect which is not realistic at all, but conveys the desired feeling? a funny or a strange sound? Feeling the emotional impact of a sound for oneself is crucial.
- What makes a sound funny, or unnatural and strange?
- Movement is sometimes achieved by recording something passing by the microphone, such as an arrow or a car. At other times, and perhaps more often, it is done by panning the sound in the studio.
- Sounds have to be created for things which make no known sound, and these sounds have to be believable and emotionally effective.
- The sound designer has to be very inventive in what he or she records, how it is recorded, and how the sonic material is put together into specific effects. The nature of the source sound and the way it is recorded takes the sound designer a long way towards the desired result.
- Electroacoustic music works with the sound itself, i.e., without accompanying images. Sometimes the sounds are natural and create a visual scene. At other times, they are more abstract, developing purely sonic images through extensive processing and mixing. Principles of (musical) formal patterning become paramount when working with sound alone.
Basic Considerations & Techniques of Sound Recording
Introduction
OK, so let's get some sounds to study, explore and use. There are many CDs of sound samples available. It is useful when these can be used – but then, after transferring the sound from the CD to the computer's hard disk, some transformations of the original are often needed. For many foley, period, or imaginary sounds, actual field recording is the only way to achieve what you want. For the purposes of this workshop, making our own recording provides a useful way to connect the original sound with what happens to it when it is transformed.
Data
- Microphones: Condenser, dynamic, and ribbon – Microphone design is a fascinating subject, with lots of scope for innovative work. The microphone needs to convert airborne sound waves into an electrical (or, in development, digital) signal. We will be using an electret condenser microphone with cardioid field. The following gives a brief rundown on what this means.
- In the condenser microphone, the vibrating diaphragm forms one plate of a condenser. Early versions were large (housing a vacuum tube as large as a light bulb), had high impedance and were affected by moisture in the air. More recently, transistors and batteries ('phantom power') enabled them to be smaller and generate a stronger signal, while the 'electret' version pre-polarised the condenser capsule so that the supporting electronics only needed to provide for impedance matching. These were of reasonable quality and able to be mass produced, so were less expensive and very popular.
- In the dynamic microphone, a coil of wire was attached to the diaphragm and suspended in a magnetic field. These didn't need DC power to operate, were less sensitive to moisture and had a low impedance. Though rugged, their fidelity was only mediocre. Very useful for field recording when high fidelity was not an issue.
- In ribbon microphones, the ribbon serves as both diaphragm and wire. These had excellent fidelity, but were fragile. They were state of the art for a long time, but were large, and competition developed from smaller, improved condenser microphones.
- The contact microphone is actually in physical contact with the vibrating object and therefore picks up the sound with great detail and seeming exaggeration. This is not unlike the effect of holding a seashell to your ear.
- This information is based on 'Microphone Design past, present & future' by David Royer and John Jennings (Audio Media, July 2000).
- Directionality – This is an important consideration when recording. Directionality refers to the size and shape of the area or 'field' in which the microphone will pick up sounds. An 'omni' mike picks up most of 360°, while the 'cardioid' microphone has a field shaped like a heart, with the tip of the microphone at the bottom of the heart. Thus the 'omni' is good for recording ambience, and the 'cardioid' for recording specific sounds, with as little interference as possible from other sounds in the environment.
- The preamp – In order to avoid excessive noise when recording, the signal level of the microphone is relatively low. Before recording onto tape or hard disk, this signal needs to be boosted, which is the role of the 'preamp': 'pre-amplifier'. This can be a separate piece of hardware or built into a mixer or audio processing device.
- Field recording – This usually means outdoor recordings. The robustness and directionality of the microphones are important factors, and a windshield is often used to cover the end of the mike in order to eliminate wind, breath and vocal pops. Whether to capture ambient soundscapes or specific sounds, field recording is often needed – and one of the best ways to become aware of how much sound there is in the environment. (Even the middle of the night isn't safe!). Field recording usually leaves some ambient noise on the recording, which can be reduced by filtering or digital noise removal techniques, when it is necessary to do so.
- Close miking – Larger than life sounds can be achieved by placing the microphone very close to the object being recorded, or even by using contact microphones, which actually touch the vibrating surface, such as vocal cords or a vibrating piece of metal.
- Noise reduction – This and other post-recording sound processing helps to prepare the sound for its ultimate destination. Initially, the recording usually has extra material before and after the main sound, so this has to be CUT and the amplitude envelope DOVETAILED so that it begins and ends without clicks. NOISE REDUCTION by filtering or noise comparison methods can help to reduce the soft fizzing of 'white noise', COMPRESSION can keep the dynamic level within a safe area, and sometimes a bit of REVERB enlivens the sound, though this is done later if other sonic transformations are on the agenda.
- Anechoic chambers ('no echoes') – These are specially built rooms designed to eliminate all external noise or internal reverberation. The aim is to capture the sound itself and nothing else. The result is a 'clean' sound which can be further processed and mixed without bringing along with it unwanted acoustic elements.
- Recording studios – Usually a custom built room adjacent to the main mixing studio, they occupy a middle ground between a completely anechoic chamber and a field recording, though much closer to the former than the latter!
- Monitor speakers – These need to be of sufficient quality that we can adequately hear the detail in the recording. Otherwise, we can fail to realise what we have in the recording and not use it properly, e.g., by over filtering it and unwittingly cutting out some of its brightness, because we never heard it was there in the first place. (The microphone needs always to point away from the speakers, or a feedback loop will be created.)
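The CUT and DOVETAIL steps described under noise reduction reduce to trimming the sample array and multiplying its ends by short fade ramps. A sketch, assuming Python with NumPy (the fade length and function name are our own choices for illustration):

```python
import numpy as np

def dovetail(samples, sr, fade_ms=10):
    """Apply short linear fades at both ends so the sound starts
    and stops without audible clicks."""
    n = int(sr * fade_ms / 1000)             # fade length in samples
    env = np.ones(len(samples))
    env[:n] = np.linspace(0.0, 1.0, n)       # fade in
    env[-n:] = np.linspace(1.0, 0.0, n)      # fade out
    return samples * env

sr = 44100
raw = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)  # abrupt start and end
clean = dovetail(raw, sr)                            # ends now taper to silence
```

In practice the trimming (CUT) is done first, by slicing the array to the region of interest, and the dovetail is applied to the result.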
Observations
For professional sound designers, recording techniques are a major issue, and great expertise is brought to bear on getting the right recording for the intended purpose: ambient sounds, specific sounds recorded on location, such as those made by period engines or other machines, foley sounds, vocal sounds, and instrumental sounds.
Recordings for our Workshops on Sound Design are, in effect, field recordings, using an electret condenser microphone with cardioid directionality and the discrete preamp facilities of the Focusrite Trakmaster Channel Strip (which includes a high-pass filter and 48V phantom power). We may also do some close miking with a contact microphone, depending on what sound-making objects are brought along and what scenarios are envisioned by the participants.
The recording will be captured directly on hard disk, with the computer's sound card converting the analogue signal to a digital bit-stream. Thus the recording path is: sound-making object –> microphone –> Trakmaster preamp –> sound card –> hard disk. The post-processing described above will then be done on computer. Alternatively, the recording can be captured on cassette or minidisc and then transferred to the computer. A decent field microphone and minidisc recorder are now adequate for all but the most professional field recording situations.
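The sound card's analogue-to-digital step at the end of that path amounts to reading the signal at fixed intervals (44,100 times per second for CD quality) and rounding each reading to one of a fixed set of levels (65,536 of them for 16-bit audio). A rough illustration in Python with NumPy, using a sine wave as a stand-in for the microphone/preamp signal:

```python
import numpy as np

sr = 44100                                    # samples per second (CD quality)
t = np.arange(sr) / sr                        # one second of sample instants
analogue = 0.8 * np.sin(2 * np.pi * 440 * t)  # stand-in for the preamp signal

# 16-bit quantisation: scale to the signed 16-bit range and round
# each sample to the nearest of the 65,536 available levels
quantised = np.round(analogue * 32767).astype(np.int16)
```

The tiny rounding errors this introduces are the 'quantisation noise' of digital recording; at 16 bits they are far below the ambient noise of any real recording environment.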
The main things we will need to pay attention to are:
- that distance and direction from the microphone suits the purpose and the cardioid field, unless a contact mike is being used;
- that all the connections are snug;
- that the software is in record mode and the operator is ready to start and stop the recording;
- that volume levels are adequate to drown out as much as possible of the ambient noise;
- that appropriate post-processing is done after the recording.
The Act of Listening
Introduction
Hearing, listening, and imagining are not the same thing.
- Hearing: There can be sounds around us that we don't hear at all. We can also notice sounds – hear them – but take them for granted and scarcely pay attention to them.
- Listening: At other times, we can pay close attention to sounds. This is especially the case when being aware of them is important for our survival: e.g., the sound of an oncoming car, or that of a dangerous animal in the wild. The sound designer listens to sounds with acute perception and awareness of their emotional impact.
- Imagining: This is inner listening. We can conjure up remembered sounds and listen to them, such as a favourite song or the sound of a clarinet. We may even use our imaginative powers to enhance the remembered sounds, e.g., hear an old record without its clicks and pops. Furthermore, we can create new, previously unheard sounds in our aural imagination. The sound designer often has to do this, either to get a composite or transformed sound 'just right', or to create a sound for a new situation, such as someone moving about on Mars.
Data
To help improve our ability to listen closely to sounds, we can think about the qualities and parameters of sound. Here are some suggestions.
- loudness – is the sound loud or soft? do you have to strain to hear it? does it make your feet or your stomach vibrate?
- amplitude shape ('envelope') – does the sound appear suddenly, or does it gradually emerge from silence or from a background of other sounds? what is it like at the beginning and the end: 'sharp' or 'tapered'? is it rough and tumble or gently undulating? does it pulsate?
- ambience – is the sound all by itself? is it surrounded by silence or by other sounds? does it shimmer with reverberations? does it fade away in repeating echoes?
- pitch level(s) & content – is the sound focused on a single pitch? is it high or low or in the middle? are there many pitches in the sound, or even some noise? is there one 'note' or many note events, a complex tumble of sonic happenings? does it vibrate, going up and down in pitch, slowly or quickly? does the sound slide upwards or downwards (glissando)?
- tonal colour – thin and high? deep and low? a rich, fat sound? hollow in the mid-range (leaving high and low separated by emptiness)?
- surface texture – is the sound smooth or rough? is there a fine, grainy feel to it, like stepping on sugar or sand granules?
- duration – how long is the sound? do we have a sound that takes its time, or is it a mere blip? how does its length relate to our own body rhythm?
- rhythm – speaking of rhythm, does the sound have regular repetitions in it, as do motors? are there different repetitions at different speeds at the same time? are there any flavourings of dance rhythms in the sound? are there repetitions, but at irregular, unpredictable times?
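Some of these qualities can even be estimated numerically from a digital recording. A sketch, assuming Python with NumPy, using two deliberately crude measures of our own devising: RMS amplitude as a proxy for loudness, and zero crossings as a rough pitch estimate for a simple tone:

```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr
sound = 0.5 * np.sin(2 * np.pi * 440 * t)     # test tone: A above middle C

# loudness: root-mean-square amplitude of the samples
rms = np.sqrt(np.mean(sound ** 2))

# pitch (for a simple tone): count sign changes; a sine
# crosses zero twice per cycle of its fundamental
crossings = np.sum(np.abs(np.diff(np.signbit(sound).astype(int))))
pitch_hz = crossings / 2 / (len(sound) / sr)
```

Real sounds with many pitches, noise, and shifting textures defeat such simple measures, which is precisely why the trained ear of the sound designer remains the primary instrument.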
Exercises
Finally, try some of these listening exercises. ...
- Silence – Perhaps the most important exercise of all is experiencing silence. Stop what you are doing, stand still or lie down and breathe deeply and slowly for a while. Then make as little noise as possible yourself and just listen. Try this at different times of day and night. Afterwards, write down what you have heard.
- Draw – Listen to or think of a sound and try to draw a picture of it. What qualities have you picked up on for your sonic image?
- Describe – Listen to a sound and write down what you hear as fully as possible. E.g., a fan, a motor, a train, a washing machine, a food mixer, a coffee machine, birdsong, a dog or cat, knives & forks and people eating...
- Compare – Listen to two different sounds and compare them: how are they alike and how do they differ?
- Go inside – Consider a sound you are hearing or remember hearing. Now place yourself inside that sound and imagine what it sounds like from the inside (and very close up!).
- The unknown – Think of something happening that has never happened before, or that makes a sound outside your experience: i.e., you don't know what kind of sound it makes or might make. Spend a few moments imagining what that sound might be – and then think what you might do to make such a sound.
- Vocalise – Make as many weird and wonderful sounds as you can with your own voice.
- Research – As you go through a normal day, carry a notepad with you and write down all the sounds you hear, jotting down a brief description in a few words.
Varied Relationships between Sonic & Visual Images
Introduction
"The juxtaposition of image and music is fascinating. The chemical combination of the two produces a third, greater, effect and it's the understanding of the parameters of music that allows you to shape that emotional effect. Music works on a subliminal emotional level and it gives you access to that most powerful aspect of entertainment – people's imagination." (Trevor Jones, in Audio Media, Sept. 02)

The brain receives information from the senses and has to turn that information into something meaningful: i.e., it has to interpret the data. The process of interpretation follows different pathways in the mechanism of the brain for visual and for audio information. Visual information makes a double loop before the interpretation is finalised, whereas audio information goes straight to the emotional centres of the body, such as along the vagus nerve, which runs from the ear to the solar plexus. Thus music and sound provide an emotional colouration which is direct and essentially different from what is seen. As it happens more directly and therefore more quickly, this emotional colouration becomes part of how the visual images are interpreted. Besides adding excitement or gentleness, the character of the music & sound can make the same scene seem tranquil or ominous. This is why it is such an important part of the whole experience.
Data
- Natural real life sounds – Sometimes the type of realism involved favours the use of the natural sounds made by the actors. At other times, period engines and related sounds add to the realism of scenes from the past, as in Pearl Harbor. There are times when a bit of processing is needed for real life sounds. For example, Harry's voice in Harry Potter and the Philosopher's Stone broke a bit too soon. Pitch transposition (higher) and filtering (less bass) were used to make it sound younger again! Natural sound is also recorded on location to provide a realistic ambient sound background, and then mixed into the final result.
- Imaginary sounds that are more real than real – As noted above, the foley artists often seek to reproduce natural sounds by some cunning means. Recorded independently of the visual action, there is then complete control over the sound quality, volume, or any additional processing which may enhance the emotional quality of the scene. An example is polystyrene rubbed together so that it squeaks – a sound used for footsteps in pristine Arctic snow. Slinky Dog in Toy Story is accompanied by the sound of a slinky – or the spring of an overhead garage door when he's stretching. (In Toy Story, the sounds were actually identified before the animations were done!) Lots of vocally-produced sounds were used in the pie making machine in Chicken Run.
- Larger than life – This is often required. To have the desired emotional impact, the sounds have to be clear, loud and 'big'. Close-miking techniques are often used to achieve this, such as in Time Bandits. Besides increasing the volume, many times the sounds used are also composites. For example, the shutting of the flood doors on the Titanic had huge emotional overtones. This sound combined that of an empty oil drum being dropped off a lorry with that of a big prison door slamming. Layering tracks and mixing puts different sounds together.
- Comic variants – Comedy often exaggerates, and sound processing can help to exaggerate the characteristics of a sound, or turn the ordinary into the bizarre. Exaggerated pitch or amplitude contours, pitch glissandi, or sudden changes in style, rhythm or tone quality (the unexpected) are some ways to achieve this. Audio gags abound in Toy Story.
- Capturing the emotions rather than the sounds in a scene – Irwin Bazelon in Knowing the Score describes a dramatic scene in Planet of the Apes: when the men suddenly come upon unearthly scarecrows, their shock is expressed not with shouts, but by the metal twangs produced by stainless steel mixing bowls – directly expressing their emotional gut reaction and capturing the sudden rush of fear and adrenalin rather than using the natural sounds which might have been present at the event itself.
- Sounds that differ from and contradict the image – In music, 'counterpoint' occurs when two melodic lines (usually different) overlap and play at the same time. This can be done with sound and image as well: the 'wrong' rhythm or feel accompanies the action. This suggests hidden layers and dimensions. For example, in Philip Glass's Powaqqatsi (Coppola/Lucas), his dynamic driving rhythms accompany images of people moving in slow motion.
- Music or sounds that pick up on the rhythm and flow of the action – More often than not, the sound track picks up on the character of the action. Tempo (pace of the beat) and rhythmic intensity are a large part of such duplication. Indeed, a piece of music often begins with the perception of rhythmic flow. We are perhaps most aware of this in cartoons.
- Spatial placement – A more subtle aspect of sound design is where the sounds are placed in vertical and horizontal space, and how they move through this space. With the advent of surround sound and the full spherical possibilities of Ambisonic sound, this is becoming increasingly important. Careful spatial placement helps to reinforce the realism of the action: e.g., footsteps moving with the person on the screen. Imagine what was involved in spatially placing all the sounds used in the famous Quidditch match!
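By way of illustration, the simplest form of spatial placement – panning a mono sound across a stereo field – can be sketched in Python. The constant-power law used here is a standard audio technique, though the helper name `pan_stereo` is invented for this sketch and is not taken from any particular tool:

```python
import math

def pan_stereo(samples, position):
    """Constant-power pan: position 0.0 = hard left, 1.0 = hard right."""
    angle = position * math.pi / 2          # map 0..1 onto 0..90 degrees
    left_gain = math.cos(angle)
    right_gain = math.sin(angle)
    return [(s * left_gain, s * right_gain) for s in samples]

# A mono sound panned to the centre: both channels get equal gain.
centre = pan_stereo([1.0, 0.5], 0.5)
hard_left = pan_stereo([1.0], 0.0)
```

Because the two gains are cosine and sine of the same angle, their squares always sum to 1, so the perceived loudness stays constant as the sound moves across the field.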
Observations
There are, then, many considerations and possibilities when putting together the sound and the image. We can summarise these as:
- degree of realism, location recording and recording the sounds made by machines etc. from an earlier era
- foley and how to make seemingly realistic sounds by cunning means, sounds often better than the 'real thing'
- tempo / pace / rhythm: duplicating or counterpointing the action
- audio gags
- degree of processing required
- spatial placement
A Selection of Sound Transformation Functions
Introduction
We now come to the core of our work, where we will explore a group of sound processing techniques, and use them to point in the direction of many more. Having recorded our sound or selected our sample, we now set about the task of adapting it to the emotional and dramatic requirements of our image/scenario.
Transformation processes take place in two essentially different domains.
- The time-domain manipulates the raw samples – e.g., 44,100 per second – which can be imagined as a string of beads, each capturing the amplitude of the sound at the moment it occurs. Thus time and amplitude are the key components. (I don't know how it can do this any more than I understand how the grooves of a record can capture the sound of a whole orchestra.)
- The spectral-domain manipulates windows of analysis data. Analysis is a process by which the time and amplitude data of the samples is converted into frequency and amplitude data. It does this in a series of frequency bands (called 'channels') from low to high, seeing what there is in each band and making a note of it (sometimes a channel is empty). 'Frequency' in this context means pitches: all the pitch content which a sound might contain – a fundamental frequency, harmonic and inharmonic partials. This can be imagined as a complex layer cake of pitches. This pitch content is what gives the sound its 'tone colour', and it is usually changing throughout the duration of a sound, thus creating a 'timbral envelope'.
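To make the analysis idea concrete, here is a minimal and deliberately naive discrete Fourier transform of a single window in Python – a sketch of the principle only, not the optimised FFT that real analysis software uses:

```python
import cmath
import math

def analyse_window(samples):
    """Naive DFT of one window: converts time/amplitude data into
    frequency/amplitude data, one (channel, amplitude) pair per band."""
    n = len(samples)
    channels = []
    for k in range(n // 2 + 1):             # bands from 0 Hz up to half the sample rate
        acc = sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                  for t in range(n))
        channels.append((k, abs(acc) / n))
    return channels

# A pure cosine completing exactly 2 cycles in the window
# shows up as energy in channel 2, with the other channels empty.
window = [math.cos(2 * math.pi * 2 * t / 16) for t in range(16)]
spectrum = analyse_window(window)
```

Running the analysis on this test tone, the loudest channel is indeed channel 2 – the 'note' the window contains.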
The purpose is to give some idea of what is now possible and enjoy some surprises as the transformations do their work altering the original sound. Extensive and refined experience with using techniques such as these makes it possible to hone sounds to the nth degree, whether to add a bit of bite to a drum sound or create something utterly strange.
Data
- BLUR
- We go into the spectral domain for this technique. The sound has been 'analyzed' into a series of (overlapping) windows, each of which contains frequency and amplitude data. Here we decide how many windows to have in a group (the blur factor), and the software averages all the data within each group, thus blurring the sound.
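The averaging at the heart of blurring can be sketched as follows, treating each analysis window simply as a list of channel amplitudes (a simplification of real analysis data, which also carries frequency values):

```python
def blur(windows, blur_factor):
    """Average the channel amplitudes within each group of
    `blur_factor` consecutive windows, blurring the sound."""
    out = []
    for start in range(0, len(windows), blur_factor):
        group = windows[start:start + blur_factor]
        averaged = [sum(ch) / len(group) for ch in zip(*group)]
        out.extend([averaged] * len(group))  # every window in the group gets the average
    return out

# Two contrasting windows are smeared into one intermediate window.
windows = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5], [0.5, 0.5]]
blurred = blur(windows, 2)
```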
- ENVELOPE TRANSFER
- The amplitude 'envelope' gives the loudness contour of the overall sound. One can extract this loudness contour envelope from one sound and impose it onto a different sound. Or a hand-crafted envelope can be superimposed.
- FILTER OUT LO / HI
- A sound has many frequencies in it, from low to high. Filtering enables us to set a level above which we remove the frequencies, thus keeping the lower ones (= low-pass), or a level below which we remove the frequencies, thus keeping the higher ones (= high-pass). This is not done with a sharp edge, but with a gradual slope, known as Q. Q can be quite steep or fairly gradual. If steep, the retained band is more restricted and single tones may emerge. If fairly gradual, more of the original sound comes through.
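As a minimal illustration of low-pass filtering, here is a one-pole smoother in the time domain – far simpler than the filters in real software, and with a fixed rather than adjustable slope, but it shows the principle of high frequencies being removed while low ones pass through:

```python
def low_pass(samples, coeff=0.9):
    """One-pole low-pass: each output leans heavily towards the previous
    output, smoothing away rapid (high-frequency) changes.
    A larger coeff removes more of the high end."""
    out = []
    prev = 0.0
    for s in samples:
        prev = coeff * prev + (1.0 - coeff) * s
        out.append(prev)
    return out

# A rapidly alternating (very high-frequency) signal is strongly attenuated.
buzz = [1.0, -1.0] * 50
smoothed = low_pass(buzz)
```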
- GRANULATE
- This is a way of texturing the surface of the sound. The original sound is broken up into very tiny grains, like grinding up a cube of sugar until it is a pile of grains, or even a fine powder. Then there are lots of ways to play with these grains on an individual basis, such as by changing their size, degree of repetition (time-stretching), pitch level, amplitude (loudness) or spatial location. The results can vary from fine grainy 'lines' of sound to huge, complex textures.
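The basic chop-and-replay mechanism of granulation might be sketched like this (grain sizes in real use would be a few milliseconds of audio, not four samples, and far more per-grain manipulations are possible):

```python
import random

def granulate(samples, grain_size=4, repeats=2, jitter=0.0, seed=0):
    """Chop the sound into tiny grains and replay each one `repeats`
    times, time-stretching the sound; optional random amplitude jitter
    roughens the texture of each grain."""
    rng = random.Random(seed)
    out = []
    for i in range(0, len(samples), grain_size):
        grain = samples[i:i + grain_size]
        for _ in range(repeats):
            gain = 1.0 + rng.uniform(-jitter, jitter)
            out.extend(s * gain for s in grain)
    return out

# Repeating every grain twice doubles the duration without transposing.
stretched = granulate(list(range(8)), grain_size=4, repeats=2)
```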
- INTERLEAVE
- Two (or more) different sounds can be combined by interleaving them, a section of one followed by a section of the other. These sections can be tiny or somewhat larger, a fine or a coarse texture resulting. Fine waverings, sharp contrasts, deep churning can all be achieved with this function.
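The alternation itself is straightforward to sketch; the musical interest lies in the chunk size, which decides whether the result is a fine wavering or a coarse churning:

```python
def interleave(a, b, chunk=3):
    """Alternate chunk-sized sections of two sounds: a fine or
    coarse woven texture depending on the chunk size."""
    out = []
    for i in range(0, max(len(a), len(b)), chunk):
        out.extend(a[i:i + chunk])
        out.extend(b[i:i + chunk])
    return out

# Two contrasting sounds woven together in two-sample sections.
woven = interleave([1, 1, 1, 1], [-1, -1, -1, -1], chunk=2)
```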
- LOOP SEGMENTS
- Here we are focusing on breaking a sound into small sections – but not as tiny as 'grains' – and then looping not the whole sound but each segment. This can be done while separating the segments in time, or having them overlap.
- PITCH SHIFT (TRANSPOSE)
- This function operates in the time domain, where lowering the sound deepens the tone while elongating it, and raising it thins the tone while shortening it: e.g., growling low voices or mouse-like high voices. The transposition can also take place gradually, over a specified period of time, thus producing a rising or falling glissando.
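The time-domain link between pitch and duration can be demonstrated with simple resampling – reading through the samples faster or slower than they were recorded. Linear interpolation, used here, is the crudest usable method:

```python
def transpose(samples, ratio):
    """Time-domain transposition by resampling with linear interpolation.
    ratio > 1 raises the pitch and shortens the sound; ratio < 1 lowers
    the pitch and lengthens it -- pitch and duration change together."""
    out = []
    pos = 0.0
    while pos < len(samples) - 1:
        i = int(pos)
        frac = pos - i
        out.append(samples[i] * (1 - frac) + samples[i + 1] * frac)
        pos += ratio
    return out

# An octave down (ratio 0.5): the sound comes out about twice as long.
wave = [0.0, 1.0, 0.0, -1.0] * 4
down_octave = transpose(wave, 0.5)
```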
- RANDOM MIX
- This is a strange process usually full of surprises. The software compares the analysis windows of several files for loudness, and keeps, on a window to window basis, only the window of the loudest file. The result is a sound made up of only the loudest windows from the various inputs. This produces a seamless mix of the files, but you can never tell in advance just which parts of each will be retained.
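The window-by-window selection can be sketched like this, again treating each analysis window as a list of channel amplitudes and using the sum of amplitudes as a stand-in for loudness (a simplification of what real software measures):

```python
def loudest_window_mix(files):
    """For each window position, keep the window from whichever file
    is loudest there. Each 'file' is a list of windows; each window
    is a list of channel amplitudes."""
    mixed = []
    for windows in zip(*files):
        mixed.append(max(windows, key=sum))  # loudness ~ total amplitude
    return mixed

# File A wins the first window, file B the second.
file_a = [[0.9, 0.1], [0.1, 0.1]]
file_b = [[0.2, 0.2], [0.8, 0.6]]
mixed = loudest_window_mix([file_a, file_b])
```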
- REVERSE
- The sound plays starting at the end and moving towards the beginning, i.e., backwards. The most noticeable audio feature of reversal is the amplitude contour. For example, piano tones begin sharply and then fade (they are really percussive sounds). When reversed, the tone swells in slowly, sounding very much like an organ, but then ends abruptly. The spoken word is altered to a surprising degree, mainly due to the reversal of consonants.
- RING-MODULATE
- This is a simple process in which the original frequencies in a sound are separated by both adding and subtracting a given value. This can make the sound appear to be 'hollow'.
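The 'adding and subtracting' behaviour can be verified numerically: multiplying a sound by a sine wave is mathematically identical to replacing each frequency f in the sound with the pair f + m and f − m, which is why the original frequencies vanish and the result sounds hollow. A sketch (the frequencies here are arbitrarily low, purely for illustration):

```python
import math

def ring_modulate(samples, mod_freq, sample_rate=100):
    """Multiply the sound, sample by sample, by a sine wave."""
    return [s * math.sin(2 * math.pi * mod_freq * t / sample_rate)
            for t, s in enumerate(samples)]

# A 30 Hz sine ring-modulated at 10 Hz becomes equal parts 20 Hz and
# 40 Hz, by the identity sin(a)*sin(b) = 0.5*cos(a-b) - 0.5*cos(a+b).
tone = [math.sin(2 * math.pi * 30 * t / 100) for t in range(100)]
modulated = ring_modulate(tone, 10)
expected = [0.5 * math.cos(2 * math.pi * 20 * t / 100)
            - 0.5 * math.cos(2 * math.pi * 40 * t / 100)
            for t in range(100)]
```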
- SCRAMBLE SEGMENTS
- In this case, we not only segment the sound, but also jumble up the order of these segments. If the segments are taken from places near each other, the sound may not change all that much, but if from distant locations, it will soon become something quite different.
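A basic version of segment scrambling – cut, shuffle, rejoin – might look like this (real software also offers control over segment length variation and how far segments may travel):

```python
import random

def scramble(samples, segment=4, seed=1):
    """Cut the sound into fixed-size segments and jumble their order."""
    segments = [samples[i:i + segment]
                for i in range(0, len(samples), segment)]
    rng = random.Random(seed)
    rng.shuffle(segments)
    return [s for seg in segments for s in seg]

original = list(range(12))
jumbled = scramble(original)
```

All the original material is still there, and each segment stays intact internally; only the large-scale order has changed.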
- SPECTRUM SHIFT
- This process takes most or a part of the frequencies of a sound and moves them up or down, leaving the remaining frequencies in place. Because it is a spectral process, the duration of the sound is not changed, but the tonal qualities are. When the shift takes place over a specified length of time, the effect creates glissandi inside the sound.
- SPLICE
- A sound can be repeated simply by splicing copies of it end to end. Several soundfiles can be put together end to end, making one long, changing, sound. Reversed soundfiles can be included with ones going forward. This is one way to develop sound material to input to other sound transformations.
- TIMESTRETCH
- A sound can be extended in time by various means, but this particular technique is in the spectral domain and enables the sound to become longer (or shorter) without changing its pitch. This elongation at the same pitch stretches out the spectral envelope so that it evolves more slowly. Imagine words being spoken very slowly so that every sound and syllable is ddd-rrr-aaww-nnnn out.
- TRACE
- Also in the spectral domain, we can thin out a sound by deciding how many of the loudest frequencies to retain (as found in the analysis channels), and remove all the rest. If we retain about half or even a quarter of the frequencies, the sound might just sound a bit 'cleaner', but if we reduce the number to something under 25 channels, we start to get just a 'trace' of the original sound.
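The thinning operation on a single analysis window can be sketched as follows, with each channel represented as a (channel number, amplitude) pair:

```python
def trace(window, keep):
    """Keep only the `keep` loudest channels of an analysis window;
    silence all the rest."""
    loudest = sorted(window, key=lambda ch: ch[1], reverse=True)[:keep]
    kept = {ch[0] for ch in loudest}
    return [(n, amp if n in kept else 0.0) for n, amp in window]

# Keeping the two loudest channels leaves just a 'trace' of the window.
window = [(0, 0.1), (1, 0.7), (2, 0.05), (3, 0.4)]
thinned = trace(window, keep=2)
```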
- WAVECYCLE DISTORTION
- Among many possible ways to distort a sound, we will be using a wavecycle method. Wavecycles are irregularly sized portions of soundfile between 'zero-crossings' (where the amplitude curve passes from positive to negative and vice versa). When these wavecycles of varying durations are manipulated, the irregularity produces distortion.
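The first step – locating the wavecycles between zero-crossings – together with one simple manipulation (reversing each cycle in place) can be sketched as follows; many other per-cycle manipulations are possible:

```python
def wavecycles(samples):
    """Split a sound at its upward zero-crossings into 'wavecycles'."""
    cycles, start = [], 0
    for i in range(1, len(samples)):
        if samples[i - 1] < 0 <= samples[i]:  # crossing negative to positive
            cycles.append(samples[start:i])
            start = i
    cycles.append(samples[start:])            # whatever remains at the end
    return cycles

def distort_reverse(samples):
    """One wavecycle distortion: play each irregular cycle backwards."""
    return [s for cyc in wavecycles(samples) for s in reversed(cyc)]

gritty = distort_reverse([0.0, 1.0, -1.0, 0.0, 0.5, -0.5])
```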
Observations
There is a wonderfully extravagant guitar solo in Back to the Future, in which Michael J. Fox lets rip with distortion, pitch bending, fierce strums and glissandi, and strong chords – all very loud! Compared with this, the dreamy waltz which it interrupts seems more than a little tame. This scene epitomises what has been going on with music over the past 100 years: ever since the Dadaists & Futurists started using machines and urban sounds for musical inspiration.
The difference is in the sound, and what is different about the sound is the degree to which we have gone inside the components of sounds and begun to manipulate those components. At first this was done with tape recorders and analogue radio equipment, then with effects pedals, MIDI synthesisers, and samplers. Now we are also using computers for digital sound processing.
The above 14 sound transformation software programs bring us deep inside sounds, such that their inner structure and tonal qualities become as much a part of musical thinking as melody, harmony and rhythm. The next section takes us further into this way of thinking while seeking to connect it with enduring principles of musicality.
Micro-form Archetypes
Introduction
Music is the art of arranging things in time. This statement is deliberately open-ended. Pitches, melodies and harmonies need not be involved: sounds can be arranged in time. Indeed, even 'things' which are not sounds can be arranged in time, can be 'musicalised', such as normally non-musical aspects of theatre: movement and lighting.
This approach helps us to expand the way we think about music without losing touch with what music is. It also helps us to perceive the various musical qualities present in the music of all different styles, periods and cultures.
The 'micro-form archetypes' described below are generic (open-ended) ways to arrange things in time. Thinking about them and playing with these ideas can help towards using sounds in a 'musical' way. They thus become not just sounds in isolation, but sounds arranged into time-patterns. These patterns can match and reinforce images, or create contrasts, suggesting the presence of other feelings and possibilities than what is appearing on the screen.
Data
The following are a brief selection from about 80 micro-form archetypes that I have identified so far. The list is actually endless, because music and society are constantly evolving, and new forms are always being invented.
- accumulate – retain previous while adding new
- Possible application: throw more and more things into the pot, until it is seething and overflowing, or a crisis point is reached
- collage – mix varied ingredients
- Possible application: populate a sonic 'landscape' with a variety of items, some of which will share a theme, and some which will be quite uncomfortable together; express diversity, lots of things happening at once, different groups in the same place
- contrast – put the dissimilar in close proximity
- Possible application: musically, this would relate to different styles, or sounds which don't belong to the prevailing landscape: things out of place which invade, interfere, turn the action in a different direction, or are just uncomfortable
- direction – create a sense of moving towards
- Possible application: many different musical features can be used to create this sense of moving purposively towards something, i.e., in a relatively straight line, e.g., tempo acceleration, melodic contour, increasingly intricate rhythmic figures, harmonic chord progressions, crescendo – this is a very important dramatic tool
- expand/contract – make space or time between events larger/smaller
- Possible application: a sense of growth, relaxation or increase in power; a sense of containment, frustration or weakness; often achieved by using longer or shorter durations, larger or smaller pitch intervals or frequency levels, esp. if the same motif grows or diminishes
- extend – make longer by adding to in some way
- Possible application: Basic musical technique involves drawing out material so that it fulfills its potential, and new material isn't needed at every turn. Repetition, variation techniques, motive-spinning, letting a figure 'grow' etc. are all ways to realise this idea. It can create a sense of growth, vigour, 'rolling along', or at other times become over-extended, leading to suspense and a sense of danger.
- isolate – separate from other material
- Possible application: Something feeling separate and alone is a common dramatic situation. In general, the isolated item may be on the weak side: quiet, tenuous, quavering. The music needs to create a sense of distance, such as by surrounding the item with silence, or by placing it at a pitch remote from the key, harmony or pitch level of the other material.
- juxtapose – layer or locate (usually dissimilar) items next to each other
- Possible application: As in film editing in which different scenes follow on without transition, music can place contrasting sounds or harmonies side by side, or even on top of each other: simultaneously. This can create a sense of dislocation or danger. Rapid juxtaposition of similar material can reinforce a sense of bounty or of being overwhelmed.
- layer – coherent linear sequences overlaid in vertical space
- Possible application: This is very familiar as the layering of tracks, with the distinctive sound + rhythm of each linear sequence enabling each one to be perceived as a separate entity. Layering in more tracks can create increased excitement or complexity.
- move – make items dance, exude energy, or go someplace
- Possible application: Movement in music grows out of biological rhythms, physical movements and dance. Movement is life, energy, purpose. Rhythms, melodic contours, accelerandi, larger intervals, figures that start at one pitch and drive strongly towards another, as well as an overall design which begins in one musical area and ends in another – all contribute to a sense of movement.
- repeat – adjacent statements of the same or very similar items
- Possible application: Periodic, iterative movement is part of astronomy, biology and machinery. When the material repeated is the same or very similar, it creates a strong sense of being in a place, whether with joy or with frustration. When combined with crescendo, increased movement etc., repetitions quickly create a sense of dramatic buildup.
- sustain – continue without much change
- Possible application: Often achieved with tones or sounds with long durations, inducing a sense of quiet, calm, enduring, warmth, or over-stretched, tense, in suspense
- sustain + move – combine items which move and items which sustain
- Possible application: a very powerful combination of energy and energy withheld; highly charged atmospheres of dramatic tension; inevitably involves creating layers
Observations
A simple way to 'get into' these micro-forms is to make rough sketches to illustrate for yourself what they seem to be doing. For example, juxtapose could be blocks of different colors, and sustain could be a long squiggly line. Now add a time grid, e.g., vertical lines for every 4 seconds or so. This helps you to feel the shapes, especially when you start bringing the timed shapes to life with imaginary sounds.
A few observations about making music with sounds complete our introduction to the sound design workshops.
Making music with sounds is something new. Of course, there have always been the distinctive sounds made by different instruments: the flutes, clarinets, horns, strings, sitars, koto, gamelan gongs etc., all contributing to musical creations with their tonal qualities, pitch range and rhythmic potential. Melodic and harmonic music is the usual result. Percussive instruments are closer to the sounds we are talking about, and are often used as sound sources: drums, rattles, scrapers etc. Music in our time has made extensive use of percussive instruments.
But in talking about making music with sounds, we actually go much further and include all sounds: the sounds of Nature (like wind and thunder, streams, animals large and small, insects and birds), or voices, traffic and machinery, footsteps, creaking doors, fireworks and explosions, or various forms of noise. These sounds have various associations with their origins, as well as being, very often, too complex to have clear pitches or rhythms. These features need to be respected when using them for music, or they will not sit comfortably in the musical fabric.
Here are a few aesthetic considerations:
- Many natural sounds form randomised textures, such as pebbles rolling with the surf, the overlapping songs of a whole flock of birds, a series of thunderclaps. Putting such sounds into regular rhythms makes them seem forced and artificial. Thus a somewhat randomised rhythmic placement (ultra-'humanised', as it were) is appropriate for many different kinds of sounds.
- The same can be said for pitch. Sometimes randomisation may involve the order of the notes of a scale or chord, but at other times it needs to go further, and use microtones and pitch bending.
- Often, harmony could do with being considerably juicier. Sounds with a reasonably clear pitch content can be assigned to the notes of a chord, but a simple chord would seem like a straitjacket. It is good to explore rich jazzy 9ths and 13ths, original complexes of pitches (called 'pitch configurations' because they don't form a chord of any known type), and also different types of noise. Many sounds have a noise component, and these can be graded qualitatively and selected or mixed together to form 'harmonies' in the very broad sense.
- Chord movement and directionality can also be worked out in terms of patterns made from changing tonal qualities.
- New kinds of texture can be explored: not just the familiar homophonic texture of melody with chordal accompaniment, but also multi-event complexes of strange sounds and glissandi. Thus sounds can be mixed to create all kinds of emotive environments.
Key References
- Sound Design, by David Sonnenschein
- 'The Expressive Power of Music, Voice, and Sound Effects in Cinema'. A fairly new book, one of the first of its kind, it covers the field of sound design in tremendous detail. (Michael Wiese Productions, 2001)
- Knowing the Score, by Irwin Bazelon
- 'Notes on Film Music'. Contains many examples of image + sound relationships, along with how the effects were achieved, example scores, and interviews with film composers. (Van Nostrand Reinhold, 1975)
- Articles from Audio Media (AM) referenced above:
- Godzilla, A Monster Mix by Alan James. Audio Media June 1998. pp. 84-86
- Antz by Robert Alexander. Audio Media November 1998, pp. 56-57.
- The Matrix by Julian Mitchell. Audio Media June 1999, pp. 58-59.
- Universal Soldier: The Return by Alan James. Audio Media August 1999, pp. 58-59.
- Toy Story by Julian Mitchell. Audio Media March 2000, pp. 58-59.
- Chicken Run by Julian Mitchell. Audio Media July 2000, pp. 64-70.
- U-571 by Mel Lambert. Audio Media July 2000, p. 55.
- Microphone Design past, present & future by David Royer and John Jennings. Audio Media July 2002, pp. 108-112.
- Red Planet by Julian Mitchell. Audio Media December 2000, pp. 62-63.
- A.I. by Paul Mac. Audio Media September 2001, pp. 54-55.
- The Lord of the Rings by Julian Mitchell. Audio Media January 2002, pp. 44-49.
- The Human Body by Julian Mitchell. Audio Media August 2002, pp. 36-37.
- Knowing the Score by Michael Wood. Audio Media September 2002, pp. 76-80. (Interview with Trevor Jones)
- Article from Studio Sound (SS) referenced above:
- Godzilla, The thunder of tiny feet by Richard Bushkin. Studio Sound June 1998, pp. 66-69.
Last updated: 29 April 2003
© 2003 Archer Endrich, Chippenham, Wiltshire England