Time Domain Index
Sound editors graph the waveform of a sound file as instantaneous amplitude (loudness) against time.
The sound can be manipulated by altering those two parameters: amplitude and time. When amplitude is averaged over a longer time-window, it reveals the changing amplitude envelope of the sound. The CDP ENVEL and ENVNU function groups have just about every conceivable way of reshaping the envelope or imposing one on a sound, as well as fading and trimming functions which can affect the sound's attack and decay characteristics.
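The windowed averaging described above can be sketched in a few lines. This is illustrative only, not CDP's implementation; the function name and window size are invented for the example:

```python
import math

def amplitude_envelope(samples, window):
    # Average absolute amplitude over consecutive windows of 'window'
    # samples, giving one envelope point per window and so tracing the
    # overall loudness contour of the sound.
    env = []
    for start in range(0, len(samples), window):
        chunk = samples[start:start + window]
        env.append(sum(abs(s) for s in chunk) / len(chunk))
    return env

# A decaying 440 Hz tone, one second at 8 kHz:
sr = 8000
tone = [math.exp(-3.0 * n / sr) * math.sin(2 * math.pi * 440 * n / sr)
        for n in range(sr)]
env = amplitude_envelope(tone, 400)   # 50 ms windows -> 20 envelope points
```

An extracted envelope like this can then be reshaped and re-imposed on the same or a different sound, which is the essential workflow of the ENVEL functions.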
Transposition in the time domain (see MODIFY SPEED and STRANS) makes sounds faster (and therefore shorter) when the pitch level is raised, slower (and therefore longer) when lowered. Transposed sounds may also be stacked into chords, or the pitch may be distorted by modulation, especially Ring Modulation.
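The speed/pitch coupling arises because time-domain transposition simply reads through the source faster or slower. A minimal varispeed sketch, using linear interpolation (an invented example, not the MODIFY SPEED code):

```python
def change_speed(samples, ratio):
    # Read through the source at 'ratio' times normal speed, linearly
    # interpolating between neighbouring samples.  ratio 2.0 transposes
    # up an octave and halves the duration; 0.5 drops an octave and
    # doubles it.
    out, pos = [], 0.0
    while pos < len(samples) - 1:
        i = int(pos)
        frac = pos - i
        out.append(samples[i] * (1.0 - frac) + samples[i + 1] * frac)
        pos += ratio
    return out

ramp = [float(n) for n in range(1000)]
octave_up = change_speed(ramp, 2.0)     # half as long, pitch doubled
octave_down = change_speed(ramp, 0.5)   # twice as long, pitch halved
```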
The humble editing of sounds by cutting and splicing is in fact a very powerful technique, especially when combined with mixing. The SFEDIT function group has many ways of partitioning sounds, including those that MASK portions with silence or switch between sounds (TWIXT & SPHINX). The RETIME program shifts the time of events within a soundfile, by reference to amplitude peaks or events separated by silence, and can synchronise particular events in one soundfile with particular events in another. Several functions cut the sound into multiple output files, permitting re-assembly in a different fashion, or further processing before being re-mixed (for example PARTITION and ISOLATE).
On the pasting and mixing side, various sequencing functions (ranging from JOIN to SEQUENCE2) allow a number of different sounds to be played as events in a rhythmic pattern. For general mixing, the SUBMIX functions have a variety of approaches, the most flexible being MIX, which uses a text mixfile. It has a multichannel equivalent, NEWMIX, which supports up to 16 channels.
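At its core, a mixfile line names a sound and gives it a start time and a level; the mix is then a sample-wise sum. A sketch of that idea (the tuple format here is invented for illustration and is not the actual mixfile syntax):

```python
def mix(events, sr):
    # Sum sounds at given start times and levels: each 'event' is a
    # (samples, start_time_in_seconds, level) tuple, loosely echoing
    # what one line of a text mixfile describes.
    end = max(int(start * sr) + len(snd) for snd, start, level in events)
    out = [0.0] * end
    for snd, start, level in events:
        offset = int(start * sr)
        for i, s in enumerate(snd):
            out[offset + i] += s * level
    return out

a = [1.0, 1.0, 1.0, 1.0]
b = [0.5, 0.5]
out = mix([(a, 0.0, 1.0), (b, 0.002, 2.0)], 1000)  # b enters at 2 ms, doubled
```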
An important aspect of mixing is spatialisation. Panning is most easily achieved by fixed spatial positioning in a mixfile, or time-variable movement via PAN (MODIFY SPACE 1). CDP has also developed two substantial groups of functions for manipulating sounds in multi-channel space: see the MULTICHANNEL group, especially MCHANPAN, and the MULTI-CHANNEL TOOLKIT, which supports ambisonic and WAVE_EX file formats. (Note that "true" stereophonic sound and, by extension, multi-channel "surround" sound, depend on very subtle timing and phase differences between the component signals, which are hard to simulate; it may be better to work with spatially recorded sources.)
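Stereo positioning is commonly done with constant-power gains, so the perceived loudness stays steady as a sound moves across the image. A minimal sketch of that principle (not the PAN implementation):

```python
import math

def pan(sample, position):
    # Constant-power stereo pan: position 0.0 = hard left, 1.0 = hard
    # right.  Cosine/sine gains keep left^2 + right^2 constant, so the
    # overall power does not dip in the middle of the image.
    angle = position * math.pi / 2.0
    return sample * math.cos(angle), sample * math.sin(angle)

left, right = pan(1.0, 0.5)   # centre position: both gains about 0.707
```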
Brassage (Mode 5) can also be used to granulate a sound by creating gaps in it. The GRAIN functions manipulate grains in a "grainy" sound: transposing, shuffling, repositioning, reversing or duplicating them, and so on.
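Granulation by gap-creation can be sketched very simply: chop the sound into grains and insert silence between them. This is an illustrative toy, not the BRASSAGE algorithm (a real granulator would also envelope each grain to avoid clicks at the cut points):

```python
def granulate_with_gaps(samples, grain_len, gap_len):
    # Chop the sound into grains of 'grain_len' samples and insert
    # 'gap_len' samples of silence after each one, turning a
    # continuous sound into a "grainy" one.
    out = []
    for start in range(0, len(samples), grain_len):
        out.extend(samples[start:start + grain_len])
        out.extend([0.0] * gap_len)
    return out

src = [1.0] * 1000
grainy = granulate_with_gaps(src, 100, 50)   # 10 grains, 10 gaps
```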
The IRCAM program CHANT, which synthesises the singing voice, repeats small enveloped grains called FOFs at a given density to create resonance. By contrast, CDP's PSOW functions try to find FOF-like pitch-synchronous grains in (ideally) vocal sounds and then manipulate them.
The TEXTURE programs repeat the input sound(s), in whole or in part, to create a texture. Each sound is treated as a 'note-event' (in the Texture workshop examples the input sound is typically a single note to reveal the treatment) which may be repeated at a fixed or varying time-interval, or in a defined rhythm, or in groups of events; transposed randomly within a given range or restricted to a defined pitch set; decorated like a musical ornament (DECORATED and ORNATE), or formed into fully-sequenced motifs. The repetitions may be scattered across stereo or multi-channel space, or spatialised in a more controlled manner. Almost all parameters are time-variable, allowing the texture to evolve.
The wide-ranging musical possibilities of the Texture set are summarised here, though beginners should start with TEXTURE SIMPLE. The Release 7 function PACKET, which extracts small enveloped wave-packets from a soundfile, has distinct potential in creating suitable input sounds for Texture programs.
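The scheduling idea behind a simple texture can be sketched as a score generator: scatter note-events in time at a chosen density, each transposed to a member of a pitch set. The function, its parameters and the event format are all invented for illustration; TEXTURE SIMPLE itself offers far more control:

```python
import random

def simple_texture(duration, density, pitch_set, seed=1):
    # Scatter note-events across 'duration' seconds at roughly
    # 'density' events per second.  Each event is (onset_time,
    # transposition), the transposition being a semitone offset
    # drawn at random from 'pitch_set'.
    rng = random.Random(seed)
    events, t = [], 0.0
    while t < duration:
        events.append((round(t, 3), rng.choice(pitch_set)))
        t += rng.uniform(0.5, 1.5) / density   # average spacing 1/density
    return events

score = simple_texture(10.0, 2.0, [0, 3, 7, 12])   # minor-ish pitch set
```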
The principle of repetition also applies to the echoing delay line and to reverberation. MODIFY REVECHO and NEWDELAY implement the former, while the REVERB group has programs to simulate classic reverberation techniques for larger and smaller spaces, plus a tapped delay line (TAPDELAY).
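The echoing delay line reduces to a very small recurrence: each output sample is the dry input plus a delayed copy of the output scaled by a feedback gain. A sketch of that recurrence (illustrative, not the MODIFY REVECHO code):

```python
def echo(samples, delay, feedback):
    # Feedback delay line: out[n] = in[n] + feedback * out[n - delay].
    # With feedback < 1.0 the echoes decay; the output buffer is
    # extended to leave room for the echo tail.
    out = [0.0] * (len(samples) + delay * 4)
    for n in range(len(out)):
        dry = samples[n] if n < len(samples) else 0.0
        wet = out[n - delay] * feedback if n >= delay else 0.0
        out[n] = dry + wet
    return out

impulse = [1.0] + [0.0] * 99
tail = echo(impulse, 10, 0.5)   # echoes at samples 10, 20, 30, ... halving each time
```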
Convolution is an important reverberation technique, using a sampled impulse response of a building or other responsive space. Many suitable impulse response soundfiles are available on the internet. FASTCONV implements convolution via the Fast Fourier Transform (FFT); experimentation with "ordinary" sounds (not impulse response files) is also a possibility.
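The arithmetic of convolution is simple: every input sample triggers a scaled, time-shifted copy of the impulse response, and the copies are summed. FASTCONV reaches the same result far faster via the FFT, but this direct sketch (illustrative only) shows what is being computed:

```python
def convolve(signal, impulse_response):
    # Direct (time-domain) convolution: each input sample adds a
    # scaled copy of the impulse response into the output, so a
    # reverberant impulse response stamps its acoustic onto the input.
    out = [0.0] * (len(signal) + len(impulse_response) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(impulse_response):
            out[i + j] += s * h
    return out

wet = convolve([1.0, 0.0, 0.0], [0.5, 0.25])   # impulse in -> IR copied out
```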
Filtering is a standard time-domain technique for colouring sound: attenuating certain frequency bands while boosting others to create resonance. The FILTER set has all the classic types of IIR filter, with further possibilities in the Spectral Domain (HILITE FILTER); FASTCONV can also be used as an FIR filter.
Especially useful are the filter bank programs, particularly VARIBANK, in which a text file controls a set of filters tuned to a specific set of pitches, all of which can vary over time. This allows each frequency in the set to resonate to the extent that there is energy in that frequency band, and provides a means of harmonising unpitched material or reinforcing particular pitches (cf. TUNEVARY in the spectral domain).
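A tuned filter bank of this kind can be sketched as a sum of two-pole resonators, one per pitch. The coefficients below are the textbook two-pole resonator, not VARIBANK's actual design; function names and parameters are invented for the example:

```python
import math

def resonator(samples, freq, sr, r=0.99):
    # Two-pole resonant IIR filter tuned to 'freq' Hz:
    #   y[n] = x[n] + 2*r*cos(theta)*y[n-1] - r*r*y[n-2]
    # The closer r is to 1.0, the narrower the band and the longer
    # the filter "rings" at its tuning pitch.
    theta = 2 * math.pi * freq / sr
    b1, b2 = 2 * r * math.cos(theta), -r * r
    y1 = y2 = 0.0
    out = []
    for x in samples:
        y = x + b1 * y1 + b2 * y2
        out.append(y)
        y1, y2 = y, y1
    return out

def filter_bank(samples, freqs, sr):
    # One resonator per pitch, summed: noisy or unpitched input
    # excites whichever bands contain energy.
    outs = [resonator(samples, f, sr) for f in freqs]
    return [sum(vals) for vals in zip(*outs)]

impulse = [1.0] + [0.0] * 999
ringing = resonator(impulse, 440.0, 8000)      # rings on after the impulse
chord = filter_bank(impulse, [220.0, 440.0], 8000)
```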
What has Groucho to do with the Time-Domain? Well, the key to developing the Spectral-Domain programs was the Phase Vocoder program (PVOC) developed by CARL. As the Time-Domain is the other side of the coin, so to speak, its programs were named "Groucho" at the very beginning of CDP.