Composers' Desktop Project
Before working through the 12-Step Tutorial for the first time, the following sections may be helpful regarding the overall CDP working environment and other sources of information.
- Working Environment
- Overview of this 12-Step Tutorial
- Supplementary Documents and Charts
- Getting Started with the CDP System
- Additional Specialist Tutorials and Sound Examples
Basic Working Environment
Our recommended Basic CDP Working Environment entails having the following on-screen or to hand:
- One of the 2 available CDP graphic user interfaces. I usually recommend Soundshaper as the most approachable, and Sound Loom for users more familiar with the CDP software.
- The main page of the CDP Reference Documentation: double-click on ccdpndex.htm. Select 'Work Offline' if asked! You can then Park (Minimise) this and only call upon it for more detailed information about the software. (You can only do this with one HTML document at a time, so you can't do this while using this Tutorial.)
- Alternatively, especially if you haven't printed it out, you might use CDP Files & Codes as your initial reference document. It goes through all the different types of (mostly input) files used with the CDP software. Double-click on filesfrm.txt to call up the frames version with main text and index. You won't need to do this with this tutorial, because it links to it whenever appropriate.
- A text editor, such as Emacs or Notepad or Wordpad (not Word for Windows, which puts in formatting codes). If you find you are comfortable with creating text files from within the GUI, you won't need it. I find a separate text editor useful, and often use Emacs because of the block CUT and PASTE and especially MACRO functions, which can significantly speed up creating the more complex types of files, with many lines.
- A (preferably hardback) notebook for sketching diagrams and jotting down key information while you're working, especially functions and parameters that achieve good results, and the names of files you've created while working with a specific function.
Overview of this 12-Step Tutorial
This Tutorial is designed to help you get deep into the CDP Sound Transformation Software as quickly as possible. The CDP software may involve a way of working that is unfamiliar because it is less graphic than much of the commercial software, but when understood, it is actually very straightforward and highly flexible with many ways for the user to design patterns and shapes.
- It grows out of a UNIX™-like environment and can be accessed and used via a command line interpreter. This is mostly done to take advantage of the batch file mechanism.
- The Graphic User Interfaces (Sound Loom and Soundshaper GUIs) are minimal, mainly enabling parameter data entry via dialogue boxes, but also with other graphic or semi-graphic features to help with the creation of breakpoint files (columns of numbers) and sequences of operations.
- There are (at the time of writing) 50+ different types of file format which you can use to design and present data to the sound transformation software. This enables a tremendous degree of flexibility and in-depth sound patterning.
- Some data entry is in numerical format, such as MIDI Pitch Values (MPVs) for pitch entry and durations in seconds (such as 0.25 for a semiquaver = 16th note at crotchet = 60), but Reference Charts can make this much easier than it seems at first. In fact, this is often quicker and more flexible than graphic systems.
- The keys to success are to 1) use your imagination to create rough drawings that chart changes over time or other patterns before trying to enter the numerical data into files, and 2) keep the Reference Charts handy.
- The soundfile examples are not made for you. The whole point of the Tutorial is to empower you to use the software. You are therefore recommended to make each example as you go along, working through the 12 steps in order. When everything is made, access the Playlist to Play all the examples for comparison purposes. (Playlist document not prepared yet.)
- It turns out to be a powerful and straightforward way to work, and promises results very hard to duplicate with other software, though each package has its strong points, and the CDP software is designed to be complementary.
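The numerical data entry described above is simple arithmetic. As an illustrative sketch (not part of the CDP software), the figures behind the pitch and duration Reference Charts can be computed like this:

```python
# Illustrative sketch: the arithmetic behind two of the Reference Charts --
# MIDI note number to frequency, and note durations in seconds at a tempo.

def midi_to_hz(midi_note):
    """Equal-tempered frequency of a MIDI note (A4 = 69 = 440 Hz)."""
    return 440.0 * 2 ** ((midi_note - 69) / 12)

def duration_seconds(note_fraction, crotchet_bpm):
    """Duration of a note value in seconds.
    note_fraction: 0.25 = semiquaver (16th), 1.0 = crotchet (quarter), etc.
    """
    return note_fraction * 60.0 / crotchet_bpm

print(round(midi_to_hz(60), 2))    # middle C, approx. 261.63 Hz
print(duration_seconds(0.25, 60))  # 16th note at crotchet = 60 -> 0.25 sec
```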
Supplementary Documents and Charts
There are a number of supplementary documents and charts for referencing different types of information. They include:
- The Spiral Bound Desk Reference hard-copy
- CDP Files & Codes: reference guide to all the codes, text and binary files created by the CDP software and/or made by users as inputs to it (with examples)
- Musical Glossary: musical terms, with special relevance for working with sounds
- Technical Glossary: technical terms, with special relevance for working with sounds
- Chart of Equivalent Pitch Notations: MIDI Note Values, Csound pitch notations, and frequencies in Hz
- Chart of Loudness: different scales of loudness compared
- Note Data File Chart for the TEXTURE Set
- 2 Charts for the TEXTURE Set, showing active fields
- Transposition & Shifting Chart of inputs and outputs for the various REPITCH functions
Getting Started with the CDP System
There are a number of other documents prepared to provide tutorial help with the software. These are:
- CDP DemoDisk: CD-ROM with various examples of sounds made with the CDP software, plus some documentation on sound design in general:
- CDP Website Examples (specific worked examples as on the CDP Website)
- DiceDemo (showing the sequence of steps by which several complex sounds were made)
- Sound-Builder Templates (several multi-step processing sequences, each with a full description of each step, a generic batch file, and a Playlist)
- Various Documents about Sound Design (including sound and image relationships, etc.)
- Sound Loom & CDP: a comprehensive survey of running the CDP software from the Sound Loom GUI
- CDP Tour: an overview by Robert Fraser of the whole CDP software package, with lots of insights and tips
- Get Started with CDP in 12 Steps (this tutorial)
- CDP-LITE: 45 basic functions introduced on 2 levels, with sound examples provided as Soundshaper Presets and Sound Loom Instruments
- Mastering CDP I (another introductory document)
- A Keyhole View of CDP (focusing in on specific groups of programs)
- On Using MS-DOS (key commands of MS-DOS: the Command Line Interpreter)
- On Using CDP (a now rather elderly intro to the CDP System)
- Key Functions of the Emacs Text Editor
Specialist Tutorials and Sound Examples
The following provide guidance in certain key specialist areas (you can edit these links in a text editor if they are incorrect for your system):
- for GrainMill
- for Drunk
- for the Texture Set: Texture Workshop, on CD-ROM
- for Transposition and frequency shifts: Transposition and Shifting Workshop, on CD-ROM
Return to 12-Step Index
Step 1 is about:
Acquiring Sounds
Sounds brought directly to computer hard disk:
The first two of these types of source sound are already in digital format (i.e., .wav or some other soundfile format). They are therefore transferred to your computer's hard disk directly, without any change of format. The third involves a recording process, for which you need recording software and a soundcard that can handle analogue to digital conversion (AtoD) if required.
- from sample CDs: copy to hard disk.
- Remember to avoid any folder or soundfile names with spaces in them when working with CDP.
- If the soundfiles are in .wav format and you are working in a .wav context, there is nothing more that needs to be done, unless you find that the file isn't recognised by the CDP software. In this case, the problem probably lies in a non-standard header. If you make a copy of it using CDP's COPYSFX, a new header is created, compliant with the full .wav standard, which is what CDP uses.
- If these are .wav and you are working in an .aif context, you will need to convert to .aif (or .afc or .aifc) by making a copy using CDP's COPYSFX.
- Similarly, if these are .aif and you are working in a .wav context, you will need to convert them to .wav using CDP's COPYSFX program.
- If your input sounds are in the .mp3 format, you will need to convert to .wav or .aif using some other software before using the files in CDP.
- We have tried to keep all the source soundfiles used for the examples as .wav on PC and .aiff on MAC. These sources are snd1.wav, count.wav, frogs3cdt.wav and trcdt.wav. The results of CDP processes have been converted to .mp3 for playback (including analysis files) to save on space for the sound examples.
- from the Internet: download to hard disk
- from audio CDs: audio or digital OUT from your CD Player to soundcard audio or digital IN, recording via a sound editor or other record program (such as CDP's RECSF).
Sounds brought in by recording:
- from audio out (analogue) of a MIDI instrument: audio out from these instruments is in analogue format. The record process involves connecting audio Out to soundcard audio In, pressing Record and Start on your recording software and then performing on the MIDI instrument.
- from recording via an analogue recorder: again, audio Out on the recorder to audio In on the soundcard, then Record and Start on your recording software and pressing Play on your analogue recorder.
- from recording via Minidisc (digital): although the recording is digital, the output is analogue, so the transfer to hard disk is the same as with an analogue recorder.
Basic Editing
When recording is involved, there is inevitably unwanted noise or silence at the beginning and end of the recording, due to the time delay between pressing Record and Start on your recording software and actually starting to perform or play back the sound. Two types of edit are therefore in order to tidy this up: CUT and DOVETAIL.
- CUT removes a block of unwanted material or Saves only the portion that you want, depending on how the software works. CDP SFEDIT CUT does the latter. Many commercial editing programs, such as Cool Edit Pro (now Audition), Sound Forge, etc., do the former, blocking out the section graphically.
- In Soundshaper there is a Play From - To option.
It is easy to use this to find edit points aurally. Then, when you go to SFEDIT CUT, you will find these time points already in place in the dialogue box. This actually works rather well, as the ears are a good judge of where to cut.
- DOVETAIL smooths the beginning and end of the sound's amplitude envelope, which can often be sudden or have a click after the CUT process. You can usually do this in your recording software or soundfile editor, but CDP's ENVEL DOVETAIL also handles it.
- The smoothing is handled by setting times in seconds, i.e., how long from silence to full amplitude and vice versa.
- These lengths are often determined by the purpose for which the sound is to be used: e.g., just a bit so there is no click and it gets to full amplitude as soon as possible (such as 0.01), or longer times to achieve increasingly smooth crescendo and decrescendo of the sound (such as attack times of 0.1 or 0.25 or 0.5 or 1.0 etc.). Longer times significantly change sounds which have a sharp, strong attack, softening or even disguising them. Thus DOVETAIL has a sound transformation aspect as well.
- CDP's DOVETAIL also has a Mode 2 in which an exponential rather than a linear slope is used. This means that the rise to and fall from full amplitude speeds up / slows down. This can be a more effective way to remove clicks at the start and finish, as the linear curve is sometimes too gradual to remove all of the signal.
- In Soundshaper you can double-click on the output cell to return to ENVEL DOVETAIL and use different values. (If you have saved the previous output soundfile, you can normally save the revised output to the same name again.)
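As a sketch of the idea (CDP's actual envelope shapes may differ in detail), the difference between the linear and exponential dovetail curves can be seen by comparing gain values midway through a fade-in:

```python
# Sketch: a linear fade-in rises at a constant rate, while an exponential-style
# curve stays low longer and then rises faster -- often better at hiding clicks.

def linear_fade(n):
    """n gain values rising 0 -> 1 at a constant rate."""
    return [i / (n - 1) for i in range(n)]

def exp_fade(n, power=3):
    """n gain values rising 0 -> 1, slowly at first then faster (power curve)."""
    return [(i / (n - 1)) ** power for i in range(n)]

lin = linear_fade(5)   # [0.0, 0.25, 0.5, 0.75, 1.0]
exp = exp_fade(5)      # [0.0, 0.015625, 0.125, 0.421875, 1.0]
# Midway through the fade, the exponential curve is still much quieter:
print(lin[2], exp[2])  # 0.5 vs 0.125
```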
Sometimes the recording can have an undesirable level of noise on it, either from a poor quality sample, or from your own field recording. This situation can often be improved by using a CLEAN function, often found in sound editing software. CDP's SPEC CLEAN (among the Spectral Utils) does this by a comparison method.
- This is a Spectral Domain operation, so all files involved are analysis files.
- First take note of where some noise signal is located and CUT out a (small) portion, retaining it for use as the comparison file (referred to as nfile or noisefile in the usage or dialogue box).
- If you are sure that the noise signal will still be there (e.g, there is some in the middle of the sound), you could do your CUT and DOVETAIL operations first, and then use SPEC CUT in the Spectral Domain, i.e., on the analysis file.
- Mode 2 ('anywhere') appears to be effective in most situations. The software compares the signal of the infile with that of the noisefile. When an analysis channel's signal level in the infile falls below that in the noisefile, it means that the infile signal belongs to the noise aspect of the sound, so that channel is removed, hopefully removing most of the noise with it.
- The cleaned analysis file is then resynthesised and it is ready to be used.
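The comparison method described above can be sketched in a toy form (assumptions: the real SPEC CLEAN operates on PVOC analysis channels; here each 'window' is simply a list of per-channel levels):

```python
# Toy sketch of the SPEC CLEAN comparison: a channel is zeroed when its level
# in the input window falls below the corresponding level in the noisefile.

def clean_window(infile_window, noise_window):
    """Keep a channel only if it rises above the measured noise level."""
    return [level if level > noise_level else 0.0
            for level, noise_level in zip(infile_window, noise_window)]

noise = [0.2, 0.1, 0.3]            # levels measured from the cut-out noise portion
signal = [0.5, 0.05, 0.6]          # one analysis window of the input sound
print(clean_window(signal, noise)) # [0.5, 0.0, 0.6] -- the weak channel is removed
```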
Two more types of basic editing may have a role in the early stages of preparing a sound for transformation processing: PASTE, SPLICE.
- PASTE is when all or part of a sound is inserted somewhere in the middle of another sound. This can be a CUT and PASTE operation in a graphic sound editor, or CDP's SFEDIT INSERT can be used, specifying the time at which you would like the new sound to be inserted. The software smooths over the joins (splices). Too short a splice can result in a click or sudden change of level, and too long a splice can result in an audible dip in level at the point of join. So it may take a few tries to get it right if the level differences of the sounds cause problems.
- SPLICE (CDP's JOIN) involves joining whole soundfiles end to end. The same advice about the length of the splice window applies. If it is impossible to avoid an undesirable dip in the level of signal, you may have to use a MIX operation, which enables you to overlap the sounds.
- Note that 3 new functions in Release 5 extend your options regarding SPLICE operations: JOINDYN, JOINSEQ and SEQUENCE2. These enable you not only to list several soundfiles to join together, but also to specify a pattern (with soundfiles repeated). The links above take you to the information about them in CDP File Formats.
- PASTE (INSERT) and SPLICE may at times be used at this stage in order to create a soundfile with much change and contrast in it. Then transformations that stretch out or extend (lengthen) the sound can be more interesting and create good transition material. Some transformations of this nature include BRASSAGE (also see the graphic version, GrainMill) granular manipulations when higher timestretch values are used, DISTORT REPEAT, or any of the functions in the EXTEND Group.
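The advice about splice lengths can be illustrated with a minimal linear-crossfade join (an assumption for illustration: CDP's actual splice shape is not specified here):

```python
# Sketch of a splice: the end of sound A fades out while the start of sound B
# fades in over `splice_len` samples. With identical material the two fades
# sum back to unity; with differing or out-of-phase material the overlap can
# dip in level, which is why splice length matters.

def splice_join(a, b, splice_len):
    """Join lists of samples a and b with a linear crossfade of splice_len samples."""
    head, tail = a[:-splice_len], a[-splice_len:]
    fade_out = [s * (1 - i / splice_len) for i, s in enumerate(tail)]
    fade_in = [s * (i / splice_len) for i, s in enumerate(b[:splice_len])]
    overlap = [x + y for x, y in zip(fade_out, fade_in)]
    return head + overlap + b[splice_len:]

a = [1.0] * 6
b = [1.0] * 6
print(splice_join(a, b, 4))  # identical material: the fades sum to 1.0 throughout
```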
Return to 12-Step Index
Installation includes setting up your directory for sounds. This is very likely to be a project-related directory. You need to have this directory open in order to access the sounds you want to hear. Within this HTML file we can play our main source sound, snd1.wav. It is made from a single stroke on a brass gong, with dovetailed backwards and forwards versions mixed together, and time-varying vibrato added to the result.
- In Sound Loom you use the Find Directory button and then Any Directory; you can then browse for and Select the required directory. List Named Directory places the contents of that directory in the window on the far right-hand side of the display. You then Grab (cursor-selected files) and Use on Workspace, then select a soundfile and click on Play.
- In Soundshaper you set your sounds directory on the configuration page under OPTIONS/SETTINGS. Your configuration file can be a Default opened whenever you load Soundshaper, or it can be a Personal one (e.g., for a specific project) that you open under the FILE menu in the top left corner of the OPTIONS/SETTINGS page. Once the directory is active, you can open it to select a soundfile to play either via the FILE menu, or using the Icon to the left of the Play transport. Having selected the soundfile, you click on the Green Button to Play.
- Creating analysis files: analysis files are created from sound files, using the Fast Fourier Transform (FFT) to change time/amplitude data to frequency/amplitude data. You can Play these files via a program created by Richard Dobson, called PVPLAY.EXE. This can be used on the command line. Soundshaper calls it automatically when you click on the Play button while an analysis file is open, or you can access it via the TOOLS menu.
- Analysis files may be played directly (i.e., without conversion to Time-Domain soundfile format) with PVPLAY.
IMPORTANT:
Two contrasting sounds are used as sources for the examples below. The first is our snd1.wav gong sound, but it could be a more complex, noisy sound. The second should be something distinctly rhythmic, and we are using the chirrups of the frogs sound, frogs3cdt.wav. Please bear this in mind if you recreate these examples with your own sounds. If you then follow the recommended names as you work through the examples, you will be able to play your results directly from this Tutorial, as well as from Soundshaper or Sound Loom.
Return to 12-Step Index
OK, we can make our first alteration to a sound using the program MODIFY SPEED. This will be a straightforward transposition (change of pitch level) up or down in the time domain.
- Access the function (note this basic procedure when using the respective GUIs):
- In Soundshaper: go to the Soundfiles menu (for Time Domain functions), then Pitch Speed/Transpose.
- In Sound Loom: having Grabbed a file and put it on the Workspace, click on Enter Chosen Files Mode and then on the sound you want to process (this 'chooses' it and places it in the left-hand Processing window). Now click on the Process button to access the list of functions. Select Pitch: speed-pitch-tape transposition by semitones.
- In command line mode: enter the groupname followed by the function name to get the usage, e.g., in this case, modify speed. The usage gives you a reminder of what to do. Then backtrack using the command history function (DOSkey) and fill in the rest of the command line. The command line for this first process would be
modify speed 2 snd1 snd1d12 -12
meaning call MODIFY SPEED, use Mode 2 (i.e., transposition in semitones), input snd1.wav, name the output snd1d12.wav and give -12 as the amount of transposition: down 12 semitones. Even though we would most often use the graphic user interface, we can see that this text entry of the information is actually very straightforward.
- Enter a transposition value in number of semitones. First try an octave lower, preceding the value with a minus sign for a downwards transposition: -12.
This process lowers the whole sound. Note that fractional values can be used to create microtonal transpositions, e.g., -1.5 lowers it 1½ semitones, = 150 cents.
- Soundshaper: Run the function (click OK), and Play it (to check the results), then Save (using the SAVE TO dialog, File > SaveAs, or the Toolbar Save icon). Until you do this, only a temporary file is made.
Recommendation: to create file names that tell you how the file was made: type 'u' (for upwards transposition) or 'd' (for downwards transposition) and then the number of semitones. Thus the filename for our first sound transformation would become snd1d12. The .wav (or .aif) extension is added automatically. NB: To save space, the examples distributed with this Tutorial have been converted to .mp3 format, while the source files have been retained in .wav format, to make it easier for you to use them for your own trials.
- Sound Loom: Run the function and Play (to check the results), then Save, naming 'u' or 'd' and the number of semitones as described above. Until you do this, only a temporary file (which gets overwritten) is made.
- Also see the Chart of Equivalent Pitch Notations and the Transposition Ratios Chart if you need to count how many semitones a certain transposition entails.
- Now we can PLAY our original (snd1.wav) and our transposed (snd1d12.mp3) sounds and compare how they differ.
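The arithmetic behind this 'tape-style' transposition can be sketched as follows (an illustration, not CDP code: speed and pitch change together, so the duration changes too):

```python
# Sketch of what MODIFY SPEED Mode 2 does arithmetically: a transposition of
# s semitones corresponds to a speed ratio of 2**(s/12), and the output
# duration scales inversely with that ratio.

def semitones_to_ratio(semitones):
    """Speed ratio for a transposition in (possibly fractional) semitones."""
    return 2 ** (semitones / 12)

def new_duration(duration, semitones):
    """Output duration when transposed by the given number of semitones."""
    return duration / semitones_to_ratio(semitones)

print(semitones_to_ratio(-12))       # 0.5: an octave down halves the speed
print(new_duration(12.6, -12))       # 25.2: the sound lasts twice as long
print(semitones_to_ratio(-1.5))      # microtonal: 150 cents down, approx. 0.917
```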
Return to 12-Step Index
Step 4 includes:
Introduction
Time contours are a way of getting the sonic material to alter in some way during a given period of time. This is an essential feature of all good music, as shown by the way a performer is constantly altering pressure of lips, breath or bow etc. to create a 'living' tone. (These performing techniques are called articulation.)
Change over time is also referred to as breakpoint files. These contain a series of times and values to be used at those times: because the values change, they 'break' the prevailing flow and cause it to change direction. This is also referred to as 'automation' because the breakpoint files apply the changes automatically during the processing of the sound. In CDP, these files are saved as editable text files. They can be created by text input (directly in a Soundshaper window or with a text editor), or by means of a graphic editor, which then writes the text breakpoint file as its output.
EXAMPLE: Time-varying transposition with MODIFY SPEED
An essential piece of information about breakpoint files in CDP is that they interpolate between (different) values. What does this mean? Consider the following breakpoint file that transposes an input sound 12.6 seconds in length (you may have to adjust the 12.6 to the actual duration of your snd1.wav):
0     0
12.6  12

The column on the left comprises times, that on the right comprises values, in this case the number of semitones by which to transpose the sound. Thus we have:
- At time 0, transpose by 0 semitones (i.e., no transposition)
- At time 12.6 seconds, transpose upwards by 12 semitones (the value is positive; a negative value would transpose downwards)
There are no values in the file in between these points, so the CDP software interpolates, i.e., creates a series of intermediate values between 0 and 12 semitones, spreading them out over the 12.6 seconds. In other words, as we shall hear, it creates a glissando, as we can see with a diagram of the breakpoint file (Sound Loom Editor):
Sound Loom graphic breakpoint editor, showing a glissando
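The interpolation described above can be sketched as follows (an illustration of the principle, not CDP's internal code):

```python
# Sketch of the linear interpolation CDP performs between breakpoints:
# given a [(time, value), ...] list, the value at any time t lies on the
# straight line between the surrounding breakpoints.

def value_at(breakpoints, t):
    """Linearly interpolate a [(time, value), ...] list at time t."""
    for (t0, v0), (t1, v1) in zip(breakpoints, breakpoints[1:]):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    return breakpoints[-1][1]   # hold the last value after the final breakpoint

gliss = [(0.0, 0.0), (12.6, 12.0)]   # the two-line breakpoint file above
print(value_at(gliss, 6.3))          # 6.0: halfway through, up 6 semitones
```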
Here are the steps to do this in Soundshaper, entering the data as text:
- select, open and play an input soundfile, noting its length
- Access the function: Soundfiles Pitch Speed/Transpose
- Now we see the dialogue box where we can enter the number of semitones. This time we click on T-V, and when we do, a new window appears in which we can write our breakpoint file:
If this is already populated with a default file, either click CLEAR or simply delete some or all of the entries.
- We need to SAVE the revised text file:
- Click SAVE AS and save to a new name: ss12&6u12.brk.
The program selects a dedicated folder for SPEED by default, but you can navigate to a different folder if you wish.
- N.B. Do not click SAVE unless you intend to overwrite the default file transposn.brk.
- We could also go to the FILE menu (top left corner) and select Save Data File AS, which will prompt us to enter the name we want to use.
Recommendation: when creating several breakpoint files for the parameters of one complex function, code the starting letters of the names alphabetically so that they will appear next to each other in file listings.
- Now click on OK to RUN the transposition process and save the output file as before. If we call it snd1tvtr.mp3, the name itself tells us that we used a breakpoint file to modify snd1.wav.
Here are the steps to do this in Sound Loom:
- We assume that you have listed a directory with your sounds, selected your sounds and used GRAB to move them to the Workspace.
- Now enter Chosen Files Mode and click on snd1.wav. It will then appear in the window on the far left, meaning that it is available for processing.
- Now we click on the PROCESS button and then select MODIFY SPEED among the highlighted buttons in the Time Domain group of functions.
- As with Soundshaper, the dialogue box is very simple, and we can enter our time-varying data as text or graphically. The image below shows the dialogue box with the constant value '-12' entered.
So we click on Make File to create a new time-varying breakpoint file. We see the 'Work on Text' or 'Work on Graph' option. In this case, we select 'Text'.
Having clicked on 'Text', a window for text entry opens and we enter our times and values. The Soundshaper file went up, so let's go down in this one.
Having made it, please name it sl12-6d12 (a .txt extension is automatically added), Save it. The dialogue box returns, with the name of our new breakpoint file shown in the parameter box.
- The next step is to click on RUN, and on OK when processing is finished. (If by any chance the processing fails, do not click on ABORT, but on OK.)
- Now we can PLAY the result. If it's OK, we SAVE it to a name that reminds us what it is all about. In this case, we name the output snd1d12.mp3.
Creating a breakpoint file with a graphic editor
Using a text editor to create breakpoint files can be simple, quick, direct, and precise. It helps to start by working from a rough drawing of the shape, annotated with time-points and values. You may prefer to work with a graphic breakpoint file editor, which is more directly visual and intuitive. Also, the BRKEDIT program offers the possibility to create exponential and logarithmic curves, compare two different files at the same time (useful when creating maximum and minimum value contours for the same parameter), perform data reductions (e.g., on envelope files), and audition the shape created.
There are 3 different graphic breakpoint file editors available in CDP: the general purpose BRKEDIT program just mentioned (also called from within GrainMill), one in the Sound Loom GUI and one in the Soundshaper GUI. Each has its own Reference Manual section, so I won't go into them all here. Please note that only the Sound Loom is available on the Mac.
As an illustration, let's look at the Soundshaper editor, using the breakpoint file ss12&6u12.brk. It's the one that goes up 12 semitones.
A breakpoint file in the Soundshaper Editor
- On Soundshaper's parameter page, click on the Edit button.
- Then select Open X (this is X timescale and the full Y scale, which leaves space to move the Y coordinate further than it was before)
- Note the upward sloping line of the glissando, and the time value list up at the top right
- Now go to the Adjust Values sliders at the bottom and slide the Y slider to the left and watch the sloping line move downwards, while the changing values appear in the top right time value grid.
- If you want to save the result under a different name, go to the File menu and select Save As. Otherwise, just click OK and the file will be saved with your new values. (Note that the file returned to the parameter page is a temporary one; click SAVE AS on that page to overwrite your previous version, or to create a new file name.)
Instant Changes in Breakpoint Files
Our next topic in this important section is how to create instant changes in breakpoint files rather than gradual, interpolated, changes. To do this, we have to stop interpolation from happening. This is done by putting the same value at the beginning and end of a time period. Thus, if we want our sound to stay at its original pitch level for 1 second and change instantly at the start of the next second, we would do this:
0    0
0.99 0
1    12

The sound stays the same from time 0 to time 0.99, and then jumps up an octave (12 semitones) at time 1 sec. Now we can create a file that jumps up an octave at 1 second, down 2 octaves (i.e., an octave below its original pitch level) at time 2 sec. and returns to its original pitch level at time 3 sec., staying there until the end of the sound.

[time value]
0    0     (No change for 0.99 sec)
0.99 0
1    12    (Instant leap up 1 octave)
1.99 12    (Steady pitch level for another second)
2    -12   (Instantly down 2 octaves. NB not -24: the 2.99 -12 value is relative to the original pitch.)
2.99 -12
3    0     (Instantly back to the starting pitch level)
4.5  0     (Where it stays for another 1½ sec)

We will do this again for our 12.6 sec. snd1.wav and mix instant and gradual changes.
0.0  0
1.99 0
2.0  12
3.99 12
4.0  -12
7.00 0
7.99 0
8.0  -5
8.99 -5
9.00 16
12.0 0
Instant pitch changes in the Sound Loom graphic editor.
When we Save this file and click on Use, we return to the MODIFY SPEED dialogue box with this file in place. After we Run this transposition process, we hear it jumping instantly up and down and back again. The times could be even closer together (e.g., 1.999) as long as they aren't the same: two different values at the same time cannot be processed.
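A small sketch of the rule just mentioned: breakpoint times must keep increasing and never repeat. A quick check along these lines (a hypothetical helper, not part of CDP, which reports its own errors) can catch the problem before running the process:

```python
# Sketch: validate a breakpoint file's times before processing. CDP cannot
# process two different values at the same time, so times must strictly increase.

def check_breakpoints(text):
    """Return True if the times in a breakpoint file strictly increase."""
    times = [float(line.split()[0]) for line in text.strip().splitlines()]
    for t0, t1 in zip(times, times[1:]):
        if t1 <= t0:
            return False   # repeated or decreasing time: CDP cannot process it
    return True

good = "0 0\n0.99 0\n1 12\n"
bad = "0 0\n1 0\n1 12\n"       # two values at the same time
print(check_breakpoints(good), check_breakpoints(bad))  # True False
```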
Perhaps you might now try creating some glissandos and instant transpositions with one of the graphic editors by adding points and moving them about.
Randomised Values
A final note about CDP breakpoint files: when a parameter such as pitch has upper and lower limits (max & min), the software's normal interpolating mode of operation causes it to select a value at random between these two limits, or several values if in a multi-event context such as the TEXTURE Set.
- To use this randomisation process, just specify the upper and lower limits (they are, therefore, constants)
- To restrict the output to one specific value, enter the same value for both the upper and lower limits: thus all multi-events produced would be at the same value
- To time-vary, enter time-varying contours (breakpoint files) for the upper and lower limits. Note that BRKEDIT, Richard Dobson's graphic editor, enables you to open a previous breakpoint file and display it in the background while creating a new one. Thus you can see exactly what you're doing when creating breakpoint file pairs for upper and lower limits. This editor also enables you to create exponential and logarithmic curves as well as linear shapes.
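The randomisation between limits can be sketched as follows (an assumption for illustration: a uniform random choice between the min and max contours at each event time; CDP's actual distribution is not documented here):

```python
# Sketch: at each event time, pick a random value between the lower and upper
# limit contours. Constant limits give a fixed range; time-varying contours
# make the range itself change over time.

import random

def random_between(min_contour, max_contour, t, rng):
    """Pick a value between the two limit contours evaluated at time t."""
    return rng.uniform(min_contour(t), max_contour(t))

def min_c(t):
    return 48            # constant lower limit (a MIDI pitch)

def max_c(t):
    return 48 + t        # upper limit rising over time

rng = random.Random(1)   # seeded so the sketch is repeatable
values = [random_between(min_c, max_c, t, rng) for t in range(5)]
print(values)            # each value lies between 48 and 48 + t
```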
Return to 12-Step Index
Deep into computer-based sound processing: working with analysis data
Background: Analysis Data
Working with frequency amplitude analysis files opens up a vast area of sound processing only available via the computer.
- The CDP Software has one of the, if not the, largest range of functions which manipulate the data in FFT analysis files, i.e., the frequency content and its amplitude (also see the explanations of the Time Domain and the Spectral Domain).
- What this means in practice is that a sound file (.wav or .aif) has to be converted to an analysis file (.ana) in order to be processed with these functions.
- To do this in Soundshaper we open a soundfile and then go to Spectral PVOC: FFT Convert and select ANALYSE. Alternatively, you can just select a spectral process from the spectral menu and the program will automatically write the analysis file in the background (to the same name but with an .ana extension).
- In Sound Loom, you need to set the system to use the various CDP extensions. This is done in System State > .... Otherwise the output will have a .wav extension. If left like this, we suggest that you include 'ana' in the output name. Thus it could be snd1ana.wav, and you can then see from the name that it is an analysis file.
- Note that you can play an analysis file using Richard Dobson's PVPLAY. In Soundshaper, this happens automatically when you click on PLAY, or you can invoke PVPLAY (.ana files) in the Tools menu. Activate playback with e.g., the Spacebar. Before this was available, you had to convert back to a soundfile before you could hear it.
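The conversion from time/amplitude to frequency/amplitude data can be sketched with a plain Fourier transform on one window of samples (an illustration only: CDP's PVOC analysis format has its own header and layout):

```python
# Sketch: one analysis window. A discrete Fourier transform turns a window of
# time-domain samples into per-channel magnitudes (frequency/amplitude data).

import cmath
import math

def dft_magnitudes(window):
    """Magnitude of each frequency channel for one analysis window."""
    n = len(window)
    return [abs(sum(window[k] * cmath.exp(-2j * math.pi * i * k / n)
                    for k in range(n))) / n
            for i in range(n // 2 + 1)]

# One window of a pure tone that fits exactly 4 cycles into the window:
n = 32
window = [math.sin(2 * math.pi * 4 * k / n) for k in range(n)]
mags = dft_magnitudes(window)
print(mags.index(max(mags)))  # 4 -- all the energy sits in channel 4
```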
Processing Analysis Files
Processing an analysis file, from the user's point of view, is the same as processing a sound file. Having created an analysis file, let's try out some functions. Use the output of each of these as the input to the next one.
BLUR BLUR (Soundshaper menu: Spectral Time Blur) The only parameter is the number of windows to blur.
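Conceptually, blurring averages the channel amplitudes over each group of windows. A toy sketch of the idea (not CDP's actual implementation):

```python
def blur(windows, n):
    """windows: list of per-window channel-amplitude lists.
    Replace each group of n windows with their channel-wise average."""
    out = []
    for g in range(0, len(windows), n):
        group = windows[g:g + n]
        avg = [sum(col) / len(group) for col in zip(*group)]
        out.extend([avg[:] for _ in group])   # each window in the group gets the average
    return out

# Four 2-channel windows, blurred in groups of 2:
windows = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.0, 0.0]]
print(blur(windows, 2))   # the first two windows both become [0.5, 0.5]
```

The larger the group, the more the moment-to-moment spectral detail is smeared out, which is exactly what you hear.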
100 creates quite a lot of blurring, but not an extreme amount: the data in each group of 100 analysis windows is averaged. Name the outfile in a way that reflects what you did, e.g., snd1bl100.ana. Now PLAY snd1bl100.ana and listen to the difference. If useful at this point in your work, e.g., to process the file in the Time Domain, convert back to a normal soundfile: SYNTHESISE snd1bl100.ana to form snd1bl100.wav.

All .ana files are converted to .mp3 for these Tutorial examples, both to save disk space and so that the Soundfile Player associated with your Browser can play them. Therefore, even if it says 'snd1bl100.ana' as above, it will be an .mp3 file in your folder.

PITCH TUNE (Soundshaper menu: Spectral - Freq/Pitch - Tune) Select the MIDI pitches mode. We need a file of MIDI pitches to define the chord to which we want to tune the sound; the other parameters can be left as they are. (The Reference Manual has detailed information about them.) As we are in MIDI pitches mode, we just have to enter a series of MIDI Note Values representing our chord. They should be in approximately the pitch range of the input sound, and are better if spread out a bit rather than close together (think 'open position' chords). E.g., a C'ish chord could be:
48 55 60 64 70 72 77
To enter this, we click in the MIDI data window to activate it, enter these values (either on one line with a space between each value, or on separate lines), and then Save Data File As to your own name, e.g., snd1.tun. We can give the output file a relevant name such as snd1bl100tune.ana. Click on OK to Run the process. Note that this function introduces us to another type of auxiliary file used by the CDP functions. See a list of them all in CDP Files & Codes, with further information and examples.

The blurred sound is now tuned to our specified chord, as, hopefully, will be clear when we PLAY it. (You no longer need to convert to .wav first in Sound Loom.) We lost some volume in this operation, so I've applied a Gain of 1.5, which is why the 'g' is added to the name.
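For reference, MIDI Note Values map to frequencies by the standard equal-tempered formula (the helper below is ours, not a CDP function):

```python
def midi_to_hz(m):
    """Equal-tempered frequency for MIDI note m (A4 = note 69 = 440 Hz)."""
    return 440.0 * 2 ** ((m - 69) / 12)

chord = [48, 55, 60, 64, 70, 72, 77]          # the C'ish chord above
freqs = [round(midi_to_hz(m), 2) for m in chord]
# e.g. note 48 (C3) is about 130.81 Hz and note 60 (C4) about 261.63 Hz
```

This is handy for checking that a chord really does sit in the pitch range of your input sound.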
TIMESTRETCH (Soundshaper menu: Spectral Time Time-Stretch) We can now stretch out the previous result, e.g., 3 times longer: just enter 3 as the tstretch factor and give a relevant output name, such as snd1bl100tunegx3.ana.
After applying Gain = 2, the result of our 3-part sound transformation sequence (BLUR + TUNE + STRETCH) now opens out the insides of the original sound.

Re-synthesis
Finally, to complete this section, we can RESYNTHESISE our final result and create a normal soundfile (Soundshaper menu: Spectral PVOC Synth). Now we have snd1bl100tunegx3.wav in the input window and can Play it and perhaps carry on processing it with the Time Domain functions under Soundfiles. Note how our filename encapsulates all of the sound transformation processes that we employed. We can just look at this filename and see that it has been created by Blurring over groups of 100 windows, Tuning it and Stretching it 3 times. (This kind of naming convention is just a suggestion: you will no doubt settle upon your own preferred method, one that creates a distinctive name and helps you identify how you arrived at it.)
Saving History in Soundshaper also records everything done in that session, with all command lines and parameter values. There is more information about Soundshaper History in the CDP Files & Codes document.
We have now seen the main parts of the system: Load and Play, a Time Domain function (Transposition), breakpoint files, Analysis, several Spectral Domain functions (Blur, Tune and Stretch Time), and Re-synthesis. Now we are ready to explore other parts of the system quite quickly.
Return to 12-Step Index
Let us now return to our original snd1.wav and explore colouring the sound with different types of FILTER BANK and tuning it in a time-varying way with FILTER VARIABLE BANK.
FILTER BANK (Soundshaper menu: Soundfiles Filter Bank) This function has a number of preset options (modes of the program) and several parameters. Here we will use Mode 6: 'equal intervals 2', which allows us to set the distance between filters in semitones.
Q is the bandwidth: the 'acuity' or 'tightness' of the filter. A low value 'lets through' more of the original sound, and a high value focuses the sound much more tightly on the centre frequencies of the filter, creating 'resonance' or something akin to tuned pitches. (I.e., this is a 'peaking' as opposed to a 'shelving' filter.) Too high a Q may cause the sound to overload (amplitude distortion) or your speakers to ring. The filter process tends to reduce amplitude (because considerable parts of the sound are being filtered out), so some Gain is useful - less so when Q is high. Let's try Q = 250 and Gain = 20 on our original sound, selecting the Equal Intervals 2 mode (Mode 6) and setting the interval to 4 (semitones). Double-filtering tends to result in clearer, more focused pitches; if it is not used and the Q is lower, more of the original sound comes through. We also lower the low frequency limit to 50 for more bass. The output soundfile could be named: snd1fbankeqint.wav.

Try this again with the Subharmonic mode (using the original sound source). In Soundshaper, double-click on the 'BANK' output cell, which takes us right back to the FILTER BANK parameter page, with all the values previously selected still in place. This time, we click on Subharmonic. We can set Q to 150, reduce Gain to 5, and turn on double-filtering (which gives us a little chiff that isn't there without it). The output, which we can name snd1fbsubh.wav, is different from the equal intervals result!
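In the usual definition, Q relates each filter's centre frequency to its bandwidth (Q = centre frequency / bandwidth), which is why a high Q 'tightens' every band. A rough sketch of an equal-interval bank (our own helper, not CDP code):

```python
def bank(low_hz, high_hz, semitones, q):
    """Centre frequencies spaced by equal semitone intervals, paired with
    the bandwidth implied by Q (Q = centre frequency / bandwidth)."""
    ratio = 2 ** (semitones / 12)
    filters, fc = [], float(low_hz)
    while fc <= high_hz:
        filters.append((round(fc, 1), round(fc / q, 2)))  # (centre, bandwidth)
        fc *= ratio
    return filters

# 4-semitone spacing from 50 Hz with Q = 250, as in the example above:
for fc, bw in bank(50, 1000, 4, 250):
    print(f"{fc} Hz, bandwidth {bw} Hz")
```

Note how the bandwidth in Hz grows with the centre frequency: a constant Q means constant width in musical, not linear, terms.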
FILTER VARIABLE-BANK (Soundshaper menu: Soundfiles Filter Varibank) This function provides an effective way to tune sounds to specific harmonies. It is similar to the Spectral functions PITCH TUNE and especially TUNEVARY, enabling more than one chord to be used (quick changes or interpolating between chords as with transposition breakpoint files). Thus it 'harmonises' sounds with a great deal of flexibility regarding the chord defined, the number of chords, and the type of movement from one chord to the next (instant or interpolating).
The data file is written in separate lines:
time1  note1 amp  note2 amp  note3 amp  etc.
time2  note1 amp  note2 amp  note3 amp  etc.
etc.

A useful example file will be one that interpolates from chord 1 to chord 2 (C-G-E-Bb-E-A to A-E-B-A-C#-A) over a given time period, then holds, and then changes instantly to a 3rd chord (B-G-B-G-B-F). Consult the Equivalent Pitch Notations chart to confirm exactly what pitches these MIDI Note Values represent (use Mode 2 for MIDI):
0.00  48 -3dB  55 -3dB  64 -3dB  70 -3dB  76 -3dB  81 -3dB
5.99  45 -3dB  52 -3dB  59 -3dB  69 -3dB  73 -3dB  79 -3dB
6.00  47 -3dB  55 -3dB  59 -3dB  67 -3dB  71 -3dB  77 -3dB
12.00 47 -3dB  55 -3dB  59 -3dB  67 -3dB  71 -3dB  77 -3dB

If we want to hear the chords clearly, a rather high Q is needed (100), as well as some Gain (4) to compensate for all the filtering - too much Gain will, however, cause resonance problems. Harmonics is set to 10 to give it some edge, and rolloff is 0 to maintain harmonic richness. Double-filtering also strengthens the clarity of the pitches: with it turned off, the chords weren't really formed.

We again use snd1.wav as the input. The resultant output can be named: snd1vfb.wav, 'vfb' standing for 'variable filter bank'.
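The -3dB amplitude entries follow the standard decibel convention; as a quick sketch of the arithmetic (the helper is ours):

```python
def db_to_linear(db):
    """Convert an amplitude level in dB to a linear gain factor."""
    return 10 ** (db / 20)

# -3 dB is roughly a factor of 0.708; -6 dB roughly halves the amplitude.
print(round(db_to_linear(-3), 3))   # 0.708
print(round(db_to_linear(-6), 3))   # 0.501
```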
Return to 12-Step Index
Two of my favourite Waveshape Distortion functions are DISTORT REPEAT and DISTORT SHUFFLE.
DISTORT REPEAT (Soundshaper menu: Soundfiles Distort cycles Repeat) This function is very easy to use. It has 3 parameters, of which the first two are essential.
Cycles specifies how many waveshapes to group together, i.e., more cycles mean a longer segment of input. Repeat specifies how many times to repeat this group of cycles before moving on to the next group. I suggest you start with the default (2 and 2), then try 5 repeats of 2 cycles, then 2 repeats of 5 cycles, then anything you wish. A relevant output name could be (carrying on from before): snd1dr2-2.wav.

DISTORT SHUFFLE (Soundshaper menu: Soundfiles Distort cycles Shuffle) This is an immensely versatile function because the shuffle pattern can be varied enormously. It has a Domain (source pattern) and an Image (shuffle pattern).
The source pattern simply lists the elements to be shuffled, which can be few or many: e.g., abcde. The shuffle pattern moves them about. Repeating the same letter prolongs that part of the source, while mixing the letters up will jumble the source to a greater degree, more so if the Domain is quite large. This shuffle pattern first prolongs and then jumbles: aabbccddeeaedbceaebecad. This is quite a long pattern (about 56 seconds), and if the Cycles parameter were set to 3, an even longer output length would be created. This one might be called snd1dshuf.wav.

CDP Files & Codes has more information about the shuffle codes.
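The Domain/Image idea can be sketched like this (our own toy code, not DISTORT SHUFFLE itself): the input is divided into consecutive segments, one per Domain letter, and the Image pattern dictates the order in which they are emitted:

```python
def shuffle(segments, domain, image):
    """segments: consecutive chunks of the input, one per domain letter.
    Emit them in the order given by the image (shuffle) pattern."""
    by_letter = dict(zip(domain, segments))
    return [by_letter[c] for c in image]

segs = ["seg0", "seg1", "seg2", "seg3", "seg4"]     # stand-ins for cycle groups
out = shuffle(segs, "abcde", "aabbccddee")          # prolongs each segment
print(out[:4])   # ['seg0', 'seg0', 'seg1', 'seg1']
```

A longer Image than Domain lengthens the output; a jumbled one scrambles it.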
Return to 12-Step Index
We now move into a very interesting part of the CDP Software: operations involving more than one file.
We start by replacing the amplitude envelope of one file with that of another: ENVEL REPLACE, which operates in the Time Domain.

Background
What is the amplitude envelope? In the Time Domain, each sample data packet has a time and an amplitude. If we graph these amplitudes as height and connect the tips of the lines, we get an amplitude contour: the amplitude envelope, i.e., the rise and fall in loudness.
The CDP Files & Codes document gives an example of an envelope file, with some further discussion about it.
Time Domain Amplitude Envelope Processing
ENVEL REPLACE (Soundshaper menu: Soundfiles - Envelope - Replace)
The window parameter deals with the resolution of the envelope data: a low value for a fine, more accurate envelope, and a higher value for a coarser envelope.

The second input is the source of the envelope and the first input is the destination, i.e., the sound onto which the envelope from the second input is imposed.
In a frog sound, we would hear the amplitude envelope very clearly as the separate chirps of the frog. If we extract this shape and replace the steadier envelope of our snd1.wav with it, snd1.wav will now peak and ebb with the separate chirp amplitude shapes of the frog. This, therefore, is an (amplitude) ENVELOPE REPLACE function.
ENVEL REPLACE can be put to good use in preparing a sound for a morph: the sound is the same, but it moves like the second sound because it has acquired the second sound's amplitude envelope. This can therefore be used as an intermediate stage between the first sound and the second sound of the morph: first sound morphs towards the 'first sound with the envelope of the second sound' which then morphs towards the second sound, using two separate morph operations.
ENVEL REPLACE has several Modes that give you options to use envelopes acquired separately, or even envelopes hand-made by yourself.
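The underlying idea can be sketched as follows (a toy illustration, not CDP's ENVEL code): measure the peak in each window of the envelope source, then scale the destination window by window to follow that contour:

```python
def extract_env(samples, window):
    """Peak amplitude per window -- a coarse amplitude envelope."""
    return [max((abs(s) for s in samples[i:i + window]), default=0.0)
            for i in range(0, len(samples), window)]

def replace_env(dest, env, window):
    """Scale each window of dest so its peak follows env."""
    out = []
    for w, target in enumerate(env):
        chunk = dest[w * window:(w + 1) * window]
        peak = max((abs(s) for s in chunk), default=0.0)
        gain = target / peak if peak else 0.0
        out.extend(s * gain for s in chunk)
    return out

chirps = [1.0, 1.0, 0.0, 0.0, 1.0, 1.0]     # a 'frog'-like on/off envelope source
steady = [0.5] * 6                          # a steady destination sound
print(replace_env(steady, extract_env(chirps, 2), 2))
```

A smaller window gives a finer envelope, exactly as the window parameter does above.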
Spectral Domain Spectral Envelope Processing
To understand Envelope replacement in the Spectral Domain we need to understand spectral envelopes in the first place, and why this involves formants.
Secondly, we need to understand the options provided when we extract the spectral envelope, namely the frequency-wise or pitch-wise extraction methods. We are dealing with the spectral envelope, which is the amplitude contour of the frequencies. Thus the nature of the sound itself is involved - its timbre, its tonal qualities - because different frequencies will have different amplitudes. When we impose the amplitude contour of the frequencies of one sound on that of another, the frequencies of the first sound are emphasised, with the result that the second sound now starts to sound like the first one, while still keeping its own overall Time Domain amplitude envelope. This process is traditionally known as 'cross-synthesis', and CDP refers to it as 'vocoding'. The CDP vocode function is called FORMANTS VOCODE.
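Per analysis window, the idea can be sketched like this (our own illustration, not the FORMANTS VOCODE implementation): keep one sound's frequencies while imposing the other sound's channel-amplitude contour:

```python
def vocode_frame(frame_a, frame_b):
    """frame_a, frame_b: lists of (frequency, amplitude) channel pairs.
    Keep frame_b's frequencies but impose frame_a's amplitude contour."""
    return [(fb, aa) for (fa, aa), (fb, ab) in zip(frame_a, frame_b)]

voice = [(100, 0.9), (200, 0.1), (300, 0.8)]   # strong formant-like contour
gong  = [(110, 0.5), (210, 0.5), (290, 0.5)]   # flat contour, gong partials
print(vocode_frame(voice, gong))   # [(110, 0.9), (210, 0.1), (290, 0.8)]
```

The gong's partials survive, but they now rise and fall with the voice's spectral shape: cross-synthesis in miniature.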
Spectral Domain Formants Processing
FORMANTS VOCODE (Soundshaper menu: Spectral - Morph/Formants - Vocode)
Now we can hear the difference when we take the spectral envelope (frequencies and their amplitudes / timbre) from count.ana and impose it on snd1.ana. We name this output svocc.ana and Convert it to svocc.wav.

Pitch extraction and combining with transposition files is a more advanced topic for a specialist workshop (the one on Transposition & Shifting that is still 'in progress').
Return to 12-Step Index
These are the topics introduced here:
- Background about Transition
- Time Domain Amplitude Crossfade
- Spectral Amplitude Crossfade
- Achieving Audio Morphs
- Other Transition Processes
Background
Morphing is another advanced topic: we only introduce it here, providing pointers towards relevant programs. Gradual change is a basic musical process. This time we are concerned with making a transition from one sound to another. CDP has 5 programs directly related to this purpose: SUBMIX CROSSFADE, COMBINE CROSS, MORPH BRIDGE, MORPH GLIDE and MORPH MORPH.
CDP's method for morphing is based on a timbral interpolation process. There is another method which is based on partial tracking and replacement. The latter is a good method to use with pitched sounds and is implemented in the public domain program, SNDAN, and in other commercial software packages.
A general suggestion is to allow plenty of time for an aural morph. The ear is extremely sensitive to detail, but it takes time to notice and interpret what is going on. Aural morphs cannot as a rule be anywhere near as fast as visual ones, otherwise they just sound like quick crossfades. Morphing is a process to explore and play with in a creative way, as suggested below regarding the creation of intermediate stages. I often work with the second half of the first sound and the first half of the second, creating a suitable intermediate stage. Then I morph from the first sound to the intermediate stage, and morph again from the result to the second sound.
SUBMIX CROSSFADE is a mix operation in which the amplitude of the first sound gradually decreases as the amplitude of the second increases. This is quite different from a MORPH, which is a gradual (and weighted) interpolation from the spectrum (partials and their amplitudes) of one sound to that of another. However, achieving a smooth and aurally effective 'morph' often involves more than a simple use of the program. It can easily be indistinguishable from a crossfade, without that sense of being distorted and reshaped that is familiar from visual morphs. I suppose it depends on what you want to achieve.
Time Domain Amplitude Crossfade
In the Time Domain, sample data is stored as time and amplitude. SUBMIX CROSSFADE gradually reduces the amplitude values of the first sound to zero while gradually increasing the amplitudes of the second sound from zero. The result is a smooth transition from the 1st to the 2nd, but with no sense of an intermediate 'warped' stage as one may wish to have in an audio morph. We can listen to our basic mix (snd1.wav) make a transition to frog chirrups in this way, a result very similar to that achieved by Spectral Domain amplitude crossover (next section).
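The arithmetic of a linear crossfade is simple (a sketch; SUBMIX CROSSFADE's Mode 2 uses a cosinusoidal curve instead of this straight line):

```python
def crossfade(a, b):
    """Linear crossfade across two equal-length sample lists: the first
    sound's gain falls from 1 to 0 while the second's rises from 0 to 1."""
    n = len(a)
    return [a[i] * (1 - i / (n - 1)) + b[i] * (i / (n - 1)) for i in range(n)]

print(crossfade([1.0] * 5, [-1.0] * 5))   # [1.0, 0.5, 0.0, -0.5, -1.0]
```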
Soundshaper SUBMIX CROSSFADE dialogue box
SUBMIX CROSSFADE also offers facilities to set the timing of the crossfade (stagger and begin - end), and in Mode 2 ('Cosinusoidal') the skew of the crossfade. Mode 1 is a Linear crossfade.

Spectral Domain Spectral Amplitude Crossover
A similar process in the Spectral Domain involves replacing the channel amplitudes of one file with those of another. This is done with COMBINE CROSS. Again, the amplitude data is taken from the second input and applied to the first input. The effects vary depending on the weighting, i.e., the degree to which the frequency amplitudes of the second file are used. This is controlled by the interp parameter:
- With a weighting of 0.25, we hear mainly the first sound and practically none of the second.
- With a weighting of 0.5, we hear both sounds equally.
- With a weighting of 0.75, we hear the second sound louder, while the first sound sounds muffled.
- Interp is a time-varying parameter with a range from 0 to 1. 1 is the Default, meaning that if you don't use it, the second sound will dominate your result almost entirely. On the other hand, providing a breakpoint file that moves from 0 to 1 over the course of your sound will provide a smooth transition from the first sound to the second sound.
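Per spectral channel, the weighting amounts to a plain interpolation between the two amplitudes (a sketch of the idea, not COMBINE CROSS's code):

```python
def cross_channel(amp1, amp2, interp):
    """Channel amplitude after crossover: interp = 0 keeps sound 1's
    amplitude, interp = 1 takes sound 2's amplitude entirely."""
    return (1 - interp) * amp1 + interp * amp2

print(round(cross_channel(0.8, 0.2, 0.25), 2))   # 0.65: still mostly sound 1
print(cross_channel(0.8, 0.2, 1.0))              # 0.2: sound 2's amplitude
```

Feeding interp from a breakpoint file simply makes this weight change over time, channel by channel and window by window.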
COMBINE CROSS (Soundshaper menu: Spectral - Morph/Formants - Cross)
Our input sounds are both 12.6 seconds long (by design: the frog sound was spliced 3 times and then cut to 12.6 seconds to match our other input). The weighting breakpoint file to make a smooth transition over this whole duration delays the beginning of the crossfade by 3 seconds:

0.0  0
3.0  0
12.0 1

We can name it scrossfade.brk. So we run COMBINE CROSS with this breakpoint file, entering snd1.ana as the first input and frog3cdt.ana as the second input (analysis files!). In speccrossfade.ana, the resultant sound, we hear what we expect: a smooth crossover from the first to the second; the frog gradually comes in and the gong sound gradually fades out.
The key thing to understand with this process is that the Spectral Domain holds the sonic data as frequency and amplitude. Therefore, the amplitudes involved are the amplitudes of the frequencies. This is why the sound, the timbral qualities, of the second sound come through, not just its loudness as in the Time Domain, as we saw with ENVEL REPLACE and SUBMIX CROSSFADE.
Achieving Audio Morphs
If we assume that when we morph we expect to hear the first sound slowly changing into the second sound, then the emphasis is on the word 'changing': we want to hear the change as a process, we want to hear the second sound gradually assuming characteristics from the first sound, such as amplitude envelope, rhythmic features, and tonal characteristics. Differences in pitch level can also pose challenges.
One of the powerful features of the CDP software is the number of functions which can be brought to bear on morphing issues. These can be used to create intermediate stages of a morph:
- transposition functions to adjust pitch levels (all at once or with glissandi): MODIFY SPEED, PITCH TRANSPOSE, PITCH TRANSPOSEF (with formants preserved), PITCH OCTMOVE, PITCH TRANSP, REPITCH COMBINE.
- envelope functions to introduce contours and rhythmic features: ENVELOPE EXTRACT, ENVELOPE IMPOSE, ENVELOPE REPLACE, ENVELOPE REPLOT or RESHAPE, FORMANTS VOCODE.
- crossfade or channel amplitude replacement to move in a general way towards the second sound, and other transition functions: SUBMIX CROSSFADE, COMBINE CROSS, COMBINE DIFF, COMBINE MEAN, MORPH BRIDGE, and MORPH GLIDE
- blurring functions to reduce the recognisability of the first sound (as harmonic modulation first reduces the harmonic identity of the first harmony): BLUR BLUR, BLUR AVERAGE, BLUR SHUFFLE.
- morphing itself, which does the final timbral interpolations: MORPH MORPH.
Looking more deeply into creating an intermediate stage, let us consider three basic situations that relate to types of infile.
- Vocode - the sounds are fairly different timbrally, so 'cross-synthesis' is used to transfer timbral qualities. ('Vocoding' in CDP is actually 'cross-synthesis' rather than the harmonisation of sounds.)
- Envelope transfer - one sound is rhythmic and one is steady-state, so the rhythmic amplitude envelope is used to put some contour into the steady-state sound.
- Transpose - the sounds are at different pitch levels (but close enough to be viable for morphing). In this case, it may be useful to transpose the portion from the place where the morph will begin, transposing the second file so that it glissandos to the level of the first file over the period of the morph. But one has to be very careful with speech, which is very sensitive to transposition.
We can use vocoding as intermediate territory when morphing from our gong sound to the sound of a voice counting from 1 to 10. Below is a typical hand-drawn rough sketch that maps out the morph. (Sketching out breakpoint files, morphs and mixes on paper can help to clarify objectives and procedures.)
This sketch shows the last two steps of a 3-step plan:
- vocode ('cross-synthesis') so that the sound of the voice enters into the sound of the gong, forming svocc.ana. We hear the voice counting, but the voice also resonates in an unusual way, because it's now inside the gong, as it were.
- morph this vocoded sound with the count, forming svocc-m-c.ana. We hear the rough voice, but it gradually gets clearer until, at the end, we hear the original sound of the voice. The morph starts after the beginning to give some time with just the vocoded voice.
I tried some stagger to give more original voice at the end, but in this case it put the counting out of phase, so it couldn't be used.
- amplitude start and end: 2.5 - 7
- frequency start and end: 2.0 - 6.5 (sooner)
- morph the original gong sound with this morphed sound, to form the beginning of the final result, so that the original gong sound is heard on its own at the beginning. This time we DO use some stagger so that the original gong sound isn't muddled by the vocoded sound at the beginning of the second file. Note that the first sound, the gong, is no longer heard after the end of the morph. This sound is cut off at this point, leaving only the second file, with the original sound of the voice emerging out of the vocoded-morphed section. The final result is therefore snd1-m-svocc-m-c.ana, which we convert to a soundfile.
Morphing is often done by starting both files at the same time. The stagger parameter introduces some complication to this, especially if the files are not of equal length.
Here is a diagram of these basic situations. It is also found in the Reference Manual for MORPH MORPH.
- A. File1 is longer
- the shorter second file can start at the beginning. When it ends, the first file will be cut off (truncated).
- the shorter second file can be staggered so that it ends at the same time as the first file, or after the first file ends. The resultant file will be only as long as the second file (plus any stagger).
- B. File2 is longer - only the second file can be staggered, so the resultant file will inevitably be longer than the first file. The shorter first file ends before the second (it ends when the morph ends), but the second will continue to its end. The resultant file length will be the length of the second file + any stagger.
- It should be obvious that you cannot start the morph before the second file comes in. Therefore the start time of the morph must be equal to or greater than stagger.
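Those length rules can be summed up in a few lines (our reading of the diagram, expressed as a hypothetical helper, not a CDP function):

```python
def morph_out_length(len2, stagger, morph_start):
    """Resultant length = second file + stagger (the first file is
    truncated if longer); the morph cannot begin before the staggered
    second file enters."""
    if morph_start < stagger:
        raise ValueError("morph start time must be >= stagger")
    return stagger + len2

print(morph_out_length(len2=10.0, stagger=2.5, morph_start=3.0))   # 12.5
```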
Other transition processes
MORPH BRIDGE operates like MORPH MORPH in that it is an interpolation process, but it moves from a time-specified fixed state in the first sound to a time-specified fixed state in the second sound, with various weighting options. You can therefore choose parts of the sounds with specific timbral characteristics and explore the results. The bridging process starts at the first time and ends at the second time, interpolating all the sonic material inbetween these two points.
MORPH GLIDE is similar to MORPH BRIDGE except that it moves between fixed states that comprise only single windows: it interpolates between, shall we say, an interesting fragment of sound 1 and an interesting fragment of sound 2, using only these two windows and none of the sound material inbetween. Given substantial differences between the two fragments, this can lead to wonderful gradual changes, especially if plenty of time is given: the outfile length can be specified. Don't be afraid to make it 30 or more seconds long.
Return to 12-Step Index
Different approaches to sonic assembly
Many if not most CDP users have a hybrid setup in which they assemble with their favoured audio sequencer. These are designed as 'track-based' systems. The standard use is to have the same sound repeat at different times on the same track, and to build up layers of tracks, each with their own sound. You can also have different sounds on the same track. These systems are a great advance in musical assembly, are visually intuitive and are optimised for repeating and layered sounds.
From the point of view of sound composition, there are several things to watch out for when using an audio sequencer:
- You have to be careful about how you handle variants of the same sound, because within the virtual environment of the sequencer, the original sound will be overwritten by the variant copies unless you specify that you want to save it as a new sound. This is especially the case if you want to save these variants as separate soundfiles, e.g., for use individually on a soundtrack. In this case they will have to be 'bounced' to hard disk to get them past their virtual status to being an actual soundfile.
- Sounds cannot overlap on the same track without truncating the earlier sound. Overlapping is done by placing the sounds you want to overlap on different tracks.
- A soundfile can be made from the whole sequence by a process called 'bouncing' or 'mixing down'. This groups all the contents of the sequence together into one file. So you have to be careful when you mix lest you group things that you could handle more flexibly if they were separate.
Mixing can also be done within the CDP System. CDP mixing ('Mixing with mixfiles') is optimised for creating complex sound objects. You can add soundfiles (including repeats of the same sound) at any start time, so that file overlap is never a problem. Sounds can be timed to come after one another or to overlap, but the thinking is more vertical than horizontal.
The main advantage of doing so is that it is good for building up complex musical passages in a controlled, step by step manner. It is normal practice to mix groups of sounds at each significant stage of the construction process. Then these mixes are themselves mixed to create more complex or extended passages. The emphasis is on the placement and layering of carefully designed sound objects that have been created by previous mixes.
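At bottom, mixing at arbitrary start times is just offset addition of samples. A schematic sketch of that idea (not the CDP mixer itself; the real mixfile syntax is documented in CDP Files & Codes):

```python
def mix(entries, sr=44100):
    """entries: list of (samples, start_time_seconds, level).
    Sum every sound into one output buffer at its own start offset,
    so overlap is never a problem."""
    end = max(int(t * sr) + len(s) for s, t, _ in entries)
    out = [0.0] * end
    for samples, start, level in entries:
        offset = int(start * sr)
        for i, s in enumerate(samples):
            out[offset + i] += s * level
    return out

# Two tiny 'sounds', the second starting 2 samples in (sr=1 for clarity):
print(mix([([1.0, 1.0, 1.0], 0, 0.5), ([1.0, 1.0], 2, 1.0)], sr=1))
```

Each mix output can itself become an entry in a later mix, which is exactly the step-by-step construction of complex sound objects described above.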
The CDP 'Mix with mixfiles' process suffers from the lack of a graphic implementation. This is not as much of a disadvantage as it may seem, if the compositional focus is on 'sound objects' as described above. In this case, CDP mixing is in fact very straightforward, and you don't run into some of the problems that can occur in the track-based systems designed to optimise more horizontally arranged sounds. The CDP mix process involves the creation of a text 'mixfile', either directly with a text editor, or via Soundshaper or Sound Loom. These can be edited to 'tweak' your mix and saved as a way of documenting how you have achieved certain results.
Rajmil Fischman's AL occupies a welcome middle ground. A fully graphic program (PC only), it enables you to copy and move sounds in both horizontal and vertical space, i.e., without the constraint of having separate tracks. This program was written thanks to an AHRB grant, is sent out with CDP Systems, and is available as a free download from http://www.keele.ac.uk/depts/mu/staff/Al/Al_software.htm . Also see the AL-ERWIN information page on the CDP website.
CDP also has some rather interesting and advanced facilities that relate to the placement and overlap of sounds. Chief of these is the TEXTURE SET, about which more in Section 11. Other examples are the GRAIN programs, including BRASSAGE and Grainmill and the many different segmentation programs, such as EXTEND DRUNK (based on a Miller Puckette algorithm), BLUR SHUFFLE and DISTORT SHUFFLE. We can also include the Release 5 programs SFEDIT JOINDYN, SFEDIT JOINSEQ, EXTEND SEQUENCE and EXTEND SEQUENCE2. These enable you to create multi-event musical passages by re-ordering a set of numbers, each number relating to an input soundfile. All of these programs provide various ways to shape multi-event assemblies of sounds by specifying parameters in one program, rather than by assembling sounds one by one as in the standard mix procedure.
Finally, we should mention yet another approach to assembly. This is to build scores using an algorithmic scripting language, such as Richard Orton's Tabula Vigilans. Here we have a low-level musical programming language that can be used to design complex musical structures and/or operate as a real-time (MIDI) instrument. I personally regard this (and similar programs) as one of the most important 21st century tools for moving computer music into design areas that push the boundaries of the computer and of music into new areas. Although currently operating only with MIDI, it is possible to write a Csound score file with it, which is then used by Csound to create a musical passage using soundfiles.
Mixing with Soundshaper
Soundshaper has a dedicated Mix Page. Sounds are selected on the Main Page or on the Mix Page and are set up initially with dummy mix parameter values (corresponding to the CDP function SUBMIX DUMMY).
Soundshaper Mix Page, with completed mixfile
View Fullsize
- Select a cell on the main page, containing a soundfile. Go to Edit/Mix : Mixfiles : Edit/CreateMixfile. This takes you to the MIX page where the current cell's file is shown in the Soundfile List. To select others, click on the Select Soundfile button.
- Click on a file in the Soundfile List. Its properties are shown (length, number of channels etc.) at the top of the page and you can audition it with the Play button there.
- The sound's current mix parameters are shown in the Mix Parameters section. Adjust values for Start time (when it should start to play), Level (how loud it is) and Pan (where it is to be located in the stereo field - moving pans have to be applied to the sound before bringing it into the Mix Page). Note that Level and Pan have to be applied for each channel if it's a stereo file: select the channel and enter the values. (Be careful to switch channels before entering the other channel's values.)
- Click on the Add to List button. The line you have just defined is displayed in the Mix List panel below the Parameters section.
- To edit an item in the Mix List:
- Double-click on the soundfile name in the Mix List to display its parameters again in Mix Parameters. (Or click on the name, then the button Edit Params.) When you have finished editing, click the button Update Item. The highlighted item in the list is updated.
- Alternatively, for a simple edit, you can double-click on the item's parameters at the right of the Mix List; this opens up an editable text box.
- Or you can re-select a soundfile in the Soundfile List, then after editing its parameters highlight the appropriate item in the Mix List and click Update Item.
- Select and add more soundfiles, as appropriate. You can select a soundfile more than once. Check your rough sketch to review file overlaps, which will indicate which files may need lower amplitude levels.
- When everything is entered and adjusted, if you want to save the mix, optionally enter a name in the OUT Mixfile edit box and click on File : Save Mixfile. This step is not necessary unless you might want to access the mixfile at a later stage.
- Click on the MIX button (top right corner) and the mix is performed, creating a soundfile.
- If you do not need to see the mixfile list, particularly if you do not intend to use any of the infiles more than once in the mix, a quicker alternative procedure to the above is:
In this method, the Mix List is not shown and all of the files in the Soundfile List are mixed. If you change your mind, you can select Edit/Create Mixfile in the Mixfile Operations list and then perform steps 5, 6 and 8 as above.
- Instead of Step 1, select Edit/Mix : Mix : Mix on the Main Page. The current cell will be your first selected file. Now click on cells for other files, or select them via the file selector. These files are added to the ADD INPUT dialog; click OK in this box when finished. This takes you to the Mix Page, where your selected files are listed. Or you can select soundfiles on the Mix Page (click on the Select Soundfile button).
- Follow steps 2 and 3 above, re-selecting any sound in the Soundfile List as required, until you are satisfied with its parameter values. Click MIX to perform the mix.
- You may want to return to the mix after hearing it, in order to make adjustments.
- Double-click on the output cell (labelled 'MixEdit' or 'Mix'), or CTRL+ENTER if you prefer. If you do this right away, the Mix Page will appear as you left it. Edit the parameters as described above and click the MIX button.
- If the Mix Page has been used for a different mix since your last edit, you can still recall and edit the earlier mix if you have saved it (Step 7). Select File : Open Mixfile. Note that the sounds listed in the Soundfile List will be those of the later mix, so ignore these or clear the list.
Mixing with Sound Loom
The MIX procedure in Sound Loom is a little different, though also focused on creating the mixfile. It uses the CHOOSE FILES mechanism to select the soundfiles to mix, then quickly creates a template mixfile using the soundfiles you have chosen. You then edit the mixfile and run the mix.
- Step 1 therefore is to select the directory containing your project's files that you will be mixing. Appropriate files from this directory are GRABBED and moved to the Workspace. You will probably have done all this already as part of your work on the project.
- Next, clear everything in the left panel and activate CHOOSE FILES. Then select (on the Workspace) the files you are going to mix; they are all placed in the left panel: CHOSEN FILES.
- In PROCESS you now go to Mix > Create Mixfile > superimposed (or 'end to end' if you want the start time of each soundfile in the mix template file to begin where the previous sound ends). When you RUN this, a template mixfile is made: it contains all the soundfiles, start times = 0 (for 'superimposed'), the number of channels for each sound, and default values for Level and Pan. Remember to SAVE this file.
- Now you go back to Workspace, New Files, clear the sounds in the left panel and select (CHOOSE FILES) the new template mixfile from the Workspace.
- Go to PROCESS and select Mix with Mixfile, then select Edit Mix. The screen below shows the edit window.
Sound Loom Edit Mixfile Window
- Click on 'Edit Mix' and edit the template mixfile with the start times, levels and pan details you want. Remember that moving pans have to be part of the soundfile before it reaches the MIX stage.
- When you click on 'Edited Version', your named template mixfile is re-saved with your new edits.
- Go to RUN and the MIX is performed.
- Play and SAVE if OK. Otherwise, go back to 'Edit Mix' etc.
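To make the template mixfile created at the Create Mixfile step more concrete, here is a hypothetical 'superimposed' template for two mono files and one stereo file. The filenames are invented and the exact column layout should be checked against the mix pages of the CDP Reference Documentation; the assumption here is that each line gives the soundfile, its start time, its channel count, and then a level and pan per channel:

```
gong.wav   0.0  1  1.0  C
drone.wav  0.0  1  1.0  C
pad.wav    0.0  2  1.0  L  1.0  R
```

Editing the start times, levels and pans in lines like these is exactly what the Edit Mix stage is for.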
Return to 12-Step Index
This is a very powerful and important part of the CDP software. I have written an extensive Reference Manual and prepared an optional Texture Workshop CD-ROM. I recommend these additional materials because the possibilities of the TEXTURE Set of programs are many and varied and there are many details to master in order to make use of them. This section of our 'Getting Started' tutorial can only cover a few of the basics:
- Parameters
- Note Data File
- Timing Grid
- Fixed Harmonic Grid
- Changing Harmonic Grid
- Motifs
- Nodal Substructure & Motifs
- Play between Randomised & Defined
TEXTURE PARAMETERS
The first key to the TEXTURE Set is the parameter listing. We can summarise these with first a screen of a TEXTURE SIMPLE dialogue box filled in for use with Mode 5 (random), and then an explanation of each parameter.
Using the gong sound as the input and the above settings, we produce a randomised texture, txsnd1.wav. Yes, it's a bit mad, but it shows a multi-event texture being formed and the fact that the randomised pitch selection between minpitch and maxpitch will include microtonal transpositions.
A quick overview of the main TEXTURE parameters follows.
- outdur: the desired length of the outfile. This is a minimum length; the output is often longer because the program tends to finish patterns that it has started.
- note data file: a text file containing various components with which to shape the result.
- packing: the time-density of note events.
- scatter: a timing offset for the packing, to randomise or 'humanise' the results.
- tgrid: a quantisation mechanism; set to 0 when not used.
- soundfiles: the two fields here are usually both filled in with 1, meaning 1 soundfile input. If there are, for example, three input sounds, the first field will be '1' and the second '3'.
- loudness: MIDI 'velocity', range 1 to 127.
- duration: the length of each note event, each of which starts from the beginning of the infile. When the two fields (minimum and maximum) have different values, a duration is chosen at random from between them.
- pitch: these fields are the minimum and maximum pitch transpositions, given as MIDI Pitch Values relative to the reference pitch. If the reference pitch is 60 (Middle-C) and both pitch fields are 60, all note events will be on the same pitch. If the minimum is 60 and the maximum is 67, the pitch transposition for each note event will be randomly chosen somewhere in between, including microtonal values. Time-varying breakpoint files can be used as well, which is very important.
- attenuation: gain reduction. Use more if there will be considerable overlap of a fairly loud sound.
- position: the central position in the horizontal stereo field for spread.
- spread: the amount of spread around position; 1 covers the full stereo field.
- use whole sound: disregard the duration values and use the whole of the input soundfile for each note event.
- mult: where present, mult controls tempo: 1 = same tempo, 2 = twice as fast, 0.5 = twice as slow. There are also Min and Max options (a value is randomly chosen in between), and time-varying breakpoint files can be used.
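Since the pitch parameters above are (possibly fractional) semitone transpositions relative to the reference pitch, it can help to see the arithmetic behind them. A minimal sketch (a helper of my own, not part of CDP) converting a transposition in semitones to the frequency (or playback-speed) ratio it implies:

```python
def semitones_to_ratio(semitones: float) -> float:
    """Frequency (or playback-speed) ratio for a transposition in semitones."""
    return 2.0 ** (semitones / 12.0)

# A note event at MPV 72 against a reference pitch of 60 is a
# +12-semitone transposition: one octave up, i.e. a ratio of 2.
print(semitones_to_ratio(72 - 60))   # 2.0
print(semitones_to_ratio(48 - 60))   # 0.5 (an octave down)
print(semitones_to_ratio(7))         # about 1.498 (a Perfect 5th up)
```

Fractional inputs (e.g. 0.5 for a quarter-tone) give the microtonal ratios mentioned above.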
NOTE DATA FILE
The second key to the TEXTURE Set of programs is the note data file (ndf) and its various components. This is a text file that tells the program what you want: not all of your settings, but all of the more complex features.
The order in which the note data file components occur in the file varies with the different programs. This formatting information is summarised in the TEXTURE Note Data File Chart. I recommend that you have a printout laminated for permanent use. (I can't use TEXTURE without it!)
The following is a very simple note data file comprising the pitch reference and a two line harmonic set (used in Mode 3 of TEXTURE SIMPLE):
60              MIDI reference pitch
#2              number of lines in harmonic grid
0 1 60 0 0      first line of grid, Middle-C
0 1 67 0 0      second line of grid, the G above
- The first line in all note data files contains a MIDI Pitch Value as a reference pitch for the (possible) transpositions of each note event in the texture. If you have more than one input soundfile, you need a separate value for each soundfile (e.g., 60 60).
- Whatever value you give the reference pitch is taken by the software to mean no transposition (i.e., it is taken to be the original pitch level of the sound). It is often impossible to determine a precise pitch level for a complex, noisy sound, which is why this is called a 'reference pitch'. Pitch transpositions move the pitch level up or down the specified number of (possibly fractional) semitones from this reference pitch. Thus, if you have the sound of wind and give it an MPV of 60, telling the program to transpose it to 72 will move the sound of the wind up an octave (12 semitones), 48 will move it down an octave, etc.
- When you use Mode 5 ('random'), you specify minpitch and maxpitch values in the dialogue box (constant, i.e., just one value, or a time-varying breakpoint file). The program selects a transposition for each note event that lies somewhere between these limits, and note that this includes microtonal variants. The latter can be useful when you want to create washes of sonic material that do not have noticeably discrete pitches.
- When you want all the note events to have the same pitch, give the same MIDI Pitch Value to the Min and Max Pitch parameters.
TIMING GRID
Rhythms can be created with a timing grid, which is another component of the note data file. Here we are using it independently with TEXTURE TIMED, but timing instructions can also be used within motifs (TEXTURE MOTIFS), together with motifs (TEXTURE TMOTIFS), or provided in the form of a nodal substructure (TEXTURE DECORATED and TEXTURE ORNATE program sets) see below.
The format of the timing grid is shown in the following example:
#5
0.00 1 0 0 0    dotted quaver
0.75 1 0 0 0    semi-quaver
1.00 1 0 0 0    dotted crotchet
2.50 1 0 0 0    quaver
3.00 1 0 0 0    crotchet, if skiptime is 1

You can see that the duration of each note event is determined by the next start time (left column). To work this out, you need to think of rhythms numerically, most easily by letting 1 = 1 sec. (i.e., MM crotchet = 60); the tempo can then be set with the mult parameter(s). Thus a crotchet (1/4-note) = 1, a quaver (1/8-note) = 0.5, etc. Start at 0 and add the duration-value of each note to get the next start time. The note events will overlap or leave gaps depending on the length of the infile or the mindur/maxdur settings.
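The 'add the duration to get the next start time' rule is just a running sum. A small sketch of my own (assuming 1.0 = one crotchet at MM 60, as in the example):

```python
def durations_to_start_times(durations):
    """Turn a list of note durations (in beats/seconds) into timing-grid start times."""
    starts, t = [], 0.0
    for d in durations:
        starts.append(t)   # each note starts where the running total has reached
        t += d             # then advance by that note's duration
    return starts

# dotted quaver, semiquaver, dotted crotchet, quaver, crotchet
print(durations_to_start_times([0.75, 0.25, 1.5, 0.5, 1.0]))
# [0.0, 0.75, 1.0, 2.5, 3.0] - the left-hand column of the grid
```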
When used independently, the timing grid defines when pitches that are randomised between minpitch or maxpitch will occur. In other words, the rhythmic 'riff' repeats, but the pitches are randomised. If a harmonic grid is used, then the pitches are randomly selected from the grid.
The timing grid above repeats a rhythm in a narrow pitch range: minpitch = 60 and maxpitch = 64 (span of a Major 3rd).
Skiptime is an important parameter in this program. The last note event in the timing grid begins at 3.00. Skiptime defines the amount of time between the start of this last note event and the repeat of the timing grid, i.e., when it starts over with the note event at time 0.00. In this case we put a 1, making it 1 second before the grid repeats. If the tempo is made faster or slower, skiptime has to be adjusted accordingly. The maths here is: new_skiptime = old_skiptime * (60 ÷ new_tempo). E.g., at a new_tempo of 120, the 1 sec. skiptime above becomes 0.5 sec.: new_skiptime = 1 * (60 ÷ 120) = 0.5.
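The skiptime adjustment can be sketched as a one-line helper (my own, not a CDP function; it assumes the grid was written at MM 60, as above):

```python
def scale_skiptime(old_skiptime: float, new_tempo: float, old_tempo: float = 60.0) -> float:
    """Rescale a skiptime value when the timing grid's tempo changes."""
    return old_skiptime * (old_tempo / new_tempo)

print(scale_skiptime(1.0, 120))  # 0.5 - the 1 sec. skiptime at MM 120
print(scale_skiptime(1.0, 30))   # 2.0 - a slower tempo needs a longer skiptime
```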
FIXED HARMONIC GRID
Another common component of a note data file is a fixed harmonic grid. This is a great feature, because it enables you to link your texture to other harmonies used in your composition, whether clearly or subtly, depending on the nature of the sound, the rate of packing, the length of soundfile used for each note event, and similar considerations.
60              MIDI reference pitch
#2              number of lines in harmonic grid
0 1 60 0 0      first line of grid, Middle-C
0 1 67 0 0      second line of grid, the G above
- Two terms are used to describe these harmonic grids: 'Set' means that only the pitch levels specified in your grid are used. 'Field' means that the pitches may be taken from any octave. 'Fields' can be useful when you want an open, spread out texture.
- Through the use of this harmonic grid, the transpositions for all the note events of the texture 'snap' to the pitch levels specified; they are restricted ('constrained') to these pitch levels only. Each note event will (randomly) select a pitch from this grid.
- If the packing is tight (e.g., 0.05 sec.), the note events will come thick and fast. Thus, if the grid contains just two MPVs spaced 7 semitones apart, e.g., 60 and 67, if the packing is fast and the source sound is reasonably clearly pitched, you will hear a shimmering Perfect 5th because of the rapid multiple note events created.
- Each harmonic grid is preceded by an indication of how many lines it contains. E.g., #2 means that it contains two lines. If there is a mismatch, you will get an error message.
- The format for this grid is '0 1 MPV 0 0' for each line. The first column is the start time, and is always 0 if you want any of the pitches to be selected at any time. The '1' in the second column is always '1', MPV means MIDI Pitch Value, and the last two columns are always 0 (these fields are used for motifs).
- I used the phrase 'all note events' because it is the usual case that the packing rate will be faster than the time changes in the harmonic grid. This may give the 'shimmering' effect, but also may create chords: if you have several lines starting at the same time, and then several more starting at a different time, and a fast packing rate, then you will hear chords comprising all the pitches with the same start time. This is one of the uses of a 'changing harmonic grid', described in the next section.
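Because every grid line has the fixed shape '0 1 MPV 0 0' and the grid must be preceded by a matching '#count' line, such files are easy to generate. A hypothetical helper of my own (not a CDP tool; it follows the file layout described above):

```python
def harmonic_grid(mpvs, reference_pitch=60):
    """Build the text of a fixed-harmonic-grid note data file from a list of MPVs."""
    lines = [str(reference_pitch)]            # first line: MIDI reference pitch
    lines.append(f"#{len(mpvs)}")             # line count must match the grid
    lines += [f"0 1 {mpv} 0 0" for mpv in mpvs]
    return "\n".join(lines)

print(harmonic_grid([60, 67]))
# 60
# #2
# 0 1 60 0 0
# 0 1 67 0 0
```

Generating the '#count' line from the list itself avoids the mismatch error mentioned above.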
CHANGING HARMONIC GRID
A variant on the harmonic grid is a 'changing harmonic grid'. All of the texture programs contain this facility.
60              MIDI reference pitch
#3              3 lines in the harmonic grid
0 1 60 0 0      time 0, note events are on Middle-C
5 1 65 0 0      time 5, note events are on the F above
10 1 67 0 0     time 10, note events are on the G above
- Modes 2 ('Field') and 4 ('Set') are used when you want to specify the time at which a particular pitch level is to come into play. Thus if, as here, line 1 were '0 1 60 0 0' and line 2 were '5 1 65 0 0', all note events would be on Middle-C for 5 seconds (the count starts at 0), and from the 5th second all note events would be on the F above Middle-C. Smaller time changes can result in melodic shapes, or even (with tiny time changes) strumming effects. Packing and the length of infile employed make a big difference to the result. You can notionally create clear melodic shapes with this program, but this actually works better via the nodal substructure used in the DECORATED and ORNATE sets (see below).
- Our example will use a fairly slow packing (0.25) with a relatively large scatter offset (0.1). This will give us relaxed but irregular repetitions of the infile that move upwards on the pitches specified when the next time point occurs. Again, mindur and maxdur and 'use whole sound' control the amount of infile used for each note event, and therefore the amount of note event overlap.
MOTIFS
The MOTIFS and TMOTIFS Sets allow you to use motifs. The note data file below shows a Mode 5 situation: just the reference pitch and a motif definition. Note that the two columns on the right are used: amplitude (MIDI range) and duration (the length of infile, from its beginning, to use; legato overlaps are achieved by making the duration a little longer than the time to the next event). Because all the fields are used, I refer to these motifs as 'fully defined'.
60
#5
0.0 1 60 64 1.0
0.5 1 65 74 1.0
1.0 1 67 84 2.0
2.5 1 63 74 1.0
3.0 1 65 78 2.5
- In TEXTURE MOTIFS, this motif is started on a pitch selected at random between the minpitch and maxpitch parameters. You can get some very complex pitch overlapping when the event density (packing) is less than the full length of your motif(s) (yes, you can have several!). Alternatively, you can make the packing equal to or greater than the length of the motif.
- Our first example repeats the motif on the same pitch because minpitch and maxpitch are both set to the same pitch (60).
- With TEXTURE MOTIFSIN, a harmonic grid is also used, so the motif will repeat on a pitch selected (at random) from this grid. A bit of compositional planning is needed to get a sensible relationship between harmonic grid and the motif(s) that will use it. The possibilities for play are endless.
- The second example uses MOTIFSIN, adding a simple harmonic grid to the note data file:
60
#2
0 1 60 0 0
0 1 64 0 0

Thus the program will choose (randomly) to start each note event on one or the other of these pitches. The pitch parameters are adjusted accordingly: 60 and 64, because the span in the parameters must match the span in the note data file; if it is smaller, pitches in the harmonic grid will be omitted. We also use different values for the MULT parameters here, to give a randomly different tempo for each note event: multlo = 1 and multhi = 1.5. The result phases the motif, which is suggestive of many musical possibilities.
- Notice that with motifs, the timing is part of the definition of the motif itself. The amplitude and duration fields are also set.
- A timing grid can also be used together with fully defined motifs, thus defining exactly when the motifs will come in: TEXTURE TMOTIFS and, with a harmonic field or set, TEXTURE TMOTIFSIN.
LINES: NODAL SUBSTRUCTURE & MOTIFS
Besides the above possibilities, there can also be fully defined motifs that are placed on linear shapes comprising timed pitch nodes. I like to call these timed pitch nodes 'nodal substructures' in order to emphasise that other shapes are placed on them. They are like the contour 'backbone' of melodic forms, and the timing dimension gives considerable control over how the motifs are spread out or overlap. This is illustrated in the examples below, which pull together a number of features.
The DECORATED and ORNATE Sets allow you to set a nodal substructure, the first on its own and the second with the addition of fully defined motifs, in this case called 'ornaments'. The note data file below shows a nodal substructure followed by a motif (ornament) and is for use with POSTORNATE. The 'POST' in the name means that the ornaments follow after (i.e., start on) the times specified in the nodal substructure.
The first example uses TEXTURE POSTORNATE and repeats the motif, starting each on a different pitch, i.e., on a different node of the substructure. There is no overlap because skiptime (5 sec.) is longer than the motif. This is its note data file, ndfnstune1.txt:
60              reference pitch
#5              nodal substructure
0 1 60 0 0
5 1 65 0 0
10 1 67 0 0
15 1 63 0 0
20 1 65 0 0
#6              motif ('ornament')
0.0 1 60 64 1.0
0.5 1 65 74 1.0
1.0 1 67 84 2.0
2.5 1 63 74 1.0
2.75 1 62 68 0.7
3.0 1 65 78 2.5

The outdur is set to 25 seconds, to give plenty of time for all this to work itself out; the output soundfile is about 35 seconds long. Note that there is no pitch parameter in POSTORNATE, as all the pitches are specified in the note data file.

The second example creates overlap in two ways. First, the time between nodes (2.5 seconds) is less than the length of the motif; this value is also set in skiptime so that the repeat of the nodal substructure stays in sync. Second, the pitch at 2.5 duplicates the one at 0.0 (60), and the one at 7.5 duplicates the one at 5.0 (67). The overlap creates harmonies inherent in the motif itself when the second one starts later but on the same pitch. Ndfnstune2.txt is as follows:
60              Original tuned to Middle-C
#5              5 lines in 'line' (nodal substructure)
0.0 1 60 0 0    1st node is Middle-C
2.5 1 60 0 0    motif repeats on Middle-C, with overlap
5.0 1 67 0 0    node moves to G
7.5 1 67 0 0    motif repeats on G
10.0 1 65 0 0   node moves down to F
#5              5 lines in the motif; amp & dur fields used
0.0 1 60 64 1.0     motif starts on Middle-C
0.5 1 65 74 1.0     and moves up to F
1.0 1 67 84 2.0     dur longer to last till next note
2.5 1 63 74 1.0     motif continues on Eb
2.75 1 63 74 1.0    an 'escape tone' embellishment is added
3.0 1 65 78 2.5     final pitch is F

The final example illustrates an advancing overlap: the time between repeats gets smaller.
I have tried to make it easy to hear what is happening with these examples, but the principle involved in handling the nodal substructure is very powerful, given care over designing motifs to achieve specific musical results. More than one motif can be used (the program will select randomly among them when initiating the motifs) and the MULT fields can be used for varying tempi, so there is much to explore. The note data file for this example is:
60              reference pitch
#5              nodal substructure
0.0 1 60 0 0
4.0 1 64 0 0
6.0 1 67 0 0
7.0 1 60 0 0
7.5 1 64 0 0
#6              motif ('ornament')
0.0 1 60 64 1.0
0.5 1 65 74 1.0
1.0 1 67 84 2.0
2.5 1 63 74 1.0
2.75 1 62 68 0.7
3.0 1 65 78 2.5

In the DECORATED group of programs (PRE- and POST- as well), the nodal substructure is defined, but the motif itself is not: it is created on the fly according to your parameter settings, making for a more flexible result.
CONCLUSION: PLAY BETWEEN RANDOMISED & DEFINED
The third key to the TEXTURE Set of programs is to understand how they can be used to play with both randomised and fully defined musical features. Note that 'randomised' selections can be constrained to a harmonic field in various ways in Modes 1-4. We have seen all of these features above, but this summarises the play between randomised and defined.
- decorations on a nodal substructure: note groupings with randomised features that are attached to a defined set of pitches (the 'nodal substructure').
- rhythms: these define a rhythmic grid onto which randomly selected notes are 'snapped'.
- ornaments on a nodal substructure: here, fully defined note groupings (pitch and rhythm defined) are placed on a defined set of pitches (the 'nodal substructure'), but the timings of the nodes enable you to control overlaps with great precision, create canons, etc.
- Packing sets the density of the note events (their temporal location) and can change over time. It also has an offset that randomises these occurrences (the scatter parameter), a 'humanise' function.
- As with many CDP parameters, most TEXTURE parameters can be constants, random selections between maximum and minimum values, or random selections between time-varying contour limits.
- Note that to repeat a note or ornament etc. on the same pitch, make the maximum and minimum pitch values the same. This, for example, makes it possible to create 'canons' that start on the same pitch.
- One last example for TEXTURE illustrates the play between nodal substructure and motifs. It comes from the Texture Workshop CD-ROM (No. 20).
Return to 12-Step Index
There are so many possibilities for the precise sculpting of sound in the CDP software that it can be an efficient use of time to take some trouble over documenting your work as you go, especially sounds that really please you. The history or log files help with this. Here are a few suggestions for your consideration.
- Maintain a separate overall folder for a project, possibly with subfolders for source sounds & modifications to the source material, building chordal material, assembly with texture and mixing, or some other processing category used extensively in that project.
- Keep a hard-cover notebook for jotting ideas, roughing out the breakpoint files, mixes, harmonies, and parameters of effective transformations. This gives you a permanent record of your work, sources used, effective transformations achieved and precisely how they were done, names of files etc. So much is achieved in one day on the computer, that it is hard to remember very much even from the previous day's session! Coloured flags can be used to index important pages.
- Code the names of soundfiles and related breakpoint and text files with the same initial letters. E.g., txwater.wav, with txwaterndf.txt and txwaterpk.brk. Then you can easily see which files went into the making of a particular sound, and find them again when you want to edit them.
- Remember to SAVE a session's work with Soundshaper's History function ( .hst files). Sound Loom history is saved automatically.
Return to 12-Step Index
The above survey by no means exhausts the functionality of the CDP software. For example, we haven't even touched on granular manipulations or a host of the spectral functions. The aim has been to lay out the basic components of the system and how to use them, opening the way to more thorough creative explorations by yourselves. It is absolutely fascinating how different the objectives and results can be. My parting advice is to accept your individuality, the uniqueness of your life history and perceptions, trust yourself and, in the words of the sculptor Paolozzi, 'Follow your preferences'.
Return to 12-Step Index