Digital Recording, Mixing and Mastering Volume 4


    Digital Recording

ADATs and DA88s have now virtually killed off analogue reel-to-reel machines, but what are the real advantages?

Perhaps the most surprising thing about digital multitrack recorders is that they were so long in coming (the affordable ones, anyway). By the time Alesis released their first ADAT machine in 1992, stereo DAT machines had already been around for four years, with the original R-DAT format actually proposed back in 1983.

But the problems which beset Alesis in the development of their 8-track digital recorder, delaying its release by some 18 months after an already lengthy R&D period, left no-one in any doubt as to the technical difficulties which had to be overcome.

    Format-itus

One of the major problems was the choice of recording medium. Like Tascam, whose 8-track DA88 machine was developed around the same time as ADAT, Alesis faced a choice between using an existing tape format or developing an entirely new one. Alesis opted for the tried and trusted technology of S-VHS tape cassettes, already widely available through the domestic VCR market. Perhaps because of this, ADAT got a head start of around six months over its rival, the DA88, though the two were broadly similar in terms of audio performance.

Should you have been considering a move into digital multitrack recording at that time, your choice was by no means limited to Alesis and Tascam machines, or even to a digital tape format. By the early 90s, direct-to-disk recording systems were already a realisable goal for anyone with a PC or Macintosh and a large enough hard drive. Digital sound cards were available for both machines (and falling dramatically in price as far as the PC was concerned). Furthermore, with a large-screen graphical interface at your disposal, editing on a direct-to-disk system promised to be far more intuitive.

Not only that, but being a 'non-linear' system (unlike tape), you also had the advantages of random access: being able to move instantly to any point in a recording for playback or editing. Although ADAT and DA88 tapes were in cartridge form, you experienced exactly the same delays as reel tape when it came to getting from one position in a recording to another.

The benefits...

So where does the attraction for machines like ADAT and the DA88 lie, given such stiff competition? Well, using a digital multitrack tape machine actually provides you with 8 'physical' tracks, each with its own output which can be independently mixed or processed.

By contrast, direct-to-disk computer-based systems (with the exception of more advanced hardware/software packages like Pro Tools) offer only two outputs - with on-screen tracks having to be mixed down to a stereo output signal.

From the point of view of the established commercial or home studio, there is also the over-riding advantage of being able to simply unplug an existing analogue machine and stand a digital multi-tracker in its place. Chances are, all the input and output levels will match, and the machines feature almost exactly the same transport controls, recording level meters and monitoring systems. A studio taking delivery of an ADAT or DA88 in the morning could be up and running the same afternoon.

Better still, there's no steep learning curve to get past, and that counts for a lot. As many manufacturers have found to their cost, you cannot simply ignore public familiarity with certain technology and disregard traditional perceptions of the way things work. You only have to look at the graphic imagery used in computer user interfaces for evidence of that. This is where digital multi-track tape scores heavily over rival hard disk systems. It offers recording in a form people are familiar with through reel-to-reel and cassette machines. No backing-up problems, no system crashes - and no weighty instruction manuals to wade through.


But the advances over analogue tape systems are even more pronounced. Digital multitrackers offer significantly improved recording quality, lower noise, a more convenient tape format and much more accurate editing and control facilities. In fact, so marked are the improvements in these areas, many 16-track analogue studios have opted to change to an 8-track digital format, making use of the ability to bounce down tracks (with no loss in signal quality) to compensate for having fewer tracks. In any case, one of the features of both the ADAT and the DA88 is that they can be run in pairs (or even greater combinations) to achieve the required number of tracks.

    The choice is yours...

If this sounds like the sort of technology you'd be comfortable with as the central component in your recording setup, you'll be happy to learn your choice now is a little broader than the two original machines. Alesis unveiled its successor to ADAT, the ADAT XT, a little over a year ago, and joining them as development partners, Fostex have their own ADAT-format machine, the RD8.

Both offer significant improvements over the original ADAT, many of which are now coming onto the secondhand market, which provides you with an additional (somewhat cheaper) route into digital multi-tracking.

As for Tascam, they have again adopted a different approach to Alesis. Instead of bringing out a successor to the original DA88, they extended the range to include the less-expensive DA38, a machine sharing the same basic design, but with additional features intended to appeal to the recording musician and domestic studio user.


    What The Hell Is: Mixing?

You might think mixing should be easy, but you'd be wrong...

Almost every book I've read on the subject of mixing has likened it to baking a cake. You take a bunch of ingredients, mix them together in the right amounts, put them into 'the oven' and watch them emerge as something special: a whole which is greater than the sum of its parts. It's a fair enough analogy, I suppose, but personally I've always thought mixing is more like sex: everyone seems to do it, everyone knows why they do it, but everyone has to learn how to do it without being shown. Unfortunately there's no mixing equivalent of the 'knowledge' you pick up behind the bike sheds at school. You simply have to jump in (so to speak) and be prepared to get it wrong.

And people do get it wrong... spectacularly wrong in some cases. I have recordings of the Beatles produced by George Martin which place the entire band in the left-hand speaker, the vocals in the centre and a solitary tambourine on the right. This was obviously a problem of mixing associated with stereo recording (something Martin and the engineers at Abbey Road didn't quite get the hang of until the late 60s) but still very disconcerting to listen to.

And you don't have to go that far back to hear the complete mess made of mixing music on TV. It wasn't uncommon to hear the lead vocalist, guitarist and snare drum in the mix, but no trace of the other instruments. I've also heard songs rendered unrecognisable by mixing harmony vocals much higher than the lead vocals, and gaps in the music where you could see the drummer doing something but you couldn't hear it. Admittedly, these are all pretty extreme examples, but they do serve to illustrate how critical the job of mixing is.

In your home studio, the chances are you'll be mixing your own music and you'll know exactly what instruments are used and how you want them to sound. There should be no question of anything being 'lost in the mix' or not being given its rightful position. On the other hand, such familiarity can be a problem when it comes to being objective. After a couple of weeks listening to the same track, objectivity can leak away faster than best bitter from a cracked pint glass. This is precisely why so many bands and artists hand over responsibility for the mixing to someone else, often without them even being present. It's also why remixing a piece of music can produce such startlingly different results.

The history of mixing

The whole concept of mixing has changed dramatically over the years, having evolved through three distinct stages. I mention them here not as a history lesson, but because each still offers a valid way of recording and mixing. Before the arrival of multitrack, the onus was on musicians to get their performances right so they could be recorded in a single take. It was the job of the mixing engineer to achieve a good balance of instruments 'at source' - almost as if he were mixing live. This relied to a considerable extent on his musical judgement (or the lack thereof) and how he thought the band should sound. Admittedly, that meant that no one heard a bass drum on a pop record until around the mid-60s. But if you're familiar with these early recordings, you'll know this was a perfectly acceptable way of working, and certainly proved itself as a means of capturing a live feel which has rarely been bettered.

With the advent of multitrack recording machines, it became possible to defer the mixing process until after each instrument had been successfully captured on tape. As a result, musicians could get very precious about their performances ("Yes I know we've done 28 takes, but I think I'll get it this time") and be in the pub before the mixing started. Then, the introduction of stereo recording and progressive increases in the number of tape tracks saw the mixing process grow in complexity, yet it remained essentially the same for the best part of 25 years. It's still a valid and flexible way of working, but it's easy to get carried away with the idea that more tracks always make life easier.

In general, the more tracks you have, the more you'll fill and the more you'll feel obliged to pour their contents into the mix. On the other hand, by separating the mixing process from the composition and recording stages, you allow time for your objectivity to return and, providing you keep the original multitrack recordings, it's also much easier to remix a track at a later date.

The challenge to this way of working came with the introduction of the cassette multitracker (the Portastudio) in the mid-80s and the development of MIDI and computer-based sequencers. Not only was it possible to record and mix music to an acceptable standard at home, but the lines between writing and arranging, recording and mixing were beginning to blur. Mixing was becoming much more a part of the creative process, rather than simply an exercise in balancing levels and adding EQ.

There was also a change to the concept of a piece of music having a fixed, definitive mix. These days, with radio and club edits among the various mixes of a song included on a CD single, the potential of remixing has been fully explored.

And with it has emerged a whole new generation of remix artists who can reach inside a track and turn it inside out. It's no longer enough to express a liking for a particular track; you also have to name a favourite mix.

The universal's not here... yet

Because of the changes that have overtaken music production over the past 15 years, it's become impossible to set down prescribed methods of mixing which are universally applicable. With sophisticated sequencers, direct-to-disk systems and digital recorders now competing alongside portastudios and multitrack tape machines, musicians and remix artists have the freedom to choose their preferred method of working. Mixing is no longer a process which has to be carried out after recording has been completed. And despite the flood of cheap, high-quality designs which have come on to the market, it doesn't even need to be done on a mixing desk. Software mixing is now a reality for owners of inexpensive PCs, and there's even the prospect of digital mixing for the slightly better-heeled.

Even so, the general principles of mixing hold good. Before we look at these principles, a word about the basic requirements. Firstly, your ears. Keep them fresh (and clean, of course). Never ever attempt to mix a piece of music at the end of a long listening session. Take a break of at least an hour, and preferably overnight. The human ear is incredibly good at identifying problems with certain sounds, but not if it's had time to get used to them.

Secondly, your monitoring system. It goes without saying that you should buy the best equipment you can afford. Without a reasonable system you'll have no idea how accurate an image of the music you're getting. But even if you do splash out on an amp and speakers, how do you know you're getting a true picture? The answer lies in listening to your mixes on as many other systems as possible, so that you know, for example, if you're tending to mix a little bass-heavy or aren't adding sufficient top end. Finally, don't think about mixing through headphones. Irrespective of what it may say on the box, headphones do not reproduce music in stereo. They reproduce it 'binaurally', which is quite different, and makes it all but impossible to set up an accurate stereo mix.

Feel your way

You can take any approach to mixing you feel is appropriate to your music, from the 'wall of sound' (favoured by people as disparate as Phil Spector and hardcore guitar bands), to a cleaner, more considered approach where space is created around each instrument in terms of both frequency and time. The latter approach is undoubtedly the more time consuming. You need a good ear to determine the area of the frequency spectrum in which each sound predominates and to prevent too much overlap. But that's what professional studio engineers and producers are able to do, and the results usually speak for themselves.

The most basic function of mixing - the balancing of levels between individual instruments (or tracks) - is not something anyone can advise you about. You know how you want your music to sound and the level controls are in your hands. But do bear in mind the likely destination for a particular mix. There's no mystery here. The primary requisite for the dance floor is a rhythm track which hits the punters in the solar plexus. But apply the same bottom end to a song destined for someone's car stereo, and it'll cause major problems.

Bass needs to be tailored quite specifically to the needs of a particular track. Using EQ, it's possible to strip away low frequencies to quite a high level before the ear will tell you anything is missing (though this is where having an accurate monitoring system is so important). Very low frequencies are often not audible but will soak up a high proportion of a speaker's available energy. Filtering them out can actually increase the perceived volume of the audible bass and will certainly reduce distortion at high sound pressure levels. As effective as EQ is in such applications, it can be something of a mixed blessing in the wrong hands. Use it to correct minor problems with individual sounds and to create space round certain instruments by filtering out unwanted frequencies, but don't rely on it as a universal panacea.

Obviously, much will depend on the versatility of the controls; sweep and parametric EQ is much more effective at homing in on problem areas of the frequency spectrum. But they can just as easily be responsible for raising the profile of certain sounds till they just don't fit in any more. There's no clear dividing line between the two, except to say that the ear is much more forgiving of frequencies which aren't there than those that are. So wherever possible, try cutting the frequencies you don't want, rather than boosting those you do.
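If you mix in software, a rough Python sketch of the 'cut the low end rather than boost' idea might look like this; the 60Hz cutoff is just an illustrative starting point, not a figure from the article:

```python
from scipy.signal import butter, lfilter

def strip_low_end(x, sr=44100, cutoff_hz=60, order=2):
    # High-pass the track so inaudible sub-bass stops soaking up speaker energy.
    # cutoff_hz is an assumed ballpark figure - tune it by ear on decent monitors.
    b, a = butter(order, cutoff_hz, btype='highpass', fs=sr)
    return lfilter(b, a, x)

# usage: cleaned = strip_low_end(bass_track)   # bass_track: mono float array in [-1, 1]
```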

    Wet, wet, what?


One of the areas of controversy which has divided musicians and producers for years is whether to record tracks 'dry' or 'wet'. No, it's nothing to do with towelling yourself off after you get out of the bath, it's down to whether you add effects such as reverb and delay before you record them, or whether you leave them dry and add your effects during the mixing process.

There are pros and cons to either approach which need to be carefully considered. Record your track with effects and they're impossible to remove subsequently. If, at the mixing stage, you decide you have too much reverb on the vocals, you'll have to live with it, or re-record the performance. On the other hand, you may only have a single effects processor and want to use this for another effect on mixdown. So unless you do without the vocal reverb, you have no choice but to record with it. Vocals need reverb like England needs Michael Owen, but overdo it and it's dead easy to lose the voice in a sea of mush.

Reverb often has the effect of pushing vocals back in a mix. Great for preventing them sounding like they're sitting on top of it (as they often can when recorded dry), but not so good if it's masking an otherwise excellent performance. You can get round this by introducing a pre-delay to the reverb. This can be set up on most effects processors and can be applied to many instruments, but is particularly useful for creating space around a vocal or bringing it forward while giving it an 'aura' of reverb. You'll need to experiment with the pre-delay setting, but around 30-50ms should do. The tendency of reverb to clutter up a mix is something you need to listen for very carefully.
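For the digitally inclined, here's a minimal sketch of what pre-delay does, assuming you already have the dry vocal and a 100%-wet reverb return as separate arrays; the 40ms figure simply sits in the 30-50ms range suggested above:

```python
import numpy as np

def mix_with_predelay(dry, wet, sr=44100, predelay_ms=40):
    # Hold the reverb return back by a few tens of milliseconds so the dry vocal
    # speaks first and sits in front of its own 'aura' of reverb.
    offset = int(sr * predelay_ms / 1000)
    out = np.zeros(max(len(dry), len(wet) + offset))
    out[:len(dry)] += dry
    out[offset:offset + len(wet)] += wet
    return out
```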

And it's vitally important that you choose a program with the right reverb time for each track. 'Hall' programs sound great in isolation but can clog up the music quicker than the mud at Glastonbury. Short reverbs are great for creating interesting room ambiences and don't take up as much space in the mix, but can sound unnatural. This is one argument for not adding reverb until mixdown.

When all your instruments are 'in place' you can properly assess the type and quantity of reverb you'll need. If this isn't feasible (perhaps you only have one effects processor), try to keep reverb to the minimum needed to achieve the desired effect and limit reverb times. Long reverbs often don't have time to subside before being retriggered and can accumulate in your mix like Glastonbury mud (yes, I know I've said it already, but you should have seen it).

Use pre-delays if they're available and don't reject the use of gated programs. The overuse of gating effects on drum sounds in the late 80s may have contributed to their current unpopularity, but they can be extremely useful in chopping off unnecessary reverb tails and creating space. Another trick is to limit the frequency response of reverb using either your mixer's controls, or your processor's built-in EQ (if it has one). This is best done by monitoring return signals from your reverb unit and cutting any unwanted frequencies or limiting those which appear to be obscuring the sound.

Panning for gold

The art of panning instruments and sounds to create a convincing stereo image is one of the most important in mixing, yet is frequently misunderstood. So often, you hear demo tapes where the instrument placing appears to have been carried out quite arbitrarily. It's like sharing sweets: one for this side, one for that side, and one in the middle for luck. Panning is an essential part of mixing; a means of achieving balance in your music as well as creating the transparency of a stereo image that we all take for granted in commercial recordings, but which can be difficult to reproduce.

Though I'm loath to talk about what usually happens in a mix (if we all did what 'usually happens', we'd still be playing whistles and banging hollow logs), there are a few basic ground rules which you really can't get away from. The first is that the dominant, low-frequency instruments invariably sound better placed at or around the centre of the mix. I'm talking here about the bass drum, the bass guitar or synth and any deep percussive instruments you may be using. Pan them too far left or right and your music will sound off-centre. Fine, if that's what you're aiming at, but there are much better ways of getting creative with your pan controls.

One of the best is to set up some interesting rhythmic interplay using your different percussion sounds. Obviously, if you're using a sample loop for the drum track this may not be possible, but you could always augment it with additional percussion (such as cabasa or claves) and pan these to the left and right. Alternatively, try setting up a delay on one of your instruments and panning the dry and delayed signals to opposite sides of the mix.

Lead vocals are also placed at the centre of the mix in most recordings, though this has much to do with where you'd find the singer at a live performance. There is certainly nothing to prevent you experimenting with the positioning of the vocals, particularly where you have backing vocals as well, which can be placed in a similar position on the opposite side to the lead vocals, to balance things out.

But again, hard panning left or right of any vocal parts can be difficult to live with. I should also remind you that pan controls are not static, and there's nothing to prevent you from panning instruments left and right during a recording. It's easily overdone, but in moderation it can provide a real sense of movement (quite literally) within a mix. A more subtle alternative would be to use a stereo chorus program on an effects unit which features auto-panning. This leaves the dry signal in place, but shifts the chorusing between the left and right speakers. And talking of effects brings us back to reverb, which can be used to create a convincing stereo image from any mono source.

By panning outputs left and right, you can use reverb to produce a much broader, more expansive sound, even at short reverb times. On the other hand, reverb may be upsetting your stereo imaging by changing the apparent location of a specific instrument. If this does occur, try panning the reverb to exactly the same point in the stereo field as the dry signal, preferably sticking to a mono effect.

Instant mix fixes

To round things off, how about a couple of ways to provide an instant fix for your mix? If you've already mixed down to stereo and found the result disappointing, try sticking the entire mix through an aural enhancer (of the kind we looked at a couple of issues ago). Though not always successful in treating a complete mix, they can alter the overall sound in subtle and distinctive ways, particularly processors which affect the stereo imaging.

Alternatively, give the track to someone else to mix. The results may not be to your liking (at first), but I guarantee they'll reveal a side to your music that wouldn't have emerged had you been sat behind the mixing desk. What have you got to lose?

Nigel Lord, 09/98


    15 Reverb tips

Our team of experts, producers and engineers give their best reverb tips

    Diversify

Rather than trying to place everything in the mix in the same acoustic environment, why not use a couple of really diverse reverbs to add some strange depth to your tunes? A really dry, upfront vocal works nicely alongside a really 'drowned' string section or a small bright room setting on the drums.

    Automate

Try automating return levels if you have a digital mixer, so that the reverb comes and goes in different sections of the song. By tweaking the aux send levels manually during the mix you can add splashes of reverb on the fly to add interest to snares or vocal parts.

    Take your time

Spend some time choosing or trying out different 'verbs. Different songs lend themselves to different types and sounds. Don't just settle for what sounds good in solo...

    Send that EQ

Remember you can always EQ the send. Most large consoles offer you a choice of high and low EQ on the aux sends. On small desks, route the instrument/voice to another channel via a group or aux send, float this from the mix and send this to the reverb effect. Now you can add EQ to the send and even automate it, as it's now on a fader. This is commonly used for those delays and reverbs that you want to move easily during the mix, such as a wetter vocal in the chorus.

    Old tricks

Reverse reverb is an old trick, where you hear the reverb of a vocal before the singer comes in, or of a snare before it plays. It was easy to do with tape, as you simply turned the tape over and recorded the reverb backwards. You can do it using a computer, but you will have to move the audio to the right place after recording it.

    Use combinations

A combination of reverbs on things can be good. A short setting for the snap sound with a longer bright plate can turn a biscuit-sounding snare into a more live sound.

    Old school plate

In the old days it used to be called delay to plate. You sent the signal to a loop of tape, then sent that to the reverb. The speed of the tape set the delay time - the time it took the signal to get from the record head to the playback head. This gives, say, a voice a dry sound before the reverb comes in, giving a more upfront sound while keeping the wetness, which would usually take it to the back of a hall somewhere! Some people still use the tape method today for that old school sound.
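The arithmetic behind the tape trick is simple: the pre-delay is the record-to-playback head gap divided by the tape speed. A rough worked example (the 2-inch head gap is an illustrative assumption, not a figure from any particular machine):

```python
# delay time = head gap / tape speed
head_gap_inches = 2.0                      # assumed spacing between record and play heads
for ips in (7.5, 15.0):                    # common tape speeds, inches per second
    delay_ms = head_gap_inches / ips * 1000
    print(f"{ips} ips -> {delay_ms:.0f} ms pre-delay")
# 7.5 ips gives roughly 267ms, 15 ips roughly 133ms - halve the tape speed, double the delay.
```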

    Simple drum one


    Cubase Tips - Tooled-Up

Do you know Cubase's editing tools inside and out? If not, read on for some handy tips that'll make your life easier...

One of the reasons Cubase has remained so popular must be its inherent flexibility. As the old saying goes, there is more than one way to skin a cat - and that, metaphorically at least, is exactly what Cubase allows for. There are many ways of achieving the same result tucked away within Cubase. Which method to use is largely down to the user's personal preferences, but there's also an element of 'horses for courses' involved too.

The best way to know what method to use in what situation is to know what tools are available and what they actually do. So without further ado (and before I start rolling out some more of my Granny's old sayings) let's take a look at Cubase's various tools.

The Toolbox

As you will no doubt be aware, right-clicking in a window opens up a toolbox (for Mac users, it's [option]+click, or go to the tools menu). The majority of Cubase's windows have their own set of tools available for quick and easy editing, some of which are found in all the editors and some which are exclusive to only one. Most of the tools also have modifiers, accessed by holding down combinations of the shift, control and alt keys (command, control and shift on a Mac - but not necessarily in that order!). Here's a rundown of what each tool does...

Common tools

1. The Pointer. This is the most-used tool. Use it for selecting parts or events, either by clicking on-screen items or 'rubber banding' groups of objects. Holding down Shift allows you to add or remove an item from the selection. Clicking and holding on an item or group of items turns the pointer into a hand and allows you to move the item freely around the screen. Hold down the Alt key before clicking and dragging to create a copy of the item. In the Arrange window, hold down Control when moving an item to create a ghost copy of a part (more on ghost parts later).

2. The Pencil. This tool is mainly used for creating and re-sizing items. The items can be parts in the Arrange window or events in the List and Key editors. To create an item, simply click and drag at the desired location in your song. In the Key and List editors, using combinations of the Shift and Control keys creates notes with varying velocity. There are four values of velocity that can be entered in this way, from 32 (Control and Shift keys held) to 127 (no keys held).

To resize an item, select a pre-existing item with the Pointer, change to the Pencil, click on the selected item and drag it to the new size. In the Key and List editors it's possible to re-size more than one event at a time. Using the Pointer again, select the notes to be edited, switch to the Pencil, click on one of the selected events and drag it to the new size. All the other items selected are stretched or shortened by the same amount. Holding down the Control key while doing this will force all selected parts to the same size.

Back in the Arrange window, there are some more handy little shortcuts to be had from the Pencil. You can 'drag out' copies of a part by holding down Alt, clicking on the part to be copied and dragging - the part will be copied repeatedly to fill the selection. Doing the same thing with the Control key held down will drag out ghost copies of a part.

3. The Eraser. This does exactly what it says on the tin. Use it to delete items by clicking on them or moving across them while holding down the mouse button. But you didn't really need me to tell you that, now did you?

4. The Magnifying Glass. This is really more of an audition tool than anything else. In the Arrange window, clicking a part with the Magnifying Glass changes the part's appearance to represent the events it contains. Moving the mouse while keeping the button pressed then plays any events that the Magnifying Glass passes over. In the Edit windows, the Magnifying Glass simply plays the event it is clicked on.

    Arrange window tools


5. The Q Match tool. This Arrange window tool is exceptionally useful, especially when dealing with drum and other rhythmic parts. Using this tool always involves two parts: a reference and a target. You simply select the Q Match tool, then drag the reference part and drop it on top of the target. The idea behind what this does is simple: to match the feel of the target part with that of the reference part.

Let's say we had a two-bar unquantised hi-hat part. Now let's say that we had programmed a kick and snare pattern that had a slightly different feel. Now, the aim here is to match the feel (i.e. the bar positions of the events) of the kick and snare with that of the hi-hat part. In the Arrange window, drag the hi-hat part onto the kick and snare part. A dialog box will pop up to ask if you want to include the accents (the velocities of the hi-hat part) - the options are self-explanatory. That done, and all being well, the result should be as tight as a gnat's chuff. So to speak.

There is a little more to it than this, as the quantisation setting does come into play, but we shall cover that when we take a look at all of the quantisation methods in a future instalment.

6. The Mute tool. Use this tool to mute individual parts within an arrangement. That's it - there's nothing more to be said about it. Move along, now.

7. The Scissors. This tool is used for cutting a part into segments. In normal use, the Scissors simply create a split in a part at the point at which the Scissors are used. However, holding down the Alt key before clicking on a part creates splits along the full duration of a part. Each new cut has the same length as that of the first cut.

8. The Glue. The opposite of the Scissors tool. Clicking on a part with the Glue tool merges it with the following part on the same track, creating a single, longer part. The two parts in question do not have to be next to each other, as any space in between them is included in the new part. Holding down Alt when using the Glue tool will merge all parts from the selected part to the last part in the arrangement.

Editor tools

9. The Crosshair (a.k.a. the Compass). This is probably the most versatile tool in Cubase's armoury. Its function not only varies with which window you are using, but also with what area of the window you are using it in. Let's start by looking at its function in the Key editor, as this is where it is most useful and has the most modifications.

As an example, consider an E major chord as voiced on a guitar. Using the crosshair to click and drag a line that slopes from top left to bottom right adjusts the lengths of all the notes to follow this line. Doing a similar thing with the Alt key held down adjusts the start points while still keeping the note ends in their original positions. Hopefully, it will be clear at this point why I used a guitar-voiced chord - the crosshair is perfect for creating a strummed effect.

Holding down the Control key when using the crosshair allows you to move either the start or end points of an event or group of events. Using the tool on the left half of a note moves the start point without affecting the end point; using the tool on the right half alters the end point without affecting the start.

The crosshair can also be used to edit data in the controller portion of the Edit window. Its operation here varies depending upon what type of controller information you are editing, but the following general rules apply: using the tool with no extra keys held down simply alters any existing data to follow the line drawn, for example ramping velocities.

However, in the case of a MIDI continuous controller message, such as volume, there is not necessarily any volume data to edit (the volume stays at whatever level its last message told it to be - it doesn't have a continuous stream of 'volume=112' messages). To get around this, simply hold down the Alt key while drawing the desired fade (or filter sweep or whatever). Now Cubase will create a series of controller messages to match the desired line, their spacing being determined by the editor's 'snap' setting.
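Outside Cubase, the same idea is easy to reproduce by hand. A hypothetical sketch using the Python mido library, writing a volume (CC7) ramp with messages spaced at a 1/16-note 'snap'; the tick resolution and target value are arbitrary choices for the example:

```python
import mido

TPB = 480                          # ticks per beat (arbitrary choice for this example)
snap = TPB // 4                    # one message per 1/16 note
steps = 16                         # one bar of 4/4

mid = mido.MidiFile(ticks_per_beat=TPB)
track = mido.MidiTrack()
mid.tracks.append(track)

for i in range(steps + 1):
    value = round(112 * i / steps)                     # ramp from 0 up to 112
    track.append(mido.Message('control_change', control=7, value=value,
                              time=snap if i else 0))  # delta time in ticks
mid.save('volume_fade.mid')
```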

10. The Brush. This tool is used for creating large numbers of events quickly and easily. In the Key and Drum Edit windows, clicking and dragging the Brush on the screen creates a line of notes whose length is defined by the current quantise value, and separation by the current snap value. Combinations of the shift and control keys create different note velocities, in the same way as they do with the Pencil tool.

You will notice that notes are only drawn in a horizontal line from the point of your initial click. Holding down Alt enables you to 'paint' events anywhere you want. In the List editor the Brush performs the same operation, but can be used to create any type of MIDI message, with the message type being defined by the 'ins.' setting at the top of the List Edit window. We shall cover this in more detail when we take a closer look at the Editors.


11. The Kickers. These two tools are used for nudging events earlier (to the left) or later (to the right) in a part. The amount an event is nudged by is defined by the editor's snap value.

12. The Drumstick. This tool, exclusive to the Drum editor, is used, rather unsurprisingly, to create events on the drum grid. Using the shift and control keys while entering notes adjusts the note's velocity (the velocity values that are created are defined for each individual instrument in the drum editor). Notes in the Drum editor only appear as individual hits with no note length. The notes entered with the drumstick do indeed have a length value attached to them, but normally this isn't important, as most synths' drum kit patches are set simply to trigger on an incoming note on message and ignore the note length. If, however, you need to enter longer notes, then these longer lengths can also be set on a per-instrument basis.

Adam Crute, The Mix 05/00


    Drum Fundamentals 2

    The second part of our hands-on tutorial

    Read Part 1 here

I hope you've been practising your up, down, tap and full strokes, as they are going to come in handy with this month's lesson, which involves tackling grooves.

Getting Started

Getting the right positioning of your kit is all-important. Make sure that all of your drums and cymbals are easy to reach. You don't want to be overstretching when you're playing. It may look very 'rawk' to have your cymbals six foot above the kit, but invariably you'll end up breaking your back trying to actually hit them. As a result your playing will suffer, not to mention your health, so it's best to just keep it simple.

Place your feet on the appropriate pedals and adjust the stool - or throne - so that your thighs are flat whilst you're resting on them. Pivot on the ball of your foot when playing, drawing the power you need from your whole leg and not just your ankle. Once you start to 'groove' your legs will automatically bounce and not stick rigidly to the pedals.

Cross your hands over (left over right or vice versa depending on your set-up) so that your hi-hat hand is above your snare drum hand. Now that you're all set to go, let's take a look at some basic patterns.

Groove 1

OK, here's your bread and butter pop/rock groove. Simple, yet essential. All of the rock/pop grooves that you will play are just variations on this pattern. If you've never read music before, this is the easiest way to understand what's going on. The top line, indicated by the Xs, shows the notes to play on the hi-hat (play them as tap strokes). The bottom line is the bass drum and the middle is the snare (play the snare as a full or down stroke).

We are in 4/4 time, which means that there are four crotchet beats to each bar. As you can see, there are four black notes spread between the bass and snare drums in each bar - these are the crotchets. The hi-hat is playing eight notes to the bar - these are quavers or 1/8th notes. When counting out the bar, adopt the practice of saying '1-and-2-and-3-and-4-and' whilst playing the groove.

Start off by playing the bass and snare drum alone, which should sound on 1, 2, 3 and 4. The bass drum should be on 1 and 3, the snare on 2 and 4. This is known as the backbeat. Once you are comfortable playing this, add the hi-hat, which will be played on all the beats that you are counting (1-and-2-and-3-and-4-and).
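Since the original notation graphic hasn't survived, here's a simple text-grid stand-in for Groove 1, one column per quaver count:

```python
# Groove 1 as an eight-step grid: 'x'/'o' = hit, '.' = rest
groove_1 = {
    "count":  ["1", "&", "2", "&", "3", "&", "4", "&"],
    "hi-hat": ["x", "x", "x", "x", "x", "x", "x", "x"],   # tap strokes on every quaver
    "snare":  [".", ".", "o", ".", ".", ".", "o", "."],   # backbeat on 2 and 4
    "kick":   ["o", ".", ".", ".", "o", ".", ".", "."],   # on 1 and 3
}
for name, steps in groove_1.items():
    print(f"{name:>7}: " + "  ".join(steps))
```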

Always play to a click if you have one. Start at around 80bpm (beats per minute) using either a metronome, a beat on a keyboard or, as a last resort, songs from a tape or CD.

    Once you are a master of this backbeat consider yourself well on the way. Now try these.

Groove 2

In this groove the bass drum plays an extra note just before the second snare drum of the bar. So the backbeat translates to your counting pattern as 1, 2, 3-and-4 (remember beats 2 and 4 are on the snare drum).

Groove 3


This time around the bass drum beat moves to an earlier place in the bar. Count the backbeat as follows: 1, 2-and-3, 4 (remember the snare is on 2 and 4).

Groove 4

This last pattern incorporates the use of a rest! Hang on, it's not that worrying. Simply count the backbeat like this: 1, 2-and, and-4 (snare drum beats!).

And finally...

Got it? Great! Now that we've managed to work out the maths of it all, it's time to play all these patterns over and over until we're grooving to the max! Make sure you use a click to keep yourself in time and try to let the beat flow through you. Relax at the kit, enjoy it, don't over play. Music is about having fun, OK?

    Right, you have your orders now get down the shed and start rocking!


    Guitars And Sampling

How to use the old-fashioned plank in a hi-tech environment

It's a familiar story: you started playing guitar, did the rounds in a few dodgy local bands, played to 'select' audiences in the Dog & Arse and, to the endless stream of drummers your combo digested, described Sony's rejection letter as 'record company interest'.

Ready to throw in the bandana, you boxed up the gig-scarred geetar and thumbed through the dreaded Sits Vac, fearing the inevitable. Then, from nowhere, MIDI came to you, along with samplers, synths, sequencers, the full monty. Band schmand! You could suddenly do it all on your own. At home. Yessss!

So, while you lavished creativity with all the verve of a born-again bandmeister on the MIDI set-up, your guitar gently wept in the corner, gathering dust and untouched, aside from the occasional late night, half-cut discordant strum. You rotten sod. How could you neglect the very thing that gave you your start in music?

Time to make some reparations... In truth, there's far more scope between guitars and sequencer/sampler set-ups than you might imagine. Beyond the use of short loops and effected noises, which, with a little imagination, can yield a host of interesting results, a sampler with a reasonable amount of memory can also take the role of a mastering device for extended sections of audio.

Sample examples

If you play guitar to any reasonable extent, you'll doubtless be familiar with the general restrictions of analogue recording - getting a good take can be a painstaking task in itself - but drop a sampler into the equation and you'll quickly find there's a lot more room to manoeuvre. One of the simplest examples of the benefits of sampling guitar parts is explained thus: imagine you want to record a lead guitar part, but, as ever, the playing is inconsistent. Sometimes it begins well, but you lose it halfway through.

Then again, on that wild take there were some fantastic moments, but the mistakes and timing problems render it unusable. You get the picture. Supposing you record a number of takes into the sampler - the rigid, note-perfect version; the wild, improvised job, and so on. If you had enough channels on an analogue multitracker, you'd probably work out a composite take, made up of the best sections edited together, but your sampler will allow you to do this in the same way or, even better, to run, for example, four separate takes set to different MIDI channels and triggered from your sequencer. You could then use MIDI volume controller information to cut between them.

Obviously you'll need to insert the initial volume commands before the note trigger to ensure that the tracks you don't want to hear are set to 0, but from then on in you can switch between takes with ease. In this way you avoid all the fiddly editing and note shifting that a composite sample track would require, and retain all the original takes, in case you decide to change the configuration.
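If you sequence from a computer, a hypothetical sketch of that volume-switching idea using the Python mido library (the channel numbers, trigger note and switch point are all just illustrative assumptions):

```python
import mido

mid = mido.MidiFile(ticks_per_beat=480)
track = mido.MidiTrack()
mid.tracks.append(track)

# Four takes of the same part sit on sampler MIDI channels 0-3.
audible = 0
for ch in range(4):                              # volume commands BEFORE the note trigger
    track.append(mido.Message('control_change', channel=ch, control=7,
                              value=127 if ch == audible else 0, time=0))
for ch in range(4):                              # trigger all four takes together
    track.append(mido.Message('note_on', channel=ch, note=36, velocity=100, time=0))

# Two beats later (960 ticks), cut from the first take to the third.
track.append(mido.Message('control_change', channel=0, control=7, value=0, time=960))
track.append(mido.Message('control_change', channel=2, control=7, value=127, time=0))
mid.save('take_switch.mid')
```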

Chorus and flanging effects are easy to achieve if you double up a sample and play both at once. Panning the two samples left and right and modulating pitch on one will give you a good repro of chorus, with more modulation taking you into flanging.

Alternatively, you can pitch one of the samples up or down a tad and timestretch it to maintain the timing consistency between the two samples. Simultaneous triggering of a single sample throws up some interesting results, often creating an out-of-phase sound - run three or four triggers of the same sample concurrently, or build up the layers over a longer period for a gradually swelling sound.

Vacuum packing

If you do find yourself in a momentary creative vacuum, it's always worth playing around with reversed samples. OK, so it's a well-trodden path, but there's more to it than backward bloody cymbals. Psychedelic loons used to reverse tapes on multitrackers to get those oddball lead riffs that your old man lost it listening to, but with the sampler, push one button to get the same effect.


The chord from nowhere is a great way of introducing a track, or leading into a wild section, and if you've laid down a riff or lead part that lacks the verve that you anticipated, try copying and reversing it. Timing the reversed piece in and running it simultaneously with the normal part can throw up some intriguing developments, or alternatively, work on a composite of the two, intermingling the normal and reversed parts.

Copying parts and shifting the pitch in octaves is another way of adding depth to a sound. Obviously, the outcome is dependent on the nature of the part; a single note will simply be thickened by an upward or downward octave shift on the copy, whereas a riff will gain a double or half-tempo counter riff, depending on whether you shift it up or down. Pitch shifting can, of course, be taken further, and not necessarily restricted to octave shifts - experimentation can lead you to a harmonised counter-rhythmic accompaniment to your original part, and why stop at one copy? Take it as far as you can and edit back to the parts you like.

Chord sequence

Sampled guitars also respond well to effects your sequencer can generate. Tremolos, delays, flanging and phasing, volume-based effects and layering all produce interesting results. You can create a range of delay effects, from reverb simulation to intricate multi-taps, quite simply by multi-triggering your samples. Experimenting with the timings and velocity values of the re-triggered sample will give you an infinite scope of delay possibilities. If you're using Cubase, the List or Key edit pages allow simple, graphical drawing-in of controller values; 'ramp' and 'v' shaped velocity settings, for example, will throw up some neat variations; offsetting the placement of your trigger notes lets you formulate more complex rhythmic effects.

Using MIDI to control effects

If you possess a multi-effects unit that has MIDI capabilities, then acquiring a MIDI foot controller can open up a host of possibilities for guitarists, maximising the instantaneous control you can have over the unit.

The universal standardisation of MIDI 'language' means that any MIDI foot switch will be compatible with any MIDI effects unit, so buying this extra won't depend on your purchase of an expensive dedicated piece of kit - there are cheaper products to be bought on the market.

The simplest function a MIDI foot controller allows is a program change message. This gives you the ability to assign certain effects patches or programmes to the switches on your foot controller, providing instant and ordered access to the effects you wish to use. Mapping the program change information lets you place the effects switching in a logical order for the particular track you're playing - essential if you want to avoid any mis-selections during live playing.

Getting more involved, a foot controller can also be employed to alter parameter information, such as overdrive intensity, compression, feedback on delays, repeat speed and so on. Using the standard 0-127 controller message values, the degree of parameter change can be altered via a rocking foot pedal assigned to the parameter of your choice, and with a foot pedal switch assigned to parameter changes, you can stomp to any of up to 128 parameters (depending on the complexity of your effects unit). EQ frequency, master volume, wah-wah swell and other effects can all be assigned to the foot pedal and selected at will.
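As a rough illustration of the MIDI messages involved (the patch number, controller number and step size here are assumptions for the example, not settings from any particular unit):

```python
import mido

port = mido.open_output()                        # first available MIDI output
port.send(mido.Message('program_change', channel=0, program=5))   # recall an effects patch
for value in range(0, 128, 8):                   # sweep a parameter assigned to CC11 'expression'
    port.send(mido.Message('control_change', channel=0, control=11, value=value))
port.close()
```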

The results can be enhanced further by copying a sample and assigning the two to left and right pan positions to produce ping-pong effects. Vary the frequency filters between the two samples and you'll arrive at a different effect again - interesting if you record sweeping filter data. Running two complementary guitar phrases, panned left and right, with the filter sweep of one ascending from a low frequency to a high one and the other doing vice versa, can work well with strummed chords or single bass notes.

For tremolo, repeat a short trigger note at chosen points to dictate speed. Ping-pong and velocity effects can enhance results in the same way as with delay sounds, as can any filter sweeping effects you add. Chopping the sample up on your sequencer brings more rhythmic possibilities, so don't shy from trying un-guitarish sequencer tricks, particularly in combination with the original part.


    How to sound lo-fi

Instead of trying to make the most of your gear, why not make the least of it?

Sick and tired of spacious reverbs and 24-bit delays? Bored of subtle compression and glossy mixing? Then come with us on a journey into sound. Monophonic crap sound.

Increasing numbers of artists, from trip hop to big beat, are using that dirty lo-tech feel we all know and love, and no doubt E-mu will eventually launch a Planet Dirty module, but in the meantime, how do you make those kinds of sounds? Armed only with this article and Portishead's lo-fi tips, you'll soon be able to shag your sonics in a variety of innovative ways with kit you already have, while next month we'll be concentrating on equipment tailor-made for the job.

Before we start, it's assumed you know how to use multi-effects and compressors already. If you don't, you should probably read your manuals carefully and try out any tutorials to get the hang of it all, otherwise you're going to pick up a lot of bad habits. Also, be warned: some of these techniques can result in howling feedback and bludgeoning noise, so monitor at a lower level than normal, unless you want to give your speakers or your ears a permanent lo-fi sound. If you have a spare compressor, stick it across the stereo mix to catch any sudden peaks.

Echo, echo, echo, echo, echo, echo, echo

That's the annoying thing about digital delays, they're too bloody good. You want something that mangles your sound with each repeat, not replays the same sample quieter. Something along the lines of "echo... eko... grecko... grackle... growing". To find the remedy we'll have to go back to the early 70s and the birth of dub. Starting as an offshoot of reggae, dub's sparse off-beat sound was one of the first musical styles rooted in the studio. The emptiness of the basic tracks left room for long evolving delays, feeding back into themselves. The trademark sound of dub delays can be heard on many records; for example the recent(ish) Portishead remix of Karmacoma, or almost any track on The Orb's classic UFOrb album. But how do you turn a mild-mannered delay into a feedback monster?

First, set up a delay as normal, sending to it on auxiliary 1, and bringing the returns back to two mixer channels. Set the delay to the required time, and turn the feedback to zero. Send a sound to the delay at this point and you can hear that it repeats just once and then stops. Now feed the echo back into itself by sending it down aux 1 on the return channels. Be careful, as too high a level will cause an ear-splitting feedback loop as the delay repeats itself louder and louder. By varying the amount of auxiliary 1 being sent from the returns you can set the number of repeats. You'll hear an example of this on the first section of track 19 of the CD, which is set for numerous repeats and isn't that different from ordinary delay.

Adding EQ to the mix

The secret ingredient is EQ. Rolling off the top- and bottom-end of the return channel causes each successive delay to become a bit thinner. (This effect appears on the second slice of track 19, where the echoes soon become noticeably degraded.) The effect is similar to a vintage tape delay, such as the Watkins Copycat or Roland Space Echo. Next, if you have a sweeping EQ, apply a gentle 3 or 4dB boost to the frequency of your choice, and slowly sweep the frequency around as the delay repeats.

Fairly soon the echo is almost unrecognisable, as on the third section. Alternatively, cut instead of boost the swept frequency for the phasier sound which you'll hear on section four. Riding the send level by hand, you should be able to keep the echoes going on indefinitely. Letting them start to fade and then bringing them back is a great way of speeding up the mutations, or you can even try and overload the delay, as we've recorded on section five. Try recording five minutes of evolving echoes to DAT, and then go through it for interesting samples.

This is a good practice to follow when wiring up unpredictable effects chains, as it can be hard to get the same sound twice. A lot of drum 'n' bass artists fill DAT after DAT with bizarre effects for sampling later, so if you want to be unique, do the same. When you're tired of delays, switch to a different effect: try phasing or reverb. Section six of track 19 uses a flanger and reverb, while slice seven features phaser, reverb and delay.
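If you'd rather prototype the dub-delay idea in software than on a mixer, here's a rough offline sketch (not the aux routing itself) with a band-pass filter standing in for the EQ on the return, so each repeat comes back a little thinner; all the figures are illustrative:

```python
import numpy as np
from scipy.signal import butter, lfilter

def dub_delay(x, sr=44100, delay_ms=375, feedback=0.55, repeats=8,
              lo_hz=150, hi_hz=2500):
    # Each pass around the 'loop' is filtered and attenuated before being
    # summed back in, mimicking the thinning tape-echo repeats described above.
    n = int(sr * delay_ms / 1000)
    b, a = butter(2, [lo_hz, hi_hz], btype='bandpass', fs=sr)
    out = np.concatenate([x.astype(float), np.zeros(n * repeats)])
    tap = x.astype(float)
    for r in range(1, repeats + 1):
        tap = lfilter(b, a, tap) * feedback
        out[r * n:r * n + len(tap)] += tap
    return out
```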


Downbeat and dirty

When it comes to beats, compression is everything. The whole Portishead vibe builds on the sound of drums drowning under the weight of the compressors, while the Chemicals' beats are block rockin' with the sound of hard-edged compression. The basic setting for a typical trip hop drum sound has very low threshold values. For big beat, ease up on the threshold (try -20dB) and reduce the ratio to 12:1. Increase the attack time slightly, until you hear the front end of each drum smack out hard, and set the release to between 40 and 80ms to allow a more dynamic sound. The first section of track 20 has the dry loop and on the second section you'll hear three variations. Compressors are normally used in insert points, but there are advantages to using them on a send. Set one up using the same wiring as the dub delay, merely replacing the delay with a compressor. For even better and more interesting sounds, leave the delay exactly where it is and put the compressor before it.

Just take the cables out of the delay's input sockets and stick them in the compressor's. Then run leads from the compressor's outputs to the delay's inputs and set the delay time to 20ms or so, with no feedback. Now send your drum loop. You'll hear this effect on the third slice of track 20, which starts with just the compressor and then the delay is switched in. You can hear the metallic quality caused by the compressor feeding back into itself. With a compressor in the loop it's impossible to blow your speakers, so welly up the gain on the returns until it's well into the red. Grab yourself an EQ and sweep it all over the place. It should sound something like the example on section four of track 20.

Although it's no louder in terms of dBs, the subjective effect is of a huge volume increase. You can hear a feedback tone at the start and end of this track, caused by the compression when nothing is playing. Even a gate can yield creative results when used in this fashion.
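For anyone mixing in software, a very rough feed-forward compressor sketch using the big-beat style numbers mentioned above (-20dB threshold, 12:1 ratio, release in the 40-80ms range; the 5ms attack is an assumption):

```python
import numpy as np

def compress(x, sr=44100, threshold_db=-20.0, ratio=12.0,
             attack_ms=5.0, release_ms=60.0):
    # Follow the signal level, and once it crosses the threshold pull the gain
    # down towards 1/ratio of the overshoot - crude, but enough to hear the effect.
    atk = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env = 0.0
    gain = np.ones(len(x))
    for i, s in enumerate(x):
        level = abs(s)
        coeff = atk if level > env else rel
        env = coeff * env + (1.0 - coeff) * level
        over_db = 20.0 * np.log10(max(env, 1e-9)) - threshold_db
        if over_db > 0.0:
            gain[i] = 10.0 ** (-(over_db - over_db / ratio) / 20.0)
    return x * gain
```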

Spring has sprung

A lot of lo-fi sound has old gear at its roots, like the 'fake' tape echo effect created earlier. Before the advent of digital effects, many studios used plate reverbs (essentially a resonating chunk of iron in a wardrobe-sized box) or, if they were on a budget, a spring reverb. If you've ever kicked or dropped a guitar amp then you'll probably have heard the brain-shattering crash of a spring reverb.

Most modern effects units do a fairly convincing plate reverb, but not many offer a spring algorithm. You can make your own with a stereo delay, by setting one delay to about 45ms and the other to about 25ms. Set the feedback very high and set any damping parameters to maximum, giving a very dull echo. Send the effect back into itself, as with the tape echo earlier, and you have something approaching the classic spring reverb.

Listen to the first slice of track 21 for an idea of what you're aiming for. An easy way of accessing grungy effects is to get your hands on some guitar pedals. They're mainly designed for live use, so they provide crude larger-than-life sounds, and they're cheap, as little as £20 or £30 second-hand. And there's not much that can compete with a really crap guitar compressor when it comes to big beat madness. Similarly, phasers and flangers tend to be less subtle than their digital counterparts, while analogue delays degenerate into a soupy noise within seven or eight repeats. Highly recommended.

Red light district

Don't shy away from clipping. People have overdriven everything from valve EQs to analogue tape machines to create a bigger, more crunchy sound, so don't panic at the first sight of an overload light. Experiment a little: try overloading your sampler's input, or driving your effects boxes too hard.

The 303 sound on Josh Wink's classic Higher State Of Consciousness relies on distorting the mixing desk, and the sound of tape saturation can be heard on most 70s rock drums. In the land of lo-fi, use your ears and not your lab coat to decide what sounds good.

Found sounds

Whether it's trip hop or big beat that you're making, a lot of lo-fi styles are loop-based, so you're gonna need some interesting loops. Using the methods already covered we've got some pretty gritty beats going, but what about atmospheric stuff? You can start by putting the radio on and switching to long wave (for possibly the first time in your life). Now find a station and then detune slightly away from it. The further you move from the original signal, the more the sound degenerates into a clangorous sort of ring modulation. Keep twiddling until you get a sound you like (the less recognisable the better), then record a section to DAT.

This isn't always as straightforward as it sounds, as it can take a while to strike lucky with a phrase or piece of warped music. I had to sit through George Michael to get the sound on the second section of track 21, so you've been warned. Having got your ideal bit of noise you'll probably need to EQ out any whining tones, then sample it back off the DAT and use it. It's occasionally worth a quick foray into medium wave, but FM rarely produces anything worth hearing (the radio signal that is, not the mag!). Another source of unique sounds is digital feedback. This sound was most famously used by Garbage on Stupid Girl, underneath the vocals running up to the chorus.

And while not everybody has access to an 02R, the same effect can be attempted with a sampler or an audio sequencer. Route the outputs to a desk (so you can monitor your results) and then feed them down an auxiliary back to the inputs. Then fiddle with the input gain and/or EQ. The sounds on track 21, slice three were created by looping an 02R out through one of its internal reverbs and back into two inputs or, in layman's terms, stuffing its head up its own arse.

Bitty and gritty

Many dance acts have an old sampler hanging about, purely for the gritty sounds they produce. The Casio FZ series is particularly renowned for hardening up drum loops with its low sample rate. Fortunately for you, you don't have to buy a second-hand sampler to achieve this effect, as most current machines allow you to reduce the frequency bandwidth and/or bit depth. In these days of big memories and cheap RAM, most people's samplers are left set to the highest sampling quality at all times, so get in there and set it to the lowest.

Samplers use a process known as anti-alias filtering to mask the artefacts that lower bandwidths produce. Unfortunately, this is an automatic process on many samplers, but if you're one of the lucky few who can switch it off then the grungy effect will be stronger. As mentioned previously, this sound is particularly suited to drum loops, giving them an antique flavour similar to crackly old vinyl. It's a sound you can hear a lot in hip hop, frequently used to make a sample stick out from the rest of the beats.

    Of course you can try it on anything - vocals, crusty old strings, even sections of the whole mix.
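If your sampler won't go low enough, the same grot is easy to fake in software. Here's a minimal sketch in C, assuming samples in the -1 to +1 range; bits and downsample are the 'how nasty' controls, and there is deliberately no anti-alias filter, so the aliasing grit stays in:

/* Crude sample-rate and bit-depth reduction ('old sampler' treatment). */
float bitcrush(float in, int bits, int downsample)
{
    static float held;   /* last sample-and-hold value */
    static int   count;

    if (count == 0) {
        /* Quantise onto a coarser grid, e.g. bits = 8 gives 128 steps per side. */
        float levels = (float)(1 << (bits - 1));
        held = (float)((int)(in * levels)) / levels;
    }
    count = (count + 1) % downsample;   /* hold each value for 'downsample' samples */
    return held;
}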

Lo-coder

Vocoders have led a chequered career, swinging from cool (Daft Punk, Underworld) to very sad (The Cylons, Sparky's Magic Piano). Either way, they were still largely used to process vocals until they were leapt upon by the more experimental members of the dance fraternity. Despite their name, vocoders are no more suited to voices than to any other sound source, being basically a bank of filters that analyse the EQ content of one sound and impose it on another.

As most vocoders only operate on frequencies below 3 or 4kHz, they impart a pleasant woolliness to the sounds that they process. Whenever you use a vocoder, always allow yourself five minutes of mumbling "I will exterminate" and "We meet again, Obi Wan" just to get it out of your system, and then route two effects sends into it.

Now experiment with sending different sounds from your track to work against each other. A modern classic is the sound of drums being imposed on a slow pad, giving a gating, hard-edged movement to the pad sound. Hear it on track 22, section one. This is also an ideal time to use some of your found sounds to impart a little oddness onto more conventional parts of the mix. By varying the depth of the vocoding you can create anything from a synth garble to a gentle organic movement. Track 22, section two starts at maximum effect, reducing to a slight colouration. Of course, you may not own a vocoder and think you can't afford one. Well, you're wrong. Vocoders are popping up cheaply all over the place, in multi-effects units and as software plug-ins, so don't worry... you'll be getting them free with cornflakes by the summer.
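If you're wondering what's going on inside the box, here's a bare-bones sketch of the filter-bank idea in C: each band measures the level of the modulator (the drums, say) and uses it to open and close the same band of the carrier (the pad). The band count, filter design and centre frequencies here are all illustrative; a real vocoder is considerably more refined:

#include <math.h>

#define BANDS 8

typedef struct { float b0, a1, a2, x1, x2, y1, y2; } BandPass;
typedef struct { BandPass mod_f, car_f; float env; } Band;

/* Standard biquad band-pass with constant 0dB peak gain. */
static void bp_init(BandPass *f, float centre_hz, float q, float sr)
{
    float w = 2.0f * 3.14159265f * centre_hz / sr;
    float alpha = sinf(w) / (2.0f * q);
    float a0 = 1.0f + alpha;
    f->b0 = alpha / a0;
    f->a1 = -2.0f * cosf(w) / a0;
    f->a2 = (1.0f - alpha) / a0;
    f->x1 = f->x2 = f->y1 = f->y2 = 0.0f;
}

static float bp_run(BandPass *f, float x)
{
    float y = f->b0 * (x - f->x2) - f->a1 * f->y1 - f->a2 * f->y2;
    f->x2 = f->x1; f->x1 = x;
    f->y2 = f->y1; f->y1 = y;
    return y;
}

/* Spread the bands from roughly 100Hz up to about 4kHz, per the text above. */
void init_bands(Band bands[BANDS], float sr)
{
    for (int i = 0; i < BANDS; i++) {
        float centre = 100.0f * powf(2.0f, i * 0.76f);
        bp_init(&bands[i].mod_f, centre, 4.0f, sr);
        bp_init(&bands[i].car_f, centre, 4.0f, sr);
        bands[i].env = 0.0f;
    }
}

/* Analyse the modulator's level in each band and impose it on the carrier. */
float vocode(Band bands[BANDS], float modulator, float carrier)
{
    float out = 0.0f;
    for (int i = 0; i < BANDS; i++) {
        float m = bp_run(&bands[i].mod_f, modulator);
        float c = bp_run(&bands[i].car_f, carrier);
        bands[i].env = 0.999f * bands[i].env + 0.001f * fabsf(m);  /* envelope follower */
        out += c * bands[i].env;
    }
    return out;
}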

Burn the magic boxes

If you're on a real caveman tip you might still be finding all this a bit too modern and hi-tech, so here are a few real medieval tips. Instead of using effects boxes, why not use real acoustics? Place a guitar amp at the end of one of your auxiliaries and put a microphone at the other end of the room (preferably in the kitchen or the toilet) and hey presto! Really grotty reverb. If you can't afford a guitar amp, use the useless (until now) pair of tiny speakers that came with your Walkman, or buy some; they're pretty cheap. Point a mic at them and try shaking them around, or putting them in different acoustic spaces.

Another great technique can be achieved by putting both speakers in an empty fishbowl and moving the mic over the top of the bowl. That's all for now; see you in part two. And I leave you with the endearing image of a man recording a fishbowl at one in the morning. Now that's what I call lo-fi.

    Thanks to Studiocare Pro-Audio for the loan of the 02R and effects used in this article.


    Mastering Cubase and Logic

    Let us help you improve your MIDI manipulation skills...

There are many great paradoxes in life: when we go to bed at night we can't get to sleep, yet in the morning we can't wake up; those most capable of getting elected to power are often those least suitable to hold such a position. And, first and foremost, one of Cubase's most powerful editors often stays unused at the bottom of a menu due to its apparent complexity, yet is called the Logical Editor. All right, it's not the sort of thing that would keep you up at night (or, if it is, you really do have something better to be worrying about). But if you use Cubase and don't use the Logical Editor, I guarantee you are doing many things the long way round.

Getting the most from this editor does require a basic understanding of MIDI and how its messages are structured, but as MIDI is a subject that can fill a book - or many different books - we won't go into it here, and will assume that you have a basic knowledge. The best way to describe the Logical Editor is as a kind of 'MIDI calculator' used to mathematically manipulate raw MIDI data. This could be for a simple task, like selecting all the instances of a particular note in a part, or for more creative uses, such as transforming note data into a controller message for a filter bank.

At first glance, the editor can be a bit baffling, with lots of drop-down menus and fields labelled, rather vaguely, as Value 1, Value 2, and so on. Thankfully, there are two modes of operation: Easy and Expert. We will look at the former first.

When you open the editor, you will notice that it is split into three distinct areas: Filter, Processing and Functions (Preset is just for storing settings for common operations, so it doesn't count). The basic principle here is that the Filter is used to specify the MIDI events that will be passed along for processing. In the Processing area, mathematical functions can be set - or not, as the case may be - and these are applied to the filtered events in the manner dictated by the Functions settings. The meaning of the Value 1 and Value 2 fields varies, depending on the type of event being edited. You see? I told you it was logical! Let's move on to an example...

Example 1

To achieve a useful result from the editor, you need to have an aim in mind before you start fiddling with the settings, so record a MIDI part into Cubase. Now, let's say that you wanted to delete all the instances of the note C3 that have a velocity value between 30 and 50 from the new part (hey, it's just an example, right?). Clicking the Event Type drop-down gives three choices: Ignore, Equal and Unequal - select Equal.

You can now access the drop-down just below where you select the event type to be processed - for this example, select Note. You've just told the editor that you only want it to deal with note data and nothing more. (Leaving this field on Ignore would mean that all event types - modulation and aftertouch, for instance - would be passed on to the next stage of the filter. Selecting Unequal would mean the editor was to deal with everything but note events.)


The next field is Value 1. For a note event, this means the MIDI note number (and this is where the basic MIDI knowledge comes in handy). This time we have a greater choice of filter types to choose from, and all are self-explanatory - again, select Equal. Now we can enter the value of the note we want to deal with, either with the mouse or by manually entering the value. For this example, set the value to C3, or MIDI note number 60, as it will appear in the field.

For a note event, the Value 2 setting represents the note's velocity. As we have decided we want to delete velocities between 30 and 50, we need to use the Inside filter. Now the second numerical field has become active: set the upper and lower settings to 50 and 30, respectively. Because the part we are dealing with has been recorded from a single source and doesn't contain events on multiple MIDI channels, the last field can be left as Ignore - if you are dealing with multiple parts, however, this can be useful.

Have a glance back through the settings. We have now specified what events to delete. All that remains is to select the Delete option from the Functions drop-down menu and click the Do It button. The logic should be becoming clearer now...
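If it helps to see the same logic outside Cubase, here's the filter-then-function idea as a small C sketch. The struct and function names are mine, not Steinberg's:

typedef struct { unsigned char note; unsigned char velocity; } NoteEvent;

/* The Filter settings: Equal C3 (note 60), velocity Inside 30..50. */
static int passes_filter(const NoteEvent *e)
{
    return e->note == 60 && e->velocity >= 30 && e->velocity <= 50;
}

/* The Delete function: keep only the events that do NOT pass the filter. */
static int apply_delete(NoteEvent *events, int count)
{
    int kept = 0;
    for (int i = 0; i < count; i++)
        if (!passes_filter(&events[i]))
            events[kept++] = events[i];
    return kept;   /* the part's new event count */
}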

Example 2

On to the Processing section: let's say that you had programmed a drum part using a single sound module as the sound source. Now you want to lift out all the events in the hi-hat line that have a velocity greater than 90 and send these MIDI events to a sampler that has a better loud hi-hat sound. The problem is that on the sound module the hi-hat was assigned to note F#1, yet on the sampler it is on C3. To make matters worse, the sampler's hi-hat sound is more sensitive to velocity values than the one on the sound module. (Well, it's more feasible than the last example!)

Use your new-found knowledge of the Filter section to set it up appropriately. All the processing fields default to Keep, meaning no processing will be carried out on the data. Have a look through the drop-down menus and see what options are available - again, these are self-explanatory and, on the whole, are simply basic arithmetic. For our example, to change the note number from F#1 to C3 (MIDI note number 42 to MIDI note number 60, in other words) is nothing more complex than setting the Value 1 drop-down to Plus and setting the numerical field to 18.

There are various approaches to changing the velocity value in this example: you could use subtract to reduce the velocity data by a set amount; divide by two to halve all the velocity values; or use the Dyn option to constrain the velocity value between an upper and lower limit, yet retain the relative difference in velocity. To finish the operation, choose Extract from the Functions menu and click Do It. A new part will have been created in the Arrange window containing the new notes, all transformed and separate.
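Purely as a sketch, the processing stage of this example boils down to something like the C fragment below. The Dyn limits here are made-up values, and the exact mapping Steinberg uses isn't spelled out above, so the velocity line is just one plausible reading of 'constrain but keep the relative differences':

/* Applied to each hi-hat note that passed the filter (velocity > 90). */
static void process_hihat(unsigned char *note, unsigned char *velocity)
{
    *note += 18;   /* Value 1: Plus 18, moving the hi-hat up to the sampler's key */

    /* Dyn-style squeeze: rescale 0..127 into an illustrative 60..110 range. */
    const int low = 60, high = 110;
    *velocity = (unsigned char)(low + (*velocity * (high - low)) / 127);
}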

Cubase 5 users, however, will be disappointed to find that the Extract function has been dropped (this is probably an oversight rather than a conscious decision to drop the function), so these users would need to perform an operation like this in two stages: use the Select option to select the notes, then cut and paste them to a new part. This part can then be processed with the Logical Editor's Transform function.

Once you get the hang of the Easy side of the Logical Editor, have a look at the Expert settings. The new fields of Length and Bar Range add a whole new range of useful functions - the method of operation remains the same as for the Easy side of the editor. Hopefully, I've managed to make the Logical Editor seem somewhat more... well, logical, really. The more you use it, the more sense it will make.

MIDI Specification

MIDI (ie, Musical Instrument Digital Interface) consists of both a simple hardware interface and a more elaborate transmission protocol.

    Hardware

MIDI is an asynchronous serial interface. The baud rate is 31.25 Kbaud (+/- 1%). There is 1 start bit, 8 data bits, and 1 stop bit (ie, 10 bits total), for a period of 320 microseconds per serial byte.

The MIDI circuit is current loop, 5 mA. Logic 0 is current ON. One output drives one (and only one) input. To avoid grounding loops and subsequent data errors, the input is opto-isolated. It requires less than 5 mA to turn on. The Sharp PC-900 and HP 6N138 optoisolators are satisfactory devices. Rise and fall time for the optoisolator should be less than 2 microseconds.

The standard connector used for MIDI is a 5 pin DIN. Separate jacks (and cable runs) are used for input and output, clearly marked on a given device (ie, the MIDI IN and OUT are two separate DIN female panel mount jacks). 50 feet is the recommended maximum cable length. Cables are shielded twisted pair, with the shield connecting pin 2 at both ends. The pair is pins 4 and 5. Pins 1 and 3 are not used, and should be left unconnected.

A device may also be equipped with a MIDI THRU jack which is used to pass the MIDI IN signal to another device. The MIDI THRU transmission may not be performed correctly due to the delay time (caused by the response time of the opto-isolator) between the rising and falling edges of the square wave. These timing errors will tend to add in the "wrong direction" as more devices are daisy-chained to other devices' MIDI THRU jacks. The result is that there is a limit to the number of devices that can be daisy-chained.

    Schematic

    A schematic of a MIDI (IN and OUT) interface

    Messages

The MIDI protocol is made up of messages. A message consists of a string (ie, series) of 8-bit bytes. MIDI has many such defined messages. Some messages consist of only 1 byte. Other messages have 2 bytes. Still others have 3 bytes. One type of MIDI message can even have an unlimited number of bytes. The one thing that all messages have in common is that the first byte of the message is the Status byte. This is a special byte because it's the only byte that has bit #7 set. Any other following bytes in that message will not have bit #7 set. So, you can always detect the start of a MIDI message because that's when you receive a byte with bit #7 set. This will be a Status byte in the range 0x80 to 0xFF. The remaining bytes of the message (ie, the data bytes, if any) will be in the range 0x00 to 0x7F. (Note that I'm using the C programming language convention of prefacing a value with 0x to indicate hexadecimal.)

The Status bytes of 0x80 to 0xEF are for messages that can be broadcast on any one of the 16 MIDI channels. Because of this, these are called Voice messages. (My own preference is to say that these messages belong in the Voice Category.) For these Status bytes, you break up the 8-bit byte into 2 4-bit nibbles. For example, a Status byte of 0x92 can be broken up into 2 nibbles with values of 9 (high nibble) and 2 (low nibble). The high nibble tells you what type of MIDI message this is. Here are the possible values for the high nibble, and what type of Voice Category message each represents:


8 = Note Off
9 = Note On
A = Aftertouch (ie, key pressure)
B = Control Change
C = Program (patch) change
D = Channel Pressure
E = Pitch Wheel

So, for our example status of 0x92, we see that its message type is Note On (ie, the high nibble is 9). What does the low nibble of 2 mean? This means that the message is on MIDI channel 2. There are 16 possible (logical) MIDI channels, with 0 being the first. So, this message is a Note On on channel 2. What status byte would specify a Program Change on channel 0? The high nibble would need to be C for a Program Change type of message, and the low nibble would need to be 0 for channel 0. Thus, the status byte would be 0xC0. How about a Program Change on channel 15 (ie, the last MIDI channel)? Again, the high nibble would be C, but the low nibble would be F (ie, the hexadecimal digit for 15). Thus, the status would be 0xCF.

NOTE: Although the MIDI Status byte counts the 16 MIDI channels as numbers 0 to F (ie, 15), all MIDI gear (including computer software) displays a channel number to the musician as 1 to 16. So, a Status byte sent on MIDI channel 0 is considered to be on "channel 1" as far as the musician is concerned. This discrepancy between the status byte's channel number, and what channel the musician "believes" that a MIDI message is on, is accepted because most humans start counting things from 1, rather than 0.
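As a quick illustration (a sketch using the C conventions mentioned above), pulling a Voice Category Status byte apart is just a couple of bit operations:

#include <stdio.h>

int main(void)
{
    unsigned char status = 0x92;                /* bit #7 set, so it's a Status byte */

    if (status & 0x80) {
        unsigned char type    = status >> 4;    /* high nibble: 9 means Note On      */
        unsigned char channel = status & 0x0F;  /* low nibble: channel 2             */
        printf("type nibble 0x%X, channel %u (displayed to the musician as %u)\n",
               type, channel, channel + 1);
    }
    return 0;
}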

The Status bytes of 0xF0 to 0xFF are for messages that aren't on any particular channel (and therefore all daisy-chained MIDI devices can always "hear" and choose to act upon these messages. Contrast this with the Voice Category messages, where a MIDI device can be set to respond to those MIDI messages only on a specified channel). These status bytes are used for messages that carry information of interest to all MIDI devices, such as synchronizing all playback devices to a particular time. (By contrast, Voice Category messages deal with the individual musical parts that each instrument might play, so the channel nibble scheme allows a device to respond to its own MIDI channel while ignoring the Voice Category messages intended for another device on another channel.)

These status bytes are further divided into two categories. Status bytes of 0xF0 to 0xF7 are called System Common messages. Status bytes of 0xF8 to 0xFF are called System Realtime messages. The implications of such will be discussed later.

Actually, certain Status bytes within this range are not defined by the MIDI spec to date, and are reserved for future use. For example, Status bytes of 0xF4, 0xF5, 0xF9, and 0xFD are not used. If a MIDI device ever receives such a Status, it should ignore that message. See Ignoring MIDI Messages.

What follows is a description of each message type. The description tells what the message does, what its status byte is, and whether it has any subsequent data bytes and what information those carry. Generally, these descriptions take the view of a device receiving such messages (ie, what the device would typically be expected to do when receiving particular messages). When applicable, remarks about a device that transmits such messages may be made.


    Note Off

    Category: Voice

    Purpose

Indicates that a particular note should be released. Essentially, this means that the note stops sounding, but some patches might have a long VCA release time that needs to slowly fade the sound out. Additionally, the device's Hold Pedal controller may be on, in which case the note's release is postponed until the Hold Pedal is released. In any event, this message either causes the VCA to move into the release stage, or, if the Hold Pedal is on, indicates that the note should be released (by the device automatically) when the Hold Pedal is turned off. If the device is a MultiTimbral unit, then each one of its Parts may respond to Note Offs on its own channel. The Part that responds to a particular Note Off message is the one assigned to the message's MIDI channel.

    Status

    0x80 to 0x8F where the low nibble is the MIDI channel.

    Data

    Two data bytes follow the Status.

The first data byte is the note number. There are 128 possible notes on a MIDI device, numbered 0 to 127 (where Middle C is note number 60). This indicates which note should be released.

The second data byte is the velocity, a value from 0 to 127. This indicates how quickly the note should be released (where 127 is the fastest). It's up to a MIDI device how it uses velocity information. Often velocity will be used to tailor the VCA release time. MIDI devices that can generate Note Off messages, but don't implement velocity features, will transmit Note Off messages with a preset velocity of 64.

    Errata

An All Notes Off controller message can be used to turn off all notes for which a device received Note On messages (without having received respective Note Off messages).

    Note On

    Category: Voice

    Purpose

Indicates that a particular note should be played. Essentially, this means that the note starts sounding, but some patches might have a long VCA attack time that needs to slowly fade the sound in. In any case, this message indicates that a particular note should start playing (unless the velocity is 0, in which case you really have a Note Off). If the device is a MultiTimbral unit, then each one of its Parts may sound Note Ons on its own channel. The Part that sounds a particular Note On message is the one assigned to the message's MIDI channel.

    Status

    0x90 to 0x9F where the low nibble is the MIDI channel.

    Data

    Two data bytes follow the Status.

The first data byte is the note number. There are 128 possible notes on a MIDI device, numbered 0 to 127 (where Middle C is note number 60). This indicates which note should be played.

The second data byte is the velocity, a value from 0 to 127. This indicates with how much force the note should be played (where 127 is the most force). It's up to a MIDI device how it uses velocity information. Often velocity is used to tailor the VCA attack time and/or attack level (and therefore the overall volume of the note). MIDI devices that can generate Note On messages, but don't implement velocity features, will transmit Note On messages with a preset velocity of 64.

A Note On message that has a velocity of 0 is considered to actually be a Note Off message, and the respective note is therefore released. See the Note Off entry for a description of such. This "trick" was created in order to take advantage of running status.

A device that recognizes MIDI Note On messages must be able to recognize both a real Note Off as well as a Note On with 0 velocity (as a Note Off). There are many devices that generate real Note Offs, and many other devices that use Note On with 0 velocity as a substitute.

    Errata

In theory, every Note On should eventually be followed by a respective Note Off message (ie, when it's time to stop the note from sounding). Even if the note's sound fades out (due to some VCA envelope decay) before a Note Off for this note is received, at some later point a Note Off should be received. For example, if a MIDI device receives the following Note On:

    0x90 0x3C 0x40 Note On/chan 0, Middle C, velocity could be anything except 0

    Then, a respective Note Off should subsequently be received at some time, as so:

    0x80 0x3C 0x40 Note Off/chan 0, Middle C, velocity could be anything

    Instead of the above Note Off, a Note On with 0 velocity could be substituted as so:

    0x90 0x3C 0x00 Really a Note Off/chan 0, Middle C, velocity must be 0

If a device receives a Note On for a note (number) that is already playing (ie, hasn't been turned off yet), it is the device's decision whether to layer another "voice" playing the same pitch, or cut off the voice playing the preceding note of that same pitch in order to "retrigger" that note.
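A receiving device therefore has to accept both forms of Note Off. Here's a minimal sketch in C; start_note() and stop_note() are hypothetical placeholders for whatever the device actually does, and running status isn't handled:

#include <stdio.h>

/* Placeholder hooks -- substitute whatever your synth engine really does. */
static void start_note(unsigned char ch, unsigned char note, unsigned char vel)
{ printf("on  ch%u note %u vel %u\n", ch + 1, note, vel); }

static void stop_note(unsigned char ch, unsigned char note)
{ printf("off ch%u note %u\n", ch + 1, note); }

/* Treat a Note On with velocity 0 exactly like a Note Off. */
void handle_voice_message(unsigned char status, unsigned char data1, unsigned char data2)
{
    unsigned char type    = status & 0xF0;
    unsigned char channel = status & 0x0F;

    if (type == 0x90 && data2 > 0)
        start_note(channel, data1, data2);
    else if (type == 0x80 || (type == 0x90 && data2 == 0))
        stop_note(channel, data1);
}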


    Aftertouch

    Category: Voice

    Purpose

While a particular note is playing, pressure can be applied to it. Many electronic keyboards have pressure sensing circuitry that can detect with how much force a musician is holding down a key. The musician can then vary this pressure, even while he continues to hold down the key (and the note continues sounding). The Aftertouch message conveys the amount of pressure on a key at a given point. Since the musician can be continually varying his pressure, devices that generate Aftertouch typically send out many such messages while the musician is varying his pressure. Upon receiving Aftertouch, many devices typically use the message to vary a note's VCA and/or VCF envelope sustain level, or control LFO amount and/or rate being applied to the note's sound generation circuitry. But, it's up to the device how it chooses to respond to received Aftertouch (if at all). If the device is a MultiTimbral unit, then each one of its Parts may respond differently (or not at all) to Aftertouch. The Part affected by a particular Aftertouch message is the one assigned to the message's MIDI channel.

    Status

    0xA0 to 0xAF where the low nibble is the MIDI channel.

    Data

    Two data bytes follow the Status.

The first data byte is the note number. There are 128 possible notes on a MIDI device, numbered 0 to 127 (where Middle C is note number 60). This indicates to which note the pressure is being applied.

    The second data byte is the pressure amount, a value from 0 to 127 (where 127 is the most pressure).

    Errata

    See the remarks under Channel Pressure.

    Controller

    Category: Voice

    Purpose

Sets a particular controller's value. A controller is any switch, slider, knob, etc, that implements some function (usually) other than sounding or stopping notes (ie, which are the jobs of the Note On and Note Off messages respectively). There are 128 possible controllers on a MIDI device. These are numbered from 0 to 127. Some of these controller numbers are assigned to particular hardware controls on a MIDI device. For example, controller 1 is the Modulation Wheel. Other controller numbers are free to be arbitrarily interpreted by a MIDI device. For example, a drum box may have a slider controlling Tempo which it arbitrarily assigns to one of these free numbers. Then, when the drum box receives a Controller message with that controller number, it can adjust its tempo. A MIDI device need not have an actual physical control on it in order to respond to a particular controller. For example, even though a rack-mount sound module may not have a Mod Wheel on it, the module will likely still respond to and utilize Modulation controller messages to modify its sound. If the device is a MultiTimbral unit, then each one of its Parts may respond differently (or not at all) to various controller numbers. The Part affected by a particular controller message is the one assigned to the message's MIDI channel.

    Status

    0xB0 to 0xBF where the low nibble is the MIDI channel.

    Data

    Two data bytes follow the Status.

The first data byte is the controller number (0 to 127). This indicates which controller is affected by the received MIDI message.

    The second data byte is the value to which the controller should be set, a value from 0 to 127.
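For example, in the same notation as the earlier Note On and Note Off examples, a message setting the Mod Wheel (controller number 1) to roughly half way on channel 0 would be:

0xB0 0x01 0x40 Controller/chan 0, controller 1 (Mod Wheel), value 64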

    Errata

An All Controllers Off controller message can be used to reset all controllers (that a MIDI device implements) to default values. For example, the Mod Wheel is reset to its "off" position upon receipt of this message.

See the list of Defined Controller Numbers for more information about particular controllers.

    Program Change

    Category: Voice

    Purpose

To cause the MIDI device to change to a particular Program (which some devices refer to as Patch, or Instrument, or Preset, or whatever). Most sound modules have a variety of instrumental sounds, such as Piano, Guitar, Trumpet, Flute, etc. Each one of these instruments is contained in a Program. So, changing the Program changes the instrumental sound that the MIDI device uses when it plays Note On messages. Of course, other MIDI messages also may modify the current Program's (ie, instrument's) sound. But the Program Change message actually selects which instrument currently plays. There are 128 possible program numbers, from 0 to 127. If the device is a MultiTimbral unit, then it usually can play 16 "Parts" at once, each receiving data upon its own MIDI channel. This message will then change the instrument sound for only that Part which is set to the message's MIDI channel.

For MIDI devices that don't have instrument sounds, such as a Reverb unit which may have several Preset "room algorithms" stored, the Program Change message is often used to select which Preset to use. As another example, a drum box may use Program Change to select a particular rhythm pattern (ie, drum beat).

    Status

    0xC0 to 0xCF where the low nibble is the MIDI channel.

    Data

    One data byte follows the status. It is the program number to change to, a number from 0 to 127.
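For example, in the same notation as the earlier examples, a Program Change selecting program number 5 on channel 0 would be:

0xC0 0x05 Program Change/chan 0, program number 5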

    Errata

On MIDI sound modules (ie, whose Programs are instrumental sounds), it became desirable to define a standard set of Programs in order to make sound modules more compatible. This specification is called the General MIDI Standard.

Just like with MIDI channels 0 to 15 being displayed to a musician as channels 1 to 16, many MIDI devices display their Program numbers starting from 1 (even though a Program number of 0 in a Program Change message selects the first program in the device). On the other hand, this approach was never standardized, and some devices use vastly different schemes for the musician to select a Program. For example, some devices require the musician to specify a bank of Programs, and then select one within the bank (with each bank typically containing 8 to 10 Programs). So, the musician might specify the first Program as being bank 1, number 1. Nevertheless, a Program Change of number 0 would select that first Program.

    Channel Pressure

    Category: Voice

    Purpose

While notes are playing, pressure can be applied to all of them. Many electronic keyboards have pressure sensing circuitry that can detect with how much force a musician is holding down keys. The musician can then vary this pressure, even while he continues to hold down the keys (and the notes continue sounding). The Channel Pressure message conveys the amount of overall pressure on the keys at a given point. Since the musician can be continually varying his pressure, devices that generate Channel Pressure typically send out many such messages while the musician is varying his pressure. Upon receiving Channel Pressure, many devices typically use the message to vary all of the sounding notes' VCA and/or VCF envelope sustain levels, or control LFO amount and/or rate being applied to the notes' sound generation circuitry. But, it's up to the device how it chooses to respond to received Channel Pressure (if at all). If the device is a MultiTimbral unit, then each one of its Parts may respond differently (or not at all) to Channel Pressure. The Part affected by a particular Channel Pressure message is the one assigned to the message's MIDI channel.

    Status