
Important Synthesis Modules and Audio Production

For music producers, understanding synthesis and its modules helps us to hear and understand timbre and sound. In synthesis we create sounds from scratch, which helps us to hear where a sound comes from and how it evolves as it passes through the different modules.

Also, as music producers the chances are high that we will be working with synthesizers and samplers, which now play important roles in many forms of music. There has been a recent reemergence of modular synthesizers such as the famous Moog, which has been around since the early days of synthesis. The modules used in these historic models are still the basis of many modern versions – both hardware and software.

Fundamentals of Synthesis

The fundamentals of synthesis come down to five modules and the way they are connected. These are the building blocks that most synthesizers are built upon: the oscillator, filter, amplifier, LFO and envelope.

Oscillator (VCO)

The oscillator creates the sound, based on a geometric waveform such as a sine, sawtooth, square, triangle or noise. This raw sound can be very bright. The oscillator is also known as a voltage-controlled oscillator (VCO).

The oscillator’s sound can change (modulate) over time. Most commonly its pitch/frequency is controlled by a keyboard: each key sets a different pitch, so moving from key to key changes the sound over time.

The sound also changes based on the wave shape that is used. The square wave sounds hollow (because it is missing the even harmonics), much like a clarinet. The sawtooth is bright and sustaining (containing all the upper harmonics) and can emulate a bowed string instrument such as a violin or cello.
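
To make the wave shapes concrete, here is a minimal sketch in Python with NumPy (the sample rate, pitch and variable names are just illustrative choices, not taken from any particular synthesizer):

```python
import numpy as np

SAMPLE_RATE = 44100      # samples per second (arbitrary choice for this sketch)
FREQ = 220.0             # oscillator pitch in Hz
DURATION = 1.0           # seconds

t = np.arange(int(SAMPLE_RATE * DURATION)) / SAMPLE_RATE
phase = FREQ * t         # number of cycles elapsed at each sample

sine = np.sin(2 * np.pi * phase)
square = np.sign(sine)                       # odd harmonics only: hollow, clarinet-like
saw = 2 * (phase - np.floor(phase + 0.5))    # all harmonics: bright, string-like
triangle = 2 * np.abs(saw) - 1               # odd harmonics that fall off quickly: mellower
noise = np.random.uniform(-1, 1, len(t))     # no pitch at all: useful for percussion and wind
```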

Filter (VCF)

The sound travels from the oscillator to the filter, also known as the voltage-controlled filter (VCF). The job of the filter is to remove unwanted frequencies, with the low-pass filter playing an important role: it allows the lows through while attenuating the highs above its cutoff. This smooths out the bright oscillator sound, making it more natural. The filter is often considered the most important part of the synthesizer – the classic Moog synthesizer is renowned for the great sound of its filter.

Filters can be switched from low-pass to high-pass, band-pass or band-stop (notch). The filter can be modulated – most often via the filter cutoff – and lowering the cutoff dulls the sound.

Resonance (feedback) is an important facet of the filter. Turning up the resonance emphasizes the frequencies around the cutoff point.
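
As a rough illustration of cutoff and resonance, here is a sketch using NumPy/SciPy and the widely known “audio EQ cookbook” biquad formulas – a generic digital filter, not a model of any particular hardware design; the cutoff and Q values are arbitrary examples:

```python
import numpy as np
from scipy.signal import lfilter

def resonant_lowpass(signal, sample_rate, cutoff_hz, q):
    """Biquad low-pass: lets the lows through and attenuates highs above cutoff_hz.
    Raising q (resonance) emphasizes the frequencies around the cutoff."""
    w0 = 2 * np.pi * cutoff_hz / sample_rate
    alpha = np.sin(w0) / (2 * q)
    cos_w0 = np.cos(w0)
    b = np.array([(1 - cos_w0) / 2, 1 - cos_w0, (1 - cos_w0) / 2])
    a = np.array([1 + alpha, -2 * cos_w0, 1 - alpha])
    return lfilter(b / a[0], a / a[0], signal)

# Example: tame a bright sawtooth with a 1 kHz cutoff and a fairly strong resonance.
sr = 44100
t = np.arange(sr) / sr
saw = 2 * (220 * t - np.floor(220 * t + 0.5))
smoother = resonant_lowpass(saw, sr, cutoff_hz=1000, q=4.0)
```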

Amplifier (VCA)

After the oscillator and filter the sound reaches the amplifier, which controls the volume of the sound and how it develops over time (modulation). It is also known as a voltage-controlled amplifier (VCA).

The VCA is an amplifier whose gain is set by the voltage level of a control signal. In practice it usually attenuates the sound rather than amplifying it. The VCA determines the instantaneous volume level of a played note, and it quiets the output at the end of the note.

Envelope

The envelope creates a shape that runs every time a key is pressed. It mostly controls the main amplifier – this is how we make the amplitude change over time. The result is a percussive shape or a sustaining shape. Envelopes can also control pitch, filters and other parameters.

The envelope traces a specific shape every time a key is pressed: the level goes up, comes down, sustains and disappears. The shape takes the form of ADSR, which stands for attack time, decay time, sustain level, and release time.

In a synthesizer the attack softens the beginning of the note: the more you increase the attack time, the more slowly the note develops. The attack takes the level up to 100%, then the sound decays down to the sustain level, which is a steady state, and once the key is released it falls to zero – very much like holding a key down and then letting it go.
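
Here is a minimal sketch of an ADSR shape in Python/NumPy, assuming the envelope is multiplied sample-by-sample with the amplifier’s input; the parameter values are arbitrary examples:

```python
import numpy as np

def adsr(attack, decay, sustain_level, release, note_length, sample_rate=44100):
    """Build an ADSR amplitude envelope. Times are in seconds, sustain_level is 0..1,
    and note_length is how long the key is held before release."""
    a = np.linspace(0, 1, int(attack * sample_rate), endpoint=False)              # rise to 100%
    d = np.linspace(1, sustain_level, int(decay * sample_rate), endpoint=False)   # fall to the sustain level
    hold = max(note_length - attack - decay, 0)
    s = np.full(int(hold * sample_rate), sustain_level)                           # steady state while the key is held
    r = np.linspace(sustain_level, 0, int(release * sample_rate))                 # fall to zero on release
    return np.concatenate([a, d, s, r])

# A percussive shape (fast attack, no sustain) versus a slow, sustaining pad shape.
pluck = adsr(attack=0.005, decay=0.2, sustain_level=0.0, release=0.1, note_length=0.25)
pad   = adsr(attack=0.8,   decay=0.3, sustain_level=0.7, release=1.5, note_length=2.0)
# Multiplying an oscillator's output by one of these envelopes shapes its volume over time.
```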

Low Frequency Oscillator (LFO)

The LFO is a modulator which creates cyclic variations in any other parameter. Often it controls the pitch of the oscillator as a vibrato, which gives the sound a warmer, richer quality, much like the vibrato of a string instrument. LFOs can also control filters, amplifiers, other LFOs and other parameters.

We usually cannot hear the LFO itself, as it is used as a control signal for changing parameters of synthesizers or effects. An LFO (low frequency oscillator) usually runs below 20 Hertz; the vibrato of a singer or string instrument is in the three to six Hertz range.

There can be many more of these individual modules in a complex synthesizer, and they can connect in a large variety of ways. The LFO also uses wave shapes such as square, sawtooth and sine.
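
As a sketch of an LFO at work (Python/NumPy again; the 5 Hz rate and 6 Hz depth are arbitrary, vibrato-like values):

```python
import numpy as np

sr = 44100
t = np.arange(2 * sr) / sr         # two seconds of time values

# The LFO: a 5 Hz sine, well below the audible range, used purely as a control signal.
lfo_rate = 5.0                     # Hz - within the 3-6 Hz range of a natural vibrato
lfo_depth = 6.0                    # how far the pitch deviates, in Hz
lfo = lfo_depth * np.sin(2 * np.pi * lfo_rate * t)

# The audible oscillator: a 440 Hz sine whose frequency is modulated by the LFO.
base_freq = 440.0
inst_freq = base_freq + lfo                       # instantaneous frequency at each sample
phase = 2 * np.pi * np.cumsum(inst_freq) / sr     # integrate frequency to get phase
vibrato_tone = np.sin(phase)
# We never hear the 5 Hz LFO itself - only its effect on the 440 Hz tone.
```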

Synthesis Modules and Audio Production

We have looked at the five most important synthesis modules – the oscillator, filter, amplifier, envelope, and LFO – and how they relate to audio production. We have observed how these help us understand timbre and sound, and also the importance of synthesizers in modern day music.


The Delay Spectrum and Audio Production

Audio delays are all around us. Sound bounces off surfaces and reaches our ears at different times – which we can often perceive. While recording it’s important to understand the role of the delay spectrum and the impact it can have. A slight delay can influence the sounds we are trying to produce – some of this we can use to our benefit, some we try to avoid.

Audio Delays

In audio production, delay is an effect where the original audio signal is followed closely by a delayed repeat. The delay time can be as short as a few milliseconds or as long as several seconds, and a delay effect can include a single delay or multiple delays. Delay can be applied either by the musician or within the DAW. In music and audio production, delay forms the basis of effects such as reverb, chorus, phasing and flanging.

Comb Filtering

A very short delay mixed with the original signal carves a deep notch into the frequency spectrum. As the delay time is increased (by a few milliseconds) we get a series of harmonically related notches. This is known as comb filtering. Comb filtering can cause problems when we are recording. When you set up a microphone you need to be aware that the sound will come directly from the instrument or voice – but it will also be reflected off nearby surfaces before hitting the mic, producing delayed signals along with the original signal. Comb filtering can also occur when you set up two or more mics to record one source if they are located at different distances from it. For example, this can be an issue when recording drums with multiple mics.

If multiple versions of a signal are recorded with a slight delay between them, certain frequencies will be reinforced and others will be cancelled. A frequency-response plot would show a sequence of peaks and dips extending up the audio spectrum, their positions depending on the time difference between the two waveforms.
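
To see where those peaks and dips land, here is a small Python/NumPy sketch that mixes a signal with a delayed copy of itself; the 1 millisecond delay is an arbitrary example:

```python
import numpy as np

sr = 44100
delay_ms = 1.0                               # delay between the two copies
delay_samples = int(sr * delay_ms / 1000)

noise = np.random.uniform(-1, 1, sr)         # a broadband test signal
delayed = np.concatenate([np.zeros(delay_samples), noise[:-delay_samples]])
combined = noise + delayed                   # dry signal plus its delayed copy

# The dips (notches) fall at odd multiples of 1 / (2 * delay): for a 1 ms delay that is
# 500 Hz, 1500 Hz, 2500 Hz, ... while whole multiples of 1 / delay (1 kHz, 2 kHz, ...)
# are reinforced - the evenly spaced, comb-shaped response described above.
notches = [(2 * k + 1) * 1000 / (2 * delay_ms) for k in range(5)]
print(notches)    # [500.0, 1500.0, 2500.0, 3500.0, 4500.0]
```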


A small delay can have a serious impact on a recording, so it’s important to give careful consideration to mic placement and nearby flat surfaces to reduce the possibility of comb filtering. When the delay time is increased to around ten milliseconds the sound starts to buzz and a pitch can be heard. At around 35 milliseconds or above we hear two separate sounds (an echo).

Creating Pitches with Delay

When short delays are combined with feedback, the result can be an audible pitch. If you increase the delay time (with feedback still applied) the perceived frequency/pitch decreases until it is no longer heard as a pitch and evolves into two separate sounds.
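
The pitch we hear is roughly the inverse of the delay time, because the feedback loop repeats once per delay period. A quick back-of-the-envelope check in Python (the delay times here are arbitrary examples):

```python
# Approximate perceived pitch of a short delay with feedback: one repeat per delay period.
for delay_ms in (2, 5, 10, 35):
    freq_hz = 1000 / delay_ms
    print(f"{delay_ms} ms delay -> about {freq_hz:.0f} Hz")
# 2 ms  -> 500 Hz (a clear pitch)
# 10 ms -> 100 Hz (a low buzz)
# 35 ms -> ~29 Hz (heard more as separate repeats than as a pitch)
```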

We observe that delay and frequency are related. This theoretical relationship enters reality when we introduce audio effects – chorus, phasers, flangers, short delays, long delays, reverbs and filters into the mix.

Slapback Delay

Slapback delay occurs when the delay time is lengthened to the 40–120 millisecond range and mixed against the dry signal. This is a medium, static delay and helps to add a sense of space to the signal – the equivalent of playing in a room. It is often used as a subtler alternative to reverb.

An example would be to set the delay time to around 110 milliseconds and the wet level to about 20%, which will produce a slapback version of the audio signal.
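
A minimal sketch of that setting, assuming a simple single-tap delay in Python/NumPy rather than any particular plugin:

```python
import numpy as np

def slapback(dry, sample_rate, delay_ms=110, wet=0.2):
    """Single-tap slapback: the dry signal plus one delayed repeat mixed in at 'wet'."""
    d = int(sample_rate * delay_ms / 1000)
    delayed = np.concatenate([np.zeros(d), dry])        # the delayed copy, padded at the start
    padded_dry = np.concatenate([dry, np.zeros(d)])     # pad the dry signal so lengths match
    return padded_dry + wet * delayed

# Usage: mixed = slapback(vocal_track, 44100)   # vocal_track being a NumPy array of samples
```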

This type of delay was used extensively in the 1950s specifically in pop, surf and rockabilly recordings.

Synchronized Long Delays

When we hear long delays we experience a distinct echo in the sound – like being in a large room or a canyon. These types of delays are used to put emphasis on a particular vocal or instrument. Often they are used in conjunction with the tempo of the music; to do this we don’t set the delay time in milliseconds but rather in musical units such as quarter notes and eighth notes. This keeps the delay working with the music.
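
Behind the scenes the DAW converts those musical units into milliseconds from the tempo: a quarter note lasts 60,000 / BPM milliseconds. A quick sketch of the conversion (the 120 BPM tempo is an arbitrary example):

```python
def delay_time_ms(bpm, note_value):
    """Delay time in milliseconds for a musical note value at a given tempo.
    note_value: 1 = whole note, 4 = quarter note, 8 = eighth note, and so on."""
    quarter_ms = 60000 / bpm              # one quarter note in milliseconds
    return quarter_ms * 4 / note_value

for note in (4, 8, 16):
    print(f"1/{note} note at 120 BPM = {delay_time_ms(120, note):.0f} ms")
# 1/4 note = 500 ms, 1/8 note = 250 ms, 1/16 note = 125 ms
```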

The stereo spectrum is also important, and the delayed signal does not have to sit in the same place as the original dry signal. We can create a distinction between the dry signal and the delayed wet signal by applying filter effects to the repeated signal, and the delayed version may be panned to the sides of the mix. An example of this is the ping pong delay, which bounces the sound back and forth between left and right.

One issue with long delays is that they can clash with the harmony of the piece of music, so they should be used wisely and with consideration for the material. This type of delay can be heard in psychedelic, ambient and electronic music.

The Delay Spectrum and Audio Production

We have looked at the delay spectrum and specifically comb filtering, creating pitches with delays, slapback delays, and synchronized long delays. We have observed that delays can be a very useful tool for audio producers especially in the mixing process. We do have to be aware that there are dangers associated with delay – in particular comb filtering. As music producers we should strive to use these types of effects to add to the music we are producing – rather than using them because we can.

 

Dynamic Processors – Threshold, Ratio, Attack and Release

Dynamic processors are effects used in the post-production/mixing stage of sound recording. They are designed to manipulate the dynamic range of recorded sound. This involves adjusting the quietest and loudest levels of the audio.

Dynamic Processors can either reduce dynamic range by the process of compression or increase dynamic range through the process of expansion.

Dynamics can also be managed manually using various techniques, including isolating sections of the recording and adjusting their gain, or by ‘riding the fader’, i.e. manually adjusting the fader volume throughout the performance in the DAW. This can be successful, but it becomes cumbersome if you later decide to reduce the overall level of an individual track. This is where dynamic processors are useful, as they can fully automate the process.

Compression

Compression means either making the loud audio quieter or making the quiet audio louder. The end result is a decrease in the dynamic range, which for music production purposes gives us more consistency throughout the project. Examples include compressors and limiters.

Expanders

Expanders, as the name implies, are used to expand the dynamic range of a piece of audio. This means decreasing the levels of the quiet areas or increasing the levels of the loud areas. Examples include expanders and noise gates. Usually they are set up to decrease the quiet levels.

Compressors, limiters, expanders, and gates are nonlinear devices. They have a common set of parameters – threshold, attack, release and ratio – but with different rules.

All dynamic processors have one section designed to analyze the input signal (the side chain, or key, section), and one section that acts as an automated volume fader.

Parameters of Dynamic Processors

Threshold

This is the level at which the dynamic processor starts to function. If sound levels cross over the threshold point then they are processed and other parameters may kick into effect.

Compressors and limiters will process sound that goes above the threshold point. Expanders and gates will process sound that goes below the threshold point.

Threshold levels will always need to be adjusted – you can’t rely on a preset to set the threshold, as it is related to the underlying musical material.

Ratio

Ratio determines how much the level is affected once it crosses the threshold point. It is expressed in the form Input:Output.

At a ratio of 1:1 – for a signal level of 1 dB over the threshold, 1 dB will be output. This means that there is no change.

At a ratio of 2:1 – for a signal level 2 dB over the threshold, 1 dB will be output. This means that the level above the threshold is halved.

As the ratio increases, the amount of compression increases. At a ratio of 10:1 – for a signal level 10 dB over the threshold, 1 dB will be output. In the case of compression, this ratio or higher is considered a limiter. At a ratio of 30:1 – for a signal level 30 dB over the threshold, 1 dB will be output. This effectively stops the signal from rising above the threshold and is known as a brick wall limiter.
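
A sketch of the static gain calculation behind those ratios (levels in dB, ignoring attack and release for the moment; the -20 dB threshold is an arbitrary example):

```python
def compressed_level(input_db, threshold_db, ratio):
    """Static compression curve: below the threshold the signal is untouched;
    above it, every 'ratio' dB of input yields only 1 dB of output."""
    if input_db <= threshold_db:
        return input_db
    return threshold_db + (input_db - threshold_db) / ratio

# With a threshold of -20 dB, an input peaking at -10 dB (10 dB over the threshold):
print(compressed_level(-10, -20, 2))     # -15.0 -> only 5 dB over the threshold (2:1)
print(compressed_level(-10, -20, 10))    # -19.0 -> only 1 dB over (10:1, limiting)
print(compressed_level(-10, -20, 30))    # about -19.7 -> barely over (brick wall territory)
```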

Attack

The attack time determines how quickly the volume fader kicks in once the signal has crossed the threshold. Attack is usually shown in milliseconds (ms). The lower the attack time, the faster the fader will move. This controls how much of the transient is allowed through unaffected by the dynamic processor. A transient is a part of the sound where the amplitude changes greatly in a short amount of time.

Release

The release time indicates how quickly the volume fader returns to normal once the signal has crossed back over the threshold. Release is also expressed in milliseconds (ms). The lower the release time, the faster the fader will move.
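
One common way to implement attack and release is to smooth the gain changes with two one-pole filters, one time constant for each direction. A minimal sketch, assuming gain values expressed in dB of gain reduction (the coefficient formula is one standard choice, not the only one):

```python
import math

def smooth_gain(target_gain_db, sample_rate, attack_ms, release_ms):
    """Smooth a per-sample list of target gains so the 'fader' cannot move instantly:
    it falls at the attack rate (signal over threshold) and recovers at the release rate."""
    attack_coef = math.exp(-1.0 / (sample_rate * attack_ms / 1000))
    release_coef = math.exp(-1.0 / (sample_rate * release_ms / 1000))
    gain = 0.0                     # 0 dB = no gain reduction
    smoothed = []
    for target in target_gain_db:
        coef = attack_coef if target < gain else release_coef
        gain = coef * gain + (1 - coef) * target
        smoothed.append(gain)
    return smoothed

# Shorter attack_ms or release_ms gives a smaller coefficient, so the fader moves faster.
```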

Dynamic Processors and Audio Production

All of these dynamic processor parameters (threshold, ratio, attack and release) can be set up differently and this can have a major impact on the end result of the sound. And as they are all interrelated it can make things even more challenging. A tweak in one area can dramatically influence the others. Therefore as audio producers it is essential that we understand each component and the influence they can have on each other. And at the end of the day it is essential that we use these tools in a musical way – and help improve the audio, the mix and the music that we are helping to produce.

 

The Mixing Board Channel Strip

In music production the mixing board can be quite intimidating for new users – whether it’s an actual physical mixing board or one incorporated in your DAW software. There is an abundance of knobs, buttons and faders – what should you use? How do you get started?

The Mixing Board

The good news is the mixing board does not have to be intimidating. The secret is to take it section by section and understand what things mean and how everything works. A good start is to understand the mixing board channel strip which is duplicated many times across the board.

The Channel Strip

One important thing to understand is that sound generally moves from the top of the strip down to the bottom: the inputs are at the top and the outputs are at the bottom of the strip. That is a general statement, however, and there are places where the flow is not strictly top to bottom.

At the top of the strip is the input section. It has an XLR input for a microphone and a line input for a line-level source, which could use either a balanced (TRS) or unbalanced (TS) cable. There is also a trim (or gain) knob which sets the incoming signal level – much like setting levels on an audio interface.

Input Section with Insert

Below the input section we have the insert section. This is where we can add external devices such as EQ or compression into the signal flow. With an analog mixer we would use an insert cable – a cable with a single TRS connector on one end split into two TS connectors (a send and a return) on the other.

Insert Cable

Below the insert section is a series of knobs which usually make up two sections – the EQ section and the aux sends.

The EQ section gives us some general control of highs, mids and lows. This section may be missing in a DAW mixer where you may rely upon digital EQ inserts or effects.

EQ and Pan Knobs

The aux sends provide separate outputs for the track. These can be used to send the signal to headphones, effects or more than one destination. This area functions as a secondary mix (or several secondary mixes), which can be useful in live situations for individual monitoring of musicians on stage.

Aux Sends

Then we have the pan knob, which is used to change the level between the two output channels – this is a mono channel strip with a single input but a stereo output. The pan knob controls the relative levels: if we pan fully to the right we will hear the sound out of the right speaker only, and if we pan fully to the left we’ll hear sound only out of the left speaker.

In the DAW mixer the channel strip is usually stereo, so the pan control doesn’t move the signal from one side to the other – it acts more like a balance control. Be aware that if you pan to the right it will reduce the level of the left channel.

Next we have the mute button, used to turn off the sound, and the solo button, used to isolate the track from the other tracks in the mix. In the DAW, turning on a solo button effectively mutes all the other tracks, and in some DAWs you can have multiple solo buttons selected at one time.

Solo and Mute Buttons

Below this are the main faders. On an analog mixing board there may be a unity gain marking, at which the fader will neither amplify nor attenuate the signal. Our aim is to keep levels at unity if possible.

Main Faders

We are now at the bottom of the channel strip. This section repeats across the mixing board, and each fader’s level combines into the mix at the master bus (the master section in the DAW). The mixer LEDs help guide the level of the entire mix. As a rule of thumb: keep it in the green, peak into the yellow, and never enter the red.

The mixing board in the DAW is mainly the same, but there are some differences. In the DAW the signal flow is also located in channel strips and is configured using drop down menus.

DAW Channel Strips

In the DAW it’s a good idea to always name your tracks and to be aware of the signal flow and each track’s inputs and outputs. As there are no cables, it is essential to have a good mental image of the signal flow. Outputs can be hardware outputs for headphones, speakers etc., or the sound can be routed to a bus – an internal path where sound is collected and re-routed within the computer.

Inserts are the sections where we can add effects such as gates, compressors and EQs to the track.

Sends can exist in two places: before the fader (pre-fader) or after the fader (post-fader). We should be aware of the send section on the mixing board. It will usually go to a bus or hardware output. We’ll have a send level (how much signal is going to that output), and usually there will be a button for pre or post, which controls where that send sits within the signal flow.

Then there are the volume and pan controls. The pan knob controls the output of the signal in the left or right channel, and the volume fader controls the main output volume of the channel. This is the basic signal flow through the DAW.

The Analog to Digital Conversion Process

Overview of Sound

We have established that sound is pressure variations in the air – a continuously variable signal.

Computers cannot work with this kind of information directly.

Computers can only understand binary information – that is, strings of numbers represented as 1s and 0s.

Analog to digital conversion is the process of changing continuously variable sound into a stream of ones and zeros (binary information). This process is known as sampling.

Analog to digital conversion takes place during the recording process: sound is picked up by a microphone, travels to the audio interface and is converted into digital audio, which can be understood by your computer and accessed by your DAW.

 

Sampling

Binary information is based on the bit. A single bit is a 1 or a 0, and every number is a collection of those ones and zeros.

The number of bits determines the maximum number of states, or the biggest number that you can represent.

A single bit can represent two states, such as ‘on or off’, ‘heads or tails’, or the numbers 0 and 1.

To represent larger numbers we collect bits into words (a collection of bits). In music production, MIDI data uses 7-bit words, while digital audio commonly uses 16-bit words.

Two to the power of the word length gives the number of values the word can represent. For example, 4 bits (2 to the power of 4) gives 16 values. Adding one bit doubles the resolution, adding two quadruples it, and so on. This increases quickly – 16 bits gives 65,536 values.
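
A quick check of those numbers in Python:

```python
# Number of distinct values a word of a given length can represent: 2 ** bits
for bits in (1, 4, 7, 16, 24):
    print(f"{bits:2d} bits -> {2 ** bits:,} values")
# 1 bit -> 2, 4 bits -> 16, 7 bits (MIDI) -> 128,
# 16 bits (CD) -> 65,536, 24 bits (studio) -> 16,777,216
```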

 

Digital Audio Sound

These are the standards for digital audio sound:

CD – 16 bits

Studio – 24 bits

We record in the studio at a greater bit depth than what ends up on the finished CD. This means there is a wider dynamic range (resolution) while we are recording, so we can record at a quieter level – which decreases the possibility of distortion.

There are two really important parameters in digital audio: word length, which is related to amplitude, and sampling rate, which is related to frequency.

 

Set your DAW Bit Depth

Bit depth – 24 bits

Be aware that it’s best not to alter this setting part way through a project, as this can create problems.

Sampling rate refers to the number of measurements taken per second during recording. The sampling rate can be over 40,000 measurements per second, which is what it takes to accurately represent the continuously variable signals in the air as digital audio. The higher the sampling rate, the higher the frequency that can be represented digitally.

 

CD Sampling Rate

CD Sampling rate is 44,100 hertz

The highest frequency that can be represented is half the sampling rate. For example, with the 44,100 hertz sampling rate used for music CDs, the highest representable frequency is 22,050 hertz. This is above the top end of the human hearing range, which is around 20,000 hertz, so the CD sampling rate covers everything humans can hear. Another common sampling rate is 48,000 hertz, which is used in video production and editing.
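
In other words, the highest representable frequency (the Nyquist frequency) is simply the sampling rate divided by two:

```python
# Highest frequency each common sampling rate can represent (sampling rate / 2)
for rate_hz in (44100, 48000, 96000):
    print(f"{rate_hz} Hz sampling rate -> audio up to {rate_hz / 2:.0f} Hz")
# 44100 -> 22050 Hz (covers the roughly 20 Hz - 20,000 Hz range of human hearing)
# 48000 -> 24000 Hz (the common video standard)
# 96000 -> 48000 Hz
```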

 

Set your DAW Sampling Rate

Sampling rate – 48,000 hertz

When recording watch out for sample rate mismatch as this can speed up or slow down the audio.

Bear in mind that even a single bad sample can have a dramatic impact on the quality of your audio (clicks, pops etc.), so listen very carefully on playback.

 

In Conclusion

Our understanding of the analog to digital conversion process helps us to excel in music production. It gives us a clear view of how the analog sounds we hear and record are converted into a digital form that computers can work with. We know the importance of bit depth and sampling rate and how they relate to amplitude and frequency, and we now know the optimal settings for bit depth and sampling rate to make high quality recordings with our DAWs.

Sound Principles and Music Production

Propagation, amplitude, frequency and timbre are all sound principles that we need to understand as they relate to music production. Most of these principles can also be related to our everyday lives and how we listen, communicate and interact with others. In many cases people are oblivious to the theory behind important concepts that relate to sound and its production – however, as music producers we need to have a firm understanding of these concepts and what they mean for music production. To do this we shall take a closer look at sound, propagation, amplitude, frequency and timbre.

Sound

Sound is a disturbance of the atmosphere that humans and other living creatures can hear. Humans typically hear sound frequencies between approximately 20 Hz and 20,000 Hz. Other species have different hearing ranges. We have developed culture and technology (such as music, radio, phones and computers) that allows us to generate, record, transmit, and broadcast sound. As we are learning about music production it’s important to have a good understanding of different concepts relating to sound.

Propagation

Sound propagates or moves through various media such as the air, concrete, brick, wood, water etc. Because of the different densities and make up of these media – sound moves through them at different speeds. During propagation sound waves can be reflected, refracted, or attenuated by the medium.

Sound propagation through air can be altered slightly by factors such as elevation, temperature and humidity. Generally speaking, however, sound travels at about 340 meters per second – roughly one mile in five seconds.

Propagation and Music Production

In music production it is useful to understand the propagation and reflection of sound as it relates to the recording studio or space. Sound moves through the space and reflects off different surfaces, which affects when and how it reaches the listener or microphone. When we are recording and mixing we can create a sense of space through the room and the way sound propagates within it.

Certain sound effects such as delay, reverb, phasers and flangers also relate to the concept of propagation and how sound moves.

Amplitude

Amplitude is a property of the sound wave: it is the size of the sound vibration, and this largely determines how loud the sound is. Higher amplitude makes a louder sound. This differs from ‘loudness’, which is our perception of the sound and is also affected by duration and frequency.

There are many types of waves; in air, the direction of the vibration is parallel to the direction of propagation.

As the air compresses and rarefies, the sound takes the form of a longitudinal wave. Amplitude is the extent of the compression and rarefaction of the wave in the air. Because longitudinal waves are hard to draw, they are usually diagrammed as transverse waves.

Amplitude and Music Production

Amplitude is measured in decibels, and there are many places in our production signal flow where we can measure it. In air it is measured in decibels of sound pressure level (dB SPL), where zero is the quietest level and values go up as the sound gets louder. In the computer it is measured in decibels full scale (dBFS), where zero is the loudest level and values go down (into negative numbers) from there. It’s therefore good to know the context of the decibels when discussing amplitude.
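
As a small illustration of the dBFS scale, here is the standard conversion from a sample amplitude (a fraction of full scale) to decibels full scale, sketched in Python:

```python
import math

def to_dbfs(sample_amplitude):
    """Convert a sample amplitude (0.0 to 1.0 of full scale) to decibels full scale."""
    return 20 * math.log10(abs(sample_amplitude))

print(to_dbfs(1.0))    #   0.0 dBFS - the loudest level the system can represent
print(to_dbfs(0.5))    #  about -6 dBFS - half of full scale
print(to_dbfs(0.1))    # -20.0 dBFS
```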

When we are mixing tracks amplitude is important as we control it in relation to the amplitude and panning of other tracks. The goal is to get a pleasing mix of the different amplitude levels of tracks.

When producing music, dynamic plugins help to control the amplitude of sound signals over time. These include expanders, gates, compressors, and limiters.

The ‘dynamic range’ of your recording equipment is also related to amplitude: it describes the range of decibels over which the equipment will reproduce the sound properly, from the noise floor up to the distortion point.

Amplitude can also be important when setting up equipment – setting mic levels and setting output levels etc.

Frequency & Timbre

Frequency is the speed of the sound vibration, and this determines our sense of the pitch of the sound. Frequency is the physical measurement; pitch is our perception of it.

Low frequency has slow moving pulses, high frequency has fast moving pulses.

It’s possible to have a low frequency, high amplitude sound – frequency and amplitude are independent of each other, and neither is correlated with the propagation speed.

This is important because certain audio effects only affect certain parts of a sound – which relates not so much to a single frequency as to timbre, the combination of multiple frequencies that make up a sound.

Simple sounds such as the sine wave contain a single frequency, but most sounds contain multiple frequencies, including harmonics (also known as overtones and partials).

Frequency & Timbre in Music Production

Audio effects that control frequency and timbre include equalizers and filters. These are used to raise and lower different frequencies and manipulate the timbre.

As mentioned earlier the range of human hearing frequencies is 20 hertz to 20,000 hertz. Females and children tend to hear higher frequencies better and we all lose hearing in the high end as we grow older. We also don’t hear equally across the range. This can be shown graphically as a ‘frequency response curve’. Recording gear and microphones also have frequency response curves to show how they act across the spectrum.

Another example relating to frequency: the note A above middle C has a frequency of 440 Hz. It is often used as a reference frequency for tuning musical instruments.
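
Using A = 440 Hz as the reference, the frequency of any equal-tempered note can be worked out; here is a small sketch using MIDI note numbers (where A above middle C is note 69):

```python
def note_to_freq(midi_note, reference_hz=440.0):
    """Frequency of an equal-tempered note, with MIDI note 69 (A above middle C)
    tuned to the reference frequency."""
    return reference_hz * 2 ** ((midi_note - 69) / 12)

print(note_to_freq(69))    # 440.0  - A above middle C
print(note_to_freq(60))    # ~261.6 - middle C
print(note_to_freq(57))    # 220.0  - A an octave lower
```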

Sound Principles and Music Production

Our understanding of sound-related principles such as propagation, amplitude, frequency and timbre is very important to our understanding of music production concepts. We have looked at the theory behind these principles and related them to their relevance in the field of music production. Understanding these sound concepts gives us the building blocks we need to explore more advanced ideas and techniques in the field of music production.