Grow Your Knowledge
Audio, MIDI, Music Production, Live Performance general knowledge and tip & tricks
What is audio latency? How do I fix latency issues while recording?
Latency, in the audio world, is another word for delay.
In music production, audio latency is a noticeable delay between the moment a sound is played and the moment it reaches your speakers. Latency can cause major problems for musicians, music producers, and audio engineers: how can a musician accurately record a track while hearing themselves with a delay in their headphones?
Research suggests it takes between 20 and 30 ms of delay before our brains perceive two sounds as separate. So up to about 10 ms of latency is usually fine; we won't notice it. Beyond that, a musician will start losing the feel of the music, and this will affect their performance.
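A quick way to turn that ~10 ms figure into something practical: driver buffer latency is simply buffer size divided by sample rate, so you can compute the largest power-of-two buffer that stays under the threshold. This is a small sketch with illustrative sample rates; the 10 ms threshold is the rule of thumb from above, not a hard limit.

```python
def max_buffer_under(threshold_ms: float, sample_rate: int) -> int:
    """Largest power-of-two buffer size (in samples) whose single-buffer
    latency stays below threshold_ms at the given sample rate."""
    frames = 1
    while (frames * 2) / sample_rate * 1000 < threshold_ms:
        frames *= 2
    return frames

# Common studio sample rates (illustrative values)
for rate in (44100, 48000, 96000):
    frames = max_buffer_under(10, rate)
    latency_ms = frames / rate * 1000
    print(f"{rate} Hz: up to {frames} samples (~{latency_ms:.1f} ms)")
```

At 44.1 or 48 kHz this lands on a 256-sample buffer, which is indeed a typical low-latency setting in most DAWs.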
Why Does Audio Latency Exist?
So where does latency come from? Let’s look at what happens from the moment a sound is played to the moment you hear it back in your monitors:
- You play an incredible guitar solo. The electrical signal travels through the cable to your audio interface, where it's converted to a digital signal.
- That signal is stored in the input transport buffer.
- It then moves to the ASIO/Core Audio input buffer, where it waits for your DAW to use it.
- Once the DAW has processed it, the data is stored in the ASIO/Core Audio output buffer.
- It's then sent back to the output transport buffer.
- And finally, the data is converted from digital back to electrical signals and sent to your speakers.
That’s quite a long chain of processes, and each of these stages can add a few milliseconds of latency, sometimes more depending on how fast and efficient your drivers are.
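The chain above can be sketched as a simple sum: each buffer stage contributes its fill time, plus a small fixed delay at the converters. The buffer size, sample rate, and converter delay below are illustrative assumptions, not measurements of any specific interface.

```python
def buffer_latency_ms(frames: int, sample_rate: int) -> float:
    """Time (ms) it takes to fill or drain a buffer of `frames` samples."""
    return frames / sample_rate * 1000

SAMPLE_RATE = 44100        # CD-quality sample rate (Hz)
driver_buffer = 256        # assumed ASIO/Core Audio buffer, in samples
converter_delay_ms = 1.0   # assumed A/D plus D/A converter delay

# One driver buffer on the way in, one on the way out,
# plus a converter delay at each end of the chain.
round_trip_ms = (2 * buffer_latency_ms(driver_buffer, SAMPLE_RATE)
                 + 2 * converter_delay_ms)

print(f"Estimated round-trip latency: {round_trip_ms:.1f} ms")
```

Even with these modest numbers the round trip already exceeds the ~10 ms comfort zone, which is why monitoring through the DAW at large buffer sizes feels sluggish.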
Think of a buffer as a waiting room: it's where your machine stores information while it figures out what to do with it. A small "waiting room" needs to be emptied more frequently, and if the information has to go out before the computer is ready to process it, you'll hear glitches in the audio. On the other hand, if the "waiting room" is very large, the computer doesn't need to empty it as often, and latency increases.
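The waiting-room idea can be shown with a toy buffer: the consumer (your DAW's audio callback) must find a full block of samples ready, or you get an underrun, which is the glitch you hear. This is a deliberately simplified sketch, not how a real audio driver is implemented.

```python
from collections import deque

class ToyBuffer:
    """A toy 'waiting room': samples go in, fixed-size blocks come out."""

    def __init__(self, size: int):
        self.size = size          # block size the consumer expects
        self.queue = deque()

    def write(self, samples):     # producer: the audio interface
        self.queue.extend(samples)

    def read(self):               # consumer: the DAW callback
        if len(self.queue) < self.size:
            return None           # underrun -> audible glitch
        return [self.queue.popleft() for _ in range(self.size)]

buf = ToyBuffer(size=4)
buf.write([0.1, 0.2])
print(buf.read())   # None: block not yet full, i.e. a glitch
buf.write([0.3, 0.4, 0.5])
print(buf.read())   # [0.1, 0.2, 0.3, 0.4]: a full block, but delayed
```

A bigger `size` makes underruns rarer (more headroom) at the cost of every block arriving later: exactly the glitch-versus-latency tradeoff described above.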
How To Solve Audio Latency Issues
So what can you do about it? Even though you can’t control each step of the process, there are a few things to check that can help in significantly reducing latency.
- Adjust the buffer size used by your DAW and/or your soundcard. Smaller buffers mean lower latency but a higher CPU load, so the more powerful your system, the smaller the buffer you can safely use. If you're having trouble with latency, lowering the buffer size is the first thing to try.
- Close all other programs running on your computer. The more resources your machine has to process audio, the better your chances of everything working smoothly.
- Make sure that the drivers of your audio interface are up to date. The more efficient your drivers are, the faster they will process the information.
- Check how much CPU your DAW is using and see if you can increase or decrease the processing power assigned to it.
- Use zero-latency monitoring. This option is probably available on your soundcard if you're using a reasonably recent one. Zero-latency monitoring lets you listen to the sound directly from your audio interface before it's passed on to your computer and processed by your DAW.
- Use tracks and plug-ins efficiently. Delete any inactive tracks or plug-ins. Be mindful of your plug-in usage. For example, instead of using four or five different reverb plug-ins, try using only one or two.
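To make the buffer-size tip above concrete, here is the tradeoff it controls: smaller buffers cut latency but force the CPU to service the audio callback more often. The sample rate and buffer sizes are illustrative values you would typically see in a DAW's audio settings.

```python
SAMPLE_RATE = 48000  # Hz, a common studio sample rate (assumption)

def tradeoff(frames: int, sample_rate: int = SAMPLE_RATE):
    """Return (latency in ms, audio callbacks per second) for a buffer size."""
    return frames / sample_rate * 1000, sample_rate / frames

for frames in (64, 128, 256, 512, 1024):
    latency_ms, callbacks = tradeoff(frames)
    print(f"{frames:5d} samples: {latency_ms:5.1f} ms latency, "
          f"{callbacks:6.1f} callbacks/s")
```

Dropping from 1024 to 128 samples cuts this stage's latency by a factor of eight, but the system has to wake up eight times as often, which is why underpowered machines start glitching at small buffer sizes.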
If you're using SWAM instruments, you've probably noticed that they use more processing power than regular plug-ins. This is because SWAM instruments are physically modeled; they don't use traditional sampling technology. That's why having multiple instances of SWAM instruments running at the same time can cause some issues.
There are workarounds to make your use of SWAM plug-ins more efficient. For example, if you'd like to create whole ensembles of SWAM instruments, we recommend working on tracks one at a time, bouncing each track as you complete it, and building the ensemble on top of the previously bounced tracks. This way, you don't need multiple instances of SWAM plug-ins running simultaneously.
Audio latency is a natural phenomenon. By understanding the different stages the sound goes through, it's possible to reduce latency to acceptable levels that won't affect a musician's performance.
Other articles in this category
- The Creation and Evolution of SWAM String Sections
- What is the difference between loading multiple Solo instruments and loading a Section in SWAM String Sections?
- What is a plug-in in music production?
- What do VST, AU, AAX, and AUv3 stand for?
- What is VST? What's the difference between VST, VST2, and VST3?
- What is a DAW and what is a host? Is there a difference between the two?
- What is MIDI? What is CC?
- What does USB Class Compliant mean?
- What is Core Audio? What is ASIO?
- A Beginner's Guide To How Digital Audio Works