
Working with Podcasts

Updated: May 2

I've worked on 900+ podcasts and would like to share my experience with you, along with the technical process behind it.


There are four main audio processes to consider -

  1. Cleaning,

  2. Editing,

  3. Mixing,

  4. Mastering.


Now let's dive into each process.

A spectrogram works best in this process.

  1. CLEANING - This step means removing or correcting:

  • background noises (dirt),

  • electronic noises

  • mouth clicks, smacks, and lip noises,

  • small breaths,

  • plosives,

  • windy rustle,

  • clipping/distortion,

  • gain staging.

We can do more during the cleaning process, like controlling esses (sibilance), applying EQ, etc. But try to avoid doing this here, because we will have much more control during the mixing process.
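Some of the cleaning checks above can even be scripted. Here is a minimal sketch in pure Python, assuming the audio has already been decoded to float samples in the range [-1.0, 1.0]; the 0.99 clip threshold and -6 dBFS gain-staging target are illustrative choices, not fixed rules:

```python
import math

def peak_dbfs(samples):
    """Return the peak level in dBFS for float samples in [-1.0, 1.0]."""
    peak = max(abs(s) for s in samples)
    return -math.inf if peak == 0 else 20 * math.log10(peak)

def count_clipped(samples, threshold=0.99):
    """Count samples at or above the clip threshold (likely distortion)."""
    return sum(1 for s in samples if abs(s) >= threshold)

def gain_to_target(samples, target_dbfs=-6.0):
    """Linear gain factor that would bring the peak to the target level."""
    return 10 ** ((target_dbfs - peak_dbfs(samples)) / 20)
```

A file with many clipped samples can't really be fixed by gain alone; that's when you go back to the source or reach for repair tools.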



2. EDITING - The main goal of editing is to make the audio sound seamless without any unnatural cuts.

Sometimes the speaker stretches words, mispronounces, or stutters without even realizing it, so you might have to find replacement words elsewhere in the recording to cover these mistakes, or even combine parts of various words to make one clean word.


Editing an audio file
Making a word out of small audio files

This process mainly involves removing -

  • Filler Words

  • Retakes

  • Stutters

  • Coughs, Sneezes, Burps

  • Umms and Ahhs

  • Breaths

  • Unwanted noises, e.g. dog barks, door knocks,

  • Gain Staging.

This process requires careful listening and a focus on finding errors: sitting for long hours following the given script and making the best version of the audio.
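The "seamless cut" goal above can be pictured numerically: butting two clips together at full level causes an audible click, so editors overlap them with a short crossfade. A minimal pure-Python sketch, assuming clips are lists of float samples (a real editor would also match timing and pitch):

```python
def crossfade(clip_a, clip_b, overlap):
    """Join two clips with a linear crossfade over `overlap` samples,
    so the splice point has no abrupt level jump (audible click)."""
    if overlap > len(clip_a) or overlap > len(clip_b):
        raise ValueError("overlap longer than a clip")
    head = clip_a[:len(clip_a) - overlap]
    tail = clip_b[overlap:]
    # Fade clip_a out while clip_b fades in across the overlapping region.
    faded = [
        a * (1 - i / overlap) + b * (i / overlap)
        for i, (a, b) in enumerate(zip(clip_a[len(clip_a) - overlap:],
                                       clip_b[:overlap]))
    ]
    return head + faded + tail
```

For example, crossfading a loud clip into silence over two samples ramps the level down smoothly instead of jumping straight to zero.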


AI offers some great tools, but they are not perfect. From sound clarity to word-based editing, these tools need a human touch to be used well, and they sometimes fail even at simple tasks. In the end, you will still spend time cross-checking everything the AI did. Still, we should know when to use these tools.


3. MIXING - This is where the audio comes to life. Clarity is added, and annoying frequencies are carefully controlled. There are hundreds of plugins and a handful of applications that can do this job amazingly well, but it can get overwhelming for beginners mixing their own audio. A few processes typically enhance the audio -

This is how my plugin chain looked in a recent audiobook project.

  • Equalizer (EQ)

  • Gate

  • Compression

  • De-esser

  • Gain Staging


It takes a good amount of experience and expertise to use these plugins flawlessly. Every type of audio is different, and so is every speaker. What to use and when, along with your client's requirements, totally shapes your plugin choices. Even with the same plugins, the order can differ.
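To make one of these steps concrete, here is a minimal downward compressor in pure Python, operating per sample. Real compressor plugins add attack/release smoothing and makeup gain on top of this; the -18 dB threshold and 4:1 ratio here are illustrative, not a recommendation:

```python
import math

def compress_sample(sample, threshold_db=-18.0, ratio=4.0):
    """Reduce level above the threshold by the given ratio.
    E.g. at 4:1, a signal 8 dB over the threshold comes out
    only 2 dB over it."""
    if sample == 0:
        return 0.0
    level_db = 20 * math.log10(abs(sample))
    if level_db <= threshold_db:
        return sample  # below threshold: untouched
    over = level_db - threshold_db
    out_db = threshold_db + over / ratio
    gain = 10 ** ((out_db - level_db) / 20)
    return sample * gain
```

A gate is the mirror image of this idea: instead of turning down loud samples, it mutes or attenuates samples below a threshold.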

Plugins working during the audiobook mixing

4. MASTERING - This process decides whether your audio will be accepted by the streaming platform or not. Each platform has different requirements. Please read this link to understand them better - Podcast Submission Requirements


In a nutshell, this process controls the loudness of your audio and its peaks, so it doesn't sound too loud or too quiet. It also catches sudden jumps in level and trims them down.


Before and after of a mastered audio file
A meter displaying audio levels

Use an audio meter of your choice to check the levels of your audio and see whether it meets the requirements of your streaming platform. If not, make adjustments in the limiter plugin of your choice.
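As a rough sketch of what such a meter checks, consider the following pure-Python version. Two hedges: real podcast specs measure integrated loudness in LUFS with K-weighting and gating (ITU-R BS.1770), while this sketch uses plain RMS as a stand-in; and the -16 dB target with a -1 dB peak ceiling mirrors commonly cited podcast values but is an assumption here, not any platform's official spec:

```python
import math

def rms_dbfs(samples):
    """Plain RMS level in dB. A real loudness meter (LUFS) adds
    K-weighting and gating per ITU-R BS.1770; this is a rough stand-in."""
    mean_sq = sum(s * s for s in samples) / len(samples)
    return -math.inf if mean_sq == 0 else 10 * math.log10(mean_sq)

def meets_spec(samples, target_db=-16.0, tolerance=1.0, peak_ceiling_db=-1.0):
    """Check a hypothetical platform spec: loudness near the target
    and sample peak under the ceiling."""
    peak = max(abs(s) for s in samples)
    peak_db = -math.inf if peak == 0 else 20 * math.log10(peak)
    loudness_ok = abs(rms_dbfs(samples) - target_db) <= tolerance
    return loudness_ok and peak_db <= peak_ceiling_db
```

If the check fails on the loud side, a limiter brings the peaks down; if it fails on the quiet side, you add gain first and then limit.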


This whole process can be simple and complex at the same time, depending on the client and their working process. At the end of the day, you have to make things easier for them, because that's what you're being paid for.


MY TAKE ON AI AUDIO SERVICES


I have used various services like Adobe, NVIDIA, Descript, ElevenLabs, Revoicer, PlayHT, and many more, covering both AI cleaning and AI audio generation. After running thousands of audio files through them, I found they only work with specific kinds of audio. If you're using Zoom audio, recording somewhere windy, working with a heavily compressed file, or dealing with someone talking in the background, and in many more cases like these, it will not work. AI is also very bad if your recording has reverb.


So, to cut a long story short, I use AI to my benefit when I know it will work, because clients can easily tell when their voice has been treated by AI. The first normal reaction is "it doesn't sound natural," and right after that, they will say, "it's not 100% like my voice."


These are great tools, but they need to be improved and made more human-sounding.


 




Do comment if you have anything to add.

Thanks,

Aman Dembla
