Voiceover Answerbase

Do you have a VO question? I may have answered it…

Over the years, I’ve been involved in various online communities – USENET groups, BBS’s, Flickr photo gangs… and of course Facebook and LinkedIn voiceover groups. A few years ago, I started adding certain answers into a text document before sharing them. As more people enter the world of voiceover, I found myself returning to that document, refining those answers and reposting them. It seems to make more sense to just make those answers generally available here.

I’ve grouped questions on similar topics together, and supplied links to resources posted on this site if you want to learn more.

How to find things on this page

You can search this page by pressing Command/Control-F in almost any current browser. You’ll see a “Search this page” window pop up where you can type in a phrase or keywords. That will keep you on this page.

The “Search…” box (the site-wide SEARCH for the JustAskJimVO.studio site) will take you away from this resource page and give you results from all of my resources and articles.

(If you are on a computer, it should appear in the upper right of the screen; on mobile/iOS, flick down to the bottom of the page to find it.) That will bring up articles and resources I’ve published which may go into deeper detail.

I’ve also linked to previous voiceover tech and VO workflow resources I’ve written where appropriate.

Didn’t find an answer to your voiceover question?

If you have a VO technical recording or workflow question that isn’t covered here or in those articles, please ask me directly through the form at the bottom of this page.

Note: This is a work in progress and I’ll be adding older resources from my voiceover archives and appending new responses as voice actors ask more questions.


Recording Techniques in the Home Voiceover Studio

  • Recording Techniques
  • Punch-In / Punch and Roll Recording for Audiobooks
  • How do I measure my noise floor?
  • How do I listen with headphones when recording?
  • How loudly should I record?
  • About recording input levels – Audacity doesn’t use “dB”

Recording Software in the Home Voiceover Studio

  • What is the best recording software for VO?
  • What do you need voiceover recording software to do?
  • Should I set up a home studio before I take classes?
  • How can I learn Adobe Audition, Twisted Wave, Pro Tools, Studio One, etc.?
  • Can I use Wavepad to record VO?
  • Is Adobe Audition better than Audacity?
  • Which software is “right” for recording voiceover?

“Raw” Audio / Formats / Processing

  • What does “raw audio” mean?
  • What’s the difference between “raw” and “processed” audio?
  • What audio format should I use?
  • All about MP3 Audio format

Effects / Plug-ins

  • When should I use a certain plug-in or effect?

VO Production and Recording Workflow

  • Iterative Saves
  • Matching Edits and Pickups

VO Recording Equipment

  • What is the best microphone for voiceover?
  • What is the best audio interface for voiceover?
  • Vocal Booths in the home studio
  • Are VO Booths “sound proof”?
  • Moving Blanket booths – any good?
  • Computers in the studio – in booth OK?
  • Using a non-computer recording device

Source-Connect / Remote Directed Sessions

  • What is a remote recording session?
  • Dropout / quality loss during a session
  • Internet considerations for remote sessions
  • How do I play back audio during a session?

Computers in the Home Voiceover Studio

  • MacOS or Windows?
  • When should I update my OS?
  • MacOS operating system update considerations
  • Apple Silicon machines
  • Where did my memory go? Maximizing RAM

Recording Techniques

What is “Punch In” or “Punch and Roll”?

Topic: Audiobook Recording Techniques

The term comes from tape days when you “punched in” a recording while the tape was “rolling”.

In current digital systems, it’s really referring to a “pre-roll” system of recording. It is a specific feature in many DAW’s which you turn on or implement with a different command than regular Record. 

It goes like this:

– You are recording and make an error.

– You stop recording and place the cursor in front of the error.

– You “Record” (if P&R is on) or “Punch Record” (if it’s a different menu option).

– The software begins playing the part you previously recorded from a certain amount of time before the location where you placed the cursor. This is the “Pre-roll”.

– The pre-roll is audible in your headphones, allowing you to find the tone and timing of your last good sentence or phrase. You can talk along with yourself to lock in cadence and volume.

– When the playback reaches the cursor point, the software begins recording.

– This creates an immediate “edit” and overwrite of everything following the cursor point – so the error you made earlier is now replaced by the new recording.

(In “nondestructive” recording software, the old error is retained and can be accessed – it’s essentially in a hidden “layer”. In “destructive” recording software, it is deleted by the new recording. Some destructive recording environments allow you to “push” the original recording along the timeline as a way of not deleting it).

– You continue until you make another error.

(Nondestructive) Studio One, Reaper and ProTools all have excellent implementations of this option. (Destructive) Twisted Wave just added it (to the cheers of many of us). Adobe Audition, Ocenaudio and Audacity have it as well. 

It’s a key tool for efficiency in long-form recording.


How do I measure my noise floor?

Topic: Studio room tone, noise floor, background sound

Noise floor is the sound in your recordings when you aren’t talking. Many people who claim super low noise floor levels in their home studios are simply reducing the INPUT level when they record. It’s important to think of the noise floor as a value that’s relative to the recorded level.

In other words, one important consideration is the difference between the background sound/noise in your recording and the general peaks of your voice.

Simple Noise floor measurement steps

Noise floor is the background sound when you are recording at appropriate input levels. The simplest method to test for noise floor in your raw recording:

– Set your input so that your voice levels are peaking in roughly the -12 to -6 dB range.

– Start recording.

– Don’t say anything (or breathe on the mic, etc…)

– Record about 15 seconds or so at this level (i.e. no “source”)

– Speak at your normal volume for a sentence or two. (adding “source”)

– Stop recording. Save.

– Normalize your audio so that the Peaks (the “source”) are generally hitting -3 dB.

– Use an Analyze tool (such as is in Twisted Wave) to get a value of the Average RMS for this.
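
If you’re comfortable with a small script, here’s a minimal sketch of the same measurement in Python – assuming the numpy and soundfile libraries and a hypothetical file recorded per the steps above (roughly 15 seconds of silence, then speech). A DAW’s Analyze tool will give you the same numbers.

# Minimal noise floor check - assumes a WAV recorded as described above:
# ~15 seconds of "no source" room tone followed by a sentence or two of speech.
import numpy as np
import soundfile as sf

audio, sr = sf.read("noise_floor_check.wav")   # hypothetical file name
if audio.ndim > 1:
    audio = audio.mean(axis=1)                 # fold stereo to mono

silence = audio[: 15 * sr]                     # the "no source" portion
speech = audio[15 * sr:]                       # the spoken portion

peak_db = 20 * np.log10(np.max(np.abs(speech)) + 1e-12)
floor_db = 20 * np.log10(np.sqrt(np.mean(silence ** 2)) + 1e-12)

print(f"Speech peak:       {peak_db:6.1f} dBFS")
print(f"Noise floor (RMS): {floor_db:6.1f} dBFS")
print(f"Separation:        {peak_db - floor_db:6.1f} dB")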


I get -73 dB for “room tone” – is that good?

Well… maybe.
There’s not enough information to know.

Room tone is really two things – the acoustic signature of the space (the nature of the noises in your recording area) and the comparative value to the source signal (how “loud” that is versus the recording).
In this case, since you’re quoting -73 dB, the questions “compared to what?” and “at what vocal peak?” are what matter.

I can get a room tone of -96 dB by setting the interface’s input gain low enough.

Best practice is to record at a nominal input level (vocal peaks in the -12 to -6 dB range) and then look at the difference between the average peaks and the average room tone. If you have separation of ~67 dB or so, that’s really solid. (In the answer above, I go into more detail of how to do this).

In your example, if your general vocal peaks were around -6 dB with that input setting (a difference of 67 dB), you’d be doing well.

Whereas if you had most of your recorded peaks down in the -30 dB range (a difference of 43 dB), then it would not be usable.


How do I listen with headphones when recording?

Headphone Resources on JustAskJimVO.studio:

How Do You Listen? Recommended Headphones for your VO Studio
Resurrection or Redemption? Wicked Cushions Replacement Ear Pads for Sony 7506 Headphones
Audio Interfaces for the VO Studio (for “Direct Monitor” basics)

Hearing yourself while recording is best done through the audio interface. Most audio interfaces will have a “Direct Monitor” option that feeds your input immediately back to your headphones. Some will have a “MIX” knob that balances the Direct Monitor signal with the playback signal. It depends on the interface model.

Most software recording systems will let you “Monitor” through a round-trip signal – it goes into the computer and then back out to you. Sometimes this is a “Play through when recording” checkbox in settings. That introduces “latency” or delay and is generally very (VERY) distracting.

In all cases, especially when you are starting out, I would actually recommend NOT listening to yourself while recording. That’s a quick way to get distracted from the performance and a short path to sounding “announcery” or like a DJ. (Both of which are typically the “not sound like” direction in audition specs).

Recording while hearing yourself is a learned skill – you can always add that later if you need to. Many VO’s pull one headphone side back or remove them altogether when recording so they can hear themselves in the room.
I’d get in the habit of working without them so you can focus on performance – then specifically listening back so you can evaluate the performance. Those are two separate steps – particularly when starting out.


How loudly should I record?

Topic: Input recording level, target, headroom

Loudness and Levels Resources on JustAskJimVO.studio:

Measuring VO Volume: How Loud is “Loud”?
VO Studio Basics: Numbers We Might Need
VO Software Tools: Thinking About Limiters

Question: How loud should my recordings be?

When recording VO work, I recommend keeping your peaks (the tallest “height” of the loudest single wave) below -6 dB. Aiming at -12 dB “ish” is a good practice. If in doubt, a bit lower is always better. If you are recording in 24 bit, you’ll capture plenty of dynamic range by using a more conservative INPUT recording level. That way, you won’t hit -0 dB and distort the recorded audio.

Distorting audio on the input by recording too hot (i.e. hitting -0 dB with the input) is one of the most common errors in the home studio. (Well, that and recording waaaaaaaay too low….) Distorted input cannot easily be fixed. It’s also very obvious to anyone listening to the result. Don’t do it.

The actual input setting will vary depending upon the energy of your recording. No matter how loudly you are performing, we still want to aim generally at nothing going louder than -6 dB. Generally, you will control that through the INPUT control on your audio interface.
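
If you want a quick sanity check on a file you’ve already recorded, here’s a minimal sketch in Python – assuming the numpy and soundfile libraries and a hypothetical file name – that reports the peak level and flags samples sitting at (or essentially at) full scale:

# Quick clip check on a recorded WAV file.
import numpy as np
import soundfile as sf

audio, sr = sf.read("session_take.wav")        # hypothetical file name
peak = np.max(np.abs(audio))
peak_db = 20 * np.log10(peak + 1e-12)
clipped = int(np.sum(np.abs(audio) >= 0.999))  # samples at ~0 dBFS

print(f"Peak level: {peak_db:.1f} dBFS")
print(f"Samples at or near full scale: {clipped}")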

Question: I don’t understand where to set my recording input levels. All my clients want audio at different peaks, LUFS, RMS, etc…

While I do find there is no universal standard for delivery loudness/peak, there is a pretty solid agreement as to INPUT recording levels. Get that right and you can always adjust to their delivery requirements.

Basically, if you record at 24 bit/48 kHz with conservative levels, you’ll be fine.

It’s easiest to track PEAK – which is the loudest single wave you have recorded. I teach folks to aim at a -12 to -6 dB max peak, though -18 to -9 dB is just as reasonable. You are most concerned with not clipping when recording.

Loudness (RMS or LUFS) is a delivery spec. You don’t need to worry about that on the input step.

Noise floor means nothing unless provided relative to recorded source levels. If you can get to a 67 dB difference there, you’ll be fine.

Question: Audacity only shows me a scale between zero and 1 when I record audio. What is equivalent to -6 dB on that?

I honestly have no idea off the top of my head – but you should not be using that scale anyway. It’s a linear amplitude display, while the entire rest of the audio recording world uses the “dB” or decibel scale. That is a negative scale with 0 dB as the “maximum” – so -24 dB is not as loud as -12 dB.

Luckily, you can change the display in Audacity to use the same scale as everyone else.

You can right-click or Option-click on the meter, choose “Options”, and then change the Meter Type to “dB”.

Audacity recording software – change the Metering to dB by right-clicking or Option-clicking and choosing “Options”. Then select “dB” under Meter Type.

Then in Settings (Windows) or Preferences (MacOS), select “Tracks” and change the default to “Logarithmic (dB)”. That will let you have meters and screen scales using the proper measurement tools for VO.

Audacity recording software – Under Settings or Preferences, choose “Tracks” and then change the Default Waveform scale to “Logarithmic (dB)”.

Question: Back in the day, I used to record to tape and we’d usually push the levels to +6 dB or louder on the meter. When I do that for my VO recordings, it sounds terrible! What am I doing wrong?

Ha. Yeah – I remember saturating Ampex 456 pretty hard. That was SOP for magnetic tape. Spent a lot of hours watching tape rewind…

But – it’s absolutely not the same with digital recording. You gotta break that analog recording habit and update your mindset! 

The main differences are that
(1)  tape has an inherent noise floor which does not exist in digital recording
(2) the tape medium itself did not ever record 100% of what you pushed onto it (either frequency range or signal itself), and
(3) Clipping (signal >0dB) in digital recording is BAD. 

When I say “BAD”, I mean really bad. Very, very bad. Not at all good bad…

The inherent noise floor in digital is -96 dB with 16 bit recording, and -144 dB in 24 bit, so it’s not typically the limiting factor. (It’s a little different than that, as it’s actually a range of usable bit depth, but for comparison’s sake it is accurate.)

When you push a digital signal past -0 dB* when recording input into your software, a couple things happen. First, there are anti-aliasing filters in your audio interface’s A/D (Analog-to-Digital) converter circuitry, which neatly “chop” the tops of the wave off.  This is a good thing as it prevents “aliasing”, which is the inharmonic reflection of excessive sound back into the signal. 

When I first started messing around with digital recording, the early A/D’s didn’t have that and – trust me – it was a nasty sound. When you chop the top of a happy wave it tends to sound broken up, and since our brains are wired to take meaning from vocal tones, we are very aware of it. 

Since that clipping happened on the input, there’s very little you can do to recover it. Even if you lower the level later, the clipping is baked into the signal.

You need to get levels right at the source. Give yourself headroom when you record. You can always make things louder later.

There’s no functional reason to take that chance with digital recording – you are much better off recording more conservatively and increasing the volume with a Gain step later. Unless you are using low-end mixers or amps, you shouldn’t really be getting much in the way of noise added to your input signal.

Of course, many people do the opposite – they record super low, thinking that this will keep their noise floor really low. But the physical noise floor – the true room tone – is always relative to the vocal level. The two move in tandem. For example, a room tone noise floor at -31 dB when you have vocal peaks at -1 dB would be at -41 dB if you had recorded that same vocal with peaks at -11 dB.

*Some DAW’s do allow processing above 0 dB within mix/effect busses – that’s a little different.


Recording Software in the Home Voiceover Studio

Topic: DAW, software, apps, recording, file format, WAV vs. MP3

Voiceover DAW & VO Recording Software Resources on JustAskJimVO.studio

VO Recording Software: Transferable Skills
Twisted Wave: My “Go-to” Recording Software for Voiceover
Adobe Audition for Voiceover – A Solid Option
VO Recording Tools: Introduction to Spectral View Editing
Will you ever “Learn” that VO software?

What is the best recording software for VO?

The best recording software (DAW) for your home voiceover studio is…

Twisted Wave, Adobe Audition, Studio One, Reaper, Logic Pro, Ocenaudio, Amadeus Pro, Pro Tools, and quite possibly a few others I did not mention.

Here’s the thing: none of them record “better” than any other. The quality of your recording has to do with the quality of your performance, the room, and your input chain (typically your microphone and your interface). Once the recording has been captured by your microphone and converted into a digital signal your computer can deal with, there’s no functional difference in the quality of that data. It’s all simply 1’s and 0’s in your computer’s working memory.

The difference has to do with the quality of the user interface. In other words, how your recording software presents the information to you onscreen in real time, and the way it lets you (or forces you to) interact with that information. Most recording software was designed for multitrack music recording – where you have separate audio files for each instrument and play them all back together in sync – which is what ProTools, GarageBand and similar software are designed to do.

In most cases, we can strip things down and simplify them so that the design of the software doesn’t get in our way. But most are largely overkill for our tasks as voice actors.

The more important question is – “What do you need your voiceover recording software to do?”

What do you need your voiceover recording software to do?

As working VO’s, we need to get auditions out the door quickly and efficiently. That usually means we need to edit our audition takes and send them out as MP3’s to our agent. During a session, we need to capture the audio reliably and deliver that single vocal track to the client.

A single-track editor such as Twisted Wave, Adobe Audition, or Ocenaudio generally puts more waveform detail on screen, which supports this workflow.

Core Functionality for VO recording software

  • Show the audio waveform (or spectral information) in a way that lets you easily see what is going on so you can make efficient edits
  • Save in common formats – especially WAV and MP3
  • Provide level (Peak) information, preferably in the form of a visual meter
  • Easily let you zoom in for greater detail and zoom out for “big picture”
  • Allow the use of standard audio plug-ins (Effects created by outside manufacturers)
  • Provide real-time preview of Effects

Desirable Functionality for VO recording software

  • Allow combining Effects into processing “chains” – also called “Stacks”, “Racks”, or “Macros”
  • Support automation or “batch processing”
  • Support “Markers” or “Regions” (essentially bookmarks that permit more sophisticated file splitting)
  • Save in obscure audio formats

Specialty Functionality for VO recording software

  • Apply effects to our input audio while it is being recorded
  • Combine background music or sound effects along with our vocal track
  • Synchronize with an audio backing track (or tracks)
  • Synchronize with a video track for dubbing

Should I set up my studio before I take VO classes?
Should I hire an audio engineer to teach me ProTools?
How can I learn Adobe Audition? (or any other recording software)

All recording software basically does the same stuff, but having someone who is familiar with the application you use is a strong plus. More important is to have a clear understanding of what you need to achieve as a voice actor, as opposed to more general recording tasks.

Many audio engineers know their craft to an incredibly deep level, but most are focused on areas other than VO. The list of folks who know VO and can effectively teach home studio setup is a bit shorter.

I will say that the two tasks – voice acting and recording/capturing your performance – will gain strength through repetition. Taking small, consistent bites is a good approach.

In other words, it’s not terribly important to “learn Audition” (or Twisted Wave or any other recording software), as the things we need to know for VO will be a subset of any recording application. And the details of efficiently delivering large eLearning projects are unimportant when you are first trying to learn to control Input levels and mic position. Those skills build over time.

Between the two – investing in a good performance coach is more important in my book. The technical stuff is certainly important, but the performance should be worth capturing.
The Myth of the Microphone


Can I use Wavepad to record VO?

I’ve had a few clients use Wavepad over the years. For a while the MacOS and iOS versions were really buggy, so it didn’t make my recommended list. It appears to be more stable now (as of 2022).

One downside is that there probably aren’t as many VO-centric user groups to be found. That means that you may have to figure things out on your own if odd issues crop up. The company is established enough that it should not disappear without warning.

Wavepad does support common 3rd Party Effects plug-in formats – AU and VST – so you should be able to utilize commonly recommended tools such as Izotope RX and Waves. Since you are saving in common formats (WAV, MP3), you are not dependent upon it working. You can always open up those audio formats in any other application.

Just like Twisted Wave, Adobe Audition and Ocenaudio, Wavepad provides very direct waveform editing access in a single track format. I find that to generally be a more efficient way of working for VO.

Most other recording applications are geared for music use. ProTools, Reaper, Studio One, Logic and Garageband are all multitrack systems designed to combine discrete tracks into one final stereo output result. When setting up a multitrack approach, we usually end up simplifying or bypassing most of the tools so we can generate audio efficiently for our needs.


Which recording software is better? Adobe Audition or Audacity?

In terms of performance appropriate to VO tasks, Audition gets my nod without hesitation.
Here are four reasons:

  • Single track workflow is plenty for VO.
  • Waveform detail to screen. Audition provides a far more usable screen rendering of the audio waveform, which makes it phenomenally easy to visually detect clicks and small imperfections in the audio.
  • Zoom tools. Although Audacity has gotten better, zooming in and out on the waveform is still indirect. Given the sheer number of times we tend to do this while editing, the Control-scrollwheel (Windows) and Command-magic-mouse-surface-up/down (MacOS) approach is kludgey and dumps you to the start or end of the file without warning.
  • Preview of Effects. In a destructive-editing environment, the ability to quickly Preview what an effect will do in real time is very necessary. Limiting the preview to short snippets or requiring you to apply the Effect first adds unnecessary steps.

These issues have been on my Audacity wish list for a long time. Twisted Wave on MacOS, Adobe Audition (Mac/Win) and OcenAudio (Mac/Win/Linux) have all solved these VO-specific needs. Those three were developed specifically for editing voice tracks originally, as opposed to pretty much everything else (ProTools, Reaper, Studio One, Logic/Garageband, Audacity, etc.) which are music production (i.e. multitrack) focused.

All that being said, I have plenty of clients who use Audacity as their main recording/editing environment at a professional level. None of the software records “better” than any other, and if you have an efficient workflow that lets you get stuff out the door, that’s the only test.

I would strongly recommend using the free trial of any software and testing the recording, editing, processing and output steps.

How do I decide on the “right” recording software for voiceover?

There are two basic approaches which software takes for recording VO:

– A direct editor such as Twisted Wave, Audition or Ocenaudio which sets up a single file (track) and allows very detailed views and adjustment.

– A multi-track recording environment such as Studio One, Reaper, Audacity and many others which was developed for full studio recording tasks and is typically simplified to work for the specific needs of VO. The waveform detail to screen varies.

Since most of what we do in VO is simple, single file recording and delivery of clean files, I tend to favor a direct editor such as Twisted Wave or Adobe Audition, especially if you have no experience with multitrack workflows.

Just to say it one more time: There is no difference in the quality of the recordings made using different software. No software records “better” than any other – audio quality is determined by your space and equipment (mostly your space).


Can I Use My iPhone to Record Voiceover?

Sure, you can record directly to your phone. You’ll need a specific interface or converter to do so. There’s nothing wrong with using a phone (iPhone, Android) or tablet (iPad) to record voiceover. These days they all support high resolution audio and can save in common audio file types.

Most voice actors use their phones as a fallback method for emergencies, because it’s a relatively indirect way of getting audio to your client or agent. I’ve recorded into my mobile device from inside of a car, in order to get a rush audition to my agent. However, there are a few reasons why this isn’t my main method.

First, it gets back to the key variable for recording quality: the space in which you record. If I cannot control the quality of my recording space, then I will have a difficult time providing professional quality audio.

Second, even though I like the iOS version of Twisted Wave, editing audio on a touchscreen remains very frustrating. I would argue it could be one of the outer circles of purgatory. That means that if I’m providing finished work, I’m going to want my mouse or trackpad and a large screen.

Third, delivery of files can be problematic. Though there have been improvements to how we access files in newer versions of Apple’s iOS, uploading large files through a phone’s browser can be unreliable. I often find it’s simpler to push the file out to Dropbox or Google Drive, and then share that link.


Raw Audio

Topic: VO production workflow, VO Auditions, Meeting Delivery Spec on a VO Project

What does it mean when a client asks for “raw” audio?

When someone asks us to deliver “raw audio” files, it’s pretty much just as it sounds. They literally want the unprocessed audio file you recorded.

That means NO:
Noise Reduction, Noise Removal, Compression, Limiting, EQ in the vocal range, Audio “sweetening”, or any type of processing applied to the audio after you recorded it.

That also means NO:

Applying Plug-in Stacks, Racks or Macros, running your audio through “front-end” processing (such as may be used with an Apollo Interface through their Unison Plug-ins, or the Presonus io24 Revelator Interface).

That probably means NO:

A microphone which might be applying active processing to the incoming audio, such as a few models of USB-direct-connected microphones which I talk about here.

But, your audio would likely be considered “raw” if you use a High Pass Filter – either software or hardware based.

Delivery of Files – Raw v. Raw Takes v. Raw Separated Takes v. Processed

Question: A client asked me to send raw files after our session. Do I edit the mistakes? Do I send everything?

I would simply clarify it with the client – I’ll deliver what they want for their workflow. Gotta have that conversation with them (even if just by email).

Start with format, bit depth, sample rate.

(e.g. WAV 24 bit 48 kHz)

then…

“Raw” is just that. I send that to someone who was directing remotely and may have time logs of the takes they like. Upload and done.

“Raw Takes” is a single file of clean takes only, false starts and long gaps removed. I’ll chunk in a 3-5 second silence block so they can jump between them easily.

“Raw Separated Takes” are separate files of each take, labeled in an agreed upon way.

The difference between “Raw” and “Processed” is a little grey. For delivery of VO files, I don’t regard a HPF as “Processed” – since many mics have a hardware HPF, it could have occurred there anyway. Judicious use of a quality DeClick tool also makes the cut. Done right, it’s not changing anything and doesn’t damage audio in a way that causes problems later.

But any tonal EQ, Noise Reduction, DeEssing, and of course Dynamics changes like Downward Expansion, Compression or Limiting would be on an a la carte basis (by mutual agreement with the client) and would land the audio clearly on the “Processed” side of the fence.


What Audio Format Should I use for voiceover recording?

TOPIC: WAV vs. MP3 vs. AIFF vs. Ogg Vorbis vs. AUP3
TOPIC: Converting Sample Rate, Converting Bit Depth

There is no universal VO delivery standard. File delivery spec is what your client expects to receive from you. There is no one-size-fits-all answer. If in doubt, reach out to your client and ask.

File Delivery and Audio Specification Resources on JustAskJimVO.studio –

File Delivery: What Does Your Client Want?
Numbers We Might Need – Sample Rates, Output Bit Rates, Bit Depth in VO Recording
The “Right” Settings for Voiceover Recording

Question: You say to record in 24 bit at 48 kHz, but I need to deliver audio to ACX for an audiobook project. They want a 16 bit 44.1 kHz MP3 file at 192 kbps. How do I convert this? Should I convert this?

If you are going to deliver in 44.1 vs. 48 kHz, there’s nothing wrong with recording with that specific Sample Rate.

The MP3 audio format supports various Sample Rates. You can save an MP3 with 32 kHz, 44.1 kHz or 48 kHz sample rates, for example.

If you have recorded in 48 kHz and the client wants 44.1 kHz, you will have to change that at some point.

The only reason I recommend using 48 kHz is because everything I’ve delivered to non-audiobook projects has been 48 kHz for the past few years. Recording the initial audio at 48 kHz simply saves me an extra step on those projects.

Audiobooks are kind of the last holdout, still requesting a 44.1 kHz sample rate for deliverables. On an audiobook project, I simply record at 44.1 kHz to begin with – just means one less production step for that type of project.

It is trivial to “Convert Sample Rate” (under EFFECTS menu in TW) from 48 to 44.1 kHz. Just make sure “RESAMPLE” is checked.

As far as “Bit Depth” is concerned – recording with 24 bit is best practice. You gain increased dynamic range in the recordings, which makes it easier to adjust the amplitude or dynamics later on. As I outline in classes or when working with you directly, we will work our audio in an uncompressed data format – the most common being WAV (.wav or WAVE). All modifications, editing and other processing should be done to this full-spectrum audio file. Only when we are ready to provide the final audio do we convert to MP3.

MP3 is effectively a 16 bit format. When you save to MP3, the encoder reduces the bit depth to 16 as part of the conversion. There is no problem in doing this, as it’s part of the formatting algorithm. When you create an MP3, there is a third variable – the output bit rate.

The default Output Bit Rate in most applications is 128 kbps – IF the application defaults to CONSTANT BIT RATE. Many applications (I’m looking at you, Audacity!) use “Variable Bit Rate” for MP3 encoding as their default setting. This should be changed to Constant Bit Rate (CBR). Otherwise, the output bit rate is adjusted while saving, typically leading to lower quality audio.

Because they have not set this variable in MP3 output, many new VO students are unknowingly producing MP3’s with output bit rates in the 96 kbps range, and you can hear the degraded audio quality.

ACX and most audiobook production houses require an output bit rate of 192 kbps or higher (and higher has little benefit, so 192 is a good setting). You aren’t really “converting” to 192 kbps – it’s really a separate variable in the format which you are defining.

In other words, when you save an MP3 from a 24 bit file, the bit depth is converted to 16 automatically, but the Sample Rate is not adjusted – that’s a separate step.
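
If you ever want to script that conversion rather than doing it in your DAW, here’s a minimal sketch using the pydub library (which requires ffmpeg to be installed). The file names are placeholders, and you should always spot-check the result against the actual delivery spec.

# Convert a 24 bit / 48 kHz WAV master to a 44.1 kHz, 192 kbps MP3.
from pydub import AudioSegment

master = AudioSegment.from_file("chapter_01_master.wav")  # hypothetical file name
master = master.set_frame_rate(44100)                     # resample 48 -> 44.1 kHz

# Export as MP3 at 192 kbps; the encoder handles the 16 bit step itself.
master.export("chapter_01_delivery.mp3", format="mp3", bitrate="192k")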


Can I save an MP3 back to a WAV format?

TOPIC: WAV vs. MP3 vs. AIFF vs. Ogg Vorbis vs. AUP3

QUESTION: A client asked me if I can send them a WAV version of my audio file. All I have is an MP3. Can I just save it as a WAV?

QUESTION: What software do I need to change an MP3 into a WAV?

While you can convert your audio easily in any common digital recording software, it’s generally a bad practice to do that. Each time you save as an MP3, the compression algorithm of that audio format throws out information. MP3 is a “lossy” audio format. That’s why it creates files which are comparatively small.

While it is possible to Save As into a WAV file, it’s important to understand that you cannot gain (or “regain”) any audio quality with that conversion step. Once your audio has been changed into an MP3 format, it has gone through a conversion process which eliminated data from the audio file. If you only have the MP3, you cannot go back to a higher resolution (full audio spectrum) version. That information is gone.

That’s why we want to record, edit and save a master copy of our work in an uncompressed audio storage format (WAV, AIFF, FLAC). We can then output/export a copy into the deliverable format (the MP3 our client might ask for). Best practice is to save that master copy and maintain it in archive.

If the client wants changes or updates to the MP3 we have sent them, we can go back to the master file, change/edit/update, and re-export another MP3 if necessary.

If you absolutely needed to edit or work with an MP3, then you could bring it into any DAW or editor and then do additional work in the uncompressed native or WAV format that your software works in. It won’t get any worse. But it won’t get any better.

If you need to convert a large number of audio files from MP3 to WAV, that’s actually a trivial step with Twisted Wave, Audition, RX or a few others. In those software applications, you can set up a batch process to Open, Save As, and Rename them into WAV files.

But to be very clear – it will never be “better” than the original source file, which is already a compressed/lossy audio file.
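
And if you’d rather script a batch conversion than set it up in a DAW, here’s a minimal sketch along the same lines (pydub and ffmpeg assumed; the folder names are placeholders). Again, the WAV copies will be no better than the MP3s they came from.

# Batch-convert a folder of client MP3s to WAV files.
from pathlib import Path
from pydub import AudioSegment

source_dir = Path("mp3_files")                 # hypothetical folder of MP3s
out_dir = Path("wav_files")
out_dir.mkdir(exist_ok=True)

for mp3_path in sorted(source_dir.glob("*.mp3")):
    audio = AudioSegment.from_mp3(str(mp3_path))
    audio.export(str(out_dir / (mp3_path.stem + ".wav")), format="wav")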


Effects / Plug-ins

Effects and Plug-ins Resources on JustAskJimVO.studio:

VO Software Tools: Thinking About Limiters
VO Software Tools: Filters, EQ, Equalization
Why Be Normal? Understanding “Normalize” in your VO studio
VO Recording: In Praise of Presets
VO Software: Stacks, Racks & Macros – Oh My!
The Trapping Effects of Effects

When should I use a certain plug-in or effect?

Questions: When do I use certain plug-ins or effects on my audio? I tried using Audacity’s Noise Reduction tool to get rid of my room echo, but it didn’t work… If I use Equalize, am I trying to make all the frequencies the same loudness (i.e. “equal”)? Does Compression make things louder? What does a Limiter do? Is Normalize the same as Compression?

Denoise tools will likely do nothing for echo. There are other specific De-Reverb tools for addressing reverberation, which is the reflection of sound within your space. A true “echo” occurs after the initial sound has finished.

Equalize is not about making all the frequencies the same loudness. It may be used to reduce resonances (not reflections) added by the room or equipment, or to enhance certain parts of the signal. EQ/Equalizing (or any other Filter effect) reduces or increases aspects of the frequency range.

DeEss will do nothing for plosives. It uses frequency-targeted, narrow compression (simplifying a bit) to reduce “s” sounds or sibilance.

Compression controls amplitude (“top and bottom sounds” could be misconstrued as frequency). Compression starts working at a certain energy level and adjusts the volume in a controlled way. Compression does not increase volume, though many Compression effects have a “Make Up Gain” function which does this after the actual compression step takes place.

Multiband Compression has no direct effect on tonal clarity. It is the same volume control as Compression but can be applied differently on separate segments of the frequency range. 

Limiting is similar to compression, but the effect is more dramatic at a specific point. Once the volume hits “X” dB, nothing louder gets through.

Normalize is functionally identical to Amplify, but uses a different target value. It is a gain (volume) adjustment made by changing either the Maximum or Average audio level to a specific value. Amplify raises or lowers the volume by a certain amount.
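
To make the Normalize / Amplify distinction concrete, here’s a minimal sketch of the arithmetic in Python – numpy and soundfile assumed, peak-based Normalize to a -3 dB target, hypothetical file names:

# Peak Normalize: measure the current peak, then Amplify by the difference.
import numpy as np
import soundfile as sf

audio, sr = sf.read("take_01.wav")
target_db = -3.0

current_peak_db = 20 * np.log10(np.max(np.abs(audio)) + 1e-12)
gain_db = target_db - current_peak_db           # Normalize computes this gain...
normalized = audio * (10 ** (gain_db / 20))     # ...and Amplify applies it

sf.write("take_01_normalized.wav", normalized, sr, subtype="PCM_24")
print(f"Peak was {current_peak_db:.1f} dB, applied {gain_db:+.1f} dB of gain")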

Caveats:

Many of these tools have significant impacts upon the audio itself and can create unwanted results (especially Denoise or Compression).

Many of the above tools are used instead of addressing the root cause. Plosives & sibilance are typically best addressed through proper technique, while noise issues and reverberations should be fixed by treating and isolating the recording space.

Ideally, none of these tools should be used. Practically, using the fewest of them, as lightly as possible, typically results in better quality audio.


VO Production and Recording Workflow – getting projects out the door

VO Recording Workflow Resources on JustAskJimVO.studio:

VO Recording: In Praise of Presets
VO Workflow: Structures and Creativity
VO Mindset: Consistency
VO Studio Workflow: Why that DAW?
VO Workflow: Be Safe, Be Saving…

Iterative Saves – An approach so you don’t have to do stuff twice

You never want to have to redo a step in your VO production workflow. Granted – software and computers are much more stable these days, and most recording software will help you recover files which were open when your computer suddenly crashed (or you lost power).

However, there’s always the potential for an open file (the one you are working on) to get corrupted. That means you would have to go back to square one with that file. If you only work with one file, you could potentially lose days’ worth of work.

Here’s one way to prevent that: Use “Iterative” Saves

Every time you don’t want to have to do something again, use the “Save As…” (or Export a WAV) to create a backup file which has all changes up to that time.

All of this is so I don’t have to redo any major steps and don’t grab any incomplete versions. That way, if there’s a crash which scrambles the file I’m working on, then I don’t have to reconstruct too much. While editing or modifying with effects or stacks, I’ll hit SAVE before doing anything major (e.g. a big copy/paste, a punch-in/pickup, or any effect applied to the entire file).

My Basic Iterative File Saving Steps:

This is the process I use for long form work when I’m responsible for delivering finished, edited files to meet a particular delivery spec like ACX or audiobook production requirements. I sometimes simplify this a bit – say if I’m just supplying raw session audio to a client, I will skip the editing/QC steps.

filename_raw – the original session recording audio file. Saved immediately upon stepping out of the booth. This NEVER gets opened again. I don’t want to have to get back behind the microphone to reconstruct a performance. Immediately upon session end, I either duplicate it at the computer desktop or do a “Save As…” to the “Working” file.

filename_working – Initial listening pass and obvious fixes/timing issues. This is where I do my first edit pass to remove errors and other noticeable issues. Save once more. Then “Save As…” to “Edited”.

filename_edited – Used for detailed comparison against text. Any pickups and fixes are inserted, level-balanced, and smoothed for transitions. All edits done/confirmed. This is the clean, non-processed copy. Save once more. Then “Save As…” to “QC”.

filename_QC – Detail review step complete. Application of any processing – DeClick, EQ, etc. Ready for mastering process steps. Save once more. Then “Save As…” to “FINAL”.

filename_FINAL – the version which is ready to go to the client. I may duplicate this or just do a “Save As…” to name appropriately, or use Twisted Wave’s Split by Markers if generating individual files.

For archive purposes, I’ll save a WAV version of the filename_edited and filename_FINAL. The others are discarded after the project is delivered.
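
If you ever want to script the “duplicate at the desktop” step instead of doing it by hand, here’s a minimal sketch – the folder and file names are placeholders, and your DAW’s own “Save As…” does the same job:

# Make a timestamped snapshot copy of the file you're about to work on.
import shutil
from datetime import datetime
from pathlib import Path

working_file = Path("projects/novel_ch01/novel_ch01_working.wav")  # hypothetical
backup_dir = working_file.parent / "iterative_saves"
backup_dir.mkdir(exist_ok=True)

stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
backup = backup_dir / f"{working_file.stem}_{stamp}{working_file.suffix}"
shutil.copy2(working_file, backup)             # copy2 preserves file metadata
print(f"Snapshot saved: {backup}")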


Matching Edits and Pickups in Your Recording Projects

One of the topics which comes up regularly is how to record corrections and edit them into your voiceover recording projects. Bear in mind that this is an acquired skill and takes some time and practice to get right. The more you do that, the better you will get at it.

That being said, there are some key guidelines which should give you a place to focus when you are learning how to edit in corrections after the original recording. This should help you get work out the door more efficiently.

If you are looking for some more efficient workflow methods, I offer an “Audio Editing Workflows” workshop through Voice One, or you can set up a 1 on 1 session through my calendar.

Steps for Pickups and Corrections

  • If your recording setup is consistent, it should be pretty straightforward. Listen to your corrections (or have someone listen to them) and describe what you are noticing. Louder/quieter? Different tone?
  • The biggest variable in sound quality is the space. If that changes daily, then it will be a difficult process. Position on the mic matters, so that should be consistent.
  • The next variable is performance. You need to hit the tone/energy of the original passage. That comes with practice. If you need more consistency, recording the same passage each day as a warmup can be instructive. Learning to listen and assess your performance so you can match it is a skill.
  • If you have that dialed in – which simplifies everything – then it’s usually an issue of loudness. We notice variance in sound more than absolutes, so if it’s suddenly a couple dB louder/softer, it will catch our attention. So first pay attention to the gain of the replacement section.
  • Depending upon how you are processing your files, you may find that editing in a larger section is a simpler approach. It’s more difficult to get one word to match than a sentence.
  • If you are running the same stack/rack/macro on a super small section, it might not end up with the same result as processing the full chapter again.

VO Recording Equipment, Gear Lust, and the “Perfect” Voiceover Microphone and Audio Interface

Voiceover recording equipment resources on JustAskJimVO.studio:
What gear should I get to record voiceover

NOTE – microphone comments have always been the most frequently shared responses. I expanded those into a 6 part Tuesday Tech Tip series, then aggregated and refined that into the guide to Voiceover Microphones listed below. If you want to learn more about microphones and how they work in a home VO studio, that’s a great place to start.

VO Studio: Benefit of the Booth
The Myth of the Microphone
VO Microphones – JustAskJimVO.studio comprehensive guide to Voiceover Microphones
The Winter of Our (Microphone) Discontent
Audio Interfaces for your Voiceover Studio
VO Studio: Balancing the Need for New Gear
Mic Check One…Two… – What to do when your VO microphone doesn’t work
What is “Frequency Response” in Microphones?
What does the microphone “Pickup Pattern” mean?

What Mic should I buy?

The one you can afford that leaves enough budget left over to adequately isolate and treat your recording space. Seriously, the first (or next) microphone you buy will probably not be the last microphone you purchase. Get something decent that works well for most voices. I have listed many microphone models at different price points in my VO Microphones article.

What is the best microphone for voiceover?

The one you have.

Wait…are you making a joke?

Not at all. This is easily the most common question which gets asked in VO community groups. It’s the question which will generate the highest number of responses, probably provoke an argument or three, and generally solve nothing. As I mentioned in “The Winter of Our (Microphone) Discontent”, that question generally misses the larger point – once we’ve got decent gear, it’s what we do behind the mic that matters.


What Audio Interface should I buy?

The one you can afford that leaves enough budget left over to adequately isolate and treat your recording space. In all honesty, the interface is seldom the limiting factor in getting the best quality sound. Before worrying about upgrading your interface, I’d rather see you spend the money on coaching and training, then the space in which you record, then the microphone. I have many more thoughts about Audio Interfaces (audio sound cards) in this comprehensive article, as well as a number of specific model recommendations.

What is the best Audio Interface for voiceover?

The one you have.

Of course, if you spend a lot of time on gear discussion groups, people will make the strong claim that some converters are better than others, and some of the interfaces do offer interesting features if you are going to be live-streaming or doing other types of content creation. But for getting most voiceover work out the door, a good, simple Scarlett from Focusrite is a solid option and won’t prevent you from booking work.

Hey… I’m sensing a pattern here!

Not at all. This is easily the most common question which gets asked in VO community groups. It’s the question which will generate the highest number of responses, probably provoke an argument or three, and generally solve nothing. As I mentioned in “The Winter of Our (Microphone) Discontent”, that question generally misses the larger point – once we’ve got decent gear, it’s what we do behind the mic that matters.


Vocal Booths / VO Booths in the Home Studio

Recording booth resources on JustAskJimVO.studio:

VO Studio: Benefit of the Booth
In Your VO Booth: Position, Position, Position
Four Part Series – Upgrading the Acoustics on the Vocalbooth in my studio – starts here

Are Vocalbooths / Vocal Booths / Whisperrooms / StudioBricks / Gretch-Ken / prefab recording booths “Sound PROOF”?

Question: How do I get a soundproof booth? I hear my neighbor/kids/partner walking overhead when I record. Do I need a Whisperroom/VocalBooth/Studiobricks? Will that keep my neighbor’s leaf blower/Harley-Davidson/saxophone practice from bleeding into my recordings?

The simplest answer is that none of those are “Sound PROOF” – especially if you want to include “all outside noise”.

The structure the booth will sit inside of also matters, as certain types of buildings transmit more noise than others.

Sound transmits through the floors, joists and wall materials if they are wood. If the booth is in contact with that, it will vibrate into the booth.

The noise itself matters, as percussive, low-frequency sounds tend to penetrate through most coupled materials.

That being said, a decoupled structure inside a room, behind walls built with varied materials chosen to reduce sound transmission, can block a great deal of noise.

Even so, I’ve had clients with every type of booth still get sounds that infiltrate.

That being said – in the prefab booths, StudioBricks triple wall will probably have the best isolation (which is what you are asking about), with the VocalBooth Platinum Double-walled models pretty comparable.

In a well-sealed room with high quality external windows, that’s going to work about as well as anything.

Single-walled, not so much. I can direct folks through the wall of a single walled VocalBooth or Whisperroom.


Can I build a booth out of moving blankets? Which ones do I use?

There are actually a lot of benefits to working in a soft-sided booth. While that approach generally will not isolate as completely as a separated room, it also doesn’t tend to build up resonant frequencies as easily. And while they may not look as photogenic as a standalone, commercially built recording booth, you can obtain highly professional results using a blanket booth to record voiceover.

My go-to’s are the Producer’s Choice Sound Blankets from VocalBoothToGo dot com. They are the heaviest weight commercial moving blankets available – weighing in at around 10 pounds each. They are grommeted (and they have other options) and include a model which is white on one side and black on the other, so you can have a lighter visual quality to the inside of your soft sided booth.

They have a variety of specialty sizes as well, which can solve specific issues (e.g. doorways, windows).

The moving blankets you are likely to find at Harbor Freight, Home Depot, etc. will be about 1/3rd that weight. You can layer those up, but may end up spending about as much as simply obtaining the Producer’s Choice blanket.

If VocalBoothToGo.com is out of stock, these are two other options I’ve found –
Filmcraft – 72″ x 80″ grommeted (both sides are black)

US Cargo Control – 96″ x 80″ grommeted (both sides are black) are the largest I’ve found

Audimute (which I link to on my Gear page) make a heavier option – they can work better to isolate environmental sounds.

Pro Tip – a little air gap is helpful when mounting them, as is ruching them (as opposed to pulling them tight like a drum skin).

Building a PVC Frame for your moving blankets –
Build plan and Instructions available for download

I have a set of plans and build instructions which are in draft form – you are welcome to download and use these to design and fabricate your own freestanding, soft-walled voiceover recording booth. The only thing I’ll ask is that you send me an email with any feedback on those instructions.

You can download the current version of my PVC Blanket Booth Frame by clicking here.

If you don’t feel like doing the work yourself, I’ve built and set up many of these in the greater San Francisco Bay area. Please reach out through my contact page if you would like to have me build and install one for you.

Commercially available soft-sided recording booth solutions

The Tri-Booth is a solid option

VocalBoothToGo’s VOMO is another good solution


Should my computer be in my booth (recording closet) with me?

Should I have a separate recording booth aside from my editing area, to help reduce noise floor?

Whenever I’m setting up a studio for someone, we’ll talk through exactly what they need in terms of layout.

My personal bias is to separate the “creative” (behind-the-mic-stuff) from the “analytical” (editing/reviewing/processing/mastering) as I feel strongly that those are two separate parts of the brain.

Most new voice actors assume they need to be immediately speaking as soon as they hit record. In most cases, this is not necessary. For most of what I do – certainly auditioning – I hit record out at the desk, go into the booth, close the door, do the VO work, exit and stop the software.

That also gives me a mental break between the acting and the directing.

This can also work for more production oriented tasks. Using Twisted Wave’s remote on my phone, I can drop markers into the recording from within the booth if I need them. But I also work with a plan for editing, etc.

If I’m doing an audiobook, I do need to accurately place the cursor for punching in on mistakes. In that case, a monitor in the booth is necessary. Mine sits outside the window, and I have a trackpad and separate keyboard inside to control the computer. By design, this is out of my line of sight when I’m recording. Again, I want to separate the processes.

The other benefit of this setup is that the noises of the computer are outside the walls of my recording space. That was a no-brainer before the advent of the M1/M2-based Apple computers (or any other computer with no fan or no noise), but these days it’s less of a hard and fast rule.

Probably the only situation where you need to see a screen when you are recording would be dubbing/ADR where you are trying to match timing or specific lip-flaps. In that case, you would want to position your monitor so you can be squarely on mic. The challenge there is to not create reflections within your recording space due to the position of your computer monitor.


Can I use a Zoom or other recorder instead of my computer?

Topic: Recording Hardware, audio interface, workflow

Recording hardware resources on JustAskJimVO.studio:

Audio Interfaces for the Voiceover Studio
Microphones for your VO Recording Studio
Voiceover Home Recording Hardware Basics

Question: Can I use a separate recorder instead of my computer? My computer has a fan that makes noise.

There are two things to consider:
First, there’s no reason your computer should be in the booth with you. My consistent recommendation is to separate the computer and interface from your booth/recording space. It does not need to be in the booth with you. You can run an XLR cable 75-100′ with no audible degradation of the signal.

Second, in VO, we are commonly recording, reviewing, editing, and inserting audio (the ratio varies a bit depending upon whether we are providing auditions or finished work). The key point is that we’re regularly appending new chunks of recorded takes.

Each time you do that with a separate recording unit, you need to:

  • go into your booth and record
  • save the audio to the SD card
  • remove the SD card
  • insert the SD card into your computer
  • open the audio file in your software
  • identify the section you need (because it will have dead air before and after, and perhaps multiple takes)
  • copy that audio part from the new file
  • paste that part into the existing file
  • listen back and judge performance, timbre, sound quality match, etc.

Multiply all of those steps (and perhaps a few others I neglected to include) for every correction you will be doing or every additional take you want to try. That’s a lot of extra steps compared to just getting a longer XLR cable and locating the noisy computer outside of your space.

You can certainly use a separate recorder – I’ve had plenty of clients start with that because that’s the only equipment they had. But, the constant SD card swapping or file transfer for each step gets old after a while. Every one of them benefitted by using a more direct method.

For what we do in VO, a direct input path to your editing/recording software is a more efficient workflow. We need to quickly record, edit, correct, and deliver – a lot!


Source-Connect / ipDTL / Remote Connections / Remote Sessions

Remote connection resources on JustAskJimVO.studio:

Connecting a Director or Studio for Remote VO Sessions
It’s Time to Connect Your Studio
VO Recording Sessions: Is Remote the New Normal?
Source-Connect Quick Fix: Source Stream for MacOS


What is a “Live”, “Live-directed”, or “remote” voiceover session?

The term is used to describe a recording session in which you (the VO/voiceover talent) connect in real time with a director, producer or creative team. Depending upon the type of connection (Source-Connect, SessionLink Pro, ipDTL, BodalgoCall, Source-Connect Now, Zoom, Skype, Microsoft Teams, Webex, Discord, etc.), you will either record locally onto your computer, or they will be recording you over the internet.

I use two terms to distinguish these options:

  • Directed Session: Director giving instructions to you. You are responsible for recording.
  • Connected Session: Director giving instructions to you. They are recording high quality audio at the “far-end” of your connection. In other words, when you speak into your microphone, it is being recorded on their computer.

You should have good quality headphones for a directed or connected session, and the sound you send down the line needs to be high quality as well.


Do I need to offer directed VO sessions?
Do I need to let people direct me live?
Are live-directed voiceover sessions important?

For quite a few years before the pandemic, I had been strongly recommending that to be competitive in VO, it’s imperative to offer live-directed sessions as a service. Connected sessions where you receive live direction from someone over Zoom, Skype, BodalgoCall, Source-Connect Now or other options work in your favor. They are a huge benefit to you. Otherwise you simply lose too much time to cycles of revisions.

With a live session, the client should have what they want when you are done, or pay for another session if they want to change things later on. Promising multiple revisions for a project is borrowing against your future.

If anything, COVID has made live, remote-directed voiceover sessions more of an absolute requirement. Since the pandemic hit, we've been making producers and directors comfortable with the process of giving remote live feedback and getting great audio from us.

Plus, people think of Zoom the way they used to think of a phone call…


Dropouts or Quality Loss During Source-Connect or ipDTL sessions

Most of the issues which occur during connected sessions are due to problems in the "last mile" of internet. This can happen at either end of the connection. When the pandemic forced large numbers of people to work from home, there were definite impacts on internet traffic in areas that had not encountered that kind of load before.

In other words, many residential areas have suffered high traffic through internet infrastructure that could not handle it.

One key point is to have a network cable connecting your computer to your router or modem; WiFi is always subject to interruptions as mobile and other devices join your network. A wired connection alone will solve most of these problems.

If you are still having problems getting reliable remote session connections through Source-Connect, Source-Connect Now, ipDTL, Session Link Pro/Dub, BodalgoCall or any of the common methods, it might be worth checking your round-trip time and consistency using a Ping test.

In MacOS, you can run this by going to Applications > Utilities and opening the "Terminal" app.

Once in Terminal, type ping ipdtl.com (or you can use another reliable server such as google.com). The computer will send test packets and print a timing result for each reply. Let this run for 150 or so pings.

Assuming you are not losing packets, the "time" result is the most salient number.

Press Control-C to stop the ping test.
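If you would rather not stop the test by hand, ping can also be told to send a fixed number of packets and then print its summary on its own. A minimal sketch (the -c count option is standard on MacOS and most Unix-like systems):

  ping -c 150 ipdtl.com    # send exactly 150 pings, then print the statistics summary and exit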

You’ll get a result with information in this format:

--- ipdtl.com ping statistics ---
150 packets transmitted, 150 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 151.861/153.768/161.867/1.417 ms

What you are looking for is instances of significant delay popping up. In the example above, there's only about a 10 millisecond spread between the fastest and slowest responses. With some internet providers, you'll see sudden jumps of 50 or more milliseconds at random times. If that's happening regularly, it may be the cause of your dropouts.

It could be another computer on your network hitting the internet (streaming or large downloads, for example) or issues with your router hardware. But more often it's localized load in your neighborhood. Contacting your internet service provider and reporting this may lead to them addressing the issue. Sometimes upgrading your service can fix it.

How fast does my internet have to be to use Source-Connect (or ipDTL, etc.)?

Consistency matters more than raw speed. I've had clients do fine with relatively slow internet that was rock solid. By the same token, others have encountered issues with faster but less consistent connections (see the Ping test above for a way to check this). A wired connection to the internet is always preferred.

The other variable to be mindful of is that, as the "Talent" in a connected session, it's your upload speed that matters more. Most ISPs tout their screamingly fast download speeds but tend to throttle your upload speed. You need to know both numbers.

With a wired, consistent "last-mile" connection, I like to see 3 Mbps of upload or higher. Faster is better, of course; 3 Mbps is really a minimum in my book. Many ISPs offer a significant upload speed increase for residential service at a fairly small added cost. In some cases, you may need to step up to a business-class service. I've found that many service agents have to do a bit of research to find the plans with faster residential upload speeds. In other words, keep asking whether there's another option with higher upload speeds.

If you want to test your current download/upload internet speed, you can use a tool like Speedtest.net (just don’t click on the myriad of ad links…)
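If you're running MacOS Monterey or later, there is also a built-in Terminal tool that measures upload and download together. This is a quick sketch; to my knowledge the networkQuality command ships with Monterey and newer, and its exact output varies a bit by version:

  networkQuality    # reports uplink and downlink capacity plus a responsiveness score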

How do I provide playback to a client during a Zoom directed session?

Since the start of the pandemic, voice actors have been tasked with figuring out ways to provide playback during “directed” sessions.

There are a variety of options if you have a more advanced interface with “Loopback” or the ability to set up a mix-minus playback feed.

If you are using Source-Connect Standard, the client should have the actual audio on their end of the connection and should be able to directly hear what they got from the VO talent. The engineer should be able to set that up, whether they are in the same room as the director/producer or in a completely separate location.

However, if you are using Zoom as the connection method, the VO talent will be recording locally, and from time to time, a director might want to hear a specific take.

Playing back audio when on a Zoom directed session

Method One – Share Screen with Sound

It's very straightforward using just the tools built into Zoom:

– mouse down to the “Share Screen” option in Zoom

– when the “Share Screen” options pop up, you can check the box that says “Share sound”

– Zoom will install its proprietary audio driver if you don't already have it (you'll see a dialog box)


– In Twisted Wave, select AUDIO > Output Device, then “ZoomAudioDevice”

– With most simple audio interfaces, both of you will hear the playback from Twisted Wave once this is done

Method Two – Share Audio Only via Zoom

Optionally (if you don’t want to share your actual screen) –

– mouse down to the “Share Screen” option in Zoom

– when the “Share Screen” options pop up, click on the “Advanced” tab and select “Share Computer Sound”

Once you have shared your audio using either method, you need to select "ZoomAudioDevice" as your "Output Device" in your recording software (the step below uses Twisted Wave, but other recording software works the same way).

– In Twisted Wave, select AUDIO > Output Device, then “ZoomAudioDevice”

(Note, while it’s nice not to share your screen with the director and take over their screen real estate, the second approach will also feed all active system sounds down the line to your client, so if you are getting alerts or error sounds, they will hear those as well.)
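If you are ever unsure whether Zoom's audio driver actually got installed, you can list the audio devices MacOS currently sees from the Terminal. This uses the standard system_profiler tool; "ZoomAudioDevice" should appear in the list once Zoom has installed its driver:

  system_profiler SPAudioDataType    # lists every audio input/output device the system sees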


Computers in the Home Voiceover Studio

What is best – MacOS or Windows?

Men more brave than I have attempted to solve that dilemma…

In all honesty, it doesn’t really matter.
Other than the fact that Twisted Wave is MacOS only, and Source-Connect 3.8 is a bit clunky on Windows, neither will record better than the other. If you use and understand Windows, have a Windows-based computer, and like working in Windows, no worries. I have plenty of clients who do so and there are plenty of examples of professional recording/production facilities using Windows. My go-to software audio routing tool Source-Nexus is now available for Windows, which is a very good thing for looping playback between applications or to a remote director.

“Understand Windows…” means that you can quickly dig into your Windows system audio settings, update audio drivers as needed, and can troubleshoot audio routing issues.

On MacOS, the Core Audio approach tends to reduce variables. There are fewer places to “lose” your audio, and the system doesn’t randomly restrict input volume. Most audio interfaces are also “plug & play” which eliminates the need for separate audio drivers.

One area where MacOS seems better is system updates. Users of Windows 10 (and above) who have not upgraded to Windows Pro are at the mercy of forced system updates from Microsoft. (You can turn this off in Windows Pro, at least through 11.) On MacOS, you will get nagged to update (see below), but never forced to do so. That means you can maintain a consistent setup in your voiceover production studio. For me, this is one of the most important aspects.

Beyond that – in no particular order:

  • The “Apple tax” of supposed higher prices is kind of blown out of proportion. Yes, you can find a machine that will run Windows for the price of a nice dinner for four, but when you start comparing spec-to-spec (RAM, processor speed, drive type and size, etc.), the differences are not all that large. Divide that difference over a 5 year working life (as pessimistic as possible…) and it’s pretty much a rounding error. (And yes, I realize you can trick out a Mac Pro for 6 figures, but that’s not what we’re talking about here.)
  • Apple ecosystem. My iPhone plays well with my iPad and my Mac computers. I don't need to worry about it. I can focus on other things. Applications are forced to conform to a standard so they have a certain level of consistency.
  • The M1 chip architecture is a game-changer. Low temps. High speed. Unified memory means data doesn't have to be constantly shuffled around to reach different parts of the chip.
  • Apple Mail continues to be a hot mess. But any computer-resident mail program will be sitting on an ever-expanding database. Outlook does not get a pass on this either.

If you are at all computer-tentative (or even computer-phobic), the MacOS is still a kinder, gentler place. You plug stuff in and it works. It tends to keep working.

Should I Update My OS to the Newest Version?
I keep getting notifications that there has been an update.

You do not have to allow MacOS updates. Ever.

For any system update – especially major updates to a new operating system (in MacOS, this will be a new “name” – e.g. El Capitan to High Sierra to Mojave to Catalina to Big Sur to Monterey…), it pays to be a late adopter. I tend to stay at least a year back on OS versions. I’d rather run a refined and stable version that works with the apps I use.

The key issue is that within our world – which is a subset of production recording, which is a subset of the broader user base of Apple computers – updates tend to break functionality which we depend upon.

Every time a new OS version comes out, I remind my audio clients not to be an early adopter. There's just no upside for VO unless you need a specific new feature for your non-voiceover work.

If you need your computer to record every day, you should disable automatic updates and ignore the nagging messages. I won't even consider a system update until the OS has hit X.X.3 – by that point we at least know where the problems are.
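If you prefer the Terminal over System Preferences, the command below should turn off the automatic background check on most recent MacOS versions. This is a sketch and assumes your MacOS version still supports softwareupdate's scheduling option; the checkbox under System Preferences > Software Update accomplishes the same thing:

  sudo softwareupdate --schedule off    # stop MacOS from automatically checking for updates in the background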

When you are running a production studio (as we are in VO), it pays to be a bit behind the curve.


Can I upgrade my MacOS to Monterey?
Can I upgrade my MacOS to Big Sur?
Can I upgrade my MacOS to Catalina?
Can I upgrade my MacOS to an older version?

A number of years ago, Apple made the decision to not charge for the newest OS (Operating System) updates. As with most things Apple have done, this was a pretty significant game changer. Although new OS versions will generally break some functionality in our little corner of the computer-using world, having the option to keep your machine current is a good thing.

Apple did another thing which is a bit less well known – they have maintained older versions of their OS and kept them available for download. This is an amazing gift. Prior to this, restoring old OS versions meant tracking down CD’s to reinstall. It was a major pain.

The reason they do this is that as OS versions become more advanced, Apple are also dropping backward compatibility. That means older Apple computers are hardware-limited from using the newest OS version. (They've been doing that for a while in iOS – which is why your iPhone 6 will not update to any version higher than v12.x.)

Honestly, I’m fine with this. For most users, a limitation on the OS version won’t really get in their way.

In fact, this is a good time to reread the above topic – “Should I update my OS….”

If things are working, the answer very likely could be “no, thank you”…

OK – you’ve been warned. Back to the resources.

OWC provides a very handy guide for which OS version will work with your Macs, iPhones, and iPad models. https://eshop.macsales.com/guides/Mac_OS_X_Compatibility

This is the master resource on Apple.com which gives links to hardware requirements and MacOS system version downloads: https://support.apple.com/en-us/HT211683

An example: iMac Late 2013 – what MacOS can I upgrade to?

Let's say you have an iMac. First, go up to the Apple menu and select "About this Mac". A window will pop up that shows the model and when it was made. In some older OS versions, you may have to dig into a deeper menu, but this information is usually presented on the initial screen you see.

In this case, it says “iMac (21.5-inch, late 2013)”
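If you'd rather check from the Terminal, the built-in system_profiler and sw_vers tools will report the model and the OS version you are currently running. A quick sketch; the Model Identifier (something like "iMac14,1") can be matched to a model year on Apple's spec pages if the marketing name isn't obvious:

  system_profiler SPHardwareDataType | grep "Model"    # prints the Model Name and Model Identifier
  sw_vers                                              # prints the MacOS version currently installed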

You think “Big Sur” is a cool name, so you check here – https://support.apple.com/en-us/HT211238

Reading through that list, you see the oldest iMac listed is a mid-2014 model. Yours does not appear.

The next oldest OS is “Catalina” – https://support.apple.com/en-us/HT210222

This lists your model. It’s also not the absolute oldest one, which gives you some hope that things will work OK. Once again, you ask if there is an actual reason you are doing this.

At this point, you should have backed up EVERYTHING THAT MATTERS to another drive. Changing OS versions can cause data loss. With backups handled and perhaps even duplicated, you proceed.

Oops! You may find that you don’t have enough room on your computer to do the install. Reread the Apple guidelines on that page for how much free drive space you might need.
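If you want a quick read on your free space, About This Mac > Storage will show it; from the Terminal, the standard df tool does the same job:

  df -h /    # shows total, used, and available space on your startup drive in human-readable units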

Oops, Part Deux! The installer may also require a recent-enough starting OS. In this case, your 2013 iMac shipped with Mavericks, which – as the above page specifies – will allow you to update to Catalina.

You then go to this page – https://apps.apple.com/us/app/macos-catalina/id1466841314?mt=12 – which kicks you over to the Apple Mac App Store so you can download your version of Catalina and install it.

After a bit o' downloading, a bit o' file installing, and a restart or two, you should be looking at a welcome screen so you can reintroduce yourself to your newly updated Mac.


Should I Buy One of the New M-Series (“Apple Silicon” / non-Intel) Macs?

Is it OK to buy one of the older Intel Macs?

Should I buy a refurbished Intel Mac? or Save up for a new/current model?

I bought the "last" Intel Mac Mini model (new) and figured I'd be getting 5 years out of it. This was just before they actually released any of the new M1 models; I needed to replace a very aged studio computer, and it had to just work.

Intel was a “known known” and was running the most advanced version of Catalina at that time. So, I was fine with it.

Right now, given the consistent and – choosing the next words with great care – freaking AMAZING performance of the M1 architecture, I would save up the needed money and get an M-series chip. It's the future. The future is here.

That being said, if an Intel-based Macintosh were dirt-cheap – roughly 20-30% of the price of a comparable current model – I might consider it if I needed hardware today. But its useful life is limited. I would say if you got two solid years out of it, that would be a win.

RE: Software – right now (early 2022), the focus is on rewriting current software to work natively on M1 processors. That means we really have not started to see software which maximizes the hardware. When that starts occurring, you’ll likely see diminishing effort for anything Intel-based. There’s just no impetus to keep that stuff fresh. As always, it will hang around but become more of an effort to make work.

Ultimately, it's your studio, and there's no reason to run the latest/greatest/current OS as long as you can open a WAV file from a client and deliver to spec. "If it ain't broke, don't fix it…" is a valid approach. Plenty of full studios ran on legacy versions of Pro Tools for years. But that does defer the pain rather than avoid it.


Where did my memory go? Maximizing available RAM for recording

If you are getting glitching or dropouts in your recordings, it may just be that your computer doesn't have enough available RAM. To test this, quit all your other applications and try recording again. Note that on MacOS, "Quitting" an application is different from just closing its window. (There's also a quick Terminal check after the list below.)

Here are 4 things to check:

Thing #1:
Many computer users will close the main window of an application but never "Quit" the application. Press Command-Tab to see which applications are actually running, then use "Quit" in each one to make sure it has fully stopped.
Thing #2:
Anti-virus apps. Some are worse than others. I actually see the most issues from Malwarebytes, which aggressively runs in the background and tends to interrupt other processes. In most of them, you can set scans to run manually, or at midnight or some other time when you aren't doing anything on the computer.
Thing #3:
Backup apps. Anything like Backblaze or even Time Machine can be set to back up continuously. This can be problematic. I set mine (again) to run at times when I am not using the computer. In other words, it backs up my work from the day at the end of the day.
Thing #4:
Those pesky apps… See #1 above and Quit everything you are not actively using. If you are recording, you really just need your recording app (Twisted Wave, in my case) running. So, quit everything else.
If you can't quit everything, keep in mind:

  • Chrome is a memory hog. If you have several tabs open, each one may be draining system memory.
  • Apple Music always seems to be open – many people never change their default audio application, so double-clicking a WAV opens Apple Music.
  • Anything that communicates actively with the internet – Discord, Slack, etc.
  • Apple Mail (if you use that, which, honestly, I wouldn't), or any application that checks your mailboxes on a regular basis.
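Here's the quick Terminal check mentioned above. It's a sketch using two tools that ship with MacOS (Activity Monitor's Memory tab shows the same information graphically, and exact output wording varies by OS version):

  memory_pressure | tail -1     # the last line is a system-wide free-memory percentage
  top -l 1 -o mem | head -20    # one snapshot of the processes using the most memory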



Each week, I share a “Tuesday Tech Tip” with my email community.
If you would like to receive those emails the day they publish, please take a moment to share your contact information through this sign up form.
Thank you.

Follow me on Instagram – @JustAskJimVO | Subscribe to my JustAskJimVO YouTube Channel
