NOTE: This article is in response to a discussion on the CCRM email list about audio data compression.
Apple itself (iTunes) does not deliver assets in the WAV format, though you can convert the delivered AAC version to WAV in your iTunes software. Side subject: Apple (and the streaming providers) do this to save on bandwidth. If 100 people download a 100 MB file, that's 10 GB of data they just pushed out of their data center. If 1,000 people want to download that file, it becomes 100 GB. The data center pipes are very large, but they are finite. Then there's the cost issue... At the data center level (just as in streaming), bandwidth is purchased and traded much like electricity. It's a commodity. Apple will buy a block of transfer from Level 3 or AT&T or some other provider and pay serious money for it. So there's a real cost difference between distributing a WAV file and an AAC file roughly a tenth its size.
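The arithmetic is easy to sketch. This little example uses an illustrative file size and a made-up per-gigabyte transfer price (not Apple's actual rates) just to show how the totals scale:

```python
# Back-of-the-envelope bandwidth cost comparison (all numbers illustrative).
WAV_MB = 100          # hypothetical uncompressed download
AAC_MB = 10           # same audio at roughly 10:1 lossy compression
PRICE_PER_GB = 0.10   # assumed commodity transfer price, in dollars

def transfer_cost(downloads, file_mb, price_per_gb=PRICE_PER_GB):
    """Total data pushed (GB) and its cost for a given number of downloads."""
    gb = downloads * file_mb / 1000
    return gb, gb * price_per_gb

gb_wav, cost_wav = transfer_cost(1000, WAV_MB)
gb_aac, cost_aac = transfer_cost(1000, AAC_MB)
print(f"1000 WAV downloads: {gb_wav:.0f} GB (${cost_wav:.2f})")
print(f"1000 AAC downloads: {gb_aac:.0f} GB (${cost_aac:.2f})")
```

Whatever the real per-gigabyte price is, the ratio holds: the compressed version costs about a tenth as much to distribute.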
Here's where it gets a little nerdy...
Audio data compression (such as MP3 and AAC) essentially analyzes snapshots of the audio and removes parts of the spectrum in order to reduce the size of the file. But because of the way our brains interpret sound (psychoacoustic masking), we don't hear the result of the removal unless it gets too aggressive.
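Here's a toy illustration of that idea, with a caveat: real MP3/AAC encoders use psychoacoustic masking models, while this sketch just keeps the loudest spectral bins and throws away the rest. A loud tone survives; a much quieter tone vanishes, and the reconstruction error is tiny:

```python
import cmath
import math

N = 256
loud, quiet = 8, 60   # bin numbers: one loud tone, one much quieter tone
signal = [math.sin(2 * math.pi * loud * n / N)
          + 0.01 * math.sin(2 * math.pi * quiet * n / N) for n in range(N)]

# Naive DFT (O(N^2), fine for a demo) to get the spectrum.
spectrum = [sum(signal[n] * cmath.exp(-2j * math.pi * k * n / N)
                for n in range(N)) for k in range(N)]

# "Perceptual" pruning: zero out anything well below the loudest component.
peak = max(abs(c) for c in spectrum)
pruned = [c if abs(c) >= 0.05 * peak else 0 for c in spectrum]
kept = sum(1 for c in pruned if c != 0)

# Inverse DFT to hear what survived the pruning.
recon = [sum(pruned[k] * cmath.exp(2j * math.pi * k * n / N)
             for k in range(N)).real / N for n in range(N)]
max_err = max(abs(a - b) for a, b in zip(signal, recon))
print(f"bins kept: {kept} of {N}, max error: {max_err:.4f}")
```

Only a couple of bins out of 256 survive, yet the reconstruction differs from the original by about the amplitude of the quiet tone we discarded. That's the whole bet lossy codecs make: if the masking model is right, you never notice what's missing.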
Now you start to get into HOW the particular audio file was produced (as in, by the artist or label). In the mastering process, did they essentially create a square wave by slamming the sliders in Adobe Audition's Hard Limiter? See Example 1.
This is a professionally produced song from a Christian artist, but notice the difference compared to Example 1. What do you see? Dynamic range! And there's nothing wrong with that! Your station's audio processing will use automatic gain control (AGC), multiband compression, limiting, and clipping to achieve the sound we've all become accustomed to on FM radio. That's a topic of discussion I'll leave for another time, but there is a technical reason for it... it's all about signal-to-noise...
Anyway, here's where I wrap all of this up with a nice little bow.
When the music producer uses hard limiting and clipping, it creates distortion. Remember this, because it's one of the building blocks of the thrilling conclusion.
Does your station use a lossy STL? For example, do you use an analog STL or a Barix box? If you're using something like a Barix box, it's applying lossy data compression (unless you've got an amazing IP link to the transmitter site and can transport linear audio). So if you're playing a song that is (or was originally) an MP3, MP2, or AAC, you're going to experience generational loss by the time the audio arrives at your transmitter site. What will that sound like?
Now take that second-generation encoded audio, run it through your station's audio processor, and convert it to FM. All the artifacts caused by the original data compression, plus all the artifacts caused by the second pass, are now amplified by the processing.
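A crude way to see generational loss is to model each lossy encode/decode cycle as adding a small amount of independent coding noise. Real codec artifacts are structured, not random, and the assumed noise level here is arbitrary, but the cumulative effect on signal-to-noise ratio is similar: each generation makes it measurably worse.

```python
import math
import random

random.seed(1)
N = 1000
original = [math.sin(2 * math.pi * 5 * n / N) for n in range(N)]

def lossy_pass(samples, noise=0.02):
    """One simulated encode/decode generation (hypothetical noise level)."""
    return [s + random.gauss(0, noise) for s in samples]

def snr_db(ref, test):
    """Signal-to-noise ratio of `test` relative to `ref`, in dB."""
    sig = sum(r * r for r in ref)
    err = sum((r - t) ** 2 for r, t in zip(ref, test))
    return 10 * math.log10(sig / err)

audio, snrs = original, []
for gen in range(1, 4):
    audio = lossy_pass(audio)          # another trip through a lossy codec
    snrs.append(snr_db(original, audio))
    print(f"generation {gen}: SNR = {snrs[-1]:.1f} dB")
```

Each pass stacks its errors on top of the last, so the SNR only goes down. That's exactly what happens when an MP3 from the automation system gets re-encoded by the STL.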
You may not hear it in a short listening test, but over prolonged listening? It will eventually cause listener fatigue and tune-out.
So, the short lesson here is: let's do the best that we reasonably can. You can't reach a 10/10 in quality, because the step from 8/10 to 9/10 is ten times more difficult and expensive than the step from 7/10 to 8/10, and going from 9/10 to 10/10 is ten times more difficult and expensive again. Plot quality against budget and the curve flattens out, with perfect audio at the far right demanding an unlimited budget.
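The "each step costs ten times the last" rule above works out like this, assuming a hypothetical $1,000 baseline for the 7/10-to-8/10 step:

```python
# Diminishing returns on audio quality (baseline cost is a made-up figure).
BASE_COST = 1_000  # hypothetical cost of moving from 7/10 to 8/10

step_costs = {f"{q} -> {q + 1}": BASE_COST * 10 ** (q - 7) for q in (7, 8, 9)}
for step, cost in step_costs.items():
    print(f"{step}: ${cost:,}")
```

Whatever the real baseline is, the last point of quality ends up costing a hundred times the first, which is why "the best we reasonably can" is the right engineering target.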
Next time, we'll discuss content, the changing listening landscape, and how podcasting is changing things.