This topic again. My post may be a bit long as I really get into this stuff, so I apologize in advance. But if you read it, I'm sure you'll learn something.
CD (a physical CD) = WAV = FLAC > 320 MP3 > iTunes AAC > MP3 lower than 256 kbps
Hmmm, depends on the person. There are a lot of ppl out there (myself included) who think 320 is overkill. There's a reason the LAME VBR encoder is so popular (and celebrated). Let's go to school right quick:
Music is very complex. We all know it's made up of bits of information, and some parts require more bits than others. To keep it simple, compare music vs. spoken word vs. silence: music requires more bits than spoken word, which requires more than silence (which should require none, but that's not possible when encoding). This is exactly why ppl prefer uncompressed audio (WAV, AIFF) or lossless compression (mainly FLAC and ALAC). Lossless is exactly that: the file retains every bit of the original sound, and you lose zero quality. Once you compress it lossily (think MP3 and AAC, known as lossy files), you're throwing bits away, and losing sound quality along the way. The main (well, only, really) drawback to lossless files is the size. FLAC and ALAC make it a bit more tolerable, but you're still talking 350-400 MB for a 60-minute album.
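To put rough numbers on that, the math is easy enough to check yourself. A quick Python sketch (the ~60% FLAC ratio is just a typical figure I'm assuming; real compression varies with the music):

```python
# Back-of-the-envelope file sizes for 60 minutes of CD audio.
# CD audio: 44,100 samples/sec x 16 bits x 2 channels = 1,411,200 bits/sec.
CD_BITRATE = 44_100 * 16 * 2   # bits per second of uncompressed PCM
SECONDS = 60 * 60              # one hour of music

def size_mb(bitrate_bps: float, seconds: int = SECONDS) -> float:
    """Megabytes needed at a given bitrate (bits/sec) for a duration."""
    return bitrate_bps * seconds / 8 / 1_000_000

wav_mb = size_mb(CD_BITRATE)        # raw WAV/AIFF
flac_mb = wav_mb * 0.60             # assume FLAC keeps ~60% of raw size
mp3_mb = size_mb(320_000)           # 320 kbps MP3

print(f"WAV:  ~{wav_mb:.0f} MB")    # ~635 MB
print(f"FLAC: ~{flac_mb:.0f} MB")   # ~381 MB -- right in that 350-400 MB range
print(f"MP3:  ~{mp3_mb:.0f} MB")    # ~144 MB
```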
One important thing about lossy compression is that it should only be done once. For example, ppl like to convert (known as transcoding) iTunes songs (AAC) into MP3s (because they're more comfortable with MP3s). That is a big-time NO-NO. You should never, ever compress an already-compressed file; you're throwing away even more bits of info after already stripping a ton away the first time. The only time you should ever transcode is from lossless to lossy. Lossy to lossy, or lossy to lossless (which just leaves you with a big file that still sounds lossy), are, again, big-time no-no's.
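If you ever script your conversions, it's worth baking that rule in so you can't transcode the wrong way by accident. A minimal sketch (the function and extension lists are just mine for illustration; note that .m4a in real life can hold either lossy AAC or lossless ALAC, so a real tool would inspect the actual codec, not just the file name):

```python
from pathlib import Path

# Illustrative extension sets, not exhaustive.
LOSSLESS = {".wav", ".aiff", ".flac"}
LOSSY = {".mp3", ".aac", ".m4a", ".ogg"}

def check_transcode(src: str, dst: str) -> None:
    """Enforce the 'compress once' rule: only lossless sources
    may be transcoded; lossy sources are rejected outright."""
    ext = Path(src).suffix.lower()
    if ext in LOSSY:
        raise ValueError(f"refusing {src} -> {dst}: source is already lossy")
    if ext not in LOSSLESS:
        raise ValueError(f"unknown source format: {ext}")

check_transcode("album.flac", "album.mp3")       # fine: lossless -> lossy
try:
    check_transcode("song.m4a", "song.mp3")      # lossy -> lossy: rejected
except ValueError as err:
    print(err)
```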
Now, let's talk about 320 vs. V0 (LAME's highest-quality VBR switch, and now the scene standard). Again, I'll keep it simple: take the last track of an album that has a hidden track tacked on after 3 minutes of silence. Encoded at 320, every second of the song gets 320 kbps whether it needs it or not, including those 3 minutes of silence (which is pointless). With the V(#) switch (we'll say 0, the highest-quality setting, even though 2 is still popular), each second only gets the bits it calls for. Size comes into play here too: a 10-minute track (just an example) encoded at 320 will obviously be larger than a V(#)-encoded file (sometimes considerably larger, depending on circumstances).
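For the curious, here's what the two modes look like driven from Python (a sketch assuming the lame binary is installed and on your PATH; filenames are just examples):

```python
import subprocess

def encode_cbr_320(wav: str, mp3: str) -> None:
    # Constant bit rate: every second gets 320 kbps, silence included.
    subprocess.run(["lame", "-b", "320", wav, mp3], check=True)

def encode_vbr_v0(wav: str, mp3: str) -> None:
    # Variable bit rate, highest-quality preset: quiet or silent
    # passages get fewer bits, busy passages get more.
    subprocess.run(["lame", "-V", "0", wav, mp3], check=True)

encode_cbr_320("last_track.wav", "last_track_320.mp3")
encode_vbr_v0("last_track.wav", "last_track_v0.mp3")
# On a track padded with minutes of silence, the V0 file usually comes
# out noticeably smaller at essentially the same audible quality.
```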
That is one big reason why 320 is basically overkill. There are plenty of ppl who don't care and prefer it over anything short of lossless. More power to 'em, but I'd personally take a V2 rip over 320, let alone V0.
As far as iTunes quality, that's another hotly debated topic amongst audiophiles (which most ppl here are not). What's really not debatable is that, sound-wise, AAC >>> MP3. AAC has been shown in listening tests to retain higher sound quality at a lower bit rate than MP3. For example, a 128 AAC will sound as good as, if not better than, a 192 or even 224 MP3. Most files in the iTunes store are 256. You may find a few (older stuff) here or there that are still stuck at 128, but those are few and far between.
Some ppl detest buying from the iTunes store, though, and think Apple does something to their AAC-encoded files. It's possible, but I think they sound just fine, and most ppl (with their cheap headphones and cheap computer speakers), if put to a blind listening test, wouldn't know an iTunes file from a song straight off the CD. That doesn't mean there isn't a difference, just that most (not all) ppl's ears can't tell the difference without expensive-ass audio equipment.
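And "blind" is the key word there. The standard way to run one is an ABX test: you hear A, then B, then a mystery X, and have to say which one X was, over many trials. Here's the scoring logic in sketch form (playback stubbed out; the 12-of-16 bar is the commonly used threshold, roughly p < 0.05 if you were just guessing):

```python
import random

def abx_trial(can_hear_difference: bool) -> bool:
    """One ABX trial: X is secretly A or B. Returns True if the
    listener names it correctly. A listener who can't hear the
    difference is effectively flipping a coin."""
    x_is_a = random.random() < 0.5
    guess_a = x_is_a if can_hear_difference else random.random() < 0.5
    return guess_a == x_is_a

TRIALS = 16
correct = sum(abx_trial(can_hear_difference=False) for _ in range(TRIALS))
print(f"{correct}/{TRIALS} correct")
# A pure guesser averages ~8/16. You'd want 12+/16 before claiming
# you can actually tell an iTunes AAC from the CD.
```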
Another thing that is VERY important is how to get the music off of the CD and onto your computer in the first place (for instance, iTunes is NOT a good ripper or MP3 encoder), but I'll stop now, as I just saw how much I typed.
If anyone wants to know, I'll go into that a bit...just ask. Peace.