OK, so we have shaped the frequency content of our sound into something cool/beautiful/awesome/grungy. Now what? One thing you will notice when working with recordings rather than synthesized sounds (sine waves, white noise, etc.) is the wild inconsistency of their amplitude. One moment the singer is whispering into the microphone, the next she is screaming her lungs out. In the good old days this was solved by a very nervous sound engineer who kept a finger on the mixing console, riding the fader up and down to make sure the audience could 1) hear the star whispering and 2) keep their hearing intact when she hit that high C. Thanks to technology, this job can now be outsourced to a process called compression.
A compressor is a machine version of our nervous sound engineer, pushing the amplitude up and down in response to the incoming sound to keep the output as steady as possible.
If you were to design such a machine you would need to figure out a few things. The most important ones are:

- How loud does the sound need to get before we start turning it down?
- Once we start, how much do we turn it down?
- How quickly do we react when the sound gets loud, and how quickly do we let go when it gets quiet again?

In a normal compressor these parameters are called threshold, ratio, attack and release.
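To make those parameters concrete, here is a minimal sketch of a feed-forward compressor in Python. It is an illustration under common assumptions (hard knee, a simple one-pole envelope follower), not the one true design; all names are made up for this example.

```python
import numpy as np

def compress(x, threshold_db=-20.0, ratio=4.0,
             attack=0.01, release=0.1, sample_rate=44100):
    """Hard-knee compressor on a mono float signal in [-1, 1]."""
    # Per-sample smoothing coefficients derived from attack/release times.
    att = np.exp(-1.0 / (attack * sample_rate))
    rel = np.exp(-1.0 / (release * sample_rate))
    env = 0.0
    out = np.empty_like(x)
    for i, s in enumerate(x):
        level = abs(s)
        # Envelope follower: react quickly (attack) when the signal gets
        # louder, let go slowly (release) when it gets quieter.
        coeff = att if level > env else rel
        env = coeff * env + (1.0 - coeff) * level
        level_db = 20.0 * np.log10(max(env, 1e-10))
        # Above the threshold, shrink the overshoot by the ratio.
        if level_db > threshold_db:
            gain_db = (threshold_db - level_db) * (1.0 - 1.0 / ratio)
        else:
            gain_db = 0.0
        out[i] = s * 10.0 ** (gain_db / 20.0)
    return out
```

Feed it a signal that is quiet in the first half and loud in the second, and you will see the loud half pulled down toward the threshold while the quiet half passes through untouched.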
Make-up gain brings the level back up to 0 dB after compressing. This means that the compressor analyses the audio after compression and raises it so that the loudest parts of it hit 0 dBFS.
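A sketch of make-up gain, assuming "0 dBFS" here means peak normalization: measure the loudest sample after compression and scale the whole signal so that sample lands at full scale. The function name is illustrative.

```python
import numpy as np

def make_up_gain(x, target_dbfs=0.0):
    """Scale x so its loudest sample hits the target level in dBFS."""
    peak = np.max(np.abs(x))
    if peak == 0.0:
        return x  # silence: nothing to boost
    target = 10.0 ** (target_dbfs / 20.0)  # 0 dBFS -> 1.0 full scale
    return x * (target / peak)
```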
In the next figure you will find an uncompressed and a compressed recording. Guess which is which?
Interestingly, the perceptual result of lowering the amplitude of the loudest parts is that the entire recording sounds louder. This is because the average amplitude goes up (if we use make-up gain, that is...), and it is the root cause of the loudness war that has been raging since advanced compression algorithms became widely available in the 1990s.
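You can see the effect in a toy experiment, using RMS as a crude stand-in for perceived loudness (real loudness metering is more involved): squash the peaks, apply make-up gain back to full scale, and the average level rises even though the peaks did not.

```python
import numpy as np

rng = np.random.default_rng(0)
signal = rng.normal(0.0, 0.2, 44100)
signal /= np.max(np.abs(signal))           # peaks at full scale (0 dBFS)

# Crude "compression": clip the peaks, then make-up gain back to 0 dBFS.
compressed = np.clip(signal, -0.25, 0.25)
compressed /= np.max(np.abs(compressed))

def rms(x):
    return np.sqrt(np.mean(x ** 2))

print(rms(signal), rms(compressed))        # the compressed RMS is higher
```

Both signals peak at exactly the same level, yet the compressed one carries far more average energy, which is precisely why it sounds louder.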