No, this is apparently a common misconception. Word length (bit depth) has equal effect at all frequencies.
It determines the resolution of the D/A and A/D converters wrt Full Scale. More bits means a more accurate translation from the analog input to the digital realm for processing and back to analog again for output. 4 more bits means 16 times more resolution.
Imagine we have 1 (binary) bit to describe how much signal is present. There are only 2 possible outputs: 1 (signal is present) or 0 (signal is not present). Now imagine we've got 2 bits. That gives us 4 combinations, where 00 is no signal and 11 is full scale, so our resolution is 1/3 of full scale. This is still not a very accurate representation of the infinitely variable input signal.
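To make that concrete, here's a minimal sketch of the idea in Python. The `quantize` function and its names are purely illustrative, not any real converter's API; it just snaps a level between 0 and full scale to the nearest of the 2^n available codes:

```python
def quantize(level, n_bits):
    """Snap a level in [0, 1] to the nearest of 2**n_bits codes, then back."""
    steps = 2**n_bits - 1          # number of intervals between codes
    return round(level * steps) / steps

# With 2 bits there are 4 codes (00, 01, 10, 11), so the step size is 1/3 of
# full scale: an input of 0.4 snaps to 1/3.  With 20 bits the same input
# lands far closer to its true value.
print(quantize(0.4, 2))
print(quantize(0.4, 20))
```

Each extra bit doubles the number of codes, which is why 4 more bits means 16 times more resolution.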
Audio waveforms swing into both the positive and negative domains, so half our combinations will be used for the positive side and half for the negative. If we have 20 bits, then we have a little more than a million combinations: half a million for each side of zero. So our resolution is 1/524288th of full scale.
This seems like gross overkill until you recall the logarithmic nature of sound levels => amplitude ratios convert to dB as 20 x log10, roughly 6dB per bit. A 40dB difference is a difference of 100X. 100dB is 100,000X. About 114dB is 500,000X. So our 20bit A/D and D/A converters can accurately represent signal magnitudes (or differences in signal magnitudes) as small as roughly 114dB below full scale. 24bit converters would take that to about 138dB below full scale.
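For anyone who wants to check the arithmetic, here's a quick sketch using the standard amplitude-dB convention (20 x log10 of the ratio); the function name is just for illustration:

```python
import math

def ratio_to_db(ratio):
    """Convert an amplitude ratio to decibels (20 * log10 convention)."""
    return 20 * math.log10(ratio)

steps_20bit = 2**19   # 524,288 steps on each side of zero for 20 bits
steps_24bit = 2**23   # 8,388,608 steps per side for 24 bits

print(round(ratio_to_db(100)))          # a 100X amplitude difference -> 40 dB
print(round(ratio_to_db(steps_20bit)))  # ~114 dB for 20-bit converters
print(round(ratio_to_db(steps_24bit)))  # ~138 dB for 24-bit converters
```

The ~24dB gap between the two results is the familiar "about 6dB per bit" rule of thumb applied to the 4 extra bits.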
The guys who have experience with both units have no complaints about the 1100.