I've been aligning the IF and detector stages of FM tuners for minimum stereo distortion, using a 1 kHz single-channel sine wave deviated 75 kHz. THD is usually less than 0.1% in wide IF mode, approaching 0.01% in some tuners. In narrow mode, THD is typically 0.3% to 1%, depending on the bandwidth and number of filters, with some tuners approaching 0.1%. Although I can easily hear the difference with sine-wave modulation, I've never been able to hear a difference in sound quality between wide and narrow IF filters on ordinary program material, except when multipath propagation is present.
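To make the THD numbers concrete, here is a minimal sketch (my own illustration, not part of my test setup) of how THD can be computed from a captured signal: measure the fundamental and each harmonic with single-bin DFTs and take the RMS harmonic sum relative to the fundamental. The function names, the 48 kHz sample rate, and the synthetic test tone are assumptions for the example.

```python
import math

def tone_level(samples, fs, f):
    # Magnitude of a single-bin DFT of `samples` at frequency f (Hz).
    n = len(samples)
    re = sum(s * math.cos(2 * math.pi * f * i / fs) for i, s in enumerate(samples))
    im = sum(s * math.sin(2 * math.pi * f * i / fs) for i, s in enumerate(samples))
    return math.hypot(re, im) * 2 / n

def thd_percent(samples, fs, f0, n_harmonics=9):
    # THD as the RMS sum of harmonic amplitudes relative to the fundamental.
    fund = tone_level(samples, fs, f0)
    harm = math.sqrt(sum(tone_level(samples, fs, k * f0) ** 2
                         for k in range(2, n_harmonics + 2)))
    return 100 * harm / fund

# One second of a 1 kHz tone with 0.5% second and 0.2% third harmonic.
fs = 48_000
x = [math.sin(2 * math.pi * 1000 * i / fs)
     + 0.005 * math.sin(2 * math.pi * 2000 * i / fs)
     + 0.002 * math.sin(2 * math.pi * 3000 * i / fs)
     for i in range(fs)]
print(round(thd_percent(x, fs, 1000), 3))  # → 0.539
```

Using a full second of signal puts every harmonic on an exact DFT bin, so no windowing is needed for this sketch; a real measurement on noisy off-air audio would need a window and wider search bins.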
I've noticed that the stereo subchannel in my Sony ST-S555ES tuner is consistently grungier in narrow mode when listening with QMM, a circuit that demodulates the L−R subchannel spectrum while suppressing the normal audio. It lets you listen for sounds that shouldn't be there. The grunge seems to be due to L+R harmonics spraying into the L−R region, probably from detector nonlinearities or nonflat IF-filter group delay. Besides subchannel grunge, I sometimes notice sibilant splash in the normal audio: a burst of mid-frequency noise accompanying very high-frequency sounds, especially esses. This problem is often due to multipath interference, and sometimes it seems to be an artifact of improperly recorded program material. But some of it may be generated within the tuner by L+R harmonics.
L+R harmonics sprayed into the L−R subchannel region yield nonharmonic distortion products: the stereo decoder translates the subchannel down from the 38 kHz subcarrier, so a harmonic landing there reappears at an audio frequency harmonically unrelated to the original tone. Such dissonant sounds stand out. I wondered whether these artifacts might be reduced by aligning for minimum high-frequency harmonics in the L−R region rather than for minimum in-channel THD. Since I can't hear THD under ordinary conditions in my tuners, but I think I sometimes hear L+R harmonic artifacts, perhaps this alternative alignment might improve the overall sound.
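A quick sketch of where these products come from, assuming the standard FM composite layout (L+R at 0–15 kHz, 19 kHz pilot, L−R as DSB-SC around the 38 kHz subcarrier, occupying 23–53 kHz). Each harmonic of an L+R tone that lands in the subchannel region decodes to |f − 38 kHz| in the L−R audio; the function and constants below are my own illustration.

```python
SUBCARRIER = 38_000              # Hz, L-R suppressed subcarrier
SUBCH_LO, SUBCH_HI = 23_000, 53_000  # Hz, L-R subchannel region

def subchannel_images(f0, max_harmonic=6):
    # For each harmonic of f0 that falls in the L-R region, return
    # (harmonic number, composite frequency, decoded L-R audio frequency).
    hits = []
    for k in range(2, max_harmonic + 1):
        f = k * f0
        if SUBCH_LO <= f <= SUBCH_HI:
            hits.append((k, f, abs(f - SUBCARRIER)))
    return hits

for k, f, audio in subchannel_images(10_000):
    print(f"harmonic {k} at {f // 1000} kHz decodes to {audio // 1000} kHz in L-R")
```

For a 10 kHz tone this reports the 3rd, 4th, and 5th harmonics (30, 40, 50 kHz) decoding to 8, 2, and 12 kHz in L−R; none of those is harmonically related to 10 kHz, which is why the products sound dissonant rather than like ordinary THD.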
The left image shows the ST-S555ES detector spectrum for a 10 kHz monophonic tone deviated 75 kHz, with normal alignment. This is with the wide IF filter (two 180 kHz GDTs). The span is 0 to 50 kHz on an HP 3580A spectrum analyzer. Sampling the signal at the detector avoids the harmonic attenuation of the deemphasis network and ultrasonic filter. The right image shows the spectrum after realigning the tuner to minimize the 10 kHz harmonics; I readjusted the mixer transformer, the IF-filter termination resistances, and the ratio detector. 10 kHz THD decreases from 1.4% to 0.5%.
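Since the harmonic levels are read off the analyzer in dB below the fundamental, converting them to a THD percentage is just a power sum. Here is a small sketch; the readings in the example are hypothetical, not my actual measurements.

```python
import math

def thd_from_dbc(harmonic_levels_dbc):
    # THD (%) from harmonic levels in dB relative to the fundamental (dBc),
    # as read off a spectrum analyzer display.
    power = sum(10 ** (db / 10) for db in harmonic_levels_dbc)
    return 100 * math.sqrt(power)

# Hypothetical readings: 2nd, 3rd, 4th harmonics at -40, -50, -55 dBc.
print(round(thd_from_dbc([-40, -50, -55]), 2))  # → 1.06
```

Note that a single harmonic at −40 dBc already sets THD at 1%, so the lower harmonics matter only when the dominant one has been pushed well down.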
This shows in-band distortion products for a 1 kHz stereo tone deviated 75 kHz. The frequency span is 0 to 5 kHz. There are no significant harmonics beyond 5 kHz. The alternative alignment increases distortion from 0.09% to 0.16%.
Here is the 10 kHz distortion for the narrow IF filter (two 150s cascaded with the 180s). THD decreases from 4.1% to 3.4% with the alternative alignment, even though that alignment was performed with the wide IF filter.
This shows 1 kHz stereo distortion in narrow mode. THD increases from 0.12% to 0.34% with the alternative alignment.
With the alternative alignment, I can't perceive any distortion on program material even though the in-band distortion level has increased. It's hard to be sure without making a direct A/B comparison, but I believe there may now be less grunge in the L−R region when listening with QMM.