5

When a sampled sequence is upsampled, zeroes are inserted between the original sample values. But how are the zero values used to effect an increase in the number of sampled values?

user149464
  • 51
  • 2
  • 4
    +1 for proper use of "effect" as a verb – DKNguyen May 18 '20 at 00:57
  • Don't know anything about upsampling. Didn't even realize it was a thing until this post. Maybe this will help: https://www.audioholics.com/audio-technologies/upsampling-vs-oversampling-for-digital-audio It looks like it artificially increases your sampling bandwidth so you can filter your signal more easily. – DKNguyen May 18 '20 at 01:00
  • After upsampling, you have to run the zero-inserted sample stream through a filter, that essentially interpolates between the zero inserted values and the "real" measured values. – SteveSh May 18 '20 at 01:01

2 Answers

4

Conceptually, inserting zeros into a sequence of samples effectively creates an amplitude-modulated train of narrow pulses. In the frequency domain, this looks like many copies (or "images") of the original baseband signal, repeated at intervals corresponding to the original sampling rate.

Passing this sample stream through a low-pass filter eliminates most of those extra copies, leaving only the ones that correspond to the new sample rate. In the time domain, the sample stream is now a "smoother" representation of the original signal.

The ideal filter for this purpose is the so-called "brick wall" filter, which has a flat passband and the fastest possible cutoff. In the frequency domain, this looks like a rectangle. In the time domain, the impulse response is the sinc() function.

There is no new information in these new samples that wasn't in the original set of samples, but now the signal is easier to handle in certain ways, particularly with regard to D/A conversion and analog filtering.
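As a minimal sketch of the pipeline described above (not code from the answer itself): zero-stuff by 2, then low-pass with a Hann-windowed sinc as a finite stand-in for the ideal brick-wall filter. The function name and the 33-tap kernel length are illustrative choices, not anything canonical.

```python
import math

def upsample_2x(x, half_taps=16):
    """Zero-stuff by 2, then low-pass with a Hann-windowed sinc
    (a truncated approximation of the ideal brick-wall filter)."""
    # 1. Insert a zero between every original sample.
    stuffed = []
    for v in x:
        stuffed.extend([v, 0.0])
    # 2. Windowed-sinc kernel, cutoff at the ORIGINAL Nyquist
    #    (i.e. half the new rate); its DC gain of ~2 restores amplitude.
    h = []
    for n in range(-half_taps, half_taps + 1):
        sinc = 1.0 if n == 0 else math.sin(math.pi * n / 2) / (math.pi * n / 2)
        window = 0.5 + 0.5 * math.cos(math.pi * n / half_taps)  # Hann
        h.append(sinc * window)
    # 3. Convolve, keeping the centre so output aligns with input.
    y = []
    for i in range(len(stuffed)):
        acc = 0.0
        for k, hk in enumerate(h):
            j = i + half_taps - k
            if 0 <= j < len(stuffed):
                acc += hk * stuffed[j]
        y.append(acc)
    return y
```

Because the sinc is zero at every non-zero integer, the even output samples come back as exactly the original samples; only the odd (previously zero) samples are newly computed, which is the "no new information" point above.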

Dave Tweed
  • 172,781
  • 17
  • 234
  • 402
  • 1
    Say you have two sample values, 17 and 23. Let’s further assume the upsampled sequence becomes 17, 0, 0, 23. Does not interpolation render the resultant 17, 19, 21, 23? If so, HOW? – user149464 May 18 '20 at 02:09
  • Well, no. That would be "linear interpolation", which is actually not optimal. The sinc() interpolation actually looks at a large number of surrounding samples in order to produce the optimal result. I wrote an article about this many years ago in Circuit Cellar. If I can dig up the diagrams from that, I'll add them to my answer. – Dave Tweed May 18 '20 at 03:39
  • @nowradioguy The length of typical low pass filters is many samples, so you get much better than linear interpolation (which removes high frequencies). The ideal brick wall filter has a sinc impulse response, so it considers infinitely many samples (not just 2), and is able to perfectly retain all frequencies when upsampling. Real filters aren't infinitely long, but usually you don't care about frequencies exactly at Nyquist anyway. – user1850479 May 18 '20 at 04:07
3

When we increase the sampling rate of a sampled system, we have to interpolate missing values.

There are actually many ways to interpolate signals. Starting with 'zero-stuffing' allows us to generalise the second part of the interpolation operation to low pass filtering at the new sample rate.

There is a cost/benefit tradeoff between how well the original signal is preserved when being interpolated, and how much mathematical effort, and how much latency we incur, in performing better and better theoretical interpolation.

Two measures of 'how good' an interpolation is:

a) Do the frequency components of the interpolated signal match those of the original signal up to the original's Nyquist frequency? This might be called 'passband quality'.

b) Are there any extra components present at higher than the original's Nyquist frequency? This might be called 'stopband quality'.

=Ideal Nyquist/sinc interpolation=

This method goes for signal quality, at the expense of computation.

If the original signal is correctly Nyquist bandlimited, then we can fit a sinc curve to every data point, and recover the original unsampled analogue signal. From there, we can choose any new sampling rate we like.

Unfortunately, a sinc curve is not only quite expensive in mathematical operations to implement, it's also infinite in extent, so we would have an infinite latency.

This means that to make this work, we must accept some limitations in the interpolation goodness. Such limitations might be that the result is not correct in the 10% of the bandwidth around Nyquist, or that the passband deviates by 0.1%. This allows us to use a finite length sinc curve that we can actually implement.
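To make the finite-latency point concrete, here is a hedged sketch (the function name `sinc_interp` is hypothetical): it estimates the signal between samples using a sinc truncated to a window around the evaluation point, and the residual error shrinks as the window lengthens.

```python
import math

def sinc_interp(x, t, half_len):
    """Estimate x(t) from unit-spaced samples using a sinc kernel
    truncated to +/- half_len samples around t (a crude finite
    approximation of the ideal, infinite sinc)."""
    centre = int(math.floor(t))
    acc = 0.0
    for n in range(max(0, centre - half_len + 1),
                   min(len(x), centre + half_len + 1)):
        u = t - n
        acc += x[n] * (1.0 if u == 0 else
                       math.sin(math.pi * u) / (math.pi * u))
    return acc

# A bandlimited test tone, well below Nyquist (0.2 cycles/sample).
x = [math.sin(2 * math.pi * 0.2 * n) for n in range(400)]
true_mid = math.sin(2 * math.pi * 0.2 * 200.5)
err = abs(sinc_interp(x, 200.5, 64) - true_mid)   # small, and it
# shrinks further as half_len grows (at the cost of latency)
```

Rectangular truncation like this is the crudest option; in practice the truncated sinc is tapered with a window, which trades a little passband accuracy for much better stopband behaviour.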

=Polynomial Interpolation=

This method goes for conceptual and computational economy, at the expense of signal quality.

We can fit a function to two or more points. For instance, given the sequence [0, 10, 20, 10], we can put a new point 15 in the middle with linear interpolation of 10 and 20, or we can use cubic interpolation on all four to get a new middle point of 16.25.
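The arithmetic behind those two numbers, as a sketch (the cubic weights are the standard 4-point Lagrange midpoint weights):

```python
seq = [0, 10, 20, 10]

# Linear interpolation: average the two neighbours of the gap.
linear_mid = (seq[1] + seq[2]) / 2
# -> 15.0

# Cubic (4-point Lagrange) interpolation at the same midpoint,
# using the standard midpoint weights (-1, 9, 9, -1) / 16:
cubic_mid = (-seq[0] + 9 * seq[1] + 9 * seq[2] - seq[3]) / 16
# -> 16.25
```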

While the operation is well defined, the quality of the interpolation is poor: there is a passband rolloff. While it works well at low frequency, it does not accurately interpolate signals even remotely close to Nyquist. Consider two sine signals sampled at half Nyquist. One is [1, 0, -1, 0] and the other is [0.7071, -0.7071, -0.7071, 0.7071]. One is a 45 degree phase shifted version of the other. Upon 2x interpolation, they should look identical, but for a phase shift. Clearly linear interpolation will not get close. Cubic interpolation would be better, but it turns out that higher order polynomials are not the way to go.
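You can see the failure numerically with a quick sketch (hypothetical helper name): linearly interpolating those two half-Nyquist sequences gives outputs with different peak amplitudes, even though the underlying tones have the same amplitude.

```python
import math

def upsample_linear_2x(x):
    """2x upsampling by plain linear interpolation (midpoint averages)."""
    y = []
    for i in range(len(x) - 1):
        y.append(x[i])
        y.append((x[i] + x[i + 1]) / 2)
    y.append(x[-1])
    return y

s = math.sqrt(0.5)                 # 0.7071...
a = [1.0, 0.0, -1.0, 0.0]          # a tone at half Nyquist
b = [s, -s, -s, s]                 # the same tone, phase shifted 45 degrees

ua = upsample_linear_2x(a)         # peak amplitude stays 1.0
ub = upsample_linear_2x(b)         # the true peak of 1.0 comes out as ~0.707
```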

Not only is the interpolation quality poor, but it is fixed. There is no way to control the quality other than by choice of interpolation order.

=Filter based methods=

This method allows a computation/quality tradeoff, and starts with zero-stuffing.

It starts out by taking the quality question head on. If we zero-stuff first, what does that do to the spectrum?

After zero-stuffing, the passband spectrum is still perfect. However, there are N full strength copies of the spectrum repeated about multiples of the Nyquist frequency.
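The repeated spectral images are easy to see with a small numerical sketch (a naive DFT, purely for illustration): a single tone, zero-stuffed 4x, shows its pair of spectral lines repeated four times across the new spectrum, each copy at full strength.

```python
import math, cmath

def dft_mag(x):
    """Magnitude of the DFT (naive O(N^2), fine for a demo)."""
    N = len(x)
    return [abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / N)
                    for n in range(N)))
            for k in range(N)]

# One pure tone: 16 samples of a sine with period 8 -> lines in bins 2 and 14.
x = [math.sin(2 * math.pi * n / 8) for n in range(16)]

# 4x zero-stuffing: three zeros after every sample.
stuffed = []
for v in x:
    stuffed.extend([v, 0.0, 0.0, 0.0])

mags = dft_mag(stuffed)
peaks = [k for k, m in enumerate(mags) if m > 4.0]
# The original pair of lines now appears at every copy of the old
# spectrum across the 64 bins: 2, 14, 18, 30, 34, 46, 50, 62.
```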

The next stage is filtering. We design a filter to preserve the passband shape while attenuating the stopband.

When designing a filter, we have complete control of how accurately we keep the passband, and suppress the stopband, with better filters requiring more computation and latency. We can elect to design the passband and the stopband to different specifications, if that meets the requirements of the job.

In a job I did some while ago, the input signal occupied only 50% of Nyquist. An FIR filter used 8 taps to interpolate to within +/- 0.01dB passband and -100dB stopband, which we deemed to be 'sufficiently perfect'.

Linear interpolation can be analysed in terms of a filter, and it delivers a sinc-squared spectrum.
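To make that equivalence concrete, here's a sketch (helper names are illustrative): zero-stuffing by 2 and then filtering with the triangular "hat" kernel [0.5, 1, 0.5] reproduces linear interpolation exactly.

```python
def zero_stuff_2x(x):
    out = []
    for v in x:
        out.extend([v, 0.0])
    return out[:-1]            # drop the trailing zero

def fir(x, h):
    """Full linear convolution of x with h."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

x = [3.0, 7.0, 5.0, 1.0]
stuffed = zero_stuff_2x(x)                # [3, 0, 7, 0, 5, 0, 1]
tri = fir(stuffed, [0.5, 1.0, 0.5])       # triangular ("hat") kernel
interp = tri[1:-1]                        # trim the 1-sample transients
# -> [3.0, 5.0, 7.0, 6.0, 5.0, 3.0, 1.0], i.e. exactly linear interpolation
```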

When we actually implement these filters, we don't blindly zero-stuff and then filter, although that is a convenient approach when building test vectors with MATLAB. An FIR filter multiplies its coefficients with the input data. As most of the data to be filtered is zero, there is a systematic way to omit the multiply-by-zero operations, which saves a lot of computation.
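The systematic way to skip the zero multiplies is the polyphase decomposition; a minimal sketch (function name is hypothetical):

```python
def upsample_polyphase(x, h, L):
    """L-fold upsampling via the polyphase decomposition of FIR h.
    Mathematically identical to zero-stuffing then filtering with h,
    but the stuffed zeros are never multiplied at all."""
    # Phase p of the filter holds taps h[p], h[p+L], h[p+2L], ...
    phases = [h[p::L] for p in range(L)]
    y = []
    for i in range(len(x)):
        for p in range(L):            # each input sample yields L outputs
            acc = 0.0
            for j, hj in enumerate(phases[p]):
                if 0 <= i - j < len(x):
                    acc += hj * x[i - j]
            y.append(acc)
    return y
```

Each sub-filter runs at the low input rate on the real samples only, so the total multiply count drops by a factor of L compared with filtering the zero-stuffed stream directly.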

Neil_UK
  • 166,079
  • 3
  • 185
  • 408
  • I used linear interpolation in my example above since it was simple. But to be very specific: does the interpolation filter actually “make something” of those stuffed zeroes, OR, by removing the images of the lower sampling frequency, do the equivalent of calculating values to replace the zeroes? – user149464 May 19 '20 at 17:53
  • @nowradioguy Yes, removing the images is exactly equivalent to calculating values to replace the zeroes. Instead of doing linear interpolation, first zero-stuff, and then FIR filter the stuffed sequence with a function that starts at 0 at t=-1, rises linearly to 1 at t=0, and then falls linearly to 0 at t=1. You'll get exactly the same result as for linear interpolation – Neil_UK May 19 '20 at 18:17
  • @nowradioguy The linear interpolation is a type of low pass filter, just a very bad one that deletes a lot of frequencies below Nyquist, while leaving some above it. Cubic interpolation is better. Sinc would be the best possible. This paper compares a lot of different interpolators for audio resampling if you want to see examples: http://yehar.com/blog/wp-content/uploads/2009/08/deip.pdf – user1850479 May 19 '20 at 18:32
  • Just wondering: would linear or circular convolution be preferable, and why? – user149464 Jun 17 '20 at 18:52
  • @nowradioguy It depends on the signal. If you have a circular signal, then you could use circular convolution. An extended signal must use linear convolution. – Neil_UK Jun 17 '20 at 20:53
  • By "circular" do you mean "true" periodic signals, or finite-length signals that can be made to behave as periodic? – user149464 Jun 21 '20 at 03:46
  • @nowradioguy You used the terms 'circular' and 'linear' in your earlier comment; now it sounds like you don't understand what they mean. I mean circular in the FFT sense: when you do an FFT, it treats the signal as if it's circular, even if it's not. We have two methods to make a linear signal work properly under the circular assumption. One is windowing, which degrades the signal in a controlled way. The other is identifying a periodicity, and using a section of exactly that length, to 'wrap around' circularly. – Neil_UK Jun 21 '20 at 05:40
  • I used a spreadsheet to demonstrate (to myself) both linear and circular convolution. My x(n) sequence was that of a 15-KHz sinusoid, originally sampled at 50 KHz, and then upsampled to 200 KHz by zero-insertion. The intent was to verify that an (ideal low-pass) interpolation filter would "fill in" the points of zero-insertion and effectively increase the sampling rate (in this case) by a factor of 4. While the circular output looked cleaner/neater, the linear was more representative of how I've always understood convolution. – user149464 Jun 22 '20 at 03:52
  • @nowradioguy If you're doing them properly, you should get exactly the same result. Do note however that you'd need to take several cycles of a 15k signal sampled at 50k to make it circular, one won't do. You'll need 3n cycles and 10n sample points, where n is any positive integer, to get to a common 5kHz/n resolution. The fact that you report a difference suggests that you haven't done this. You'll need to choose n big enough to make your filter sequence discrete, to get the same result as linear. – Neil_UK Jun 22 '20 at 05:01
  • I actually took 11 sample points over 3 cycles of the 15-KHz sinusoid. Between those sample points, I manually inserted L-1= 3 zeroes to simulate the 4X upsampling process. Then I convolved the latter with an ideal low-pass filter with cutoff at 20 KHz. The latter's time domain was the expected (sin x)/(x) of 41 points, symmetrical about zero. Here's a link to the spreadsheets: https://docs.google.com/spreadsheets/d/1TswQQauFALDMgCg1pPfLwpwhpkYlZGwknip_uEUBUjU/edit?usp=sharing – user149464 Jun 23 '20 at 03:45
  • @nowradioguy Thanks for the spreadsheet. I assume you will reject the end portions of the linear convolution, which leaves little 'good stuff' in the middle; perhaps use more cycles of linear input. You made an important off-by-one error in taking 11 points for your circular convolution; it should be 10n, as I said in my previous comment. The end points are in fact the same point, so one should be missing. For example, a 256-point FFT has data numbered -128 to +127, not to +128. This is why your circular output is no longer antisymmetric about zero. – Neil_UK Jun 23 '20 at 08:29
  • Neil, definitely glad I shared that spreadsheet with you. You helped open my eyes to off-by-one errors, of which I seem to have made a few. I will review the spreadsheet and follow your guidelines, for I, too, surmised that both circular and linear convolutions should yield same result, since the input sequence x(n) is, in fact, periodic (though it need not be). – user149464 Jun 23 '20 at 17:37
  • @Neil_UK, I thought I understood what you were saying, but actually I'm confused. I performed the linear convolution two different ways, and both times got the same values, and the same curve shape, albeit not exactly 3 full cycles of the 15-KHz sinusoid. Yet, I did obtain 3 good sinusoidal cycles in my circular convolution calculations. Per my understanding, linear convolution works better for,say, real-time speech signals than would a circular convolution. The latter to me seems more like a theoretical nicety for when you are dealing with an actual periodic sequence. – user149464 Jun 24 '20 at 06:22
  • @nowradioguy You do understand that linear convolution of a sequence of length m with a filter of length n results in a sequence of length m+n-1, with only the middle abs(m-n)+1 samples of valid data. You have to either do 'overlap save' or 'overlap add' on multiple sliding convolutions to get continuous valid data, to make sense of those corrupted end regions. So we tend to use circular when preparing test vectors, and linear when doing real time filtering. But when you do them properly, that is, taking care of all the restrictions required to make them valid, you get the same results. – Neil_UK Jun 24 '20 at 09:23
  • @Neil_UK, it took some doing, but I finally got the linear and the circular convolution result to be the same. Essentially, for the circular convolution, I padded both x(n) and h(n) with zeroes so that each sequence was of length L = M + N - 1 = 41 + 41 - 1 = 81. I used Google Sheets' matrix multiplication formula (MMULT) to do the calculations. The hard work was getting the sequence values in the right array positions. You can see it here: https://docs.google.com/spreadsheets/d/1TswQQauFALDMgCg1pPfLwpwhpkYlZGwknip_uEUBUjU/edit?usp=sharing (Ignore the spreadsheet labeled "Sandbox".) – user149464 Jul 09 '20 at 02:17
  • Can anyone explain why matrix multiplication of zero-padded x(n) and h(n) works? I’ve seen this in various YouTube videos and can attest to the efficiency and ease of use, but can’t find how the matrices correspond to a folded and shifted version of one of the two sequences. – user149464 Jul 29 '20 at 20:50
  • I found this which provides a basis for using matrices for both linear and circular convolution: https://www.dsprelated.com/freebooks/filters/Matrix_Filter_Representations.html – user149464 Jul 31 '20 at 01:48