Adaptive EQ + QPSK Simulation

In digital communications, transmitting data over a multipath channel produces a phenomenon known as "Inter-Symbol Interference" (ISI). What does this mean in layman's terms?

When a signal travels over multiple paths, delayed copies of earlier symbols overlap with later ones, so neighboring chunks of bits interfere with each other and distort the received signal.
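A quick way to see ISI numerically: convolving a symbol sequence with a multipath impulse response smears each symbol into its neighbors. Here is a minimal numpy sketch, with an illustrative two-path channel (the taps are assumptions for demonstration, not the simulation's channel):

```python
import numpy as np

# Four example +/-1 symbols to transmit
symbols = np.array([1.0, -1.0, 1.0, 1.0])

# Illustrative two-path channel: direct path plus a 50%-strength echo
channel = np.array([1.0, 0.5])

# The received sequence is the convolution of the symbols with the channel taps
received = np.convolve(symbols, channel)
print(received)  # each sample now mixes a symbol with its predecessor
```

Every received sample (after the first) is a blend of the current symbol and half of the previous one, which is exactly the interference the equalizer must undo.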

Adaptive Equalization (Adaptive EQ)

The objective of this simulation is to investigate the performance of an adaptive equalizer for data transmission over a multipath channel that causes inter-symbol interference (ISI).

The data generator module creates a sequence of complex-valued information symbols s[n]. For this simulation, I will assume QPSK symbols. In other words, data will be drawn from the set {a+ja, a−ja, −a+ja, −a−ja}, where a is the signal amplitude chosen according to a given signal-to-noise ratio (SNR). Assuming the noise has unit power, SNR = 20log10(√2·a).
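The amplitude relation can be checked directly: inverting SNR = 20 log10(√2·a) gives a = 10^(SNR/20)/√2, and with unit-power noise the symbol power 2a² then equals 10^(SNR/10). A numpy sketch of the symbol table (a translation of the idea, not the MATLAB helper below):

```python
import numpy as np

def qpsk_set(snr_db):
    """Four QPSK symbols {+/-a +/- ja} whose power matches the given SNR
    under the unit-noise-power assumption SNR = 20*log10(sqrt(2)*a)."""
    a = 10 ** (snr_db / 20) / np.sqrt(2)
    return np.array([a + 1j*a, a - 1j*a, -a + 1j*a, -a - 1j*a])

symbols = qpsk_set(20.0)          # 20 dB SNR
power = np.abs(symbols[0]) ** 2   # |s|^2 = 2a^2, should equal 10^(SNR/10)
```

At 20 dB this gives a symbol power of 100 against noise power 1, confirming the mapping.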

To verify performance, here is the list of requirements:

  1. A channel filter module will be used as an FIR filter with impulse response c[n] that simulates the channel distortion.

  2. A noise generator module will be used to generate additive noise that is present in any digital communication system. We assume unit-power, complex Gaussian noise.

  3. The adaptive equalizer module is a length M+1 FIR filter h[n] whose coefficients are adjusted using either the LMS or the normalized-LMS algorithm.

  4. A decision device module takes the output of the equalizer and will quantize it to one of the four possible transmitted symbols in QPSK, based on whichever is closest.

  5. A plot displaying the error e[n] as a function of n will be shown, averaged over the P experiments.
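Of these modules, the decision device (requirement 4) is simply a nearest-neighbor quantizer over the four constellation points. A hedged numpy sketch (the unit-amplitude constellation here is an illustrative assumption):

```python
import numpy as np

def decide(y, constellation):
    """Quantize an equalizer output y to the closest constellation point."""
    idx = np.argmin(np.abs(y - constellation))
    return constellation[idx]

# Unit-amplitude QPSK constellation for illustration (a = 1)
qpsk = np.array([1+1j, 1-1j, -1+1j, -1-1j])

s_hat = decide(0.8 + 1.1j, qpsk)   # a noisy sample near the first quadrant
```

The noisy sample 0.8 + 1.1j is closest to 1 + j, so that symbol is chosen.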

The Code

  1. To generate random complex data from a given SNR value, I do the following: generate the symbol table in amplitude_to_qpskSet(), then generate random “complex binary” data by randomizing the real and imaginary components separately using sign(randn()) in generate_QPSK_data(), and finally map the random complex data to the generated symbol table in qpsk_mod(). The combination ultimately yields QPSK symbols at the amplitude dictated by the SNR.

  2. I send the random complex data through the channel using filter(c, 1, sn); this convolves the input with the channel impulse response.

  3. I add unit-power complex Gaussian noise to all N channel outputs using xn = channel_out + (randn(size(channel_out)) + 1j*randn(size(channel_out)))/sqrt(2); dividing by sqrt(2) makes the combined real-plus-imaginary noise power equal to one.

  4. I update my LMS and normalized-LMS filter coefficients (of size M+1) using h = h + ( mu * conj(e(n))*xn_shifts ); and hn = hn + ( lambda * conj(en(n))*xn_shifts ) ./ ( (xn_shifts)' * (xn_shifts) );

  5. The decisions are commented in the code. In short, the distance between the filter output and each constellation point (QPSK value) is computed, and the closest point is selected by index. This decision-directed mode takes over once the training sequence ends (when n ≥ T).

  6. The LMS filter experiment is repeated P times; stem() plots the channel and filter coefficients, and semilogy() plots the LMS and normalized-LMS learning curves.
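Steps 1 through 6 can be condensed into an end-to-end sketch. This is a Python/numpy translation rather than the MATLAB code below: the channel taps, SNR, step size, and lengths are illustrative assumptions, and only the plain LMS branch is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- parameters (illustrative assumptions) ---
snr_db, mu, M, N, T = 20.0, 1e-4, 7, 3000, 1000
c = np.array([1.0, 0.25 + 0.25j])          # assumed two-tap multipath channel

# --- step 1: QPSK source at the SNR-derived amplitude ---
a = 10 ** (snr_db / 20) / np.sqrt(2)
s = a * (np.sign(rng.standard_normal(N)) + 1j * np.sign(rng.standard_normal(N)))
constellation = a * np.array([1+1j, 1-1j, -1+1j, -1-1j])

# --- steps 2-3: channel filtering plus unit-power complex Gaussian noise ---
x = np.convolve(s, c)[:N]
x = x + (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)

# --- steps 4-5: LMS equalizer, training first, then decision-directed ---
h = np.zeros(M + 1, dtype=complex)
buf = np.zeros(M + 1, dtype=complex)       # delay line of recent inputs
e = np.zeros(N)
for n in range(N):
    buf = np.concatenate(([x[n]], buf[:M]))
    y = np.vdot(h, buf)                    # h^H x, as in the MATLAB h'*xn_shifts
    if n < T:
        d = s[n]                           # training: desired symbol is known
    else:
        d = constellation[np.argmin(np.abs(y - constellation))]  # decision
    err = d - y
    h = h + mu * np.conj(err) * buf        # LMS coefficient update
    e[n] = np.abs(err)

# After convergence the error magnitude should be far below its initial level
print(e[:100].mean(), e[-100:].mean())
```

The small step size is deliberate: LMS stability requires mu well below 2 divided by the tap-input power, which is large here because the symbols have power 100 at 20 dB SNR.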

Adaptive Equalization Code

function adapt_equal( c, SNR, mu, lambda, M, N, T, P )

    average_J  = zeros(N,1);
    average_Jn = zeros(N,1);
    e  = zeros(1,N);
    en = zeros(1,N);
    J  = zeros(1,N);
    Jn = zeros(1,N);
    f_out   = zeros(N,1);
    f_out_n = zeros(N,1);

    %generate qpsk symbols
    sn = generate_QPSK_data(SNR, N);
    finite_sn = amplitude_to_qpskSet(SNR);

    for p = 1:P
        %go through channel (FIR filtering convolves sn with c)
        channel_out = filter(c,1,sn);

        %add unit-power complex Gaussian noise
        xn = channel_out + (randn(size(channel_out)) + 1j*randn(size(channel_out)))/sqrt(2);

        %reset filter coefficients and delay line for each experiment
        h  = zeros(M+1,1);
        hn = zeros(M+1,1);
        xn_shifts = zeros(M+1,1);

        %per sample
        for n = 1:N

            xn_shifts = [xn(n) ; xn_shifts(1:M)];

            f_out(n)   = h'  * xn_shifts;
            f_out_n(n) = hn' * xn_shifts;

            %decision block: pick the nearest constellation point

            %LMS
            [~,decided_index] = min(abs(f_out(n) - finite_sn));
            s_hat = finite_sn(decided_index);

            %normalized LMS
            [~,decided_index_n] = min(abs(f_out_n(n) - finite_sn));
            s_hat_n = finite_sn(decided_index_n);

            %training mode first, decision-directed mode afterwards
            if n < T
                e(n)  = sn(n) - f_out(n);
                en(n) = sn(n) - f_out_n(n);
            else
                e(n)  = s_hat   - f_out(n);
                en(n) = s_hat_n - f_out_n(n);
            end

            %update coefficients
            h  = h  + mu * conj(e(n)) * xn_shifts;
            hn = hn + lambda * conj(en(n)) * xn_shifts ./ ( xn_shifts' * xn_shifts );

            J(n)  = abs(e(n));
            Jn(n) = abs(en(n));

            average_J(n)  = average_J(n)  + J(n);
            average_Jn(n) = average_Jn(n) + Jn(n);
        end

        subplot(3,3,1)
        cplot(f_out)
        title('Filter Output')

        subplot(3,3,2)
        stem(real(h))
        title('Adaptive Filter Impulse Response Coefficients (Real Component)')
        xlabel('n')

        subplot(3,3,3)
        stem(c)
        title('Channel Impulse Response Coefficients')
        xlabel('n')

        subplot(3,3,4)
        cplot(xn)
        axis([-35,35,-35,35])
        title('Channel with Noise')

        subplot(3,3,5)
        cplot(e)
        title('Error (complex)')

        subplot(3,3,6)
        cplot(channel_out)
        axis([-35,35,-35,35])
        title('All Data through channel')

        subplot(3,3,7)
        cplot(sn)
        title('Data Input (QPSK)')

        subplot(3,3,[8 9])
        drawnow

        %running average of |e(n)| over the experiments so far
        semilogy(average_J/p)
        hold on
        semilogy(average_Jn/p)
        title('Learning curve abs(e(n))')
        xlabel('time step n')
        legend('LMS', 'Normalized LMS')
        hold off

    end
end

Data Generation (QPSK) Code

function sn = generate_QPSK_data(sig_noise, numOfData)

    lookupTable = amplitude_to_qpskSet(sig_noise);    %QPSK lookup table using SNR

    %generate random digital data: unbiased +/-1 real and imaginary parts
    complex_binary = complex(sign(randn(numOfData,1)), sign(randn(numOfData,1)));

    %map data to the QPSK lookup table
    sn = qpsk_mod(complex_binary, lookupTable);

    %end data generator module!!!!
end

function signal = amplitude_to_qpskSet(SNR)
%returns the four QPSK symbols as a row vector
%assume noise has unit power, so SNR = 20*log10(sqrt(2)*a)
    a = ( 10 ^ (SNR/20) ) / sqrt(2);
    signal = [complex(a,a), complex(a,-a), complex(-a,a), complex(-a,-a)];
end

function output = qpsk_mod(data, qpsk)

    output = zeros(size(data));

    for k = 1:length(data)

        if angle(data(k)) == angle(complex(1,1))
            output(k) = qpsk(1); %first quadrant

        elseif angle(data(k)) == angle(complex(1,-1))
            output(k) = qpsk(2); %fourth quadrant

        elseif angle(data(k)) == angle(complex(-1,1))
            output(k) = qpsk(3); %second quadrant

        elseif angle(data(k)) == angle(complex(-1,-1))
            output(k) = qpsk(4); %third quadrant
        end

    end
end
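The same generate-and-map logic is compact in vectorized form. A numpy sketch (a translation of the approach above, not the MATLAB itself), generating unbiased ±1±j data and scaling by the SNR-derived amplitude:

```python
import numpy as np

def generate_qpsk_data(snr_db, num, rng=None):
    """Draw num QPSK symbols whose amplitude is set by snr_db (unit noise power)."""
    rng = rng or np.random.default_rng()
    a = 10 ** (snr_db / 20) / np.sqrt(2)
    # sign(randn) gives unbiased +/-1 for the real and imaginary parts
    re = np.sign(rng.standard_normal(num))
    im = np.sign(rng.standard_normal(num))
    return a * (re + 1j * im)

sn = generate_qpsk_data(20.0, 1000, np.random.default_rng(1))
```

Since each symbol is a·(±1±j), every symbol has magnitude a√2 = 10^(SNR/20), i.e. 10 at 20 dB.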

Complex-Plane (Constellation) Plotting

function cplot(v)
%scatter a complex vector in the complex plane
    drawnow
    plot(real(v), imag(v), 'x')
    axis([-18,18,-18,18])
end

RESULTS & CONCLUSION

Test Cases

To simulate adaptive equalization, I ran three test cases: a base case, a different channel (complex impulses), and different step sizes. The figures below show the last snapshot of the simulation in their respective order.

Figure 1: Assignment-Given Parameters

Figure 2: Change Channel

Figure 3: Change Step Sizes

Looking at the three figures, step size appears to have the largest effect: with a well-chosen step size the learning curve visibly decreases with each time step. Though not shown here, each experiment snapshot showed the filter output clustering around the Data Input constellation points.

To conclude, this experiment simulated the adaptive equalizer environment, and with the correct step size, one can recover distorted data by canceling channel and noise effects using LMS.
