====== notes:elen90057 ======

Created 2021/08/30 01:22 by joeleg; last external edit 2023/05/30 22:32.
  
No matter the demodulation, as we are using probabilistic detection in the presence of noise, there exists a chance we can make an error: we may decide that the received signal is a different symbol from the one transmitted.

=== Binary PAM ===

For Binary PAM, the probability we make an error is: $$P(err|s_2)=P(r>0|s_2)$$ We can recall that $r|s_2\sim\mathcal{N}\left(-\sqrt{\varepsilon_b},\frac{N_0}{2}\right)$, making the probability of an error equal to: $$P(err|s_2)=Q\left(\frac{0+\sqrt{\varepsilon_b}}{\sqrt{\frac{N_0}{2}}}\right)=Q\left(\sqrt{\frac{2\varepsilon_b}{N_0}}\right)$$ Due to symmetry, the error is the same for $s_1$. The bit error probability is: $$P_b=\frac{1}{2}P(err|s_1)+\frac{1}{2}P(err|s_2)=Q\left(\sqrt{\frac{2\varepsilon_b}{N_0}}\right)$$
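
As a quick numerical sketch (illustrative, not part of the notes), the bit error probability can be evaluated via the complementary error function; the 9.6 dB operating point below is an example value:

```python
import math

def Q(x):
    # Gaussian tail probability: Q(x) = 0.5 * erfc(x / sqrt(2))
    return 0.5 * math.erfc(x / math.sqrt(2))

def binary_pam_pb(eb_n0):
    # P_b = Q(sqrt(2 * Eb/N0)) for binary antipodal (2-PAM) signalling
    return Q(math.sqrt(2 * eb_n0))

# example: Eb/N0 = 9.6 dB (linear value 10^(9.6/10))
pb = binary_pam_pb(10 ** (9.6 / 10))
```

At this operating point the bit error rate is on the order of $10^{-5}$, and it falls monotonically as $\varepsilon_b/N_0$ grows.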

=== Binary orthogonal signals ===

For binary orthogonal signals, the signal is: $$s_m(t)=\sqrt{\varepsilon_b}\phi_m(t)$$ Where $\phi_1(t),\phi_2(t)$ are orthonormal. The ML detector uses: $$\hat{m}=\arg\min_m||r-s_m||$$ That is, the constellation point closest to the received point is assumed to have been transmitted. Unlike BPSK, the error event involves two independent noise components, so the effective noise is distributed as: $$n\sim\mathcal{N}(0,N_0)$$ The error probability is thus: $$P(err|s_1)=P(n>\sqrt{\varepsilon_b})=Q\left(\sqrt{\frac{\varepsilon_b}{N_0}}\right)$$ By symmetry, $P(err|s_1)=P(err|s_2)$, so the overall bit error probability is: $$P_b=Q\left(\sqrt{\frac{\varepsilon_b}{N_0}}\right)$$ For the same energy, binary PAM has better error performance than binary orthogonal signalling. Alternatively, binary PAM can achieve the same error probability with half the energy. This is because for the same energy, the distance between constellation points is greater for 2-PAM.
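
The 2x energy gap can be checked numerically (a sketch with an arbitrary example $\varepsilon_b/N_0$): orthogonal signalling at twice the bit energy matches antipodal 2-PAM exactly.

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

def pb_antipodal(eb_n0):
    return Q(math.sqrt(2 * eb_n0))   # binary PAM / BPSK

def pb_orthogonal(eb_n0):
    return Q(math.sqrt(eb_n0))       # binary orthogonal

eb_n0 = 5.0  # arbitrary example value
# doubling the energy of orthogonal signalling matches antipodal exactly
gap = pb_orthogonal(2 * eb_n0) - pb_antipodal(eb_n0)
```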

=== M-PAM ===

For M-PAM, the signal waveforms are written as: $$s_m(t)=A_mg(t)=A_m\sqrt{\varepsilon_g}\phi(t),\quad 1\leq m\leq M,\ 0\leq t\leq T$$ Where $A_m=2m-1-M$ is the set of possible amplitudes, $\phi(t)=\frac{g(t)}{\sqrt{\varepsilon_g}}$ and the pulse energy is $\varepsilon_g$. The minimum distance between constellation points is $2\sqrt{\varepsilon_g}$. The signal demodulator's output is the random variable: $$r=\langle r(t),\phi(t)\rangle=A_m\sqrt{\varepsilon_g}+n\implies r\sim\mathcal{N}(A_m\sqrt{\varepsilon_g},N_0/2)$$ For the outer points ($m=1,M$), the error probability is: $$P(err|s_m)=Q\left(\sqrt{\frac{2\varepsilon_g}{N_0}}\right)$$ For inner points ($m=2,...,M-1$), the error probability is: $$P(err|s_m)=P(|r-s_m|>\sqrt{\varepsilon_g})=2Q\left(\sqrt{\frac{2\varepsilon_g}{N_0}}\right)$$ The symbol error probability is: $$P_e=\frac{1}{M}\left[2Q\left(\sqrt{\frac{2\varepsilon_g}{N_0}}\right)+(M-2)2Q\left(\sqrt{\frac{2\varepsilon_g}{N_0}}\right)\right]=\frac{2(M-1)}{M}Q\left(\sqrt{\frac{2\varepsilon_g}{N_0}}\right)$$ The average symbol energy is: $$\varepsilon_{avg}=\frac{1}{M}\sum_{m=1}^MA_m^2\varepsilon_g=\frac{\varepsilon_g(M^2-1)}{3}$$ The average bit energy can be useful to compare with other modulation schemes. $$\varepsilon_b=\frac{\varepsilon_{avg}}{\log_2M}=\frac{\varepsilon_g(M^2-1)}{3\log_2M}$$ After substitution the symbol error probability is: $$P_e=\frac{2(M-1)}{M}Q\left(\sqrt{\frac{6\log_2M\varepsilon_b}{(M^2-1)N_0}}\right)$$
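
The final expression can be sketched numerically (illustrative code, not from the notes); for $M=2$ it should collapse to the binary PAM result:

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

def mpam_symbol_error(M, eb_n0):
    # P_e = 2(M-1)/M * Q(sqrt(6*log2(M)*Eb / ((M^2 - 1)*N0)))
    k = math.log2(M)
    return 2 * (M - 1) / M * Q(math.sqrt(6 * k * eb_n0 / (M ** 2 - 1)))
```

At a fixed $\varepsilon_b/N_0$, packing more amplitude levels in worsens the symbol error probability, as the formula predicts.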

M-PAM bandpass signals are: $$s_m(t)=A_mg(t)\cos(2\pi f_ct)=A_m\sqrt{\frac{\varepsilon_g}{2}}\phi(t),\quad 1\leq m\leq M,\ 0\leq t\leq T$$ The minimum distance between constellation points is $d_{min}=\sqrt{2\varepsilon_g}$. The symbol error probability is: $$P_e=\frac{2(M-1)}{M}Q\left(\sqrt{\frac{\varepsilon_g}{N_0}}\right)$$ The average symbol energy is: $$\varepsilon_{avg}=\frac{\varepsilon_g(M^2-1)}{6}$$ $$\varepsilon_b=\frac{\varepsilon_g(M^2-1)}{6\log_2M}$$ The symbol error probability becomes: $$P_e=\frac{2(M-1)}{M}Q\left(\sqrt{\frac{6\log_2M\varepsilon_b}{(M^2-1)N_0}}\right)$$ This is the same performance as M-ary baseband PAM.

=== M-PSK ===

In M-ary PSK, all M signals have the same energy, so all constellation points lie on a circle. M-ary signals can be represented as: $$s_m(t)=g(t)\cos(\theta_m)\cos(2\pi f_ct)-g(t)\sin(\theta_m)\sin(2\pi f_ct)$$ Where $g(t)$ is the pulse shape and $\theta_m=\frac{2\pi}{M}(m-1)$ is the phase conveying the information. This forms the orthonormal basis: $$\{\phi_1(t),\phi_2(t)\}=\left\{\sqrt{\frac{2}{\varepsilon_g}}g(t)\cos(2\pi f_ct),-\sqrt{\frac{2}{\varepsilon_g}}g(t)\sin(2\pi f_ct)\right\}$$ This assumes $f_c\gg\frac{1}{T}$. For QPSK, the probability of a correct decision is: $$P(correct|s_1)=P(r_1>0,r_2>0|s_1)=(1-P_{BPSK})^2$$ Hence the symbol error probability is: $$P(err|s_1)=1-(1-P_{BPSK})^2=2P_{BPSK}-P_{BPSK}^2=2Q\left(\sqrt{\frac{2\varepsilon_b}{N_0}}\right)-\left(Q\left(\sqrt{\frac{2\varepsilon_b}{N_0}}\right)\right)^2$$ For $M\geq8$, the symbol error probability calculations need a transformation to polar coordinates and can only be approximated. The bit error probability is the average proportion of erroneous bits per symbol. This depends on the mapping from the k-bit code-word to the symbol. At high SNR, most symbol errors involve erroneous detection of the transmitted signal as a nearest-neighbour signal. Therefore fewer bit errors occur by ensuring that neighbouring symbols differ by only one bit, known as Gray coding. At high SNR, the bit error probability can be approximated as the probability of picking a neighbouring signal, so assuming Gray coding: $$P_b\approx\frac{1}{k}P_e\approx\frac{P_e}{\log_2M}$$ Where $k=\log_2M$ is the number of bits per symbol.
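
A small sketch of the QPSK expressions (illustrative; the $\varepsilon_b/N_0$ value in the usage is arbitrary), including the Gray-coded bit error approximation $P_b\approx P_e/2$:

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

def qpsk_symbol_error(eb_n0):
    p = Q(math.sqrt(2 * eb_n0))   # per-quadrature BPSK error
    return 2 * p - p ** 2         # = 1 - (1 - p)^2

def qpsk_bit_error(eb_n0):
    # Gray coding approximation: P_b ~= P_e / log2(M), with M = 4
    return qpsk_symbol_error(eb_n0) / 2
```

At high SNR the $p^2$ term is negligible, so the Gray-coded QPSK bit error rate is essentially the BPSK bit error rate, as the notes' approximation suggests.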

=== QAM ===

QAM is the most widely used constellation, due to its two-dimensional scaling and efficiency. QAM allows signals to have different amplitudes, impressing information bits on each of the quadrature carriers. The important performance parameters are the average energy and the minimum distance in the signal constellation. An orthonormal basis for the signal space is: $$\{\phi_1(t),\phi_2(t)\}=\left\{\sqrt{\frac{2}{\varepsilon_g}}g(t)\cos(2\pi f_ct),-\sqrt{\frac{2}{\varepsilon_g}}g(t)\sin(2\pi f_ct)\right\}$$ For QAM there are many possible constellations, so we need to select the one with the greatest minimum distance and efficiency, while still considering ease of implementation. In general rectangular QAM is most frequently used, due to its ease. When $M=2^k$, with $k$ even, M-ary QAM is equivalent to two $\sqrt{M}$-ary PAM signals on quadrature carriers, each with half the equivalent QAM power. If $P_{\sqrt{M}}$ is the probability of symbol error of $\sqrt{M}$-ary PAM, then similar to the case of QPSK, the probability of a correct decision is given by: $$P(\text{no error})=(1-P_{\sqrt{M}})^2\implies P_e=1-(1-P_{\sqrt{M}})^2$$
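
This decomposition can be sketched in code. The per-dimension error $P_{\sqrt{M}}=2\left(1-\frac{1}{\sqrt{M}}\right)Q\left(\sqrt{\frac{3\varepsilon_s}{(M-1)N_0}}\right)$ used below is the standard square-QAM result in terms of symbol SNR; it is not derived in these notes, so treat it as an imported assumption:

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

def square_qam_symbol_error(M, es_n0):
    # M must be an even power of 2; decompose into two sqrt(M)-ary PAM decisions
    m = math.isqrt(M)
    assert m * m == M, "square QAM requires M = 2^k with k even"
    p_pam = 2 * (1 - 1 / m) * Q(math.sqrt(3 * es_n0 / (M - 1)))
    return 1 - (1 - p_pam) ** 2   # correct only if both quadratures are correct
```

For $M=4$ this reduces to the QPSK expression $2p-p^2$ with $p=Q(\sqrt{2\varepsilon_b/N_0})$, consistent with the previous section.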

=== Orthogonal signalling ===

Orthogonal signalling uses the waveforms: $$s_m(t)=\sqrt{\varepsilon_s}\phi_m(t),\quad 1\leq m\leq M$$ The minimum distance between signals is: $$d_{min}=\sqrt{2\varepsilon_s}=\sqrt{2\log_2(M)\varepsilon_b}$$ The ML criterion can be expressed as: $$\hat{m}=\arg\min_m||r-s_m||=\arg\max_m[\langle r,s_m\rangle+\eta_m]$$ Where $\eta_m=-\frac{1}{2}\varepsilon_m$ is a bias term that compensates for signal sets with unequal energies, e.g. PAM. Since orthogonal signals have equal energies, the ML rule reduces to selecting the largest demodulator output: if $r_m>r_k\ \forall k\neq m$, then decide $s_m$ was transmitted.

If $s_m$ is transmitted, then: $$r_m\sim\mathcal{N}(\sqrt{\varepsilon_s},N_0/2)$$ $$r_k\sim\mathcal{N}(0,N_0/2),\forall k\neq m$$ The error probability is: $$P_e=1-\frac{1}{\sqrt{\pi N_0}}\int_{-\infty}^\infty\left[1-Q\left(\frac{r_m}{\sqrt{N_0/2}}\right)\right]^{M-1}\exp\left(-\frac{(r_m-\sqrt{\varepsilon_s})^2}{N_0}\right)dr_m$$ The bit error probability is: $$P_b=2^{k-1}\frac{P_e}{M-1}=\frac{MP_e}{2(M-1)}\approx\frac{P_e}{2}$$ The error probability $P_e$ is upper bounded by the union bound over the $M-1$ error events: $$P_e\leq(M-1)P_b^{bin}=(M-1)Q\left(\sqrt{\frac{\varepsilon_s}{N_0}}\right)<Me^{-\frac{\varepsilon_s}{2N_0}}=2^ke^{-\frac{k\varepsilon_b}{2N_0}}$$ $$P_e<e^{-k\frac{(\varepsilon_b/N_0-2\ln2)}{2}}$$ We can note that the error probability can be made arbitrarily small as $k\to\infty$, provided that $\varepsilon_b/N_0>2\ln2\approx1.39$ ($1.42$ dB). A tighter upper bound is given by: $$P_e<2e^{-k(\sqrt{\varepsilon_b/N_0}-\sqrt{\ln2})^2}$$ This bound shows reliable transmission is possible down to $\varepsilon_b/N_0>\ln2$, the Shannon limit for the AWGN channel, and is valid for $\ln2\leq\varepsilon_b/N_0\leq4\ln2$.
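
The bounds can be compared numerically (an illustrative sketch; $\varepsilon_b/N_0=4$ in the usage is an arbitrary value above the $2\ln2$ threshold):

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

def union_bound(k, eb_n0):
    # P_e <= (M - 1) * Q(sqrt(Es/N0)), with M = 2^k and Es = k * Eb
    M = 2 ** k
    return (M - 1) * Q(math.sqrt(k * eb_n0))

def exp_bound(k, eb_n0):
    # looser form via Q(x) < exp(-x^2/2): P_e < 2^k * exp(-k*Eb/(2*N0))
    return 2 ** k * math.exp(-k * eb_n0 / 2)
```

Above the threshold, both bounds shrink as $k$ grows, matching the claim that the error probability can be made arbitrarily small.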

====== Efficiency ======

Spectral efficiency is the ratio of bitrate $R$ to bandwidth $W$. Bandwidth-limited schemes are QAM, PAM, PSK and DPSK, where $R/W>1$; increasing the number of bits per symbol improves the efficiency. Power-limited schemes are orthogonal signals, where $R/W<1$; increasing the bits per symbol decreases the efficiency.

The bandwidth of a pulse is the reciprocal of its duration. $$W=\frac{1}{T}$$ This is the bandwidth of the centre lobe of the sinc function, the Fourier transform of the rectangular pulse. For digital systems with symbol period $T$, the Nyquist bandwidth $\frac{1}{2T}$ is the minimum bandwidth that can be used.

The spectral efficiency of a modulation scheme is defined as: $$\nu=\frac{R_b}{W}$$ Where $R_b=\frac{k}{T}$ is the bitrate and $W$ is the bandwidth required. The spectral efficiency is a performance indicator for fundamental comparison of modulation schemes with respect to power and bandwidth usage.

Baseband PAM has a bandwidth of: $$B=\frac{1}{2T}$$ Bandpass PAM and QAM have a bandwidth of: $$B=2W=2\frac{1}{2T}=\frac{1}{T}$$ QAM uses the same frequencies as PAM, so has the same bandwidth. PSK has a bandwidth of: $$B=\frac{1}{T}$$ This is because PSK is a subset of QAM where $A_m,A_n$ are equal to $\cos(\theta_n),\sin(\theta_n)$ respectively.

Baseband PAM has a spectral efficiency of: $$\nu=\frac{R_b}{1/2T}=2k=2\log_2M\to\infty\text{ as }M\to\infty$$ Bandpass PAM has a spectral efficiency of: $$\nu=\frac{R_b}{1/T}=k=\log_2M\to\infty\text{ as }M\to\infty$$ M-ary PSK has a spectral efficiency of: $$\nu=\frac{R_b}{1/T}=k=\log_2M\to\infty\text{ as }M\to\infty$$ M-ary QAM has a spectral efficiency of: $$\nu=\frac{R_b}{1/T}=k=\log_2M\to\infty\text{ as }M\to\infty$$

For the orthogonal signalling methods Pulse Position Modulation (PPM) and Frequency Shift Keying (FSK), the spectral efficiency is: $$\nu=\frac{R_b}{\frac{M}{2T}}=\frac{2k}{M}=\frac{2\log_2M}{M}\to0\text{ as }M\to\infty$$ For PPM, the pulse duration is $T/M$, requiring $M$ times the PAM bandwidth. For FSK, the minimum frequency separation is $\frac{1}{2T}$.
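
The opposite scaling behaviours of the two families can be sketched directly from the formulas above (illustrative code, not from the notes):

```python
import math

def nu_bandpass_linear(M):
    # bandpass PAM / PSK / QAM: nu = log2(M), grows without bound in M
    return math.log2(M)

def nu_orthogonal(M):
    # PPM / FSK: nu = 2 * log2(M) / M, tends to 0 as M grows
    return 2 * math.log2(M) / M
```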

For PAM/QAM/PSK, increasing M gives more bandwidth efficiency but less power efficiency; these schemes suit bandwidth-limited channels with plenty of power. For PPM/FSK, increasing M gives less bandwidth efficiency but more power efficiency; these suit power-limited channels with plenty of bandwidth. Theoretically the error probability can be made arbitrarily small as long as $\text{SNR}/\text{bit}>-1.6\text{ dB}$, but this would require infinite bandwidth.

====== Bandlimited communications ======

A channel acts as an LTI bandpass filter, removing all signal content outside of a band. This can distort the output signal. Even without noise, the received signal can end up outside of the transmitted signal space.

Ideally channels would have infinite bandwidth with a flat amplitude response and a linear phase response. In practice channels have finite bandwidth, non-flat amplitude responses and nonlinear phase responses. Signal distortions in amplitude and phase due to bandwidth limitations of the channel result in Inter-Symbol Interference (ISI).

When signal $s(t)=\sum_{n=0}^{\infty}I_ng_T(t-nT)$, with symbol rate $R=1/T$ and symbol sequence $\{I_n\}$, is transmitted through a bandlimited channel, the received signal is: $$r(t)=\sum_{n=0}^\infty I_nh(t-nT)+z(t)$$ Where $h(t)=(g_T\star c)(t)=\int_{-\infty}^\infty g_T(\tau)c(t-\tau)d\tau$ and $z(t)$ is the AWGN. The signal demodulator output is: $$y(t)=\sum_{n=0}^\infty I_nx(t-nT)+\nu(t)$$ Where $x(t)=(g_T\star c\star g_R)(t)$ and $\nu(t)=(z\star g_R)(t)$ is the filtered noise. The received filtered output is sampled at a rate of $1/T$, resulting in $$y(kT)=\sum_{n=0}^\infty I_nx((k-n)T)+\nu(kT)=I_kx(0)+\underbrace{\sum_{n=0,n\neq k}^\infty I_nx((k-n)T)}_{\text{ISI}}+\nu(kT)$$ If the pulse shaping functions can be designed such that the overall response $x$ is a sinc function, then sampling at integer multiples of the period recovers the original symbols without ISI.

The Nyquist pulse-shaping criterion: a necessary and sufficient condition for $x(t)$ to satisfy $$x(kT)=\begin{cases}1,&k=0\\0,&k\neq0\end{cases}$$ is that its frequency response $X(f)$ satisfies: $$\sum_{m=-\infty}^{\infty}X\left(f+\frac{m}{T}\right)=T$$ That is, the sum of all shifted versions of the frequency response is flat. As a result, zero ISI cannot be achieved when $R>2W$, with $R$ being the symbol transmission rate and $W$ being the baseband channel bandwidth. When $R=2W$, the only choice for $X(f)$ is the rectangular pulse shape, corresponding to the sinc pulse in the time domain. When $R<2W$, there are many choices for $X(f)$ that satisfy the criterion. To avoid ISI, the maximum symbol rate is $R=2W$, making the shortest signalling interval $T=\frac{1}{2W}$.
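
The folded-spectrum condition can be verified numerically for a spectrum known to satisfy it; the sketch below uses the raised cosine spectrum (defined in the next section of the notes) as the test case, with arbitrary example values of $T$ and $\beta$:

```python
import math

def x_rc(f, T, beta):
    # raised cosine spectrum X_rc(f)
    af = abs(f)
    if af <= (1 - beta) / (2 * T):
        return T
    if af <= (1 + beta) / (2 * T):
        return (T / 2) * (1 + math.cos(math.pi * T / beta * (af - (1 - beta) / (2 * T))))
    return 0.0

def folded_spectrum(f, T, beta, span=5):
    # sum of shifted copies X(f + m/T); the criterion says this equals T for all f
    return sum(x_rc(f + m / T, T, beta) for m in range(-span, span + 1))
```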

A popular choice when $R<2W$ is the raised cosine spectrum: $$X_{rc}(f)=\begin{cases}T,&0\leq|f|\leq\frac{1-\beta}{2T}\\\frac{T}{2}\left(1+\cos\left[\frac{\pi T}{\beta}\left(|f|-\frac{1-\beta}{2T}\right)\right]\right),&\frac{1-\beta}{2T}\leq|f|\leq\frac{1+\beta}{2T}\\0,&|f|>\frac{1+\beta}{2T}\end{cases}$$ Where $\beta\in[0,1]$ is the roll-off factor and the pulse bandwidth is given by $(1+\beta)\frac{1}{2T}$. In the time domain, the signal corresponding to the raised cosine spectrum is: $$x_{rc}(t)=\text{sinc}\left(\frac{t}{T}\right)\frac{\cos(\pi\beta t/T)}{1-(2\beta t/T)^2}$$ The side lobes of the pulse where $\beta=1$ are smaller than the side lobes of the sinc pulse ($\beta=0$). Smaller side lobes are better where there are timing errors because they lead to smaller ISI components. We call $2W$ the Nyquist frequency and see that $\beta=0$ corresponds to a symbol rate at the Nyquist frequency, with the rectangular spectrum of the sinc pulse. For a larger value of $\beta$ we need bandwidth beyond the Nyquist frequency. $$W=\frac{1+\beta}{2T}$$ We can say that $\beta=0.5$ involves an excess bandwidth of $50\%$; similarly $\beta=1$ involves an excess bandwidth of $100\%$. Thus choosing $\beta$ is a tradeoff between robustness against timing errors and transmission speed.
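
A sketch of the time-domain pulse (illustrative; the handling of the removable singularity at $t=\pm\frac{T}{2\beta}$ uses the L'Hopital limit $\frac{\pi}{4}\text{sinc}(t/T)$, which is not stated in the notes):

```python
import math

def rc_pulse(t, T, beta):
    # time-domain raised cosine pulse x_rc(t)
    if t == 0:
        return 1.0
    x = t / T
    denom = 1 - (2 * beta * x) ** 2
    if abs(denom) < 1e-10:
        # removable singularity at t = +/- T/(2*beta)
        return (math.pi / 4) * math.sin(math.pi * x) / (math.pi * x)
    return (math.sin(math.pi * x) / (math.pi * x)) * math.cos(math.pi * beta * x) / denom
```

Sampling at nonzero integer multiples of $T$ gives zero, which is exactly the zero-ISI property; setting $\beta=0$ recovers the plain sinc pulse.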

The overall response is $X(f)=G_T(f)C(f)G_R(f)$, which needs to satisfy the Nyquist criterion for zero ISI. The raised cosine spectrum is one possible choice for $X(f)$. $G_T(f)$ determines the transmitted pulse shape, $C(f)$ is the channel (cannot control) and $G_R(f)$ is the receive filter. We can design the filters to compensate for the channel at the transmitter, giving: $$|G_T(f)|=\frac{|X(f)|^\frac{1}{2}}{|C(f)|}$$ $$|G_R(f)|=|X(f)|^\frac{1}{2}$$ Alternatively we can design to compensate for the channel at both the transmitter and receiver: $$|G_T(f)|=\frac{|X(f)|^\frac{1}{2}}{|C(f)|^\frac{1}{2}}$$ $$|G_R(f)|=\frac{|X(f)|^\frac{1}{2}}{|C(f)|^\frac{1}{2}}$$ The second design is optimal in terms of error probability for Gaussian noise. Both designs use $P(f)=X(f)^\frac{1}{2}$, called the Square Root Raised Cosine (SRRC), abbreviated to RRC. For baseband channels, the Nyquist criterion says that zero ISI can be achieved if the symbol transmission rate is $R\leq2W$, where $W$ is the channel bandwidth. For bandpass channels with bandwidth $W$, the previous analysis still holds if $s(t),c(t),z(t),r(t),y(t)$ are the complex baseband equivalents. In this case the Nyquist criterion says that zero ISI can be achieved if $R\leq W$ (due to different ways of defining bandwidth).

For bandpass channels, the Nyquist frequency equals the channel bandwidth $W$, so $\beta=0$ corresponds to a symbol rate at the Nyquist frequency with the rectangular spectrum of the sinc pulse. For a larger $\beta$ we need bandwidth beyond the Nyquist frequency, hence for bandpass channels we have: $$W=\frac{1+\beta}{T}$$ Again, choosing the value of $\beta$ is a trade-off between robustness against timing errors and transmission speed.

An eye diagram is constructed by overlaying all the symbols in the same window of duration $T$. At position 0, all the symbols resolve to 0 or 1, but vary between integer positions. A low $\beta$ causes a wide spread of traces, whereas a high $\beta$ clusters them more, making it easy to see the effect of timing errors on the diagram. The plot carves out a region resembling an eye, giving the plot its name. As the eye closes, ISI increases, and as it opens, ISI decreases. Other diagnostics are the noise margin (eye opening) and sensitivity to timing error (eye width). Channel distortions lead to a smaller eye opening width and a smaller eye opening at $t=0$. Eye diagrams can easily be extended to multiple levels.

====== Equalisation ======

Channel equalisers are useful for detecting data in the presence of ISI in many types of channels. The sampled output of a receive filter is: $$y_k=I_kx_0+\sum_{n=0,n\neq k}^\infty I_nx_{k-n}+\nu_k$$ This is the equivalent discrete-time model for ISI. Here $\nu_k$ is filtered non-white noise; to whiten it we pass $y_k$ through a noise whitening filter. The whitening filter's output can be written as: $$u_k=f_0I_k+\sum_{n=1}^\infty f_nI_{k-n}+\eta_k$$ Where $\eta_k$ is white noise and $f_n$ are the filter coefficients of the cascade of filters used (transmitter, channel, receiver, whitening). In practice, we assume that ISI only affects a finite number of neighbouring symbols, resulting in a simpler FIR model. $$u_k=f_0I_k+\sum_{n=1}^L f_nI_{k-n}+\eta_k$$ Where $L$ is the channel memory. This is then fed to an equaliser, which aims to recover the transmitted symbol sequence $\{I_k\}$ from the received sequence of values $\{u_k\}$. This is different to "symbol-by-symbol" detection: the received sequence is decoded as the sequence that maximises the posterior probability (MAP detection).

If the channel memory is L, then the ML sequence detector simplifies to a product of likelihood values: $$(\hat{I}_0,\hat{I}_1,...,\hat{I}_K)=\arg\max_{i_0,...,i_K}\prod_{k=0}^Kp(u_k|I_{k-L}=i_{k-L},...,I_k=i_k)$$ Which can be re-expressed in terms of log-likelihood as: $$(\hat{I}_0,\hat{I}_1,...,\hat{I}_K)=\arg\max_{i_0,...,i_K}\sum_{k=0}^K\ln(p(u_k|I_{k-L}=i_{k-L},...,I_k=i_k))$$ If $\eta_k\sim\mathcal{N}(0,\sigma^2)$, then $u_k=\sum_{n=0}^Lf_nI_{k-n}+\eta_k$ has distribution $u_k\sim\mathcal{N}(\sum_{n=0}^Lf_nI_{k-n},\sigma^2)$ and the ML detector is given by: $$(\hat{I}_0,\hat{I}_1,...,\hat{I}_K)=\arg\max_{i_0,...,i_K}\sum_{k=0}^K\left[\ln\left(\frac{1}{\sqrt{2\pi\sigma^2}}\right)-\frac{(u_k-\sum_{n=0}^Lf_ni_{k-n})^2}{2\sigma^2}\right]=\arg\min_{i_0,...,i_K}\sum_{k=0}^K\left(u_k-\sum_{n=0}^Lf_ni_{k-n}\right)^2$$

===== Viterbi Algorithm =====

The Viterbi algorithm finds the shortest (or longest) path in a trellis. The main idea is to incrementally calculate the shortest path length along the way.

We can construct a trellis where the branch lengths are the log-likelihood values $\ln(p(u_k|I_{k-1}=i_{k-1},I_k=i_k))$. The most-likely transmitted sequence is the longest path through the trellis. In general, for M-ary signalling the trellis diagram has $M^L$ states $S_k=(I_{k-1},I_{k-2},...,I_{k-L})$ at a given time. For each new received signal $u_k$, the Viterbi algorithm computes $M^{L+1}$ branch metrics $\ln(p(u_k|I_{k-L}=i_{k-L},...,I_k=i_k))$. A Viterbi ML detector produces the ML sequence. For large $L$, the implementation of the Viterbi algorithm can be computationally intensive. Because of this, suboptimal approaches such as linear equalisation are used.
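
A minimal sketch of the algorithm for the Gaussian case, where maximising log-likelihood is equivalent to minimising accumulated squared prediction error (function name and channel values below are illustrative, not from the notes):

```python
import itertools

def viterbi_detect(u, f, symbols=(-1.0, 1.0)):
    # ML sequence detection for u_k = sum_n f[n] * I[k-n] + Gaussian noise;
    # state = last L symbols, branch metric = squared prediction error.
    L = len(f) - 1
    states = list(itertools.product(symbols, repeat=L))
    cost = {s: 0.0 for s in states}   # best accumulated metric ending in each state
    paths = {s: [] for s in states}   # surviving symbol sequence for each state
    for uk in u:
        new_cost, new_paths = {}, {}
        for s in states:
            for ik in symbols:
                hist = (ik,) + s      # (I_k, I_{k-1}, ..., I_{k-L})
                pred = sum(f[n] * hist[n] for n in range(L + 1))
                c = cost[s] + (uk - pred) ** 2
                ns = (ik,) + s[:-1] if L > 0 else ()
                if ns not in new_cost or c < new_cost[ns]:
                    new_cost[ns], new_paths[ns] = c, paths[s] + [ik]
        cost, paths = new_cost, new_paths
    return paths[min(cost, key=cost.get)]
```

With a two-tap channel such as $f=(1,0.5)$ and noiseless samples, the detector recovers the transmitted $\pm1$ sequence exactly; each step keeps only $M^L$ survivors, which is the source of the $M^L$ complexity mentioned above.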

===== Linear equalisation =====

A linear equaliser tries to invert the FIR channel model. A zero-forcing equaliser chooses its taps so that the cascade of channel and equaliser passes the desired symbol and forces the ISI terms to zero. The equaliser's output is: $$\hat{I}_k=\sum_{j=-K}^Kc_ju_{k-j}$$ Where the $c_j$ are the $2K+1$ (with $2K+1\geq L$) filter tap coefficients of the equaliser, designed so that: $$C(z)\cdot F(z)=1$$ Linear equalisers have complexity that grows linearly with channel memory $L$, as opposed to the exponential growth of the Viterbi algorithm ($M^L$).

Given input $u_k=f_0I_k+\sum_{n=1}^Lf_nI_{k-n}+\eta_k$, the output of the linear filter equaliser is: $$\hat{I}_k=\sum_{j=-K}^Kc_ju_{k-j}=q_0I_k+\sum_{n\neq0}q_nI_{k-n}+\sum_{j=-K}^Kc_j\eta_{k-j}$$ Where $q_n=(f\star c)_n$. To satisfy the zero-forcing criterion, choose tap coefficients $c_j$ such that: $$q_n=\sum_{j=-K}^Kc_jf_{n-j}=\begin{cases}1,&n=0\\0,&n=\pm1,\pm2,...,\pm K\end{cases}$$ In general, some $q_n$ in the range $n=K+1,...,K+L-1$ will still be non-zero due to the finite filter length.
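
For a two-tap channel $F(z)=1+az^{-1}$ (a hypothetical example), the zero-forcing taps are the truncated geometric series of $1/F(z)$; convolving confirms $q_n=\delta_n$ up to lag $K$, with a residual tap beyond $K$ caused by the finite filter length, exactly as noted above:

```python
def zf_taps(a, K):
    # truncated inverse of F(z) = 1 + a*z^{-1}: C(z) = sum_j (-a)^j z^{-j}
    return [(-a) ** j for j in range(K + 1)]

def convolve(f, c):
    q = [0.0] * (len(f) + len(c) - 1)
    for i, fi in enumerate(f):
        for j, cj in enumerate(c):
            q[i + j] += fi * cj
    return q

a, K = 0.5, 6
q = convolve([1.0, a], zf_taps(a, K))
# q[0] = 1, q[1..K] = 0, and |q[K+1]| = a^(K+1), which shrinks as K grows
```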

As the value of $K$ increases, the error probability decreases before levelling out. A large amount of noise causes the probability to level out at a smaller value of $K$. An infinite number of filter taps would be able to invert the channel filter, but in the presence of noise, the amplification of noise by many taps damages the signal.

===== Minimum Mean Squared Error =====

Another linear equaliser is the Minimum Mean Squared Error (MMSE) equaliser, which minimises the MSE of the equaliser output, defined as: $$J(c)=E[|I_k-\hat{I}_k|^2]=E\left[\left|I_k-\sum_{j=-K}^Kc_ju_{k-j}\right|^2\right]$$ When the noise is small at all frequencies, the MMSE equaliser is identical to a zero-forcing equaliser. When the noise is large, the MMSE equaliser does not produce the large noise amplification of the zero-forcing equaliser.
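
A one-tap scalar sketch of this tradeoff (hypothetical values; a real MMSE equaliser solves a matrix equation over all $2K+1$ taps): for $u=f_0I+\eta$ with $E[I^2]=\varepsilon_s$ and noise variance $\sigma^2$, minimising $E|I-cu|^2$ gives $c=\frac{f_0\varepsilon_s}{f_0^2\varepsilon_s+\sigma^2}$, which tends to the zero-forcing solution $1/f_0$ as $\sigma^2\to0$ and backs off as the noise grows.

```python
def mmse_one_tap(f0, es, sigma2):
    # argmin_c E|I - c*u|^2 for u = f0*I + noise (signal power es, noise var sigma2):
    # c = f0*Es / (f0^2*Es + sigma^2)
    return f0 * es / (f0 ** 2 * es + sigma2)
```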
  
notes/elen90057.1630286569.txt.gz · Last modified: 2023/05/30 22:32 (external edit)