notes:elen90057
===== Bandpass spectra =====
  
Often we are interested in spectra confined to $\pm0.5B\text{ Hz}$ of a centre frequency $f_0$, being a band. If $f_0\gg B$, this is a narrow band. For real signals and systems, the spectrum at negative frequencies is redundant, as $S(f)=S^*(-f)$. The signal can be in bandpass form, that is, a modulation of a sinusoid at the carrier frequency. We want to work with the interesting signal, not the carrier, so we transform the bandpass representation to a lowpass one. First we remove the negative frequencies, giving the pre-envelope: $$Z(f)=(1+\text{sign}(f))S(f)=\begin{cases}2S(f),&f>0\\S(0),&f=0\\0,&f<0\end{cases}$$ Then we down-shift in the frequency domain, giving the lowpass spectrum: $$S_l(f)=Z(f+f_0)$$ Note that in general $S_l(f)\neq S_l^*(-f)$, as $s_l(t)$ is not a real signal in the time domain.
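
The two frequency-domain steps can be checked numerically. This is a minimal sketch with an assumed example signal (tone-modulated carrier, sample rate and $f_0$ chosen for illustration):

```python
import numpy as np

# Form the pre-envelope Z(f) = (1 + sign(f)) S(f) of a real bandpass
# signal, then down-shift by f0 to get the lowpass equivalent.
fs, f0 = 1000.0, 100.0               # sample rate and centre frequency (Hz)
t = np.arange(0, 1, 1 / fs)
m = np.cos(2 * np.pi * 5 * t)        # slow "message" envelope
s = m * np.cos(2 * np.pi * f0 * t)   # real bandpass signal

S = np.fft.fft(s)
f = np.fft.fftfreq(len(s), 1 / fs)
Z = (1 + np.sign(f)) * S             # remove negative frequencies
z = np.fft.ifft(Z)                   # complex pre-envelope z(t)
s_l = z * np.exp(-2j * np.pi * f0 * t)   # down-shift: lowpass equivalent

# Re(z) recovers s(t), and |s_l| tracks the envelope |m(t)|.
assert np.allclose(z.real, s, atol=1e-9)
assert np.allclose(np.abs(s_l), np.abs(m), atol=1e-6)
```

Note the down-shift $S_l(f)=Z(f+f_0)$ becomes multiplication by $e^{-j2\pi f_0t}$ in the time domain.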
  
To convert from the lowpass to the bandpass representation, we do the inverse. First we shift the spectrum up: $$Z(f)=S_l(f-f_0)$$ Then we reflect around 0 Hz, conjugate, add to the pre-envelope and scale by 0.5: $$S(f)=\frac{Z(f)+Z^*(-f)}{2}=\frac{S_l(f-f_0)+S_l^*(-f-f_0)}{2}$$
  
In the time domain, the transformation is a bit more tricky. First, we note that $\text{sign}(t)\iff\frac{1}{j\pi f}$, so by duality $\text{sign}(f)\iff\frac{-1}{j\pi t}=\frac{j}{\pi t}$ (this is related to the Hilbert transform, $H(f)=-j\text{sign}(f)$). We can find $z(t)$ from the IFT of $Z(f)$: $$z(t)=s(t)+\frac{j}{\pi t}*s(t)=s(t)+\frac{j}{\pi}\int_{-\infty}^\infty\frac{s(\tau)}{t-\tau}d\tau=s(t)+j\hat{s}(t)$$ Where $\hat{s}=s*\frac{1}{\pi t}$ is the Hilbert transform of $s$. Then we take the IFT of $Z(f)=S_l(f-f_0)$: $$s_l(t)=e^{-j2\pi f_0t}z(t)\implies z(t)=e^{j2\pi f_0t}s_l(t)$$ Lowpass to bandpass is: $$s(t)=\mathcal{R}(z(t))=\mathcal{R}(e^{j2\pi f_0t}s_l(t))$$ This lets us write $s_l(t)=|s_l(t)|e^{j\angle s_l(t)}=A(t)e^{j\phi(t)}$. As $s_l$ is lowpass, $A(t)$ and $\phi(t)$ change slowly with time. $$z(t)=A(t)e^{j(2\pi f_0t+\phi(t))}$$ The phasor rotates at angular velocity $\approx 2\pi f_0$, with slowly varying magnitude and phase. The real part is: $$s(t)=A(t)\cos(2\pi f_0t+\phi(t))$$ Any real bandpass signal can be represented in this way. Writing $s_l(t)=s_c(t)+js_s(t)$, the real part is the in-phase component and the imaginary part is the quadrature component. We can find that: $$s(t)=s_c(t)\cos(2\pi f_0t)-s_s(t)\sin(2\pi f_0t)$$ This is the canonical form of a bandpass signal, whereas $\mathcal{R}(s_l(t)e^{j2\pi f_0t})$ with $s_l=A(t)e^{j\phi(t)}$ is the polar form. Conversely: $$s_c(t)=s(t)\cos(2\pi f_0t)+\hat{s}(t)\sin(2\pi f_0t)$$ $$s_s(t)=-s(t)\sin(2\pi f_0t)+\hat{s}(t)\cos(2\pi f_0t)$$
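
The in-phase/quadrature recovery formulas can be verified numerically. A sketch, with assumed message tones and an FFT-based Hilbert transformer (exact for these periodic test signals):

```python
import numpy as np

fs, f0 = 1000.0, 100.0
t = np.arange(0, 1, 1 / fs)
s_c_true = np.cos(2 * np.pi * 3 * t)   # assumed in-phase message
s_s_true = np.sin(2 * np.pi * 7 * t)   # assumed quadrature message
c, q = np.cos(2 * np.pi * f0 * t), np.sin(2 * np.pi * f0 * t)
s = s_c_true * c - s_s_true * q        # canonical-form bandpass signal

def hilbert_transform(x):
    """s_hat = s * 1/(pi t), i.e. multiply by -j*sign(f) in frequency."""
    X = np.fft.fft(x)
    f = np.fft.fftfreq(len(x))
    return np.fft.ifft(-1j * np.sign(f) * X).real

s_hat = hilbert_transform(s)
s_c = s * c + s_hat * q                # recovered in-phase component
s_s = -s * q + s_hat * c               # recovered quadrature component

assert np.allclose(s_c, s_c_true, atol=1e-6)
assert np.allclose(s_s, s_s_true, atol=1e-6)
```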
  
The impulse response of the Hilbert transformer is $\frac{1}{\pi t}$, so its frequency response is $H(f)=-j\text{sign}(f)$. The amplitude response is unchanged, but the phase is decreased by $\pi/2$ at positive frequencies and increased by $\pi/2$ at negative frequencies. Taking the lowpass representation removes the high frequency carrier: $$A(t)\cos(2\pi f_0t+\phi(t))\iff A(t)e^{j\phi(t)}$$
  
===== Double side band suppressed carrier modulation =====
  
Here, $m(t)$ with spectrum $M(f)$ is the message signal. To up-convert, we multiply the message by a carrier wave $A_c\cos(2\pi f_ct)$. $$s(t)=m(t)A_c\cos(2\pi f_ct)=0.5A_cm(t)e^{j2\pi f_ct}+0.5A_cm(t)e^{-j2\pi f_ct}$$ $$S(f)=0.5A_cM(f-f_c)+0.5A_cM(f+f_c)$$ DSBSC modulation shifts the message spectrum left and right by $\pm f_c$, creating a bandpass signal. We can then find the lowpass representation: $$s_l(t)=A_cm(t)$$ This is a real signal, whereas in general the lowpass representation is complex. This suggests that we could send another signal in the imaginary, quadrature component, letting us send two separate messages $m_1(t)$ and $m_2(t)$ at the same time, called quadrature carrier multiplexing. This requires using a balanced modulator (mixer or multiplier) to combine the carrier and the message. The second message has its carrier shifted by $90^\circ$. The resultant signal is: $$s(t)=A_cm_1(t)\cos(2\pi f_ct)+A_cm_2(t)\sin(2\pi f_ct)$$ $$s_c=A_cm_1(t),\quad s_s=-A_cm_2(t)$$ To recover these signals at the receiving end, a Phase Locked Loop (PLL) is required to recover the carrier from the signal. The recovered carrier is fed through a balanced modulator with the received signal, and the message is recovered after the modulator output is passed through a lowpass filter.
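
The coherent receiver chain (mix with the recovered carrier, then lowpass filter) can be sketched as follows; the message, carrier frequency and the crude FFT-bin lowpass filter are all assumed for illustration:

```python
import numpy as np

fs, fc, Ac = 8000.0, 1000.0, 2.0
t = np.arange(0, 1, 1 / fs)
m = np.cos(2 * np.pi * 10 * t)              # message
s = Ac * m * np.cos(2 * np.pi * fc * t)     # DSBSC signal

# Mixing gives 0.5*Ac*m(t) plus a component at 2*fc.
mixed = s * np.cos(2 * np.pi * fc * t)

# Crude lowpass: zero all FFT bins above an assumed 100 Hz cutoff.
M = np.fft.fft(mixed)
f = np.fft.fftfreq(len(t), 1 / fs)
M[np.abs(f) > 100] = 0
recovered = 2 * np.fft.ifft(M).real / Ac    # undo the 0.5*Ac scaling

assert np.allclose(recovered, m, atol=1e-6)
```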
  
===== Amplitude modulation =====
  
Suppose we transmit: $$s(t)=A_c(1+\mu m(t))\cos(2\pi f_ct)$$ We choose the amplitude sensitivity or modulation index $\mu$ such that $$|m(t)|<1/\mu\implies1+\mu m(t)>1-\mu|m(t)|>0,\forall t$$ This means $1+\mu m(t)$ is the envelope of $s(t)$. Further, we choose $f_c\gg W$ (the message bandwidth), so that over each carrier cycle $m(t)\approx c$, a constant, and the maxima of $s(t)$ follow the envelope.
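
A quick sketch of building an AM waveform with the envelope condition above (the parameter values are assumed for illustration):

```python
import numpy as np

fs, fc, Ac, mu = 8000.0, 1000.0, 1.0, 0.5
t = np.arange(0, 0.1, 1 / fs)
m = np.cos(2 * np.pi * 50 * t)          # |m(t)| <= 1 < 1/mu
envelope = Ac * (1 + mu * m)            # A(t), must stay positive
s = envelope * np.cos(2 * np.pi * fc * t)

assert envelope.min() > 0               # no phase reversals
assert np.abs(s).max() <= envelope.max() + 1e-12
```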
  
A phase reversal is related to a change in the sign of the amplitude. When $A(t)<0$, $s(t)=-|A(t)|\cos(2\pi f_ct)=|A(t)|\cos(2\pi f_ct+\pi)$, causing a bump in the signal from the sudden change in phase, so $A(t)$ needs to stay positive at all times. When $\mu<\frac{1}{|m(t)|}$, meaning $1-\mu|m(t)|>0$, and as $m(t)\geq-|m(t)|$, we get $1+\mu m(t)\geq1-\mu|m(t)|>0$, so the amplitude is always positive. This removes phase reversals, making envelope detection easier.
  
Envelope detection tries to remove the high frequency carrier to find the message. It amounts to finding the magnitude of the signal's envelope. Envelope detection recovers the message without needing a PLL to regenerate the carrier, unlike quadrature carrier multiplexing. This is also called asynchronous detection, incoherent detection or direct detection.
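
One way to see this: the magnitude of the pre-envelope $|z(t)|=|s(t)+j\hat{s}(t)|$ equals $A(t)$, with no carrier recovery needed. A sketch, with an assumed AM test signal:

```python
import numpy as np

fs, fc, Ac, mu = 8000.0, 1000.0, 1.0, 0.5
t = np.arange(0, 1, 1 / fs)
m = np.cos(2 * np.pi * 50 * t)
s = Ac * (1 + mu * m) * np.cos(2 * np.pi * fc * t)

# Analytic (pre-envelope) signal via the FFT, as in the earlier section.
S = np.fft.fft(s)
f = np.fft.fftfreq(len(s), 1 / fs)
z = np.fft.ifft((1 + np.sign(f)) * S)

m_hat = (np.abs(z) / Ac - 1) / mu     # invert the AM mapping A(t) -> m(t)
assert np.allclose(m_hat, m, atol=1e-6)
```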
  
===== Side bands =====
  
Side bands are the parts of the spectrum either side of 0 in the baseband representation. The upper side band (USB) is to the right, and the lower side band (LSB) is to the left. For a real message the two are related by conjugate symmetry: $$USB=LSB^*$$ When we convert to a bandpass signal, the spectrum is centred around $f_c$, but still has upper and lower side bands flanking $f_c$. In a DSBSC signal there is four-fold redundancy, as there are two copies each of the LSB and the USB. As such, we can transmit only one of the side bands and still recover the whole signal. This single side band (SSB) representation is denoted $\tilde{m}(t)$ and $\tilde{M}(f)$. The signal is: $$\tilde{M}(f)=(1+\text{sign}(f))M(f)=\begin{cases}2M(f),&f>0\\0,&f<0\end{cases}$$ $$S_{SSB}(f)=0.5A_c\tilde{M}(f-f_c)+0.5A_c\tilde{M}^*(-f-f_c)=\begin{cases}A_cM(f-f_c),&f>f_c\\A_cM(f+f_c),&f<-f_c\\0,&\text{elsewhere}\end{cases}$$ In the time domain this corresponds to: $$\tilde{m}(t)=m(t)+j\hat{m}(t)$$ $$s_{SSB}(t)=A_cm(t)\cos(2\pi f_ct)-A_c\hat{m}(t)\sin(2\pi f_ct)=\mathcal{R}\{\tilde{m}(t)A_ce^{j2\pi f_ct}\}$$ $$s_c=A_cm(t),\quad s_s=A_c\hat{m}(t)$$ This has the same structure as quadrature carrier multiplexing, with $m_2=-\hat{m}$.
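
The time-domain (phasing-method) form above can be checked on a single assumed message tone, where $\hat{m}$ is known in closed form ($\cos\to\sin$):

```python
import numpy as np

fs, fc, Ac, fm = 8000.0, 1000.0, 1.0, 50.0
t = np.arange(0, 1, 1 / fs)
m = np.cos(2 * np.pi * fm * t)
m_hat = np.sin(2 * np.pi * fm * t)   # Hilbert transform of cos is sin

# s_SSB = Ac*m*cos(2 pi fc t) - Ac*m_hat*sin(2 pi fc t): upper side band only.
s_ssb = (Ac * m * np.cos(2 * np.pi * fc * t)
         - Ac * m_hat * np.sin(2 * np.pi * fc * t))

S = np.abs(np.fft.rfft(s_ssb)) / len(t)   # bin k is k Hz here (1 s of data)
assert S[int(fc + fm)] > 0.4              # USB present at fc + fm
assert S[int(fc - fm)] < 1e-6             # LSB at fc - fm suppressed
```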
  
====== Digital communications ======
  
Digital signals are encoded and modulated before being transmitted over a channel, then demodulated and decoded. The modulator turns the digital signal into a continuous time signal. A communication channel can include space, the atmosphere, optical disks, cables, etc. Different channels cause different types of impairments and require different types of modulation.
  
Modulation involves mapping the digital information into analogue signals for transmission over physical channels. It requires parsing the incoming bit sequence into a sequence of binary words of length $k$. Each binary word corresponds to a symbol, with $M=2^k$ possible symbols. Each symbol has a signalling interval of length $T$. $1/T$ is the symbol rate, and $k/T$ is the bit rate.
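
The parsing step can be sketched directly (bit values and $T$ are assumed for illustration):

```python
# Parse a bit stream into k-bit binary words: M = 2**k possible symbols,
# symbol rate 1/T, bit rate k/T.
k, T = 3, 1e-3                    # 3 bits/symbol, 1 ms signalling interval
bits = [1, 0, 1, 1, 1, 0, 0, 0, 1]
assert len(bits) % k == 0

symbols = [
    int("".join(str(b) for b in bits[i:i + k]), 2)
    for i in range(0, len(bits), k)
]
M = 2 ** k
symbol_rate = 1 / T               # 1000 symbols/s
bit_rate = k / T                  # 3000 bits/s

assert symbols == [5, 6, 1]       # 101 -> 5, 110 -> 6, 001 -> 1
assert all(0 <= s < M for s in symbols)
```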
  
A baseband signal is a low frequency real signal centred on 0. A bandpass signal is a high frequency signal centred on $f_c$. There is no need to use a carrier waveform to transmit a baseband signal, whereas a bandpass signal requires a carrier to move its passband far from 0.
  
===== Signal space =====
  
To simplify analysis, a geometric vector representation is used for baseband and bandpass signals. A vector space or linear space $L$ over a field $F$ (usually $\mathbb{R}$ or $\mathbb{C}$) is a set that is closed under addition and scalar multiplication and is:
  
  * Associative
  * Commutative
  * Distributive
  * Has additive identity
  * Has multiplicative identity
  
Signal space is a vector space consisting of functions $x(t)$ defined on a time set $T$. The modulation scheme is visualised as a finite set of points, called the signal space diagram or signal constellation. This enables a geometric interpretation, and allows us to treat bandpass modulation similarly to baseband modulation.
  
The inner product of two complex valued signals is: $$\langle x_1(t),x_2(t)\rangle=\int_{-\infty}^\infty x_1(t)x^*_2(t)dt$$ Two signals are orthogonal if $\langle x_1(t),x_2(t)\rangle=0$. The norm of $x(t)$ is: $$||x(t)||=\sqrt{\langle x(t),x(t)\rangle}=\sqrt{\int_{-\infty}^\infty|x(t)|^2dt}=\sqrt{\varepsilon_x}$$ Where $\varepsilon_x$ is the energy in $x(t)$. The distance between two signals is: $$d(x_1(t),x_2(t))=||x_1(t)-x_2(t)||$$ The Cauchy-Schwarz inequality for two signals is: $$|\langle x_1(t),x_2(t)\rangle|\leq||x_1(t)||\cdot||x_2(t)||=\sqrt{\varepsilon_{x_1}\varepsilon_{x_2}}$$ With equality when $x_1(t)=\alpha x_2(t)$, where $\alpha$ is any complex number. The triangle inequality is: $$||x_1(t)+x_2(t)||\leq||x_1(t)||+||x_2(t)||$$
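
These definitions discretise directly (the integral becomes a Riemann sum). A sketch with two assumed test signals over one period:

```python
import numpy as np

dt = 1e-3
t = np.arange(0, 1, dt)

def inner(x1, x2):
    """Discretised inner product: integral of x1 * conj(x2)."""
    return np.sum(x1 * np.conj(x2)) * dt

x1 = np.sin(2 * np.pi * t)
x2 = np.cos(2 * np.pi * t)

E1, E2 = inner(x1, x1).real, inner(x2, x2).real     # signal energies
assert abs(E1 - 0.5) < 1e-3 and abs(E2 - 0.5) < 1e-3
assert abs(inner(x1, x2)) < 1e-9                    # orthogonal signals
assert abs(inner(x1, x2)) <= np.sqrt(E1 * E2) + 1e-12   # Cauchy-Schwarz
```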
  
A set of $N$ signals $\{\phi_j(t),j=1,2,...,N\}$ spans a subspace $S$ if any signal in $S$ can be written as a linear combination of the $N$ signals. $$s(t)=\sum_{j=1}^N s_j\phi_j(t)$$ Where $s_j$ are scalar coefficients. A set of signals is linearly independent if no signal in the set can be represented as a linear combination of the other signals in the set. A basis for $S$ is any linearly independent set that spans the whole space. The dimension of $S$ is the number of elements in any basis for $S$. An orthonormal basis is a basis such that: $$\langle \phi_j(t),\phi_n(t)\rangle=\int_{-\infty}^\infty \phi_j(t)\phi_n^*(t)dt=\begin{cases}1,&j=n\\0,&j\neq n\end{cases}$$ Orthonormal bases provide a convenient way of representing any signal in the space, letting us express a signal as a point in signal space. A constellation of $M$ points corresponds to $k=\log_2M$ bits of information per point. The square of the Euclidean distance of a point to the origin equals the energy of the corresponding signal: $$\varepsilon_{s_m}=s_{m1}^2+s_{m2}^2+...+s_{mN}^2$$ If a given signal $r(t)$ lies outside the subspace, we can project it onto the subspace to get $\hat{s}(t)$. The Gram-Schmidt procedure can always be used to construct an orthonormal basis, by iteratively projecting each vector onto the basis vectors found so far and removing the projection from the original vector.
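
Gram-Schmidt on sampled signals is a short loop: subtract the projection onto the basis built so far, then normalise. A sketch with an assumed linearly independent set $\{1,t,t^2\}$:

```python
import numpy as np

dt = 1e-3
t = np.arange(0, 1, dt)
signals = [np.ones_like(t), t, t ** 2]   # linearly independent set

def inner(x, y):
    return np.sum(x * y) * dt

basis = []
for s in signals:
    # Remove the projection onto every basis function found so far.
    r = s - sum(inner(s, phi) * phi for phi in basis)
    basis.append(r / np.sqrt(inner(r, r)))   # normalise the remainder

# Orthonormality: <phi_j, phi_n> = 1 if j == n else 0.
for j, pj in enumerate(basis):
    for n, pn in enumerate(basis):
        assert abs(inner(pj, pn) - (1.0 if j == n else 0.0)) < 1e-8
```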
  
===== Digital Modulation =====
  
For Pulse Amplitude Modulation (PAM), we turn each block of bits into a distinct amplitude. A signal generator maps the sequence of blocks of length $k$ into a sequence drawn from $M=2^k$ possible symbols, and a modulator then maps the symbol sequence to a continuous time signal.
  
One-dimensional modulation provides one of the simplest modulation schemes, On-Off Keying (OOK). A baseband OOK modulator maps a binary symbol sequence $a(n)$ to a continuous time signal $s(t)$ by: $$s(t)=\sum_{n\in\mathbb{Z}} a(n)p(t-nT)$$ Where $1/T$ is the symbol rate and $p(t)$ is a pulse signal. There are various ways of encoding the bits, such as non-return-to-zero (NRZ), return-to-zero (RZ), Manchester (MAN) and half-sine (HS), each with a distinct pulse shape. Baseband OOK modulation produces a signal of the form: $$s_m(t)=A_mp(t);\quad m=1,2;\quad A_1=1,A_2=0$$ Bandpass OOK employs a carrier frequency $f_c$ to produce a signal of the form: $$s_m(t)=A_mg(t)\cos(2\pi f_ct);\quad m=1,2;\quad A_1=1,A_2=0$$ Where the pulse signal is denoted by $g(t)$. This is the same as DSBSC modulation. The constellation is given by points at $(0,0)$ and $(1,0)$.
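
A minimal sketch of the baseband OOK sum $s(t)=\sum_n a(n)p(t-nT)$ with an assumed NRZ rectangular pulse and an assumed bit pattern:

```python
import numpy as np

a = np.array([1, 0, 1, 1, 0])     # binary symbol sequence a(n)
sps = 8                           # samples per symbol interval T
p = np.ones(sps)                  # NRZ pulse: 1 over [0, T)

# Non-overlapping rectangular pulses, so the sum is a concatenation.
s = np.concatenate([a_n * p for a_n in a])

assert s.shape == (len(a) * sps,)
assert np.all(s[:sps] == 1) and np.all(s[sps:2 * sps] == 0)
```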
  
A baseband PAM modulator maps a symbol sequence $a(n)$ to a continuous time signal $s(t)$: $$s(t)=\sum_{n\in\mathbb{Z}}a(n)p(t-nT)$$ This uses amplitudes above and below 0, at many different levels. It produces a constellation on the x-axis, mirrored about the y-axis. The mapping onto the constellation uses Gray coding, so adjacent points differ in only one bit. Gray coding minimises the bit errors caused by mistaking a symbol for its neighbour.
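
A sketch of a Gray-coded 4-PAM mapping (the bit-to-level table is an assumed example, not a standard):

```python
# Gray-coded 4-PAM: adjacent amplitude levels differ in exactly one bit,
# so the most likely symbol errors cause only a single bit error.
gray_map = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}

bits = [0, 0, 1, 1, 1, 0, 0, 1]
symbols = [gray_map[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]
assert symbols == [-3, 1, 3, -1]

# Check the Gray property: neighbouring levels differ in one bit.
levels = sorted(gray_map, key=gray_map.get)
for w1, w2 in zip(levels, levels[1:]):
    assert sum(b1 != b2 for b1, b2 in zip(w1, w2)) == 1
```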
  
When choosing the signals, we want them to be as far apart from each other as possible when drawn as a constellation. The energy for a constellation point is: $$\varepsilon_m=||s_m(t)||^2=\int_0^TA_m^2p^2(t)dt=A_m^2\varepsilon_p$$ Where $\varepsilon_p$ is the energy in $p(t)$ and $m=1,...,M$. An orthonormal basis vector for PAM is given by: $$\phi(t)=\frac{p(t)}{\sqrt{\varepsilon_p}}$$ For carrier modulated PAM signals, we have: $$s_m(t)=A_mg(t)\cos(2\pi f_ct),\quad 1\leq m\leq M,\ 0\leq t<T$$ The bandpass PAM energy for a constellation point equals: $$\varepsilon_m=||s_m(t)||^2=\frac{A_m^2}{2}\int_0^Tg^2(t)dt+\frac{A_m^2}{2}\int_0^Tg^2(t)\cos(4\pi f_ct)dt\approx\frac{A_m^2}{2}\varepsilon_g$$ Binary bandpass PAM is also called Binary Phase Shift Keying (BPSK), because the symbol values determine the phase of $s_m(t)$. Modulation of the signal waveform $s_m(t)$ with carrier $\cos(2\pi f_c t)$ shifts the spectrum of the baseband signal by $f_c$: $$S_m(f)=\frac{A_m}{2}(G_T(f-f_c)+G_T(f+f_c))$$ For bandpass PAM signalling, the orthonormal basis vector is given by: $$\phi(t)=\sqrt{\frac{2}{\varepsilon_g}}g(t)\cos(2\pi f_ct)$$ Which results in $s_m(t)=A_m\sqrt{\frac{\varepsilon_g}{2}}\phi(t)$. Bandpass PAM has the same signal space diagram as baseband PAM, but with a different basis vector.
  
The modulator is implemented by feeding the input bits into a serial to parallel converter, outputting $\log_2M$ bits at a time. These are fed into a Look Up Table (LUT) to find the symbols, which are fed into a pulse shaping filter to generate the signal. The signal may be up-sampled during filtering (possibly with an FIR filter) and fed into a DAC to create the analogue form.
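
The digital part of that chain can be sketched end to end; the Gray LUT, up-sampling factor and rectangular shaping filter are assumed choices for illustration:

```python
import numpy as np

bits = np.array([0, 0, 1, 1, 1, 0, 0, 1])
lut = {(0, 0): -3, (0, 1): -1, (1, 1): 1, (1, 0): 3}   # Gray-coded 4-PAM

# Serial-to-parallel: group into log2(M) = 2-bit words, then LUT lookup.
symbols = np.array([lut[tuple(bits[i:i + 2])]
                    for i in range(0, len(bits), 2)])

# Upsample by zero insertion, then pulse-shape with an FIR filter.
sps = 4
upsampled = np.zeros(len(symbols) * sps)
upsampled[::sps] = symbols
pulse = np.ones(sps)                                   # NRZ shaping filter
s = np.convolve(upsampled, pulse)[:len(upsampled)]

assert np.all(s[:sps] == -3) and np.all(s[sps:2 * sps] == 1)
```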
  
==== Two dimensional modulation ====
  
Orthogonal signalling involves modulation using two signals that are orthogonal: $$s(t)=s_1\phi_1(t)+s_2\phi_2(t)$$ We can denote the modulated signal as a vector $s=(s_1,s_2)$. Signals with M-PSK modulation can be represented as: $$s_m(t)=g(t)\cos(2\pi f_ct+\theta_m)=\mathcal{R}(g(t)e^{j\theta_m}e^{j2\pi f_ct})=g(t)\cos(\theta_m)\cos(2\pi f_ct)-g(t)\sin(\theta_m)\sin(2\pi f_ct)$$ Where $g(t)$ is the signal pulse shape and $\theta_m=\frac{2\pi}{M}(m-1)$ is the phase that conveys the transmitted information. An orthonormal basis for the signal space is: $$\{\phi_1(t),\phi_2(t)\}=\left\{\sqrt{\frac{2}{\varepsilon_g}}g(t)\cos(2\pi f_ct),-\sqrt{\frac{2}{\varepsilon_g}}g(t)\sin(2\pi f_ct)\right\}$$ Where the basis functions have unit norm, $||\phi_1(t)||=||\phi_2(t)||=1$. In M-ary phase-shift keying (PSK), all $M$ bandpass signals are constrained to have the same energy, so the signal constellation points lie on a circle. Gray encoding is used so that adjacent phases differ by only one bit, which leads to a better average bit error rate (BER). The transmitted information is impressed on 2 orthogonal carrier signals, the in-phase carrier $\cos(2\pi f_ct)$ and the quadrature carrier $\sin(2\pi f_ct)$. $$s_m(t)=g(t)\cos(\theta_m)\cos(2\pi f_ct)-g(t)\sin(\theta_m)\sin(2\pi f_ct)$$ Its lowpass equivalent signal is: $$s_m^{lowpass}(t)=g(t)e^{j\theta_m}=I(t)+jQ(t)$$ Where $I(t)=g(t)\cos(\theta_m)$ is the in-phase component and $Q(t)=g(t)\sin(\theta_m)$ is the quadrature component.
  
In quadrature/quaternary PSK (QPSK), the $M=4$ signal points are differentiated by phase shifts in multiples of $\pi/2$. $$s_m(t)=g(t)\cos\left(2\pi f_ct+\frac{\pi}{2}(m-1)\right)$$ Equivalently the signal constellation can be rotated so that the vectors lie in the quadrants rather than on the axes. $$s(t)=I(t)\sqrt{2}\cos(2\pi f_ct)-Q(t)\sqrt{2}\sin(2\pi f_ct)$$ $I(t)=\sum_na_1(n)g(t-nT)$ is the in-phase component of $s(t)$ and $Q(t)=\sum_na_2(n)g(t-nT)$ is the quadrature component of $s(t)$. $I(t)$ and $Q(t)$ are binary PAM pulse trains, making QPSK interpretable as two PAM signal constellations on orthogonal axes.
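As a sketch of how Gray-coded QPSK maps bit pairs to rotated (in-quadrant) constellation points; the specific bit-to-phase assignment below is a common convention assumed for illustration:

```python
import math

# Gray mapping of 2-bit symbols to phase indices: adjacent phases
# differ in exactly one bit (assumed but common convention).
GRAY_MAP = {(0, 0): 0, (0, 1): 1, (1, 1): 2, (1, 0): 3}

def qpsk_point(bits, energy=1.0):
    """Map a 2-bit tuple to an (I, Q) constellation point of the given energy."""
    m = GRAY_MAP[bits]
    theta = math.pi / 4 + m * math.pi / 2  # rotated: points in the quadrants
    r = math.sqrt(energy)                  # all points on one circle (equal energy)
    return (r * math.cos(theta), r * math.sin(theta))

# Every point has the same energy, as required for PSK.
for b in GRAY_MAP:
    i, q = qpsk_point(b)
    assert abs(i * i + q * q - 1.0) < 1e-12
```

The $I$ and $Q$ coordinates of each point are exactly the two binary PAM amplitudes mentioned above, scaled by $1/\sqrt{2}$.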
  
==== Two dimensional bandpass modulation ====
  
Quadrature amplitude modulation (QAM) allows signals to have different amplitudes, impressing separate information bits on each of the quadrature carriers. Important performance parameters are the average energy and minimum distance in the signal constellation. The signal constellation consists of concentric circles. The points are all off-axis in each quadrant for maximum efficiency (minimising energy and maximising minimum distance).
  
==== Comparison ====
  
|Scheme      |$s_m(t)$                                             |$s_m$                                                                                       |$E_{avg}$                        |$E_{bavg}$                              |$d_{min}$                                                          |
|Baseband PAM|$A_mp(t)$                                            |$A_m\sqrt{\varepsilon_p}$                                                                   |$\frac{2(M^2-1)}{3}\varepsilon_p$|$\frac{2(M^2-1)}{3\log_2M}\varepsilon_p$|$\sqrt{\frac{6\log_2M}{M^2-1}\varepsilon_{bavg}}$                  |
|Bandpass PAM|$A_mg(t)\cos(2\pi f_ct)$                             |$A_m\sqrt{\frac{\varepsilon_p}{2}}$                                                         |$\frac{M^2-1}{3}\varepsilon_p$   |$\frac{M^2-1}{3\log_2M}\varepsilon_p$   |$\sqrt{\frac{6\log_2M}{M^2-1}\varepsilon_{bavg}}$                  |
|PSK         |$g(t)\cos\left[2\pi f_ct+\frac{2\pi}{M}(m-1)\right]$ |$\sqrt{\frac{\varepsilon_g}{2}}\left(\cos\frac{2\pi}{M}(m-1),\sin\frac{2\pi}{M}(m-1)\right)$|$\frac{1}{2}\varepsilon_g$       |$\frac{1}{2\log_2M}\varepsilon_g$       |$2\sqrt{\log_2M\sin^2\left(\frac{\pi}{M}\right)\varepsilon_{bavg}}$|
|QAM         |$A_{mi}g(t)\cos(2\pi f_ct)-A_{mq}g(t)\sin(2\pi f_ct)$|$\sqrt{\frac{\varepsilon_g}{2}}(A_{mi},A_{mq})$                                             |$\frac{M-1}{3}\varepsilon_g$     |$\frac{M-1}{3\log_2M}\varepsilon_g$     |$\sqrt{\frac{6\log_2M}{M-1}\varepsilon_{bavg}}$                    |
  
==== Multidimensional modulation ====
  
We can use the time domain and/or frequency domain to increase the number of dimensions. Orthogonal signalling (baseband) is one example, e.g. pulse position modulation (PPM), which varies where within the symbol time the pulse sits, e.g. the first quarter or the third quarter. Alternatively we can frequency shift (FSK) to create orthogonal signals: $$s_m(t)=\sqrt{\frac{2\varepsilon}{T}}\cos(2\pi(f_c+m\Delta f)t),\quad 0\leq m\leq M-1,\ 0\leq t\leq T$$ For orthogonality, we need a minimum frequency separation of $\Delta f=1/(2T)$.
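The minimum-separation claim can be checked numerically. The sketch below, with assumed values $T=1$ and $f_c=10$ (a multiple of $1/T$), approximates the correlation $\int_0^T s_0(t)s_1(t)\,dt$ by a Riemann sum:

```python
import math

T = 1.0           # symbol duration (assumed)
fc = 10.0         # carrier frequency, a multiple of 1/T (assumed)
df = 1 / (2 * T)  # claimed minimum separation for orthogonality
N = 100_000       # integration steps

def correlate(f1, f2):
    """Midpoint-rule approximation of the correlation of two tones over [0, T]."""
    dt = T / N
    return sum(math.cos(2 * math.pi * f1 * (k + 0.5) * dt) *
               math.cos(2 * math.pi * f2 * (k + 0.5) * dt)
               for k in range(N)) * dt

# Tones separated by df = 1/(2T) are orthogonal...
assert abs(correlate(fc, fc + df)) < 1e-3
# ...while each tone correlates with itself with value T/2.
assert abs(correlate(fc, fc) - T / 2) < 1e-3
```

Halving `df` further makes the cross-correlation clearly nonzero, which is why $1/(2T)$ is the minimum spacing.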
  
===== Receiving signals =====
==== Additive White Gaussian Noise ====
  
A received signal has Additive White Gaussian Noise added to it by the channel. $$r(t)=s_m(t)+n(t)$$ Where $n(t)$ is an AWGN random process with power spectral density $\frac{N_0}{2}$ W/Hz. Given $r(t)$, a receiver must decide which $s_m(t)$ was transmitted, minimising the error probability (or some other criterion).
  
The $Q$ function is useful for tail probabilities, and is defined as: $$Q(x)=P[X>x]=\frac{1}{\sqrt{2\pi}}\int_x^\infty\exp\left(-\frac{t^2}{2}\right)dt$$ For a general Gaussian RV $X\sim\mathcal{N}(\mu,\sigma^2)$, we know that $\frac{X-\mu}{\sigma}\sim\mathcal{N}(0,1)$, so: $$P[X>x]=Q\left(\frac{x-\mu}{\sigma}\right)$$ A continuous time random process $X(t)$ is completely characterised by joint PDFs of the form: $$f_{X(t_1),X(t_2),...,X(t_n)}(x_1,x_2,...,x_n)$$ $X(t)$ is strictly stationary if for all $n,\Delta,(t_1,t_2,...,t_n)$, we have: $$f_{X(t_1),X(t_2),...,X(t_n)}(x_1,x_2,...,x_n)=f_{X(t_1+\Delta),X(t_2+\Delta),...,X(t_n+\Delta)}(x_1,x_2,...,x_n)$$ Random processes have a mean and autocorrelation: $$m_X(t)=E[X(t)]$$ $$R_X(t_1,t_2)=E[X(t_1)X^*(t_2)]$$ A random process is Wide Sense Stationary (WSS) if its mean is constant for all time and its autocorrelation depends only on the time difference $\tau=t_1-t_2$, allowing us to write the autocorrelation as $R_X(\tau)$. The Power Spectral Density (PSD) of a WSS process is the Fourier transform of the autocorrelation, $\mathcal{S}_X(f)=\mathcal{F}[R_X(\tau)]$. The total power content of the process is: $$P_X=E[|X(t)|^2]=R_X(0)=\int_{-\infty}^\infty\mathcal{S}_X(f)df$$
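The $Q$ function has no closed form, but it relates to the complementary error function as $Q(x)=\frac{1}{2}\operatorname{erfc}(x/\sqrt{2})$, which the Python standard library provides:

```python
import math

def qfunc(x):
    """Gaussian tail probability Q(x) = P[X > x] for X ~ N(0, 1)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def gaussian_tail(x, mu, sigma):
    """P[X > x] for general X ~ N(mu, sigma^2), via standardisation."""
    return qfunc((x - mu) / sigma)

assert abs(qfunc(0) - 0.5) < 1e-12        # half the mass lies above the mean
assert abs(qfunc(1.0) - 0.15866) < 1e-4   # the familiar one-sigma tail
```

This is the same `qfunc` needed later for the PAM and PSK error-probability expressions.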
  
If a WSS process $X(t)$ passes through an LTI system with impulse response $h(t)$ and frequency response $H(f)$, the output $Y(t)=\int_{-\infty}^\infty X(\tau)h(t-\tau)d\tau$ is also a WSS process. The output process has mean $m_Y=m_X\int_{-\infty}^\infty h(t)dt=m_XH(0)$, autocorrelation $R_Y=R_X\star h\star \tilde{h}$ (where $\tilde{h}(t)=h(-t)$) and PSD $\mathcal{S}_Y(f)=\mathcal{S}_X(f)|H(f)|^2$. $X(t)$ is a Gaussian random process if $\{X(t_1),X(t_2),...,X(t_n)\}$ has a jointly Gaussian PDF, making $X(t_k)$ a Gaussian RV for any fixed $t_k\in\mathbb{R}$. $X(t)$ is a white noise process if its PSD $\mathcal{S}_X(f)$ is constant for all frequencies. The power content of a white noise process is $P_X=\int_{-\infty}^\infty\mathcal{S}_X(f)df=\infty$, so it is not physically realisable. A Gaussian process into an LTI system produces a Gaussian process, but a white input does not necessarily produce a white output.
  
The AWGN process can be modelled as a random process that is:
  
  * WSS, with $R_N(\tau)=\mathcal{F}^{-1}[\mathcal{S}_N(f)]=\frac{N_0}{2}\delta(\tau)$
  * Zero mean
  * Gaussian, with $N(t)\sim\mathcal{N}(0,\frac{N_0}{2})$
  * White, with $\mathcal{S}_N(f)=\frac{N_0}{2}$
  
==== Demodulation ====
  
The first thing a receiver does is project the received waveform onto a vector $\mathbf{r}=(r_1,r_2,...,r_N)$ in the signal space. The detector then decides which of the possible signal waveforms was transmitted. Any signal can be written as: $$s_m(t)=\sum_{k=1}^Ns_{mk}\phi_k(t)$$ This is added to noise to produce the received signal, which is projected into the signal space. In projecting the signal, we get:
  
$$r_{mj}=\langle r_m(t),\phi_j\rangle=s_{mj}+n_j$$ Assuming AWGN, $n_j\sim\mathcal{N}(0,\frac{N_0}{2})$, and $E\{n_in_j\}=\frac{N_0}{2}\delta_{ij}$.
  
There are two main approaches to demodulating $r(t)$:
  
  * **Correlation-type** The incoming signal is multiplied with the basis signals in parallel, then integrated to find each $r_k$.
  * **Matched-type** The received signal is convolved with the basis signals in parallel to produce each $r_k(t)=\int_0^tr(\tau)\phi_k(T-(t-\tau))d\tau$, which at $t=T$ equals $\int_0^Tr(\tau)\phi_k(\tau)d\tau$.
  
Both of these produce outputs that are equal at integer multiples of $T_s$.
  
A correlation-type demodulator uses a parallel bank of $N$ correlators which multiplies $r(t)$ with $\{\phi_k(t)\}_{k=1}^N$. The output is: $$r_k=\int_0^Tr(t)\phi_k(t)dt=s_{mk}+n_k$$ This makes the overall result: $$\mathbf{r}=\mathbf{s}_m+\mathbf{n}$$ The received signal can be expressed as: $$r(t)=\sum_{k=1}^Ns_{mk}\phi_k(t)+\sum_{k=1}^Nn_k\phi_k(t)+n'(t)=\sum_{k=1}^Nr_k\phi_k(t)+n'(t)$$ We can ignore $n'(t)=n(t)-\sum_{k=1}^Nn_k\phi_k(t)$ because $n'(t)$ and the $r_k$ are independent.
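A discrete-time sketch of a correlator bank, using an assumed two-tone orthonormal basis over $[0,T]$. Noise is omitted for clarity, so the correlator outputs recover the transmitted coefficients $s_{mk}$ exactly:

```python
import math

T, N = 1.0, 100_000  # symbol duration and integration steps (assumed)

def phi(k, t):
    """Assumed orthonormal basis over [0, T]: sqrt(2/T) * cos(2*pi*k*t/T)."""
    return math.sqrt(2 / T) * math.cos(2 * math.pi * k * t / T)

s = (0.8, -0.6)  # transmitted coefficients (s_m1, s_m2), illustrative values

def r(t):
    """Received waveform, noise omitted for clarity."""
    return s[0] * phi(1, t) + s[1] * phi(2, t)

def correlate(k):
    """Correlator output r_k = integral over [0, T] of r(t) * phi_k(t) dt."""
    dt = T / N
    return sum(r((n + 0.5) * dt) * phi(k, (n + 0.5) * dt) for n in range(N)) * dt

recovered = (correlate(1), correlate(2))
assert all(abs(a - b) < 1e-3 for a, b in zip(recovered, s))
```

With AWGN added to `r(t)`, each output would instead be $s_{mk}+n_k$ with $n_k\sim\mathcal{N}(0,N_0/2)$.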
  
A matched filter-type demodulator uses a parallel bank of $N$ linear filters with impulse responses $h_k(t)=\phi_k(T-t)$. The output is: $$r_k(t)=(r\star h_k)(t)=\int_0^Tr(\tau)\phi_k(T-t+\tau)d\tau$$ Hence, $r_k(t)$ is a Gaussian process, which when sampled at $t=T$ gives: $$r_k=\int_0^Tr(\tau)\phi_k(\tau)d\tau$$ A matched filter to a signal $s(t)$ confined to the time interval $0\leq t\leq T$ is a filter whose impulse response is $h(t)=s(T-t)$. A matched filter maximises the signal to noise ratio for a signal corrupted by AWGN. The output SNR from the filter depends on the energy of the waveform $s(t)$ but not on the detailed characteristics of $s(t)$. Sampling the output at time $t=T$ gives the signal and noise components: $$y(T)=\underbrace{\int_0^Ts(\tau)h(T-\tau)d\tau}_{y_s(T)}+\underbrace{\int_0^Tn(\tau)h(T-\tau)d\tau}_{y_n(T)}$$ We need to choose an $h(t)$ to maximise $\frac{y_s^2(T)}{E[y_n^2(T)]}$, which gives $h(t)=ks(T-t)$ for some constant $k$. This means that we need a filter response that is matched to the signal. The output Signal to Noise Ratio is $\frac{2\varepsilon_s}{N_0}$, depending only on the signal energy.
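A minimal discrete-time check that the matched filter $h[n]=s[N-1-n]$ peaks at the sampling instant with a value equal to the signal energy $\varepsilon_s$, using an assumed tone pulse and no noise:

```python
import math

# Discrete-time matched filter: h[n] = s[N-1-n]; the convolution output
# at the sampling instant equals the signal energy.
N = 64
s = [math.cos(2 * math.pi * 3 * n / N) for n in range(N)]  # assumed pulse shape
h = s[::-1]                                                # matched filter

def conv_at(i):
    """One output sample of the convolution (s * h)[i]."""
    return sum(s[k] * h[i - k] for k in range(N) if 0 <= i - k < N)

energy = sum(x * x for x in s)
peak = conv_at(N - 1)  # sampling at t = T
assert abs(peak - energy) < 1e-9
# By Cauchy-Schwarz, no other output sample exceeds the peak.
assert all(conv_at(i) <= peak + 1e-9 for i in range(2 * N - 1))
```

The peak value is the energy because the convolution at $t=T$ collapses to $\sum_k s[k]^2$, matching the claim that the output SNR depends only on $\varepsilon_s$.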
  
==== Detection ====
  
The projected signal is mapped to a point in signal space, and the noise forms spherical clouds around the constellation points due to the Gaussian components. In choosing which signal to map to, we choose the closest point. We want to minimise the overall probability of error. This is done by partitioning the signal space into $M$ nonoverlapping regions $D_1,D_2,...,D_M$. The detector design consists of choosing these partitions.
  
The prior probability is the probability that a signal was transmitted ($P(s_m\text{ transmitted})$); the posterior is the probability that a signal was transmitted given what was received ($P(s_m\text{ transmitted}|r\text{ received})$). The likelihood function is the conditional PDF $P(r\text{ received}|s_m\text{ transmitted})$. These are all related by Bayes' theorem: $$P(s_m|r)=\frac{P(r|s_m)P(s_m)}{P(r)}$$ We should note that: $$P(\text{No error}|s_m)=P(r\in D_m|s_m)=\int_{D_m}P(r|s_m)dr$$ Minimising the probability of error means maximising the probability of no error. As such, we want to construct the decision regions such that: $$D_m=\{r\in\mathbb{R}^N:P(s_m|r)\geq P(s_{m'}|r),\forall m'\neq m\}$$
  
The Maximum A Posteriori (MAP) criterion for this is: $$\hat{m}=\arg\max_mP(r|s_m)P(s_m)$$ The MAP detector is an optimal detector for minimising the probability of error.
  
The Maximum Likelihood (ML) criterion is: $$\hat{m}=\arg\max_mP(r|s_m)$$ The ML detector is optimal when all signals are equiprobable (MAP reduces to ML in that case).
  
In AWGN channels, the received vector components of $r=(r_1,r_2,...,r_N)$ are: $$r_k=s_{mk}+n_k$$ Where $n_k\sim\mathcal{N}(0,N_0/2)$ such that $r_k\sim\mathcal{N}(s_{mk},N_0/2)$. The likelihood function can be calculated to be: $$P(r|s_m)=\prod_{k=1}^NP(r_k|s_m)=\frac{1}{(\pi N_0)^{\frac{N}{2}}}\exp\left(-\frac{||r-s_m||^2}{N_0}\right)$$ The ML detection criterion for AWGN channels is given by: $$\hat{m}=\arg\max_m\frac{1}{(\pi N_0)^{\frac{N}{2}}}\exp\left(-\frac{||r-s_m||^2}{N_0}\right)$$ This can be simplified to: $$\hat{m}=\arg\min_m||r-s_m||$$ This means we decide on the $s_m$ that is closest to $r$, being minimum distance detection. The decision regions are represented graphically by the perpendicular bisectors between $s_1,s_2,...,s_M$. We can expand the distance metric for ML to: $$||r-s_m||^2=\underbrace{||r||^2}_{\text{Independent of }m}-2\langle r,s_m\rangle+\underbrace{||s_m||^2}_{\text{Energy of }m\text{-th signal, }\varepsilon_m}$$ This allows us to re-express the ML criterion as: $$\hat{m}=\arg\max_m[\langle r,s_m\rangle+\eta_m]$$ Where $\eta_m=-\frac{1}{2}\varepsilon_m$ is a bias term compensating for signal sets that have unequal energies, such as PAM.
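A sketch of minimum-distance detection over an arbitrary constellation, alongside the equivalent correlation-plus-bias form; the 4-PAM amplitudes are the standard ones, the received point is illustrative:

```python
def ml_detect(r, constellation):
    """Minimum distance detection: pick the index of the s_m closest to r."""
    def dist2(s):
        return sum((ri - si) ** 2 for ri, si in zip(r, s))
    return min(range(len(constellation)), key=lambda m: dist2(constellation[m]))

def ml_detect_bias(r, constellation):
    """Equivalent form: maximise <r, s_m> + eta_m with eta_m = -energy_m / 2."""
    def metric(s):
        inner = sum(ri * si for ri, si in zip(r, s))
        energy = sum(si * si for si in s)
        return inner - energy / 2
    return max(range(len(constellation)), key=lambda m: metric(constellation[m]))

# 4-PAM constellation (unequal energies); received point near s_2 = -1.
pam4 = [(-3,), (-1,), (1,), (3,)]
assert ml_detect((-0.8,), pam4) == ml_detect_bias((-0.8,), pam4) == 1
```

The bias term matters here precisely because the 4-PAM points have unequal energies; without it, the inner-product rule alone would favour the outer points.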
  
==== Phase mismatch ====
  
When the phase of the carrier and receiver are not synchronised, or the carrier frequency is only approximately known, there is a mismatch affecting the signal. This mismatch causes: $$r=\pm\sqrt{\frac{2}{\varepsilon_g}}\int_0^Tg^2(t)\cos(2\pi f_ct+\theta)\cos(2\pi f_ct)dt+n\approx\pm\sqrt{\frac{\varepsilon_g}{2}}\cos(\theta)+n$$ When there is a phase mismatch ($\cos(\theta)<1$), there is a loss of information. The phase mismatch causes a rotation that may lead to the projection lying in the wrong region, even in the noiseless case.
  
If the phase mismatch is unknown, changing rapidly, and small, it is ignored and treated as random noise, and an otherwise optimal detector is designed. If the mismatch is large then noncoherent demodulators are used. These work best for modulation schemes that ignore phase, such as envelope detectors for orthogonal signalling (FSK) and on-off keying (OOK). Alternatively, if the phase mismatch is unknown but fixed or varying slowly, differential modulation can be used, which modulates the phase differences rather than the absolute phase (e.g. DBPSK).
  
=== Non-coherent OOK Demodulation ===
  
The transmit signal for OOK is: $$s_m(t)=A_mg(t)\cos(2\pi f_ct);\quad m=1,2;\quad A_1=1,A_2=0$$ The received signal, after AWGN with imperfect synchronisation, is: $$r(t)=A_mg(t)\cos(2\pi f_ct+\theta)+n(t)=A_mg(t)\cos(\theta)\cos(2\pi f_ct)-A_mg(t)\sin(\theta)\sin(2\pi f_ct)+n(t)$$ The dimension of the signal space is 1, but the dimension of the received signal space is 2 due to the phase mismatch. The signal demodulator must have 2 correlators, otherwise some symbol information will be lost. The signal demodulator output is: $$r_1=\sqrt{\frac{2}{\varepsilon_g}}\int_0^T A_mg^2(t)\cos(2\pi f_ct+\theta)\cos(2\pi f_ct)dt+n_1\approx\sqrt{\frac{\varepsilon_g}{2}}A_m\cos(\theta)+n_1$$ $$r_2\approx\sqrt{\frac{\varepsilon_g}{2}}A_m\sin(\theta)+n_2$$ This rotates the signal along the signal space circle of radius $\sqrt{\frac{\varepsilon_g}{2}}$, requiring a circular decision region.
  
The optimal detector uses: $$\hat{m}=\arg\max_{m=1,2}P(s_m)P(r|s_m)=\arg\max_{m=1,2}P(s_m)\int_0^{2\pi}P(r|s_m,\theta)P(\theta)d\theta$$ The worst case has phase uncertainty uniformly distributed over $[0,2\pi)$. Assuming equiprobable symbols, we get: $$\hat{m}=\begin{cases}1,&r_1^2+r_2^2>V_T\\2,&r_1^2+r_2^2<V_T\end{cases}$$ which depends only on the envelope of the received signal (the threshold $V_T$ is expressed in terms of a Bessel function). At high SNRs: $$r_1^2+r_2^2\approx\frac{1}{2}A_m^2\varepsilon_g(\cos^2(\theta)+\sin^2(\theta))=\begin{cases}\frac{1}{2}\varepsilon_g,&m=1\\0,&m=2\end{cases}$$ which is independent of $\theta$.
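The high-SNR envelope claim is easy to verify numerically: assuming noiseless correlator outputs, $r_1^2+r_2^2$ is the same for every phase mismatch $\theta$ (pulse energy value assumed for illustration):

```python
import math

eps_g = 2.0  # pulse energy (assumed)

def envelope_sq(A_m, theta):
    """Squared envelope of the noiseless correlator outputs r_1, r_2."""
    r1 = math.sqrt(eps_g / 2) * A_m * math.cos(theta)
    r2 = math.sqrt(eps_g / 2) * A_m * math.sin(theta)
    return r1 * r1 + r2 * r2

# For "on" (A_1 = 1) the envelope is eps_g / 2 regardless of theta;
# for "off" (A_2 = 0) it is 0, so thresholding separates the symbols.
for theta in (0.0, 0.7, 2.0, 5.5):
    assert abs(envelope_sq(1, theta) - eps_g / 2) < 1e-12
    assert envelope_sq(0, theta) == 0.0
```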
  
=== Non-coherent FSK demodulation ===
  
Similarly to non-coherent OOK, we can demodulate binary FSK. That is, we demodulate against sine and cosine for each tone, and use the squared sums to decide in favour of the larger envelope. This can be extended to M-ary FSK by comparing more envelopes.
  
=== Differential Modulation ===
  
Differential MPSK modulation involves precoding of the information symbol sequence $b(n)$ into a symbol sequence $\delta(n)$ that is then input to an MPSK modulator. The DMPSK demodulator needs to invert the precoding. In MPSK each symbol value determines the actual value of the phase, but in DMPSK it determines the phase change from the previous signalling interval. This makes DMPSK a modulation with memory. Differential demodulation is used in situations where the MPSK demodulator's errors are always of the same type, determined by a fixed (or slowly varying) phase mismatch $\varphi$, the value of which we do not know (it may be zero). Differential modulation leads to better symbol error performance since it is robust against a fixed unknown phase mismatch. If the demodulator makes an error, there are bit errors over that symbol and the next one, due to the differential coding.
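A sketch of differential BPSK: bits are encoded as phase *changes*, so a fixed unknown offset $\varphi$ cancels when the demodulator compares consecutive symbols. The encoding convention (bit 1 as a $\pi$ change) is an assumption:

```python
import cmath
import math

def dbpsk_encode(bits):
    """Encode bit 1 as a pi phase change, bit 0 as no change (assumed convention)."""
    phase, symbols = 0.0, [cmath.exp(0j)]  # reference symbol first
    for b in bits:
        phase += math.pi * b
        symbols.append(cmath.exp(1j * phase))
    return symbols

def dbpsk_decode(symbols):
    """Recover bits from the phase difference of consecutive symbols."""
    bits = []
    for prev, cur in zip(symbols, symbols[1:]):
        diff = cur * prev.conjugate()  # any fixed phase offset cancels here
        bits.append(1 if diff.real < 0 else 0)
    return bits

bits = [1, 0, 1, 1, 0, 0, 1]
phi = 1.234  # fixed unknown phase mismatch
received = [sym * cmath.exp(1j * phi) for sym in dbpsk_encode(bits)]
assert dbpsk_decode(received) == bits  # decoding is unaffected by phi
```

Flipping a single received symbol in this scheme corrupts two consecutive phase differences, illustrating why one demodulator error produces bit errors over that symbol and the next.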
  
==== Error probability ====
  
No matter the demodulation, as we are making probabilistic decisions in the presence of noise, there is a chance we make an error. This occurs when the received signal is misidentified as another.
  
=== Binary PAM ===
  
For binary PAM, the probability we make an error is: $$P(err|s_2)=P(r>0|s_2)$$ We can recall that $r|s_2\sim\mathcal{N}\left(-\sqrt{\varepsilon_b},\frac{N_0}{2}\right)$, making the probability of an error: $$P(err|s_2)=Q\left(\frac{0+\sqrt{\varepsilon_b}}{\sqrt{\frac{N_0}{2}}}\right)=Q\left(\sqrt{\frac{2\varepsilon_b}{N_0}}\right)$$ Due to symmetry, the error is the same for $s_1$. The bit error probability is: $$P_b=\frac{1}{2}P(err|s_1)+\frac{1}{2}P(err|s_2)=Q\left(\sqrt{\frac{2\varepsilon_b}{N_0}}\right)$$
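Using $Q(x)=\frac{1}{2}\operatorname{erfc}(x/\sqrt{2})$, the bit error probability can be evaluated for a given $\varepsilon_b/N_0$; the SNR values below are illustrative:

```python
import math

def qfunc(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def bpsk_ber(ebn0_db):
    """P_b = Q(sqrt(2 * Eb/N0)) for binary (antipodal) PAM signalling."""
    ebn0 = 10 ** (ebn0_db / 10)
    return qfunc(math.sqrt(2 * ebn0))

assert abs(bpsk_ber(0) - 0.0786) < 1e-3  # roughly 8% bit errors at 0 dB
assert bpsk_ber(10) < 1e-5               # steep waterfall at high SNR
```

Replacing the argument with $\sqrt{\varepsilon_b/N_0}$ gives the binary orthogonal result discussed below, showing the 3 dB penalty directly.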
  
=== Binary orthogonal signals ===
  
For binary orthogonal signals, the signals are: $$s_m(t)=\sqrt{\varepsilon_b}\phi_m(t)$$ Where $\phi_1(t),\phi_2(t)$ are orthonormal. The ML detector uses: $$\hat{m}=\arg\min_m||r-s_m||$$ That is, the constellation point closest to the received point is assumed to have been transmitted. The decision statistic involves the difference of two independent Gaussian noise components, unlike the single component in BPSK, so the effective noise distribution is: $$n\sim\mathcal{N}(0,N_0)$$ The error probability is thus: $$P(err|s_1)=P(n>\sqrt{\varepsilon_b})=Q\left(\sqrt{\frac{\varepsilon_b}{N_0}}\right)$$ By symmetry $P(err|s_1)=P(err|s_2)$, so the overall bit error probability is: $$P_b=Q\left(\sqrt{\frac{\varepsilon_b}{N_0}}\right)$$ For the same energy, binary PAM has better error performance than binary orthogonal signalling. Alternatively, binary PAM can achieve the same error probability with half the energy. This is because for the same energy, the distance between constellation points is greater for 2-PAM.
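The 3 dB gap between the two binary schemes follows directly from the two formulas above; a small numerical sketch:

```python
import math

def Q(x: float) -> float:
    """Gaussian tail probability, Q(x) = 0.5*erfc(x/sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2))

eb_over_n0 = 4.0                         # Eb/N0 on a linear scale
p_pam = Q(math.sqrt(2 * eb_over_n0))     # binary antipodal (2-PAM)
p_orth = Q(math.sqrt(eb_over_n0))        # binary orthogonal
# Doubling the orthogonal scheme's energy recovers the 2-PAM performance:
p_orth_2x = Q(math.sqrt(2 * eb_over_n0))
print(p_pam, p_orth, p_orth_2x)
```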
  
=== M-PAM ===
  
For M-PAM, the signal waveforms are written as: $$s_m(t)=A_mg(t)=A_m\sqrt{\varepsilon_g}\phi(t);\quad 1\leq m\leq M,\ 0\leq t\leq T$$ Where $A_m=2m-1-M$ is the set of possible amplitudes, $\phi(t)=\frac{g(t)}{\sqrt{\varepsilon_g}}$ and the pulse energy is $\varepsilon_g$. The minimum distance between constellation points is $2\sqrt{\varepsilon_g}$. The signal demodulator's output is the random variable: $$r=\langle r(t),\phi(t)\rangle=A_m\sqrt{\varepsilon_g}+n\implies r\sim\mathcal{N}(A_m\sqrt{\varepsilon_g},N_0/2)$$ For the outer points ($m=1,M$), the error probability is: $$P(err|s_m)=Q\left(\sqrt{\frac{2\varepsilon_g}{N_0}}\right)$$ For the inner points ($m=2,...,M-1$), the error probability is: $$P(err|s_m)=P(|r-s_m|>\sqrt{\varepsilon_g})=2Q\left(\sqrt{\frac{2\varepsilon_g}{N_0}}\right)$$ The symbol error probability is: $$P_e=\frac{1}{M}\left(2Q\left(\sqrt{\frac{2\varepsilon_g}{N_0}}\right)+(M-2)\cdot2Q\left(\sqrt{\frac{2\varepsilon_g}{N_0}}\right)\right)=\frac{2(M-1)}{M}Q\left(\sqrt{\frac{2\varepsilon_g}{N_0}}\right)$$ The average symbol energy is: $$\varepsilon_{avg}=\frac{1}{M}\sum_{m=1}^MA_m^2\varepsilon_g=\frac{\varepsilon_g(M^2-1)}{3}$$ The average bit energy is useful for comparison with other modulation schemes: $$\varepsilon_b=\frac{\varepsilon_{avg}}{\log_2M}=\frac{\varepsilon_g(M^2-1)}{3\log_2M}$$ After substitution, the symbol error probability is: $$P_e=\frac{2(M-1)}{M}Q\left(\sqrt{\frac{6\log_2M\varepsilon_b}{(M^2-1)N_0}}\right)$$
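The final expression is easy to tabulate; a short sketch of the average-$\varepsilon_b$ form above shows the symbol error rate rising with $M$ at a fixed $\varepsilon_b/N_0$:

```python
import math

def Q(x: float) -> float:
    """Gaussian tail probability, Q(x) = 0.5*erfc(x/sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def mpam_symbol_error(M: int, eb_over_n0: float) -> float:
    """Symbol error probability of baseband M-PAM at a given average Eb/N0."""
    return 2 * (M - 1) / M * Q(math.sqrt(6 * math.log2(M) * eb_over_n0 / (M ** 2 - 1)))

for M in (2, 4, 8, 16):
    print(M, mpam_symbol_error(M, 10.0))  # Eb/N0 = 10 (linear), i.e. 10 dB
```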
  
M-PAM bandpass signals are: $$s_m(t)=A_mg(t)\cos(2\pi f_ct)=A_m\sqrt{\frac{\varepsilon_g}{2}}\phi(t);\quad 1\leq m\leq M,\ 0\leq t\leq T$$ The minimum distance between constellation points is $d_{min}=\sqrt{2\varepsilon_g}$. The symbol error probability is: $$P_e=\frac{2(M-1)}{M}Q\left(\sqrt{\frac{\varepsilon_g}{N_0}}\right)$$ The average symbol and bit energies are: $$\varepsilon_{avg}=\frac{\varepsilon_g(M^2-1)}{6}$$ $$\varepsilon_b=\frac{\varepsilon_g(M^2-1)}{6\log_2M}$$ The symbol error probability becomes: $$P_e=\frac{2(M-1)}{M}Q\left(\sqrt{\frac{6\log_2M\varepsilon_b}{(M^2-1)N_0}}\right)$$ This is the same performance as M-ary baseband PAM.
  
=== M-PSK ===
  
In M-ary PSK, all M signals have the same energy, so all constellation points lie on a circle. M-ary PSK signals can be represented as: $$s_m(t)=g(t)\cos(\theta_m)\cos(2\pi f_ct)-g(t)\sin(\theta_m)\sin(2\pi f_ct)$$ Where $g(t)$ is the pulse shape and $\theta_m=\frac{2\pi}{M}(m-1)$ is the phase conveying the information. This gives the orthonormal basis: $$\{\phi_1(t),\phi_2(t)\}=\left\{\sqrt{\frac{2}{\varepsilon_g}}g(t)\cos(2\pi f_ct),-\sqrt{\frac{2}{\varepsilon_g}}g(t)\sin(2\pi f_ct)\right\}$$ This assumes $f_c>>\frac{1}{T}$. For QPSK, the probability of a correct decision is: $$P(correct|s_1)=P(r_1>0,r_2>0|s_1)=(1-P_{BPSK})^2$$ Hence the symbol error probability is: $$P(err|s_1)=1-(1-P_{BPSK})^2=2P_{BPSK}-P_{BPSK}^2=2Q\left(\sqrt{\frac{2\varepsilon_b}{N_0}}\right)-\left(Q\left(\sqrt{\frac{2\varepsilon_b}{N_0}}\right)\right)^2$$ For $M\geq8$, the symbol error probability calculation requires a transformation to polar coordinates and can only be approximated. The bit error probability is the average proportion of erroneous bits per symbol. This depends on the mapping from the k-bit codeword to the symbol. At high SNR, most symbol errors involve erroneous detection of the transmitted signal as a nearest-neighbour signal. Therefore fewer bit errors occur by ensuring that neighbouring symbols differ by only one bit, which is Gray coding. At high SNR, the bit error probability can be approximated as the probability of picking a neighbouring signal, so assuming Gray coding: $$P_b\approx\frac{1}{k}P_e\approx\frac{P_e}{\log_2M}$$ Where $k=\log_2M$ is the number of bits per symbol.
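Gray codewords can be generated with the standard binary-reflected construction $g(n)=n\oplus(n\gg1)$; a quick sketch confirms that neighbouring 8-PSK symbols (including the wrap-around on the circle) differ in exactly one bit:

```python
def gray(n: int) -> int:
    """Binary-reflected Gray code of n: consecutive values differ in one bit."""
    return n ^ (n >> 1)

M = 8  # 8-PSK: neighbouring phases get codewords differing in a single bit
codes = [gray(m) for m in range(M)]
for m in range(M):
    diff = codes[m] ^ codes[(m + 1) % M]  # wrap around the circle
    assert bin(diff).count("1") == 1

print([format(c, "03b") for c in codes])
# ['000', '001', '011', '010', '110', '111', '101', '100']
```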
  
=== QAM ===
  
QAM is the most widely used constellation, due to its 2D scaling and efficiency. QAM allows signals to have different amplitudes and impresses information bits on each of the quadrature carriers. The important performance parameters are the average energy and the minimum distance in the signal constellation. An orthonormal basis for the signal space is: $$\{\phi_1(t),\phi_2(t)\}=\left\{\sqrt{\frac{2}{\varepsilon_g}}g(t)\cos(2\pi f_ct),-\sqrt{\frac{2}{\varepsilon_g}}g(t)\sin(2\pi f_ct)\right\}$$ Many different QAM constellations are possible, so we need to select the one with the greatest minimum distance and efficiency while retaining ease of implementation. In general rectangular QAM is most frequently used, due to its simplicity. When $M=2^k$, with $k$ even, M-ary QAM is equivalent to two $\sqrt{M}$-ary PAM signals on quadrature carriers, each with half the equivalent QAM power. If $P_{\sqrt{M}}$ is the symbol error probability of $\sqrt{M}$-ary PAM, then similarly to the case of QPSK, the probability of a correct decision is given by: $$P(\text{no error})=(1-P_{\sqrt{M}})^2\implies P_e=1-(1-P_{\sqrt{M}})^2$$
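This relation can be computed directly. The per-branch error below uses the standard square-QAM expression $P_{\sqrt{M}}=2(1-1/\sqrt{M})\,Q\!\left(\sqrt{3\varepsilon_s/((M-1)N_0)}\right)$, which is an assumption beyond the text, combined with the $P_e=1-(1-P_{\sqrt{M}})^2$ relation above:

```python
import math

def Q(x: float) -> float:
    """Gaussian tail probability, Q(x) = 0.5*erfc(x/sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def qam_symbol_error(M: int, eb_over_n0: float) -> float:
    """Symbol error probability of square M-QAM via two sqrt(M)-ary PAM decisions."""
    sqrt_m = math.isqrt(M)
    assert sqrt_m * sqrt_m == M, "square constellations only"
    es_over_n0 = math.log2(M) * eb_over_n0          # Es = log2(M) * Eb
    p_pam = 2 * (1 - 1 / sqrt_m) * Q(math.sqrt(3 * es_over_n0 / (M - 1)))
    return 1 - (1 - p_pam) ** 2

for M in (4, 16, 64):
    print(M, qam_symbol_error(M, 10.0))
```

For $M=4$ this reduces exactly to the QPSK expression $2P_{BPSK}-P_{BPSK}^2$ from the previous section.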
  
=== Orthogonal signalling ===
  
Orthogonal signalling uses the waveforms: $$s_m(t)=\sqrt{\varepsilon_s}\phi_m(t),\quad 1\leq m\leq M$$ The minimum distance between signals is: $$d_{min}=\sqrt{2\varepsilon_s}=\sqrt{2\log_2(M)\varepsilon_b}$$ The ML criterion can be expressed as: $$\hat{m}=\arg\min_m||r-s_m||=\arg\max_m[\langle r,s_m\rangle+\eta_m]$$ Where $\eta_m=-\frac{1}{2}\varepsilon_m$ is a bias term that compensates for signal sets with unequal energies, e.g. PAM. Since orthogonal signals have equal energies, the ML rule reduces to selecting the greatest demodulator output: if $r_m>r_k\ \forall k\neq m$, then $s_m$ is decided to have been transmitted.
  
If $s_m$ is transmitted, then: $$r_m\sim\mathcal{N}(\sqrt{\varepsilon_s},N_0/2)$$ $$r_k\sim\mathcal{N}(0,N_0/2),\forall k\neq m$$ The error probability is: $$P_e=1-\frac{1}{\sqrt{\pi N_0}}\int_{-\infty}^\infty\left[1-Q\left(\frac{r_m}{\sqrt{N_0/2}}\right)\right]^{M-1}\exp\left(-\frac{(r_m-\sqrt{\varepsilon_s})^2}{N_0}\right)dr_m$$ The bit error probability is: $$P_b=2^{k-1}\frac{P_e}{M-1}=\frac{MP_e}{2(M-1)}\approx\frac{P_e}{2}$$ The error probability $P_e$ is upper bounded by the union bound of the $M-1$ pairwise error events: $$P_e\leq(M-1)P_b^{bin}=(M-1)Q\left(\sqrt{\frac{\varepsilon_s}{N_0}}\right)<Me^{-\frac{\varepsilon_s}{2N_0}}=2^ke^{-\frac{k\varepsilon_b}{2N_0}}$$ $$P_e<e^{-k\frac{(\varepsilon_b/N_0-2\ln2)}{2}}$$ We can note that the error probability can be made arbitrarily small as $k\to\infty$, provided that $\varepsilon_b/N_0>2\ln2\approx1.39$ ($1.42\text{ dB}$). A tighter upper bound is given by: $$P_e<2e^{-k(\sqrt{\varepsilon_b/N_0}-\sqrt{\ln2})^2}$$ This bound leads to the Shannon limit for the AWGN channel, and is valid for $\ln2\leq\varepsilon_b/N_0\leq4\ln2$.
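The exact integral can be evaluated numerically and compared with the union bound; this is a sketch using the trapezoid rule, normalised so that $N_0=1$ (hence the `es_over_n0` argument plays the role of $\varepsilon_s$):

```python
import math

def Q(x: float) -> float:
    """Gaussian tail probability, Q(x) = 0.5*erfc(x/sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def pe_orthogonal_exact(M: int, es_over_n0: float, n: int = 4000) -> float:
    """Trapezoid-rule evaluation of the exact P_e integral from the notes (N0 = 1)."""
    n0, es = 1.0, es_over_n0
    sigma = math.sqrt(n0 / 2)
    # the integrand is negligible more than ~8 sigma away from sqrt(es)
    lo, hi = math.sqrt(es) - 8 * sigma, math.sqrt(es) + 8 * sigma
    h = (hi - lo) / n
    acc = 0.0
    for i in range(n + 1):
        r = lo + i * h
        f = (1 - Q(r / sigma)) ** (M - 1) * math.exp(-(r - math.sqrt(es)) ** 2 / n0)
        acc += f * (0.5 if i in (0, n) else 1.0)
    return 1 - acc * h / math.sqrt(math.pi * n0)

M, es_over_n0 = 4, 6.0
exact = pe_orthogonal_exact(M, es_over_n0)
union = (M - 1) * Q(math.sqrt(es_over_n0))
print(exact, union)  # the union bound slightly overestimates P_e
```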
  
====== Efficiency ======
  
Spectral efficiency is the ratio of bitrate $R$ to bandwidth $W$. Bandwidth-limited schemes are QAM, PAM, PSK and DPSK, where $R/W>1$; increasing the number of bits per symbol improves their efficiency. Power-limited schemes are orthogonal signals, where $R/W<1$; increasing the number of bits per symbol decreases their efficiency.
  
The bandwidth of a pulse is the reciprocal of its length: $$W=\frac{1}{T}$$ This is the bandwidth of the centre lobe of the sinc function, the Fourier transform of the rectangular pulse. The Nyquist bandwidth applies to digital systems of symbol period $T$, for which the minimum usable bandwidth is $\frac{1}{2T}$.
  
The spectral efficiency of a modulation scheme is defined as: $$\nu=\frac{R_b}{W}$$ Where $R_b=\frac{k}{T}$ is the bitrate and $W$ is the bandwidth required. The spectral efficiency is a performance indicator for fundamental comparison of modulation schemes with respect to power and bandwidth usage.
  
Baseband PAM has a bandwidth of: $$B=\frac{1}{2T}$$ Bandpass PAM and QAM have a bandwidth of: $$B=2W=2\frac{1}{2T}=\frac{1}{T}$$ QAM uses the same frequencies as PAM, so has the same bandwidth. PSK has a bandwidth of: $$B=\frac{1}{T}$$ This is because PSK is a subset of QAM where the quadrature amplitudes are $\cos(\theta_m)$ and $\sin(\theta_m)$ respectively.
  
Baseband PAM has a spectral efficiency of: $$\nu=\frac{R_b}{1/2T}=2k=2\log_2M\to\infty\text{ as }M\to\infty$$ Bandpass PAM has a spectral efficiency of: $$\nu=\frac{R_b}{1/T}=k=\log_2M\to\infty\text{ as }M\to\infty$$ M-ary PSK and M-ary QAM likewise have a spectral efficiency of: $$\nu=\frac{R_b}{1/T}=k=\log_2M\to\infty\text{ as }M\to\infty$$
  
For the orthogonal signalling methods Pulse Position Modulation (PPM) and Frequency Shift Keying (FSK), the spectral efficiency is: $$\nu=\frac{R_b}{\frac{M}{2T}}=\frac{2k}{M}=\frac{2\log_2M}{M}\to0\text{ as }M\to\infty$$ For PPM, the pulse duration is $T/M$, requiring $M$ times the PAM bandwidth. For FSK, the minimum frequency separation is $\frac{1}{2T}$.
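The opposite trends of the two families can be tabulated; a small sketch of the two efficiency formulas above:

```python
import math

def nu_bandpass_qam(M: int) -> float:
    """Bandpass PAM/PSK/QAM: nu = log2(M), growing with M."""
    return math.log2(M)

def nu_orthogonal(M: int) -> float:
    """PPM/FSK orthogonal signalling: nu = 2*log2(M)/M, shrinking with M."""
    return 2 * math.log2(M) / M

for M in (2, 4, 16, 64):
    print(M, nu_bandpass_qam(M), nu_orthogonal(M))
```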
  
For PAM/QAM/PSK, increasing M gives more bandwidth efficiency and less power efficiency; these are appropriate for bandwidth-limited channels with plenty of power. For PPM/FSK, increasing M gives less bandwidth efficiency and more power efficiency; these are appropriate for power-limited channels with plenty of bandwidth. Theoretically the error probability can be made arbitrarily small as long as $\text{SNR}/\text{bit}>-1.6\text{ dB}$, but this would require infinite bandwidth.
  
====== Bandlimited communications ======
  
Channels act as LTI bandpass filters, removing all signal components outside of a band. This can distort the output signal. Even without noise, the received signal can lie outside the transmitted signal space.
  
Ideally channels would have infinite bandwidth with a flat amplitude response and a linear phase response. In practice channels have finite bandwidth, non-flat amplitude responses and nonlinear phase responses. Signal distortion in amplitude and phase due to bandwidth limitations of the channel results in Inter-Symbol Interference (ISI).
  
When the signal $s(t)=\sum_{n=0}^{\infty}I_ng_T(t-nT)$, with symbol rate $R=1/T$ and symbol sequence $\{I_n\}$, is transmitted through a bandlimited channel, the received signal is: $$r(t)=\sum_{n=0}^\infty I_nh(t-nT)+z(t)$$ Where $h(t)=(g_T\star c)(t)=\int_{-\infty}^\infty g_T(\tau)c(t-\tau)d\tau$ and $z(t)$ is the AWGN. The signal demodulator output is: $$y(t)=\sum_{n=0}^\infty I_nx(t-nT)+\nu(t)$$ Where $x(t)=(g_T\star c\star g_R)(t)$ and $\nu(t)=(z\star g_R)(t)$ is the filtered noise. The receive filter output is sampled at a rate of $1/T$, resulting in: $$y(kT)=\sum_{n=0}^\infty I_nx((k-n)T)+\nu(kT)=I_kx(0)+\underbrace{\sum_{n=0,n\neq k}^\infty I_nx((k-n)T)}_{\text{ISI}}+\nu(kT)$$ If the pulse shaping functions can be designed such that the overall response $x$ is a sinc function, then when sampling at integer multiples of the period, the original symbols can be recovered without ISI.
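The sinc choice can be checked directly: sampling $x(t)=\text{sinc}(t/T)$ at $t=kT$ gives 1 at $k=0$ and (numerically) zero elsewhere, so every ISI term in the sum above vanishes. A minimal sketch:

```python
import math

def sinc(x: float) -> float:
    """Normalised sinc, sin(pi*x)/(pi*x), with the removable singularity handled."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

T = 1.0
# x(t) = sinc(t/T) sampled at integer multiples of T: 1 at t = 0, ~0 elsewhere,
# so y(kT) = I_k * x(0) + noise, with no contribution from other symbols.
samples = [sinc(k * T / T) for k in range(-5, 6)]
print(samples)
```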
  
The Nyquist Pulse-Shaping Criterion states that a necessary and sufficient condition for $x(t)$ to satisfy $$x(kT)=\begin{cases}1,&k=0\\0,&k\neq0\end{cases}$$ is that its frequency response $X(f)$ satisfies: $$\sum_{m=-\infty}^{\infty}X\left(f+\frac{m}{T}\right)=T$$ That is, the sum of all shifted versions of the frequency response is a flat value. As a result, zero ISI cannot be achieved when $R>2W$, with $R$ being the symbol transmission rate and $W$ being the baseband channel bandwidth. When $R=2W$, the only choice for $X(f)$ is the rectangular pulse shape, corresponding to the sinc pulse in the time domain. When $R<2W$, there are many choices for $X(f)$ that satisfy the criterion. To avoid ISI, the maximum symbol rate is $R=2W$, making the shortest signalling interval $T=\frac{1}{2W}$.
  
A popular choice when $R<2W$ is the raised cosine spectrum: $$X_{rc}(f)=\begin{cases}T,&0\leq|f|\leq\frac{1-\beta}{2T}\\\frac{T}{2}\left(1+\cos\left[\frac{\pi T}{\beta}\left(|f|-\frac{1-\beta}{2T}\right)\right]\right),&\frac{1-\beta}{2T}\leq|f|\leq\frac{1+\beta}{2T}\\0,&|f|>\frac{1+\beta}{2T}\end{cases}$$ Where $\beta\in[0,1]$ is the roll-off factor and the pulse bandwidth is $(1+\beta)\frac{1}{2T}$. In the time domain, the signal corresponding to the raised cosine spectrum is: $$x_{rc}(t)=\text{sinc}\left(\frac{t}{T}\right)\frac{\cos(\pi\beta t/T)}{1-(2\beta t/T)^2}$$ The side lobes of the pulse with $\beta=1$ are smaller than the side lobes of the sinc pulse ($\beta=0$). Smaller side lobes are better where there are timing errors, because they lead to smaller ISI components. We call $2W$ the Nyquist frequency and see that $\beta=0$ corresponds to a symbol rate at the Nyquist frequency, with the rectangular spectrum of the sinc pulse. For a larger value of $\beta$ we need bandwidth beyond the Nyquist frequency: $$W=\frac{1+\beta}{2T}$$ We can say that $\beta=0.5$ involves an excess bandwidth of $50\%$; similarly $\beta=1$ involves an excess bandwidth of $100\%$. Thus choosing $\beta$ is a tradeoff between robustness against timing errors and transmission speed.
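The zero-crossing and tail-decay properties can be verified numerically; this sketch implements the time-domain formula above, handling the removable singularity at $t=\pm T/(2\beta)$ via its limit:

```python
import math

def x_rc(t: float, T: float = 1.0, beta: float = 0.5) -> float:
    """Raised-cosine pulse in the time domain."""
    if t == 0:
        return 1.0
    if beta > 0 and abs(abs(t) - T / (2 * beta)) < 1e-12:
        # limit of the formula at t = ±T/(2*beta), where the denominator vanishes
        return (math.pi / 4) * math.sin(math.pi * t / T) / (math.pi * t / T)
    s = math.sin(math.pi * t / T) / (math.pi * t / T)
    return s * math.cos(math.pi * beta * t / T) / (1 - (2 * beta * t / T) ** 2)

# Zero ISI: the pulse vanishes at every nonzero integer multiple of T
print([round(x_rc(float(k)), 12) for k in range(-3, 4)])
# Larger beta -> smaller tails (less ISI under timing errors), at the cost of bandwidth
print(abs(x_rc(2.3, beta=1.0)) < abs(x_rc(2.3, beta=0.0)))  # True
```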
  
The overall response is $X(f)=G_T(f)C(f)G_R(f)$, which needs to satisfy the Nyquist criterion for zero ISI. The raised cosine spectrum is one possible choice for $X(f)$. $G_T(f)$ determines the transmitted pulse shape, $C(f)$ is the channel (which we cannot control) and $G_R(f)$ is the receive filter. We can design the filters to compensate for the channel at the transmitter, giving: $$|G_T(f)|=\frac{|X(f)|^\frac{1}{2}}{|C(f)|}$$ $$|G_R(f)|=|X(f)|^\frac{1}{2}$$ Alternatively we can compensate for the channel at both the transmitter and receiver: $$|G_T(f)|=\frac{|X(f)|^\frac{1}{2}}{|C(f)|^\frac{1}{2}}$$ $$|G_R(f)|=\frac{|X(f)|^\frac{1}{2}}{|C(f)|^\frac{1}{2}}$$ The second design is optimal in terms of error probability for Gaussian noise. Both designs use $P(f)=X(f)^\frac{1}{2}$, called the Square Root Raised Cosine (SRRC), abbreviated to RRC. For baseband channels, the Nyquist criterion says that zero ISI can be achieved if the symbol transmission rate is $R\leq2W$, where $W$ is the channel bandwidth. For bandpass channels with bandwidth $W$, the previous analysis still holds if $s(t),c(t),z(t),r(t),y(t)$ are the complex baseband equivalents. In this case the Nyquist criterion says that zero ISI can be achieved if $R\leq W$ (due to the different way of defining bandwidth).
  
For bandpass channels, the Nyquist frequency equals the channel bandwidth $W$, so $\beta=0$ corresponds to a symbol rate at the Nyquist frequency with the rectangular spectrum of the sinc pulse. For a larger $\beta$ we need bandwidth beyond the Nyquist frequency, hence for bandpass channels we have: $$W=\frac{1+\beta}{T}$$ Again, choosing the value of $\beta$ is a tradeoff between robustness against timing errors and transmission speed.
  
-An eye diagram is constructed by placing all the symbols into the same +An eye diagram is constructed by placing all the symbols into the same window of $T$. At position 0, all the symbols resolve to 0 or 1, but vary in between integer positions. A low beta causes a wide spread of symbols where a high beta clusters them more. As such, it is easy to see the effect of timing errors on the diagram. The plot carves out a region resembling an eye, giving the plot its name. As the eye closes, ISI increases and as it opens ISI is decreasing. Other diagnostics are the noise margin (eye opening) and sensitivity to timing error (eye width). Channel distortions lead to a smaller eye opening width and a smaller eye opening at $t=0$. Eye diagrams can easily be extended to multiple levels
-window of $T$. At position 0, all the symbols resolve to 0 or 1, but + 
-vary in between integer positions. A low beta causes a wide spread of +====== Equalisation ====== 
-symbols where a high beta clusters them more. As such, it is easy to + 
-see the effect of timing errors on the diagram. The plot carves out a +Channel equalisation are useful in detecting data in the presence of ISI in many types of channels The sampled output of a receive filter is: $$y_k=I_kx_0+\sum_{n=0,n\neq k}^\infty I_nx_{k-n}+\nu_k$$ This is the equivalent Discrete Time model for ISI. Here $\nu_k$ is filtered non-white noise, to whiten it we pass $y_k$ through a noise whitening filter. The whitening filter's output can be written as: $$u_k=f_0I_k+\sum_{n=1}^\infty f_nI_{k-n}+\eta_k$$ Where $\eta_k$ is white noise and $f_n$ are the filter coefficients of the cascade of filters used (transmitter, channel, receiver, whitening). In practice, we assume that ISI only affects a finite number of neighbouring signals, resulting in using a simpler FIR model. $$u_k=f_0I_k+\sum_{n=1}^L f_nI_{k-n}+\eta_k$$ Where $L$ is the channel memory. This is then fed to an equaliser, which aims to recover the transmitted symbol sequence $\{I_k\}$ from the received sequence of values $\{u_k\}$. This is different to "symbol-by-symbol" detection. A given sequence can be decoded to be the sequence that maximises the probability (MAP detection). 
-region resembling an eye, giving the plot its name. As the eye closes, + 
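The finite-memory FIR model lends itself to direct simulation (a sketch; the tap values $f_n$ and the noise level are assumed for illustration):

```python
import numpy as np

def isi_channel(symbols, f, noise_std, rng):
    """Discrete-time FIR ISI model: u_k = sum_{n=0}^{L} f_n I_{k-n} + eta_k."""
    u = np.convolve(symbols, f)[:len(symbols)]   # truncate the convolution tail
    return u + noise_std * rng.standard_normal(len(symbols))

rng = np.random.default_rng(1)
f = np.array([1.0, 0.5, 0.2])            # f_0..f_L, channel memory L = 2
I = 2.0 * rng.integers(0, 2, 1000) - 1   # BPSK symbols +/-1
u = isi_channel(I, f, noise_std=0.1, rng=rng)
```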
If the channel memory is $L$, the ML sequence detector simplifies to a product of likelihood values: $$(\hat{I}_0,\hat{I}_1,...,\hat{I}_K)=\arg\max_{i_0,...,i_K}\prod_{k=0}^Kp(u_k|I_{k-L}=i_{k-L},...,I_k=i_k)$$ Which can be re-expressed in terms of log-likelihoods as: $$(\hat{I}_0,\hat{I}_1,...,\hat{I}_K)=\arg\max_{i_0,...,i_K}\sum_{k=0}^K\ln(p(u_k|I_{k-L}=i_{k-L},...,I_k=i_k))$$ If $\eta_k\sim\mathcal{N}(0,\sigma^2)$, then $u_k=\sum_{n=0}^Lf_nI_{k-n}+\eta_k$ has distribution $u_k\sim\mathcal{N}(\sum_{n=0}^Lf_nI_{k-n},\sigma^2)$ and the ML detector is given by: $$(\hat{I}_0,\hat{I}_1,...,\hat{I}_K)=\arg\max_{i_0,...,i_K}\sum_{k=0}^K\left[\ln\left(\frac{1}{\sqrt{2\pi\sigma^2}}\right)-\frac{(u_k-\sum_{n=0}^Lf_ni_{k-n})^2}{2\sigma^2}\right]=\arg\min_{i_0,...,i_K}\sum_{k=0}^K\left(u_k-\sum_{n=0}^Lf_ni_{k-n}\right)^2$$

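For a short block, this minimisation can be carried out by exhaustive search over all candidate sequences (for illustration only: the cost is exponential in block length, which is exactly what the Viterbi algorithm avoids; taps and noise level are assumed):

```python
import itertools
import numpy as np

def ml_sequence(u, f, alphabet=(-1, 1)):
    """Brute-force ML sequence detection under the Gaussian FIR model:
    return the sequence minimising sum_k (u_k - sum_n f_n i_{k-n})^2."""
    best, best_metric = None, np.inf
    for cand in itertools.product(alphabet, repeat=len(u)):
        r = u - np.convolve(cand, f)[:len(u)]
        metric = r @ r
        if metric < best_metric:
            best, best_metric = cand, metric
    return np.array(best)

f = np.array([1.0, 0.4])                 # assumed channel, memory L = 1
I = np.array([1, -1, -1, 1, 1, -1])
rng = np.random.default_rng(2)
u = np.convolve(I, f)[:len(I)] + 0.05 * rng.standard_normal(len(I))
I_hat = ml_sequence(u, f)
```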
===== Viterbi Algorithm =====

The Viterbi algorithm finds the shortest (or longest) path in a trellis. The main idea is to incrementally calculate the shortest path length along the way.

We can construct a trellis where the branch lengths are the log-likelihood values $\ln(p(u_k|I_{k-L}=i_{k-L},...,I_k=i_k))$. The most likely transmitted sequence is the longest path through the trellis. In general, for M-ary signalling the trellis diagram has $M^L$ states $S_k=(I_{k-1},I_{k-2},...,I_{k-L})$ at a given time. For each new received signal $u_k$, the Viterbi algorithm computes $M^{L+1}$ branch metrics $\ln(p(u_k|I_{k-L}=i_{k-L},...,I_k=i_k))$ and updates the surviving path metrics. A Viterbi ML detector produces the ML sequence. For large $L$, the implementation of the Viterbi algorithm can be computationally intensive. Because of this, suboptimal approaches such as linear equalisation are used.

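A minimal Viterbi sketch for binary signalling over a channel with memory $L=1$, using the squared-error branch metrics from the Gaussian model above (channel taps assumed; both initial states start at metric zero since $I_{-1}$ is unknown):

```python
import numpy as np

def viterbi(u, f, alphabet=(-1, 1)):
    """ML sequence detection for u_k = f0*i_k + f1*i_{k-1} + noise.
    State at time k is the previous symbol; minimising the total
    squared error maximises the Gaussian log-likelihood."""
    f0, f1 = f
    states = list(alphabet)
    metric = {s: 0.0 for s in states}      # accumulated path metrics
    back = []                              # survivor pointers per step
    for uk in u:
        new_metric, step = {}, {}
        for s in states:                   # s = candidate symbol i_k
            prev = min(states, key=lambda p: metric[p] + (uk - f0*s - f1*p)**2)
            new_metric[s] = metric[prev] + (uk - f0*s - f1*prev)**2
            step[s] = prev
        metric, back = new_metric, back + [step]
    s = min(states, key=metric.get)        # best final state
    seq = [s]
    for step in reversed(back[1:]):        # trace survivors backwards
        s = step[s]
        seq.append(s)
    return np.array(seq[::-1])

f = (1.0, 0.5)                             # assumed taps f_0, f_1
I = np.array([1, -1, 1, 1, -1, -1, 1])
u = np.convolve(I, f)[:len(I)]             # noiseless for the demo
I_hat = viterbi(u, f)
```

For memory $L$ the states become $M^L$ tuples of past symbols; the per-step work is the $M^{L+1}$ branch metrics, independent of block length.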
===== Linear equalisation =====

A linear equaliser tries to invert the FIR model of the channel. A zero-forcing equaliser is designed so that the cascade of the channel and the equaliser is the identity (up to a delay), forcing the ISI to zero. The equaliser's output is: $$\hat{I}_k=\sum_{j=-K}^Kc_ju_{k-j}$$ Where the $c_j$ are the $2K+1\geq L$ filter tap coefficients of the equaliser, designed so that: $$C(z)\cdot F(z)=1$$ Linear equalisers have complexity that grows linearly with channel memory $L$, as opposed to the exponential growth of the Viterbi algorithm ($M^L$).

Given input $u_k=f_0I_k+\sum_{n=1}^Lf_nI_{k-n}+\eta_k$, the output of the linear filter equaliser is: $$\hat{I}_k=\sum_{j=-K}^Kc_ju_{k-j}=q_0I_k+\sum_{n\neq0}q_nI_{k-n}+\sum_{j=-K}^Kc_j\eta_{k-j}$$ Where $q_n=(f\star c)_n$. To satisfy the zero-forcing criterion, choose the tap coefficients $c_j$ such that: $$q_n=\sum_{j=-K}^Kc_jf_{n-j}=\begin{cases}1,&n=0\\0,&n=\pm1,\pm2,...,\pm K\end{cases}$$ In general, some $q_n$ in the range $n=K+1,...,K+L$ will still be non-zero due to the finite filter length.

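The zero-forcing conditions are $2K+1$ linear equations in the $2K+1$ taps, so they can be solved directly (a sketch with assumed channel taps; note the residual ISI beyond $|n|\leq K$):

```python
import numpy as np

def zf_taps(f, K):
    """Solve q_n = sum_j c_j f_{n-j} = delta_n for n = -K..K.
    Row r of A corresponds to n = r - K, column c to j = c - K."""
    L = len(f) - 1
    A = np.zeros((2*K + 1, 2*K + 1))
    for r in range(2*K + 1):
        for col in range(2*K + 1):
            if 0 <= r - col <= L:          # n - j = r - col
                A[r, col] = f[r - col]
    e = np.zeros(2*K + 1)
    e[K] = 1.0                             # q_0 = 1
    return np.linalg.solve(A, e)

f = np.array([1.0, 0.5, 0.2])              # assumed channel, L = 2
K = 5
c = zf_taps(f, K)
q = np.convolve(f, c)                      # cascade response, n = -K..K+L
```

Here `q[K]` is $q_0\approx1$ and the entries for $n=\pm1,...,\pm K$ are the forced zeros; the last $L$ entries are the unforced residual ISI.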
As the value of $K$ increases, the error probability decreases before levelling out. A large amount of noise causes the probability to level out at a smaller value of $K$. An infinite number of filter taps would be able to invert the channel filter exactly, but in the presence of noise the amplification of noise by many taps damages the signal.

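The noise amplification can be quantified: the equaliser output noise power is $\sigma^2\sum_j c_j^2$, and for a channel with a near-null this sum keeps growing as taps are added (illustrative channel, not from the notes; the inverse taps come from long division of $1/F(z)$):

```python
import numpy as np

def inverse_taps(f, n_taps):
    """First n_taps coefficients of the causal inverse 1/F(z),
    found by long division: f_0 c_k = -sum_{n>=1} f_n c_{k-n}."""
    c = np.zeros(n_taps)
    c[0] = 1.0 / f[0]
    for k in range(1, n_taps):
        acc = sum(f[n] * c[k - n] for n in range(1, min(k, len(f) - 1) + 1))
        c[k] = -acc / f[0]
    return c

f = [1.0, 0.9]                 # assumed channel with a tap near the unit circle
# Noise gain sum(c_j^2) for increasingly long equalisers:
gains = [float(np.sum(inverse_taps(f, n) ** 2)) for n in (2, 8, 32)]
```

The gains grow monotonically toward $1/(1-0.81)\approx5.3$ here; for a channel with an exact spectral null they would diverge.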
===== Minimum Mean Squared Error =====

Another linear equaliser is the Minimum Mean Squared Error (MMSE) equaliser, which minimises the MSE of the equaliser output, defined as: $$J(c)=E[|I_k-\hat{I}_k|^2]=E\left[\left|I_k-\sum_{j=-K}^Kc_ju_{k-j}\right|^2\right]$$ When the noise is small at all frequencies, the MMSE equaliser is effectively identical to a zero-forcing equaliser. When the noise is large, the MMSE equaliser does not produce the large noise amplification of the zero-forcing equaliser.
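In practice the MMSE taps can be estimated from a training sequence by least squares, which approximates the Wiener solution (an empirical sketch; channel taps, noise level and training length are all assumed):

```python
import numpy as np

def mmse_taps(u, I, K):
    """Least-squares estimate of taps c_{-K}..c_K minimising
    sum_k |I_k - sum_j c_j u_{k-j}|^2 over the training block."""
    rows = np.array([u[k - K:k + K + 1][::-1] for k in range(K, len(u) - K)])
    c, *_ = np.linalg.lstsq(rows, I[K:len(u) - K], rcond=None)
    return c                               # c[j + K] corresponds to c_j

rng = np.random.default_rng(3)
f = np.array([1.0, 0.5, 0.2])              # assumed channel, L = 2
I = 2.0 * rng.integers(0, 2, 5000) - 1     # BPSK training symbols
u = np.convolve(I, f)[:len(I)] + 0.1 * rng.standard_normal(len(I))

K = 5
c = mmse_taps(u, I, K)
I_hat = np.sign(np.convolve(u, c)[K:K + len(I)])  # equalise, then slice
```

As the noise level is reduced, these taps approach the zero-forcing solution; with noise present they trade a little residual ISI for less noise gain.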
  