notes:elen90057
===== Bandpass spectra =====
Often we are interested in spectra confined to $\pm0.5B\text{ Hz}$ of a centre frequency $f_0$, being a band. If $f_0>>B$, it is a narrow band. For real signals and systems, the spectrum at negative frequencies is redundant as $S(f)=S^*(-f)$. The signal can be in bandpass form, that is a modulation of a sinusoid at the carrier frequency. We want to work with the interesting signal, not the carrier, so we transform the bandpass to a lowpass representation. First we remove the negative frequencies, forming the pre-envelope: $$Z(f)=(1+\text{sign}(f))S(f)=\begin{cases}2S(f),&f>0\\S(0),&f=0\\0,&f<0\end{cases}$$ Then we down-shift in the frequency domain, providing the lowpass spectrum: $$S_l(f)=Z(f+f_0)$$ Note that in general $S_l(f)\neq S_l^*(-f)$, as $s_l(t)$ is not a real signal in the time domain.
To convert from the lowpass to bandpass representation, we take the inverse. First we shift the spectrum up: $$Z(f)=S_l(f-f_0)$$ Then we reflect around 0 Hz, conjugate, add to the pre-envelope and scale by 0.5: $$S(f)=\frac{Z(f)+Z^*(-f)}{2}=\frac{S_l(f-f_0)+S_l^*(-f-f_0)}{2}$$
In the time domain, the transformation is a bit more tricky. First, we note that $\text{sign}(t)\iff\frac{1}{j\pi f}\therefore\text{sign}(f)\iff\frac{-1}{j\pi t}=\frac{j}{\pi t}$ (this is related to the Hilbert transform, $H(f)=-j\text{sign}(f)$). We can find $z(t)$ from the IFT of $Z(f)$: $$z(t)=s(t)+\frac{j}{\pi t}*s(t)=s(t)+\frac{j}{\pi}\int_{-\infty}^\infty\frac{s(\tau)}{t-\tau}d\tau=s(t)+j\hat{s}(t)$$ Where $\hat{s}=s*\frac{1}{\pi t}$. Then we take the IFT of $Z(f)=S_l(f-f_0)$: $$z(t)=e^{j2\pi f_0t}s_l(t)$$ Lowpass to bandpass is: $$s(t)=\mathcal{R}(z(t))=\mathcal{R}(e^{j2\pi f_0t}s_l(t))$$ This lets us write $s_l(t)=|s_l(t)|e^{j\angle s_l(t)}=A(t)e^{j\phi(t)}$. As a lowpass signal, $A(t)$ and $\phi(t)$ change slowly with time. $$z(t)=A(t)e^{j(2\pi f_0t+\phi(t))}$$ The phasor rotates at angular velocity $\approx 2\pi f_0$, with slowly varying magnitude and phase. The real part is: $$s(t)=A(t)\cos(2\pi f_0t+\phi(t))$$ Any real bandpass signal can be represented in this way. As $s_l(t)=s_c+js_s$, the real part is the in-phase component and the imaginary part is the quadrature. We can find that: $$s(t)=s_c(t)\cos(2\pi f_0t)-s_s(t)\sin(2\pi f_0t)$$ This is the canonical form of a bandpass signal, whereas $s_l(t)e^{j2\pi f_0t}$ is the polar form. We can find that: $$s_c(t)=s(t)\cos(2\pi f_0t)+\hat{s}(t)\sin(2\pi f_0t)$$ $$s_s(t)=-s(t)\sin(2\pi f_0t)+\hat{s}(t)\cos(2\pi f_0t)$$
The impulse response of the Hilbert transform is $\frac{1}{\pi t}$, so $H(f)=-j\text{sign}(f)$. The amplitude response is unchanged, but the phase is decreased by $\pi/2$ at positive frequencies and increased by $\pi/2$ at negative frequencies. Taking the lowpass representation removes the high frequency carrier signal. $$A(t)\cos(2\pi f_0t+\phi(t))\iff A(t)e^{j\phi(t)}$$
===== Double side band suppressed carrier modulation =====
Here, $m(t),M(f)$ is the message signal. To up-convert, we multiply the message by a carrier wave $A_c\cos(2\pi f_ct)$. $$s(t)=m(t)A_c\cos(2\pi f_ct)=0.5A_cm(t)e^{j2\pi f_ct}+0.5A_cm(t)e^{-j2\pi f_ct}$$ $$S(f)=0.5A_cM(f-f_c)+0.5A_cM(f+f_c)$$ DSBSC modulation shifts the message spectrum left and right by $\pm f_c$, creating a bandpass signal. We can then find the lowpass representation: $$s_l(t)=A_cm(t)$$ This is a real signal, whereas in general the lowpass representation is complex. This suggests that we could send another signal in the complex, quadrature component. This lets us send two separate messages at the same time, called quadrature carrier multiplexing. $$s_c(t)=A_cm_1(t)$$ $$s_s(t)=A_cm_2(t)$$ This requires using a balanced modulator (mixer or multiplier) to combine the carrier and the message. The second message has its carrier shifted by $90^\circ$. The resultant signal is: $$s(t)=A_cm_1(t)\cos(2\pi f_ct)+A_cm_2(t)\sin(2\pi f_ct)$$ To recover these signals at the receiving end, a Phase Locked Loop is required to recover the carrier from the signal. This recovered carrier is then fed through a balanced modulator with the received signal, and the message is recovered after the modulator output is fed through a lowpass filter.
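A minimal numerical sketch of DSBSC up-conversion and coherent demodulation, assuming an ideally recovered carrier and using a crude FFT brick-wall in place of a real lowpass filter (all parameters are illustrative):

```python
import numpy as np

fs, fc, Ac = 48000.0, 5000.0, 2.0         # illustrative sample rate, carrier, amplitude
t = np.arange(0, 1.0, 1 / fs)
m = np.cos(2 * np.pi * 100 * t)           # 100 Hz test message

s = Ac * m * np.cos(2 * np.pi * fc * t)   # DSBSC up-conversion

# Coherent demodulation: mix with the (assumed recovered) carrier...
v = s * np.cos(2 * np.pi * fc * t)        # = 0.5*Ac*m + 0.5*Ac*m*cos(4*pi*fc*t)

# ...then lowpass filter to remove the component at 2*fc (FFT brick-wall here)
V = np.fft.fft(v)
f = np.fft.fftfreq(len(v), 1 / fs)
V[np.abs(f) > 1000] = 0                   # keep only the baseband part
m_hat = 2 * np.real(np.fft.ifft(V)) / Ac

err = np.max(np.abs(m_hat - m))
```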
===== Amplitude modulation =====
Suppose we transmit: $$s(t)=A_c(1+\mu m(t))\cos(2\pi f_ct)$$ We choose the amplitude sensitivity or modulation index $\mu$ such that $|m(t)|<1$, meaning that $1+\mu m(t)$ is the envelope of $s(t)$. Further we choose $f_c>>B$, the bandwidth of $m(t)$, so that over a carrier cycle the maxima of $s(t)$ follow the envelope.

A phase reversal is related to a change in the sign of the amplitude. When $A(t)<0$, $A(t)\cos(2\pi f_ct)=|A(t)|\cos(2\pi f_ct+\pi)$, causing a bump in the signal from the sudden change in phase. $A(t)$ needs to stay positive at all times. When $\mu<1$: since $m(t)\geq -|m(t)|$, we have $1+\mu m(t)\geq 1-\mu|m(t)|>0$, so the envelope is always positive. This removes phase reversals, making envelope detection easier.
Envelope detection tries to remove the high frequency carrier to find the message. It results in finding the magnitude of the signal. Envelope detection recovers the signal without needing a PLL to regenerate the carrier, unlike quadrature-carrier multiplexing. This is also called asynchronous detection, incoherent detection or direct detection.
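A sketch of ideal envelope detection via the analytic signal (magnitude of the pre-envelope), with illustrative parameters and $\mu<1$ so there are no phase reversals:

```python
import numpy as np

fs, fc, mu, Ac = 48000.0, 5000.0, 0.5, 1.0           # illustrative values
t = np.arange(0, 1.0, 1 / fs)
m = np.cos(2 * np.pi * 100 * t)                      # |m(t)| <= 1
s = Ac * (1 + mu * m) * np.cos(2 * np.pi * fc * t)   # AM; mu < 1 keeps envelope positive

# Ideal envelope detector via the analytic signal: A(t) = |s(t) + j*s_hat(t)|
S = np.fft.fft(s)
N = len(S)
w = np.zeros(N)
w[0] = 1.0
w[1:N // 2] = 2.0
w[N // 2] = 1.0
env = np.abs(np.fft.ifft(S * w))

m_hat = (env / Ac - 1) / mu     # invert A(t) = Ac*(1 + mu*m(t))
err = np.max(np.abs(m_hat - m))
```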
===== Side bands =====
Side bands are the parts of the signal outside 0 in the passband representation. The upper side band is to the right, and the lower is to the left. The upper and lower side bands are related as: $$USB=LSB^*$$ When we convert to a bandpass signal, we are centred around $f_c$, but still have upper and lower side bands flanking $f_c$. In a DSBSC signal, there is four-fold redundancy, as there are two copies of the LSB and of the USB. As such, we can transmit only one of the side bands and still receive the whole signal. This single side band representation is denoted as $\tilde{m}(t)$ and $\tilde{M}(f)$. The signal is: $$\tilde{M}(f)=(1+\text{sign}(f))M(f)=\begin{cases}2M(f),&f>0\\M(0),&f=0\\0,&f<0\end{cases}$$ $$S_{SSB}(f)=0.5A_C\tilde{M}(f-f_c)+0.5A_C\tilde{M}^*(-f-f_c)=\begin{cases}A_CM(f-f_c),&f>f_c\\A_CM^*(-f-f_c),&f<-f_c\\0,&\text{otherwise}\end{cases}$$ In the time domain this corresponds to: $$\tilde{m}(t)=m(t)+j\hat{m}(t)$$ $$s_{SSB}(t)=A_Cm(t)\cos(2\pi f_ct)-A_C\hat{m}(t)\sin(2\pi f_ct)=\mathcal{R}\{\tilde{m}(t)A_Ce^{j2\pi f_ct}\}$$ $$s_C=A_Cm(t),\quad s_S=A_C\hat{m}(t)$$ Compared with quadrature carrier multiplexing, the quadrature component here carries $\hat{m}(t)$ rather than an independent second message.
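A sketch of SSB generation in the canonical form above, computing $\hat{m}(t)$ with an FFT-based Hilbert transform (illustrative parameters); the resulting spectrum has energy only in the upper side band:

```python
import numpy as np

fs, fc, Ac = 48000.0, 5000.0, 1.0
t = np.arange(0, 1.0, 1 / fs)
m = np.cos(2 * np.pi * 300 * t)          # single-tone test message

# Hilbert transform m_hat: multiply the spectrum by H(f) = -j*sign(f)
M = np.fft.fft(m)
f = np.fft.fftfreq(len(m), 1 / fs)
m_hat = np.real(np.fft.ifft(-1j * np.sign(f) * M))

# Upper-sideband SSB in canonical form
s_ssb = Ac * m * np.cos(2 * np.pi * fc * t) - Ac * m_hat * np.sin(2 * np.pi * fc * t)

# Spectrum should sit at fc + 300 Hz only (no component at fc - 300 Hz)
S = np.abs(np.fft.fft(s_ssb))
usb = S[np.argmin(np.abs(f - (fc + 300)))]
lsb = S[np.argmin(np.abs(f - (fc - 300)))]
```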
====== Digital communications ======
Digital signals are encoded and modulated before being transmitted over a channel, then demodulated and decoded. The modulator turns the digital signal into a time domain signal. A communication channel can include space, the atmosphere, optical disks, cables, etc. Different channels cause different types of impairments and require different types of modulation.
Modulation involves mapping the digital information into analogue signals for transmission over physical channels. It requires parsing of the incoming bit sequence into a sequence of binary words of length $k$. Each binary word corresponds to a symbol, with $M=2^k$ possible symbols. Each symbol has a signalling interval of length $T$. $1/T$ is the symbol rate, with $k/T$ being the bit rate.
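A minimal sketch of the parsing step, with a hypothetical `parse_symbols` helper that groups bits into $k$-bit words and reads each as one of $M=2^k$ symbol indices:

```python
# Parse an incoming bit sequence into k-bit words, each mapped to one of M = 2^k symbols.
def parse_symbols(bits, k):
    assert len(bits) % k == 0, "bit count must be a multiple of k"
    return [int("".join(str(b) for b in bits[i:i + k]), 2)
            for i in range(0, len(bits), k)]

bits = [1, 0, 1, 1, 0, 0]
symbols = parse_symbols(bits, k=2)    # M = 4 possible symbols
print(symbols)                        # -> [2, 3, 0]
# With symbol interval T, the symbol rate is 1/T and the bit rate is k/T.
```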
A baseband signal is a low frequency real signal centred on 0. A bandpass signal is a high frequency signal centred on $f_C$. There is no need to use a carrier waveform to transmit a baseband signal, whereas a bandpass signal needs a carrier to move the spectrum into a passband far from 0.
===== Signal space =====
To simplify analysis, geometric vector representation is used for baseband and bandpass signals. A vector space or linear space $L$ over a field $F$ (usually $\mathbb{R}$ or $\mathbb{C}$) is a set that is closed over addition and scalar multiplication and is:
  * Associative
  * Commutative
  * Distributive
  * Has additive identity
  * Has multiplicative identity
Signal space is a vector space consisting of functions $x(t)$ defined on a time set $T$. The modulation scheme is visualised as a finite set of points, called the signal space diagram or signal constellation. This enables a geometric interpretation, allowing us to treat bandpass modulation similarly to baseband modulation.
The inner product of two complex valued signals is: $$\langle x_1(t),x_2(t)\rangle=\int_{-\infty}^\infty x_1(t)x_2^*(t)dt$$ Two signals are orthogonal if $\langle x_1(t),x_2(t)\rangle=0$. The norm of $x(t)$ is: $$||x(t)||=\sqrt{\langle x(t),x(t)\rangle}=\sqrt{\int_{-\infty}^\infty|x(t)|^2dt}=\sqrt{\varepsilon_x}$$ Where $\varepsilon_x$ is the energy in $x(t)$. The distance between two signals is: $$d(x_1(t),x_2(t))=||x_1(t)-x_2(t)||$$ The Cauchy-Schwartz inequality for two signals is: $$|\langle x_1(t),x_2(t)\rangle|\leq||x_1(t)||\cdot||x_2(t)||$$ With equality when $x_1(t)=\alpha x_2(t)$, where $\alpha$ is any complex number. The triangle inequality is: $$||x_1(t)+x_2(t)||\leq||x_1(t)||+||x_2(t)||$$
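These definitions can be checked numerically with a discrete approximation of the integrals (Riemann sums with step `dt`; the signals and sampling grid are illustrative):

```python
import numpy as np

# Discrete check: <cos, sin> over full periods is 0 (orthogonal), and
# ||x||^2 matches the signal energy.
dt = 1e-4
t = np.arange(0, 1, dt)
x1 = np.cos(2 * np.pi * 5 * t)
x2 = np.sin(2 * np.pi * 5 * t)

inner = np.sum(x1 * np.conj(x2)) * dt     # <x1, x2> ~ integral of x1 * conj(x2)
energy = np.sum(np.abs(x1) ** 2) * dt     # eps_x, so ||x1|| = sqrt(eps_x) ~ sqrt(0.5)
```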
A set of $N$ signals $\{\phi_j(t),j=1,...,N\}$ spans a signal space $S$ if any signal can be written as a linear combination of the $N$ signals. $$s(t)=\sum_{j=1}^N s_j\phi_j(t)$$ Where $s_j$ are scalar coefficients. A set of signals is linearly independent if no signal in the set can be represented as a linear combination of the other signals in the set. A basis for $S$ is any linearly independent set that spans the whole space. The dimension of $S$ is the number of elements in any basis for $S$. An orthonormal basis is a basis such that: $$\langle \phi_j(t),\phi_n(t)\rangle=\int_{-\infty}^\infty\phi_j(t)\phi_n^*(t)dt=\begin{cases}1,&j=n\\0,&j\neq n\end{cases}$$ Orthonormal bases provide convenient representations: using an orthonormal basis allows us to easily express a signal as a point in signal space. Each point in a constellation of $M$ points corresponds to $k=\log_2M$ bits of information. The square of the Euclidean distance of a point to the origin equals the energy of the corresponding signal: $$\varepsilon_{s_m}=s_{m1}^2+s_{m2}^2+...+s_{mN}^2$$ If a given signal $r(t)$ is outside of the subspace, we can project the signal onto the space to get $\hat{s}(t)$. The Gram-Schmidt procedure can be used to construct an orthonormal basis, by iteratively projecting vectors onto vectors in the basis and removing the projection from the original vector. This can always be used to construct an orthonormal basis.
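A sketch of the Gram-Schmidt procedure on sampled signals (the `gram_schmidt` helper and the rectangular test pulses are illustrative, not from the notes):

```python
import numpy as np

def gram_schmidt(signals, dt):
    """Construct an orthonormal basis for a set of sampled signals.

    Each iteration projects the next signal onto the basis found so far,
    subtracts the projection, and normalises the remainder (if nonzero).
    """
    basis = []
    for s in signals:
        r = s.astype(float)
        for phi in basis:
            r = r - np.sum(r * phi) * dt * phi       # remove projection onto phi
        norm = np.sqrt(np.sum(r ** 2) * dt)          # ||r|| = sqrt(energy)
        if norm > 1e-12:
            basis.append(r / norm)
    return basis

# Two rectangular pulses on [0,1): s1 covers the whole interval, s2 the first half
dt = 1e-3
t = np.arange(0, 1, dt)
s1 = np.ones_like(t)
s2 = (t < 0.5).astype(float)
basis = gram_schmidt([s1, s2], dt)

# Check orthonormality: <phi_j, phi_n> = delta_jn
G = np.array([[np.sum(a * b) * dt for b in basis] for a in basis])
```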
===== Digital Modulation =====
For Pulse Amplitude Modulation (PAM), we turn each block into a distinct amplitude. This uses a signal generator to map the sequence of blocks of length $k$ into the sequence of $M=2^k$ possible symbols. We then use a modulator to map the symbol sequence to a continuous time signal.
One-dimensional modulation provides one of the simplest modulation schemes, On-Off Keying (OOK). A baseband OOK modulator maps a binary symbol sequence $a(n)$ to a continuous time signal $s(t)$ by: $$s(t)=\sum_{n\in\mathbb{Z}} a(n)p(t-nT)$$ Where $1/T$ is the symbol rate and $p(t)$ is a pulse signal. There are various ways of encoding the bits, such as non-return-to-zero (NRZ), return-to-zero (RZ), Manchester (MAN) and half-sine (HS), each with distinct pulse shapes. Baseband OOK modulation produces a signal of the form: $$s_m(t)=A_mp(t);\quad 0\leq t<T,\ m=1,2$$ We can modulate with a carrier at frequency $f_c$ to produce a signal of the form: $$s_m(t)=A_mg(t)\cos(2\pi f_ct);\quad 0\leq t<T,\ m=1,2$$ Where the baseband pulse signal is denoted by $g(t)$. This is the same as DSBSC modulation. The constellation is given by points at $(0,0)$ and $(1,0)$.
A baseband PAM modulator maps a symbol sequence $a(n)$ to a continuous time signal $s(t)$: $$s(t)=\sum_{n\in\mathbb{Z}}a(n)p(t-nT)$$ This uses amplitudes above and below 0, at many different levels. It produces a constellation on the x-axis mirrored about the y-axis. The mapping on the constellation uses Gray coding for adjacent points. Gray coding minimises bit errors in transmission, as adjacent amplitudes differ in only one bit.
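A small check of the Gray-coding property for a hypothetical 4-PAM mapping (the `gray` helper uses the standard binary-reflected Gray code $n\oplus(n>>1)$):

```python
# Gray coding for a 4-PAM constellation: adjacent amplitude levels differ in one
# bit, so the most likely symbol errors (to a neighbour) corrupt only one bit.
def gray(n):
    return n ^ (n >> 1)

k = 2
levels = [-3, -1, 1, 3]                       # 4-PAM amplitudes (illustrative)
mapping = {gray(i): levels[i] for i in range(2 ** k)}

codes = [gray(i) for i in range(2 ** k)]      # codes in amplitude order: 0,1,3,2
# Adjacent levels differ by exactly one bit:
diffs = [bin(codes[i] ^ codes[i + 1]).count("1") for i in range(len(codes) - 1)]
print(diffs)   # -> [1, 1, 1]
```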
When choosing the signals, we want them to be as far apart from each other as possible when drawn as a constellation. The energy for a constellation point is: $$\varepsilon_m=||s_m(t)||^2=\int_0^TA_m^2p^2(t)dt=A_m^2\varepsilon_p$$ Where $\varepsilon_p$ is the energy in $p(t)$ and $m=1,...,M$. The orthonormal basis vector for PAM is given by: $$\phi(t)=\frac{p(t)}{\sqrt{\varepsilon_p}}$$ For carrier modulated PAM signals, we have: $$s_m(t)=A_mg(t)\cos(2\pi f_ct),\quad 1\leq m\leq M,\ 0\leq t<T$$ The bandpass PAM energy for a constellation point equals: $$\varepsilon_m=||s_m(t)||^2=\frac{A_m^2}{2}\int_0^Tg^2(t)dt+\frac{A_m^2}{2}\int_0^Tg^2(t)\cos(4\pi f_ct)dt\approx\frac{A_m^2}{2}\varepsilon_g$$ Binary bandpass PAM is also called Binary Phase Shift Keying (BPSK) because the symbol values inform the phase of $s_m(t)$. Modulation of the signal waveform $s_m(t)$ with carrier $\cos(2\pi f_c t)$ shifts the spectrum of the baseband signal by $f_c$: $$S_m(f)=\frac{A_m}{2}(G_T(f-f_c)+G_T(f+f_c))$$ For bandpass PAM signalling, the orthonormal basis vector is given by: $$\phi(t)=\sqrt{\frac{2}{\varepsilon_g}}g(t)\cos(2\pi f_ct)$$ Which results in $s_m(t)=A_m\sqrt{\frac{\varepsilon_g}{2}}\phi(t)$. Bandpass PAM has the same signal space diagram as baseband PAM, but with a different basis vector.
The modulator is implemented by feeding the input bits into a serial to parallel converter, outputting $\log_2M$ bits at a time. This is fed into a Look Up Table (LUT) to find the symbols, which are fed into a pulse shaping filter to generate the signal. The signal may be up-sampled in filtering (possibly FIR) and fed into a DAC to create the analogue form.
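A minimal end-to-end sketch of this modulator chain for baseband 4-PAM, with a hypothetical Gray-coded LUT and a rectangular (NRZ) pulse (all values illustrative):

```python
import numpy as np

# Baseband 4-PAM modulator sketch: bits -> 2-bit words -> LUT amplitudes -> pulse train
bits = [0, 0, 0, 1, 1, 1, 1, 0]
k, sps = 2, 8                                  # bits/symbol, samples per symbol interval
lut = {0: -3.0, 1: -1.0, 3: 1.0, 2: 3.0}       # Gray-coded word -> amplitude (illustrative)

words = [bits[i] * 2 + bits[i + 1] for i in range(0, len(bits), k)]
amps = [lut[w] for w in words]

p = np.ones(sps)                               # rectangular (NRZ) pulse shape p(t)
s = np.concatenate([a * p for a in amps])      # s(t) = sum_n a(n) p(t - nT)
print(amps)   # -> [-3.0, -1.0, 1.0, 3.0]
```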
==== Two dimensional modulation ====
Orthogonal signalling involves modulation using two signals that are orthogonal: $$s(t)=s_1\phi_1(t)+s_2\phi_2(t)$$ We can denote the modulated signals as a vector $s(t)=(s_1,s_2)$. Phase modulation can be represented as: $$s_m(t)=g(t)\cos(2\pi f_ct+\theta_m)=\mathcal{R}(g(t)e^{j\theta_m}e^{j2\pi f_ct})=g(t)\cos(\theta_m)\cos(2\pi f_ct)-g(t)\sin(\theta_m)\sin(2\pi f_ct)$$ Where $g(t)$ is the signal pulse shape and $\theta_m=\frac{2\pi}{M}(m-1)$ is the phase that conveys the transmitted information. An orthonormal basis for the signal space is: $$\{\phi_1(t),\phi_2(t)\}=\left\{\sqrt{\frac{2}{\varepsilon_g}}g(t)\cos(2\pi f_ct),-\sqrt{\frac{2}{\varepsilon_g}}g(t)\sin(2\pi f_ct)\right\}$$ Where the basis functions are unit norm, $||\phi_1(t)||=||\phi_2(t)||=1$. In M-ary phase-shift keying (PSK), all M bandpass signals are constrained to have the same energy (signal constellation points lie on a circle). Gray encoding is used so that adjacent phases differ by only one bit. This leads to a better average bit error rate (BER). The transmitted information is impressed on 2 orthogonal carrier signals, the in-phase carrier $\cos(2\pi f_ct)$ and the quadrature carrier $\sin(2\pi f_ct)$. $$s_m(t)=g(t)\cos(\theta_m)\cos(2\pi f_ct)-g(t)\sin(\theta_m)\sin(2\pi f_ct)$$ Its lowpass equivalent signal is: $$s_m^{lowpass}(t)=g(t)e^{j\theta_m}=I(t)+jQ(t)$$ Where $I(t)=g(t)\cos(\theta_m)$ is the in-phase component and $Q(t)=g(t)\sin(\theta_m)$ is the quadrature component.
In quadrature/quadriphase shift keying (QPSK), there are four signals differentiated by phase shifts in multiples of $\pi/2$. $$s_m(t)=g(t)\cos\left(2\pi f_ct+\frac{\pi}{2}(m-1)\right)$$ Equivalently the signal constellation can be rotated so that the vectors are in the quadrants rather than on the axes. $$s(t)=I(t)\sqrt{2}\cos(2\pi f_ct)-Q(t)\sqrt{2}\sin(2\pi f_ct)$$ $I(t)=\sum_na_1(n)g(t-nT)$ is the in-phase component of $s(t)$ and $Q(t)=\sum_na_2(n)g(t-nT)$ is the quadrature component of $s(t)$. $I(t)$ and $Q(t)$ are binary PAM pulse trains, making QPSK interpretable as two PAM signal constellations on orthogonal axes.
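A sketch of QPSK as two binary PAM rails on orthogonal carriers, over a single symbol interval with illustrative parameters; correlating with the matching carrier recovers the corresponding rail:

```python
import numpy as np

fs, fc, T = 8000.0, 1000.0, 0.01          # sample rate, carrier, symbol interval
t = np.arange(0, T, 1 / fs)

a1, a2 = 1, -1                             # one bit on each quadrature rail (+/-1)
I = a1 * np.ones_like(t)                   # in-phase component over one symbol
Q = a2 * np.ones_like(t)                   # quadrature component
s = I * np.sqrt(2) * np.cos(2 * np.pi * fc * t) - Q * np.sqrt(2) * np.sin(2 * np.pi * fc * t)

# Coherent recovery of each rail: correlate with the matching carrier
i_hat = np.sum(s * np.sqrt(2) * np.cos(2 * np.pi * fc * t)) / len(t)
q_hat = -np.sum(s * np.sqrt(2) * np.sin(2 * np.pi * fc * t)) / len(t)
print(round(i_hat), round(q_hat))   # -> 1 -1
```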
==== Two dimensional bandpass modulation ====
Quadrature amplitude modulation (QAM) allows signals to have different amplitudes and impresses separate information bits on each of the quadrature carriers. Important performance parameters are the average energy and minimum distance in the signal constellation. The signal constellation consists of concentric circles. The points are all off axis in each quadrant for maximum efficiency (minimising energy and maximising minimum distance).
==== Comparison ====
|Scheme |$s_m(t)$ |$s_m$ |$E_{avg}$ |$E_{bavg}$ |$d_{min}$ |
|Baseband PAM |$A_mp(t)$ |$A_m\sqrt{\varepsilon_p}$ |$\frac{2(M^2-1)}{3}\varepsilon_p$ |$\frac{2(M^2-1)}{3\log_2M}\varepsilon_p$ |$\sqrt{\frac{6\log_2M}{M^2-1}\varepsilon_{bavg}}$ |
|Bandpass PAM |$A_mg(t)\cos(2\pi f_ct)$ |$A_m\sqrt{\frac{\varepsilon_p}{2}}$ |$\frac{M^2-1}{3}\varepsilon_p$ |$\frac{M^2-1}{3\log_2M}\varepsilon_p$ |$\sqrt{\frac{6\log_2M}{M^2-1}\varepsilon_{bavg}}$ |
|PSK |$g(t)\cos\left[2\pi f_ct+\frac{2\pi}{M}(m-1)\right]$ |$\sqrt{\frac{\varepsilon_g}{2}}\left(\cos\frac{2\pi}{M}(m-1),\sin\frac{2\pi}{M}(m-1)\right)$ |$\frac{1}{2}\varepsilon_g$ |$\frac{1}{2\log_2M}\varepsilon_g$ |$2\sqrt{\log_2M\sin^2\left(\frac{\pi}{M}\right)\varepsilon_{bavg}}$ |
|QAM |$A_{mi}g(t)\cos(2\pi f_ct)-A_{mq}g(t)\sin(2\pi f_ct)$ |$\sqrt{\frac{\varepsilon_g}{2}}(A_{mi},A_{mq})$ |$\frac{M-1}{3}\varepsilon_g$ |$\frac{M-1}{3\log_2M}\varepsilon_g$ |$\sqrt{\frac{6\log_2M}{M-1}\varepsilon_{bavg}}$ |
==== Multidimensional modulation ====
We can use the time domain and/or frequency domain to increase the number of dimensions. Orthogonal signalling (baseband) is one example, e.g. pulse position modulation (PPM). This varies the position of the pulse within the symbol time, i.e. the first quarter or third quarter of the interval. Alternatively we can frequency shift (FSK) to create orthogonal signals: $$s_m(t)=\sqrt{\frac{2\varepsilon}{T}}\cos(2\pi(f_c+m\Delta f)t),\quad 0\leq m\leq M-1,\ 0\leq t\leq T$$ For orthogonality, we need a minimum frequency separation of $\Delta f=1/(2T)$.
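A numerical check that tones separated by $\Delta f=1/(2T)$ are (approximately) orthogonal over one symbol interval (parameters are illustrative):

```python
import numpy as np

T, fs, fc = 1e-3, 1e6, 100e3            # symbol time, sample rate, base carrier
df = 1 / (2 * T)                        # minimum separation for orthogonality
t = np.arange(0, T, 1 / fs)

s0 = np.cos(2 * np.pi * fc * t)
s1 = np.cos(2 * np.pi * (fc + df) * t)

# Normalised correlation over one symbol interval should be ~0
rho = np.sum(s0 * s1) / np.sqrt(np.sum(s0 ** 2) * np.sum(s1 ** 2))
```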
===== Receiving signals =====
==== Additive White Gaussian Noise ====
A received signal has Additive White Gaussian Noise added to it from the channel. $$r(t)=s_m(t)+n(t)$$ Where $n(t)$ is an AWGN random noise process with power spectral density $\frac{N_0}{2}$ W/Hz. Given $r(t)$, a receiver must decide which $s_m(t)$ was transmitted, minimising the probability of error.
The $Q$ function is useful for tail probabilities of a standard Gaussian RV: $$Q(x)=P[X>x]=\int_x^\infty\frac{1}{\sqrt{2\pi}}e^{-t^2/2}dt$$ For a general Gaussian RV $X\sim\mathcal{N}(\mu,\sigma^2)$, we note that $\frac{X-\mu}{\sigma}\sim\mathcal{N}(0,1)$, so: $$P[X>x]=Q\left(\frac{x-\mu}{\sigma}\right)$$
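The $Q$ function has no elementary closed form, but can be computed from the complementary error function as $Q(x)=\frac{1}{2}\text{erfc}(x/\sqrt{2})$:

```python
import math

def Q(x):
    """Gaussian tail probability Q(x) = P[X > x] for X ~ N(0,1), via erfc."""
    return 0.5 * math.erfc(x / math.sqrt(2))

# P[X > x] for X ~ N(mu, sigma^2) reduces to Q((x - mu) / sigma)
def gaussian_tail(x, mu, sigma):
    return Q((x - mu) / sigma)

print(Q(0))                 # -> 0.5
print(round(Q(1.0), 4))     # -> 0.1587
```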
A random process $X(t)$ is completely characterised by joint PDFs of the form: $$f_{X(t_1),...,X(t_n)}(x_1,...,x_n)$$ A process is stationary if for all $n,\Delta$: $$f_{X(t_1),...,X(t_n)}(x_1,...,x_n)=f_{X(t_1+\Delta),...,X(t_n+\Delta)}(x_1,...,x_n)$$ Random processes have a mean and autocorrelation: $$m_X(t)=E[X(t)]$$ $$R_X(t_1,t_2)=E[X(t_1)X^*(t_2)]$$ A process is Wide Sense Stationary (WSS) if its mean is constant for all time and its autocorrelation depends only on the time difference $\tau=t_1-t_2$, allowing us to write the autocorrelation as $R_X(\tau)$. The Power Spectral Density (PSD) of a WSS process is the Fourier transform of the autocorrelation, $\mathcal{S}_X(f)=\mathcal{F}[R_X(\tau)]$. The total power content of the process is: $$P_X=E[|X(t)|^2]=R_X(0)=\int_{-\infty}^\infty\mathcal{S}_X(f)df$$
If a WSS process $X(t)$ passes through an LTI system with impulse response $h(t)$ and frequency response $H(f)$, the output $Y(t)=\int_{-\infty}^\infty X(\tau)h(t-\tau)d\tau$ is also a WSS process. The output process has mean $m_Y=m_X\int_{-\infty}^\infty h(t)dt=m_XH(0)$, autocorrelation $R_Y=R_X\star h\star \tilde{h}$ (where $\tilde{h}(t)=h(-t)$) and PSD $\mathcal{S}_Y(f)=\mathcal{S}_X(f)|H(f)|^2$. $X(t)$ is a Gaussian random process if $\{X(t_1),...,X(t_n)\}$ has a jointly Gaussian PDF, making $X(t_k)$ a Gaussian RV for any fixed $t_k\in\mathbb{R}$. $X(t)$ is a white noise process if its PSD $\mathcal{S}_X(f)$ is constant for all frequencies. The power content of a white noise process is $P_X=\int_{-\infty}^\infty\mathcal{S}_X(f)df=\infty$, so white noise is not physically realisable. A Gaussian process into an LTI system produces a Gaussian process, but a white input does not necessarily produce a white output.
The AWGN process can be modelled as a random process that is:
  * WSS, with $R_N(\tau)=\mathcal{F}^{-1}[\mathcal{S}_N(f)]=\frac{N_0}{2}\delta(\tau)$
  * Zero mean
  * Gaussian, with $N(t)\sim\mathcal{N}(0,\frac{N_0}{2})$
  * White, with $\mathcal{S}_N(f)=\frac{N_0}{2}$
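A discrete-time stand-in for this model (true continuous-time white noise is not realisable, so the sketch uses i.i.d. Gaussian samples whose variance plays the role of $N_0/2$; the sample count and tolerances are illustrative):

```python
import numpy as np

# Discrete-time AWGN stand-in: i.i.d. zero-mean Gaussian samples, variance N0/2.
rng = np.random.default_rng(0)
N0 = 2.0
n = rng.normal(0.0, np.sqrt(N0 / 2), 200_000)

# Zero mean, variance N0/2, and uncorrelated across time (whiteness)
r1 = np.mean(n[:-1] * n[1:])         # lag-1 autocorrelation estimate, ~0
```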
==== Demodulation ====
The first thing a receiver does is project the received waveform onto a signal vector $\mathbf{r}=(r_1,\dots,r_N)$ in the signal space. The detector then decides which of the possible signal waveforms was transmitted. Any signal can be written as: $$s_m(t)=\sum_{k=1}^Ns_{mk}\phi_k(t)$$ This is added to noise to produce the received signal, which is projected into the signal space. In projecting the signal, we get: $$r_{mj}=\langle r_m(t),\phi_j(t)\rangle=s_{mj}+n_j$$ Where $n_j\sim\mathcal{N}(0,\frac{N_0}{2})$ and $E\{n_in_j\}=\frac{N_0}{2}\delta_{ij}$.
There are two main approaches to demodulating $r(t)$:
* **Correlation-type** The incoming signal is multiplied with the basis signals in parallel, then integrated to find each $r_k$.
* **Matched-type** The received signal is convolved with the basis signals in parallel to produce each $r_k(t)=\int_0^tr(\tau)\phi_k(T-t+\tau)d\tau$, which at $t=T$ equals $\int_0^Tr(\tau)\phi_k(\tau)d\tau$.
Both of these produce outputs that are equal at integer multiples of $T_s$.
A correlation-type demodulator uses a parallel bank of $N$ correlators which multiply $r(t)$ with $\{\phi_k(t)\}_{k=1}^N$. The output is: $$r_k=\int_0^Tr(t)\phi_k(t)dt=s_{mk}+n_k$$ This makes the overall result: $$\mathbf{r}=\mathbf{s}_m+\mathbf{n}$$ The received signal can be expressed as: $$r(t)=\sum_{k=1}^Ns_{mk}\phi_k(t)+\sum_{k=1}^Nn_k\phi_k(t)+n'(t)$$ We ignore $n'(t)$, as it lies outside the signal space, and the $r_k$ are independent.
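The correlation projection $r_k=\langle r(t),\phi_k(t)\rangle=s_{mk}+n_k$ can be sketched numerically. The basis frequency, sample rate, noise level and signal-space coordinates below are illustrative assumptions, not values from the notes.

```python
import numpy as np

rng = np.random.default_rng(0)
T, fs = 1.0, 1000                      # symbol period and sample rate (assumed)
dt = 1 / fs
t = np.arange(0, T, dt)

# Two orthonormal basis functions on [0, T)
phi1 = np.sqrt(2 / T) * np.cos(2 * np.pi * 5 * t)
phi2 = np.sqrt(2 / T) * np.sin(2 * np.pi * 5 * t)

s = 3.0 * phi1 - 1.0 * phi2            # s_m(t) with coordinates s_m = (3, -1)
r = s + 0.01 * rng.standard_normal(t.size)   # received waveform, small noise

# r_k = <r, phi_k> = s_mk + n_k  (discrete approximation of the integral)
r1 = np.sum(r * phi1) * dt
r2 = np.sum(r * phi2) * dt
```

The recovered coordinates $(r_1,r_2)$ sit close to $(3,-1)$, perturbed only by the projected noise.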
A matched filter-type demodulator uses a parallel bank of $N$ linear filters with impulse responses $h_k(t)=\phi_k(T-t)$. The output is: $$r_k(t)=(r\star h_k)(t)=\int_0^Tr(\tau)\phi_k(T-t+\tau)d\tau$$ Hence $r_k(t)$ is a Gaussian process, and sampling at $t=T$ gives: $$r_k=\int_0^Tr(\tau)\phi_k(\tau)d\tau$$ A matched filter to a signal $s(t)$ is a filter whose impulse response is $h(t)=s(T-t)$, with $s(t)$ confined to the time interval $0\leq t\leq T$. A matched filter maximises the signal to noise ratio for a signal corrupted by AWGN. The output SNR from the filter depends on the energy of the waveform $s(t)$ but not on the detailed characteristics of $s(t)$. Sampling the output at time $t=T$ gives the signal and noise components: $$y(T)=\underbrace{\int_0^Ts(\tau)h(T-\tau)d\tau}_{y_s(T)}+\underbrace{\int_0^Tn(\tau)h(T-\tau)d\tau}_{y_n(T)}$$ We need to choose $h(t)$ to maximise $\frac{y_s^2(T)}{E[y_n^2(T)]}$, which gives $h(t)=ks(T-t)$ for some constant $k$. This means that we need a filter response that is matched to the signal. The output Signal to Noise Ratio is $\frac{2\varepsilon_s}{N_0}$, where $\varepsilon_s$ is the signal energy.
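The output SNR $\frac{2\varepsilon_s}{N_0}$ can be verified numerically. A minimal sketch, assuming an arbitrary pulse shape and noise level: at $t=T$ the signal component is $y_s(T)=\varepsilon_s$ and the noise power is $E[y_n^2(T)]=\frac{N_0}{2}\int_0^T h^2(t)dt=\frac{N_0}{2}\varepsilon_s$.

```python
import numpy as np

fs, T = 1000, 1.0
dt = 1.0 / fs
t = np.arange(0, T, dt)
s = np.sin(np.pi * t / T) ** 2          # arbitrary pulse confined to [0, T]
Es = np.sum(s ** 2) * dt                # signal energy

N0 = 0.1                                # noise PSD is N0/2 (assumed value)
h = s[::-1]                             # matched filter: h(t) = s(T - t)

# Signal component at the sampling instant t = T (correlation of s with itself)
y_s = np.sum(s * s) * dt                # = Es
# Output noise variance: E[y_n^2(T)] = (N0/2) * integral of h^2 = (N0/2) * Es
var_n = (N0 / 2) * np.sum(h ** 2) * dt

snr = y_s ** 2 / var_n                  # = Es^2 / ((N0/2) Es) = 2 Es / N0
```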
==== Detection ====
The projected signal is mapped to a point in signal space, forming spherical noise clouds around the constellation points due to the Gaussian noise components. In choosing which signal to map to, we choose the closest point, as we want to minimise the overall probability of error. This is done by partitioning the signal space into $M$ nonoverlapping regions $D_1,\dots,D_M$, called the decision partitions.
The prior probability is the probability that a signal was transmitted ($P(s_m\text{ transmitted})$), while the posterior probability is the probability that a signal was transmitted given what was received ($P(s_m\text{ transmitted}|r\text{ received})$). The likelihood function forms a conditional PDF for $P(r\text{ received}|s_m\text{ transmitted})$. These are all related with Bayes' theorem: $$P(s_m|r)=\frac{P(r|s_m)P(s_m)}{P(r)}$$ We should note that: $$P(\text{No error}|s_m)=P(r\in D_m|s_m)=\int_{D_m}P(r|s_m)dr$$ Minimising the probability of error means maximising the probability of no error. As such, we want to construct the decision regions such that: $$D_m=\{r\in\mathbb{R}^N:P(s_m|r)>P(s_k|r)\;\forall k\neq m\}$$
The Maximum A Posteriori (MAP) criterion for this is: $$\hat{m}=\arg\max_mP(r|s_m)P(s_m)$$ The MAP detector is an optimal detector for minimising the probability of error.
The Maximum Likelihood (ML) criterion is: $$\hat{m}=\arg\max_mP(r|s_m)$$ The ML detector is optimal when all signals are equiprobable (MAP reduces to ML in that case).
In AWGN channels, the received vector components of $r=(r_1,\dots,r_N)$ have noise $n_k\sim\mathcal{N}(0,\frac{N_0}{2})$, so $r_k\sim\mathcal{N}(s_{mk},\frac{N_0}{2})$. The likelihood can be calculated to be: $$P(r|s_m)=\prod_{k=1}^NP(r_k|s_m)=\frac{1}{(\pi N_0)^{\frac{N}{2}}}\exp\left(-\frac{||r-s_m||^2}{N_0}\right)$$ The ML detection criterion for AWGN channels is given by: $$\hat{m}=\arg\max_m\frac{1}{(\pi N_0)^{\frac{N}{2}}}\exp\left(-\frac{||r-s_m||^2}{N_0}\right)$$ This can be simplified to: $$\hat{m}=\arg\min_m||r-s_m||$$ This means we decide on the $s_m$ that is closest to $r$, being minimum distance detection. The decision regions are represented graphically as the perpendicular bisectors between neighbouring constellation points. We can expand the distance for ML to: $$||r-s_m||^2=\underbrace{||r||^2}_{\text{Independent of }m}-2\langle r,s_m\rangle+\underbrace{||s_m||^2}_{\text{Energy of signal, }\varepsilon_m}$$ This allows us to reexpress the ML criterion as: $$\hat{m}=\arg\max_m[\langle r,s_m\rangle+\eta_m]$$ Where $\eta_m=-\frac{1}{2}\varepsilon_m$ is a bias term compensating for signal sets that have unequal energies such as PAM.
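The equivalence between the minimum-distance rule and the correlation-plus-bias rule can be sketched numerically (the constellation and received vector below are arbitrary illustrative values):

```python
import numpy as np

# Example constellation: 4-PAM points embedded in a 2-D signal space (assumed)
S = np.array([[3.0, 0.0], [1.0, 0.0], [-1.0, 0.0], [-3.0, 0.0]])
r = np.array([1.4, 0.3])                 # received vector (assumed)

# Minimum-distance rule: argmin ||r - s_m||
dists = np.linalg.norm(r - S, axis=1)
m_dist = np.argmin(dists)

# Correlation rule: argmax <r, s_m> + eta_m, with eta_m = -eps_m / 2
energies = np.sum(S ** 2, axis=1)        # eps_m = ||s_m||^2
metric = S @ r - 0.5 * energies
m_corr = np.argmax(metric)               # same decision as m_dist
```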
==== Phase mismatch ====
When the phase of the carrier and receiver are not synchronised, or the carrier frequency is only approximately known, there is a mismatch affecting the signal. This mismatch causes: $$r=\pm\sqrt{\frac{2}{\varepsilon_g}}\int_0^Tg^2(t)\cos(2\pi f_ct+\theta)\cos(2\pi f_ct)dt+n\approx\pm\sqrt{\frac{\varepsilon_g}{2}}\cos(\theta)+n$$ When there is a phase mismatch ($\cos(\theta)<1$), we lose amplitude information. The phase mismatch causes a rotation that may lead to the projection lying in the wrong region, even in the noiseless case.
If the phase mismatch is unknown, changing rapidly and small, it is ignored and treated as random noise, and an otherwise optimal detector is designed. If the mismatch is large then noncoherent demodulators are used. These work best for modulation schemes that ignore phase, such as envelope detectors for orthogonal signalling (FSK) and on-off keying (OOK). Alternatively, if the phase mismatch is unknown but fixed or varying slowly, we can use differential modulation schemes that modulate the phase differences rather than the absolute phase (e.g. DBPSK).
=== Non-coherent OOK Demodulation ===
The transmit signal for OOK is: $$s_m(t)=A_mg(t)\cos(2\pi f_ct);\quad 0\leq t\leq T$$ With imperfect synchronisation, the received signal is: $$r(t)=A_mg(t)\cos(2\pi f_ct+\theta)+n(t)=A_mg(t)\cos(\theta)\cos(2\pi f_ct)-A_mg(t)\sin(\theta)\sin(2\pi f_ct)+n(t)$$ The dimension of the signal space is 1, but the dimension of the received signal space is 2 due to the phase mismatch. The signal demodulator must therefore have 2 correlators, one for the in-phase and one for the quadrature component. The signal demodulator output is: $$r_1=\sqrt{\frac{2}{\varepsilon_g}}\int_0^T A_mg^2(t)\cos(2\pi f_ct+\theta)\cos(2\pi f_ct)dt+n_1\approx\sqrt{\frac{\varepsilon_g}{2}}A_m\cos(\theta)+n_1$$ $$r_2\approx\sqrt{\frac{\varepsilon_g}{2}}A_m\sin(\theta)+n_2$$ This rotates the signal along the signal space circle of radius $\sqrt{\frac{\varepsilon_g}{2}}$. This requires a circular decision region.
The optimal detector uses: $$\hat{m}=\arg\max_{m=1,2}P(s_m)P(r|s_m)$$ The worst case has phase uncertainty uniformly distributed over $[0,2\pi)$. Assuming equiprobable symbols, we get: $$\hat{m}=\begin{cases}1,&\sqrt{r_1^2+r_2^2}<V_T\\2,&\sqrt{r_1^2+r_2^2}\geq V_T\end{cases}$$ Which depends only on the envelope of the received signal (the threshold $V_T$ is expressed in terms of a Bessel function). At high SNRs: $$r_1^2+r_2^2\approx\frac{1}{2}A_m^2\varepsilon_g(\cos^2(\theta)+\sin^2(\theta))=\begin{cases}\frac{1}{2}\varepsilon_g,&A_m=1\\0,&A_m=0\end{cases}$$ Which is independent of $\theta$.
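A quick numerical check of the envelope claim (the pulse energy and test phases are arbitrary assumptions): the rotated pair $(r_1,r_2)$ changes with $\theta$, but the envelope $r_1^2+r_2^2$ does not.

```python
import numpy as np

# Noiseless, high-SNR view of non-coherent OOK: the envelope is theta-invariant.
eps_g = 2.0                                  # pulse energy (assumed value)
thetas = np.array([0.0, 0.7, 2.1, 5.5])      # arbitrary phase mismatches

# "On" symbol (A_m = 1): projections onto the two correlators
r1 = np.sqrt(eps_g / 2) * np.cos(thetas)
r2 = np.sqrt(eps_g / 2) * np.sin(thetas)
env_on = r1 ** 2 + r2 ** 2                   # = eps_g / 2 for every theta

env_off = np.zeros_like(thetas)              # "Off" symbol (A_m = 0)
```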
=== Non-coherent FSK demodulation ===
Similarly to non-coherent OOK, we can demodulate binary FSK: we correlate against the sine and cosine of each signalling frequency, and decide in favour of the frequency with the larger envelope. This can be extended to M-ary FSK by selecting the largest of the $M$ envelopes.
=== Differential Modulation ===
Differential MPSK modulation involves precoding of the information symbol sequence $b(n)$ into a symbol sequence $\delta(n)$ that is then input to an MPSK modulator. The DMPSK demodulator needs to invert the precoding. In MPSK each symbol value determines the actual value of the phase, but in DMPSK it determines the phase change from the previous signalling interval's phase, making it a modulation with memory. Differential demodulation is used in situations where the MPSK demodulator's carrier reference has a fixed (or slowly varying) phase mismatch $\varphi$, the value of which we do not know (it may be zero). Differential modulation leads to better symbol error performance in this setting since it is robust against a fixed unknown phase mismatch. If the demodulator makes an error, there are bit errors over that bit and the next one, due to the differential coding nature.
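The robustness to a fixed phase offset can be sketched for DBPSK (the bit sequence and offset value are illustrative assumptions; noise is omitted for clarity):

```python
import numpy as np

rng = np.random.default_rng(1)
bits = rng.integers(0, 2, 20)

# Precoding: delta(n) = delta(n-1) XOR b(n); transmit phase 0 or pi
delta = np.zeros(bits.size + 1, dtype=int)
for n, b in enumerate(bits):
    delta[n + 1] = delta[n] ^ b
tx = np.exp(1j * np.pi * delta)          # BPSK symbols from the coded sequence

phi = 1.234                              # fixed unknown phase mismatch (assumed)
rx = tx * np.exp(1j * phi)               # received signal

# Differential detection: compare each symbol with the previous one;
# the common offset phi cancels in rx[n] * conj(rx[n-1])
decisions = (np.real(rx[1:] * np.conj(rx[:-1])) < 0).astype(int)
```

Because only phase *differences* are detected, the decisions recover the original bits despite the unknown offset.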
==== Error probability ====
No matter the demodulation, since the detector must make a determination in the presence of noise, there is always a chance of making an error: deciding on a signal other than the one that was transmitted.
=== Binary PAM ===
For Binary PAM, the probability we make an error given $s_2$ was sent is: $$P(err|s_2)=P(r>0|s_2)$$ The received signal is distributed as $r|s_2\sim\mathcal{N}\left(-\sqrt{\varepsilon_b},\frac{N_0}{2}\right)$, making the probability of an error equal to: $$P(err|s_2)=Q\left(\frac{0+\sqrt{\varepsilon_b}}{\sqrt{\frac{N_0}{2}}}\right)=Q\left(\sqrt{\frac{2\varepsilon_b}{N_0}}\right)$$ Due to symmetry, the error is the same for $s_1$. The bit error probability is: $$P_b=\frac{1}{2}P(err|s_1)+\frac{1}{2}P(err|s_2)=Q\left(\sqrt{\frac{2\varepsilon_b}{N_0}}\right)$$
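The theoretical $P_b=Q\left(\sqrt{\frac{2\varepsilon_b}{N_0}}\right)$ can be compared against a Monte Carlo simulation. The energy, noise level and sample count below are illustrative assumptions:

```python
import numpy as np
from math import erfc, sqrt

def Q(x):
    # Gaussian tail probability via the complementary error function
    return 0.5 * erfc(x / sqrt(2))

rng = np.random.default_rng(2)
Eb, N0, n = 1.0, 0.25, 200_000           # Eb/N0 = 4 (linear), assumed values
bits = rng.integers(0, 2, n)
s = np.sqrt(Eb) * (1 - 2 * bits)         # 0 -> +sqrt(Eb), 1 -> -sqrt(Eb)
r = s + rng.normal(0.0, np.sqrt(N0 / 2), n)

# Decide bit 1 when r < 0; count disagreements with the transmitted bits
ber = np.mean((r < 0) != (bits == 1))
```

The simulated `ber` lands close to `Q(sqrt(2 * Eb / N0))`.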
=== Binary orthogonal signals ===
For binary orthogonal signals, the signal is: $$s_m(t)=\sqrt{\varepsilon_b}\phi_m(t)$$ Where $\phi_1(t),\phi_2(t)$ are orthonormal. The ML detector uses: $$\hat{m}=\arg\min_m||r-s_m||$$ Being that the constellation point closest to the received point is assumed to be transmitted. The error event involves 2 normally distributed variables, unlike the single one for BPSK. The effective noise $n=n_2-n_1$ is distributed as: $$n\sim\mathcal{N}(0,N_0)$$ The error probability is thus: $$P(err|s_1)=P(n_2-n_1>\sqrt{\varepsilon_b})=Q\left(\sqrt{\frac{\varepsilon_b}{N_0}}\right)$$ The symbols are equiprobable and the errors symmetric, making the overall bit error: $$P_b=Q\left(\sqrt{\frac{\varepsilon_b}{N_0}}\right)$$ For the same energy, binary PAM has better error performance than binary orthogonal signalling. Alternatively, binary PAM can achieve the same error probability with half the energy. This is because for the same energy, the distance between constellation points is greater for 2-PAM.
=== M-PAM ===
For M-PAM, the signal waveforms are written as: $$s_m(t)=A_mg(t)=A_m\sqrt{\varepsilon_g}\phi(t);\quad 0\leq t\leq T$$ Where $A_m=2m-1-M$ gives the set of possible amplitudes, $\phi(t)=\frac{g(t)}{\sqrt{\varepsilon_g}}$ and the pulse energy is $\varepsilon_g$, making the minimum distance between constellation points $d_{min}=2\sqrt{\varepsilon_g}$. The signal demodulator's output is the decision variable: $$r=\langle r(t),\phi(t)\rangle=A_m\sqrt{\varepsilon_g}+n;\quad r\sim\mathcal{N}\left(A_m\sqrt{\varepsilon_g},\frac{N_0}{2}\right)$$ For the outer points ($m=1,M$), the error probability is: $$P(err|s_m)=Q\left(\sqrt{\frac{2\varepsilon_g}{N_0}}\right)$$ For inner points ($m=2,\dots,M-1$): $$P(err|s_m)=P(|r-s_m|>\sqrt{\varepsilon_g})=2Q\left(\sqrt{\frac{2\varepsilon_g}{N_0}}\right)$$ The symbol error probability is: $$P_e=\frac{1}{M}\left(2Q\left(\sqrt{\frac{2\varepsilon_g}{N_0}}\right)+(M-2)\cdot2Q\left(\sqrt{\frac{2\varepsilon_g}{N_0}}\right)\right)=\frac{2(M-1)}{M}Q\left(\sqrt{\frac{2\varepsilon_g}{N_0}}\right)$$ The average symbol energy is: $$\varepsilon_{avg}=\frac{1}{M}\sum_{m=1}^MA_m^2\varepsilon_g=\frac{\varepsilon_g(M^2-1)}{3}$$ The average bit energy is useful to compare with other modulation schemes: $$\varepsilon_b=\frac{\varepsilon_{avg}}{\log_2M}=\frac{\varepsilon_g(M^2-1)}{3\log_2M}$$ After substitution the symbol error probability is: $$P_e=\frac{2(M-1)}{M}Q\left(\sqrt{\frac{6\log_2M\,\varepsilon_b}{(M^2-1)N_0}}\right)$$
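The M-PAM symbol error formula is easy to evaluate; a small sketch (the $\varepsilon_b/N_0$ value is an arbitrary assumption) also checks that $M=2$ reduces to the binary PAM result:

```python
from math import erfc, sqrt, log2

def Q(x):
    return 0.5 * erfc(x / sqrt(2))

def pam_symbol_error(M, ebn0):
    # P_e = 2(M-1)/M * Q(sqrt(6 log2(M) / (M^2-1) * Eb/N0))
    return 2 * (M - 1) / M * Q(sqrt(6 * log2(M) / (M ** 2 - 1) * ebn0))

ebn0 = 4.0                               # Eb/N0 in linear units (assumed)
p2 = pam_symbol_error(2, ebn0)           # reduces to Q(sqrt(2 * Eb/N0))
p4 = pam_symbol_error(4, ebn0)           # worse at the same Eb/N0
```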
M-PAM bandpass signals are: $$s_m(t)=A_mg(t)\cos(2\pi f_ct)=A_m\sqrt{\frac{\varepsilon_g}{2}}\phi(t);\quad 0\leq t\leq T$$ The minimum distance between constellation points is $d_{min}=\sqrt{2\varepsilon_g}$. The symbol error probability is: $$P_e=\frac{2(M-1)}{M}Q\left(\sqrt{\frac{\varepsilon_g}{N_0}}\right)$$ The average symbol energy is: $$\varepsilon_{avg}=\frac{\varepsilon_g(M^2-1)}{6}$$ $$\varepsilon_b=\frac{\varepsilon_g(M^2-1)}{6\log_2M}$$ The symbol error probability becomes: $$P_e=\frac{2(M-1)}{M}Q\left(\sqrt{\frac{6\log_2M\,\varepsilon_b}{(M^2-1)N_0}}\right)$$ This is the same performance as M-ary baseband PAM.
=== M-PSK ===
In M-ary PSK, all M signals have the same energy, so all constellation points lie on a circle. M-ary PSK signals can be represented as: $$s_m(t)=g(t)\cos(\theta_m)\cos(2\pi f_ct)-g(t)\sin(\theta_m)\sin(2\pi f_ct)$$ Where $g(t)$ is the pulse shape and $\theta_m=\frac{2\pi}{M}(m-1)$ is the phase conveying the information. This forms the orthonormal basis: $$\{\phi_1(t),\phi_2(t)\}=\left\{\sqrt{\frac{2}{\varepsilon_g}}g(t)\cos(2\pi f_ct),-\sqrt{\frac{2}{\varepsilon_g}}g(t)\sin(2\pi f_ct)\right\}$$ This assumes $f_c>>\frac{1}{T}$. For QPSK, the probability of correct transmission is: $$P(correct|s_1)=P(r_1>0,r_2>0|s_1)=(1-P_{BPSK})^2$$ So the symbol error probability is: $$P(err|s_1)=1-(1-P_{BPSK})^2=2P_{BPSK}-P_{BPSK}^2=2Q\left(\sqrt{\frac{2\varepsilon_b}{N_0}}\right)-\left(Q\left(\sqrt{\frac{2\varepsilon_b}{N_0}}\right)\right)^2$$ For $M\geq8$, the symbol error probability calculations need a transformation to polar coordinates and can only be approximated. The bit error probability is the average proportion of erroneous bits per symbol. This depends on the mapping from the k-bit code-word to the symbol. At high SNR, most symbol errors involve erroneous detection of the transmitted signal as a nearest neighbour signal. Therefore fewer bit errors occur by ensuring that neighbouring symbols differ by only one bit, being Gray coding. At high SNR, the bit error probability can be approximated as the probability of picking a neighbouring signal, so assuming Gray coding: $$P_b\approx\frac{1}{k}P_e\approx\frac{P_e}{\log_2M}$$ Where $k=\log_2M$ is the number of bits per symbol.
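The QPSK expressions above can be sketched directly (the $\varepsilon_b/N_0$ value is an arbitrary assumption):

```python
from math import erfc, sqrt

def Q(x):
    return 0.5 * erfc(x / sqrt(2))

ebn0 = 5.0                              # Eb/N0 in linear units (assumed)
p = Q(sqrt(2 * ebn0))                   # per-quadrature BPSK error

Pe = 2 * p - p ** 2                     # QPSK symbol error = 1 - (1 - p)^2
Pb_approx = Pe / 2                      # Gray coding, k = log2(4) = 2 bits
```

At high SNR `Pb_approx` is close to `p`, consistent with QPSK matching BPSK bit error performance.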
=== QAM ===
QAM is the most widely used constellation, due to its spectral efficiency. QAM allows signals to have different amplitudes and impresses information bits on each of the quadrature carriers. The important performance parameters are the average energy and the minimum distance in the signal constellation. An orthonormal basis for the signal space is: $$\{\phi_1(t),\phi_2(t)\}=\left\{\sqrt{\frac{2}{\varepsilon_g}}g(t)\cos(2\pi f_ct),-\sqrt{\frac{2}{\varepsilon_g}}g(t)\sin(2\pi f_ct)\right\}$$ For QAM there are many different constellations possible, so we need to select the one with the greatest minimum distance and efficiency, while still considering ease of implementation. In general rectangular QAM is most frequently used, due to its ease. When $M=2^k$, with $k$ even, M-ary QAM is equivalent to two $\sqrt{M}$-ary PAM signals on quadrature carriers, each with half the equivalent QAM power. If $P_{\sqrt{M}}$ is the probability of symbol error of $\sqrt{M}$-ary PAM, then similar to the case of QPSK, the probability of a correct decision is given by: $$P(\text{no error})=(1-P_{\sqrt{M}})^2\implies P_e=1-(1-P_{\sqrt{M}})^2$$
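The two-PAM-arms decomposition can be sketched numerically. This is a sketch under assumptions: the per-arm $\sqrt{M}$-PAM error is written in terms of the average $\varepsilon_b/N_0$ (the standard square-QAM form), and the $\varepsilon_b/N_0$ value is arbitrary.

```python
from math import erfc, sqrt, log2

def Q(x):
    return 0.5 * erfc(x / sqrt(2))

def qam_symbol_error(M, ebn0):
    m = int(sqrt(M))                     # each quadrature arm is sqrt(M)-ary PAM
    # Per-arm PAM error with half the QAM energy on each arm:
    p_pam = 2 * (m - 1) / m * Q(sqrt(3 * log2(M) / (M - 1) * ebn0))
    return 1 - (1 - p_pam) ** 2          # P_e = 1 - (1 - P_sqrtM)^2

ebn0 = 10.0                              # Eb/N0 in linear units (assumed)
p4 = qam_symbol_error(4, ebn0)           # 4-QAM coincides with QPSK
p16 = qam_symbol_error(16, ebn0)
```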
=== Orthogonal signalling ===
Orthogonal signalling uses the waveforms: $$s_m(t)=\sqrt{\varepsilon_s}\phi_m(t),\quad m=1,\dots,M$$ The distance between any two signals is: $$d_{min}=\sqrt{2\varepsilon_s}=\sqrt{2\log_2(M)\varepsilon_b}$$ The ML criterion can be expressed as: $$\hat{m}=\arg\min_m||r-s_m||=\arg\max_m[\langle r,s_m\rangle+\eta_m]$$ Where $\eta_m=-\frac{1}{2}\varepsilon_m$ is a bias term that compensates for signal sets that have unequal energies, e.g. PAM. Since all orthogonal signals have equal energy, the ML rule reduces to selecting the greatest $r_n$ of the demodulator output: if $r_m>r_k\;\forall k\neq m$, then $s_m$ was transmitted.
If $s_m$ is transmitted, then: $$r_m\sim\mathcal{N}\left(\sqrt{\varepsilon_s},\frac{N_0}{2}\right)$$ $$r_k\sim\mathcal{N}\left(0,\frac{N_0}{2}\right),\quad k\neq m$$ The symbol error probability is: $$P_e=1-\frac{1}{\sqrt{\pi N_0}}\int_{-\infty}^\infty\left[1-Q\left(\frac{r_m}{\sqrt{N_0/2}}\right)\right]^{M-1}e^{-\frac{(r_m-\sqrt{\varepsilon_s})^2}{N_0}}dr_m$$ The bit error probability is: $$P_b=2^{k-1}\frac{P_e}{M-1}=\frac{MP_e}{2(M-1)}\approx\frac{P_e}{2}$$ The error probability $P_e$ is upper bounded by the union bound over the $M-1$ pairwise error events: $$P_e\leq(M-1)P_b^{bin}=(M-1)Q\left(\sqrt{\frac{\varepsilon_s}{N_0}}\right)<Me^{-\frac{\varepsilon_s}{2N_0}}=e^{-\frac{k}{2}\left(\frac{\varepsilon_b}{N_0}-2\ln2\right)}$$ Hence the error probability can be made arbitrarily small as $k\to\infty$, provided that $\varepsilon_b/N_0>2\ln2$. A tighter upper bound is given by: $$P_e<2e^{-k\left(\sqrt{\frac{\varepsilon_b}{N_0}}-\sqrt{\ln2}\right)^2}$$ This achieves $\varepsilon_b/N_0>\ln2$, the Shannon limit for the AWGN channel, and is valid for $\ln2\leq\varepsilon_b/N_0\leq4\ln2$.
====== Efficiency ======
Spectral efficiency is the ratio of bitrate $R$ to bandwidth $W$. Bandwidth-limited schemes are QAM, PAM, PSK and DPSK, where $R/W>1$; increasing the number of bits per symbol improves the efficiency. Power-limited schemes are orthogonal signals, where $R/W<1$; increasing the number of bits per symbol decreases the efficiency.
The bandwidth of a pulse is the reciprocal of its length: $$W=\frac{1}{T}$$ This is the bandwidth of the centre lobe of the sinc function, the Fourier transform of the rectangular pulse. The Nyquist bandwidth applies to digital systems of symbol period $T$, for which the minimum usable bandwidth is $\frac{1}{2T}$.
The spectral efficiency of a modulation scheme is defined as: $$\nu=\frac{R_b}{W}$$ Where $R_b=\frac{k}{T}$ is the bitrate and $W$ is the bandwidth required. The spectral efficiency is a performance indicator for fundamental comparison of modulation schemes with respect to power and bandwidth usage.
Baseband PAM has a bandwidth of: $$B=\frac{1}{2T}$$ Bandpass PAM and QAM have a bandwidth of: $$B=2W=2\frac{1}{2T}=\frac{1}{T}$$ QAM uses the same frequencies as PAM, so has the same bandwidth. PSK has a bandwidth of: $$B=\frac{1}{T}$$ This is because PSK is a subset of QAM where the in-phase and quadrature amplitudes are $\cos(\theta_m)$ and $\sin(\theta_m)$ respectively.
Baseband PAM has a spectral efficiency of: $$\nu=\frac{R_b}{1/(2T)}=2k=2\log_2M$$ Bandpass PAM has a spectral efficiency of: $$\nu=\frac{R_b}{1/T}=k=\log_2M$$ PSK has a spectral efficiency of: $$\nu=\frac{R_b}{1/T}=\log_2M$$ QAM has a spectral efficiency of: $$\nu=\frac{R_b}{1/T}=\log_2M$$
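These efficiencies follow directly from $\nu=R_b/W$ with $R_b=k/T$; a small sketch (scheme groupings as in the text above):

```python
from math import log2

def nu_bandpass_pam_psk_qam(M):
    return log2(M)                # W = 1/T    -> nu = k

def nu_baseband_pam(M):
    return 2 * log2(M)            # W = 1/(2T) -> nu = 2k

def nu_orthogonal(M):
    return 2 * log2(M) / M        # W = M/(2T) -> nu = 2k/M, vanishes as M grows

effs = [nu_orthogonal(M) for M in (2, 4, 8, 16, 64)]
```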
For the orthogonal signalling methods Pulse Position Modulation (PPM) and Frequency Shift Keying (FSK), the spectral efficiency is: $$\nu=\frac{R_b}{\frac{M}{2T}}=\frac{2k}{M}=\frac{2\log_2M}{M}\to0\text{ as }M\to\infty$$ For PPM, the pulse duration is $T/M$, requiring $M$ times the PAM bandwidth. For FSK, the minimum frequency separation is $\frac{1}{2T}$.
For PAM/QAM/PSK, increasing $M$ gives more bandwidth efficiency and less power efficiency; these are appropriate for bandwidth-limited channels with plenty of power. For PPM/FSK, increasing $M$ gives less bandwidth efficiency and more power efficiency; these are appropriate in power-limited channels with plenty of bandwidth. Theoretically the error probability can be made arbitrarily small as long as $\varepsilon_b/N_0>\ln2\approx-1.6\text{ dB}$, but this would require infinite bandwidth.
====== Bandlimited communications ======
Channels act as an LTI bandpass filter, removing all signal content outside of a band. This can distort the output signal; even without noise, the received signal can lie outside the original signal space.
Ideally channels should have infinite bandwidth with a flat amplitude response and a linear phase response. In practice channels have finite bandwidth, non-flat amplitude responses and nonlinear phase responses. Signal distortions in amplitude and phase due to bandwidth limitations of the channel result in Inter-Symbol Interference (ISI).
When a signal $s(t)=\sum_{n=0}^{\infty}I_ng_T(t-nT)$, with symbol rate $1/T$ and symbol sequence $\{I_n\}$, is transmitted through a bandlimited channel, the received signal is: $$r(t)=\sum_{n=0}^\infty I_nh(t-nT)+z(t)$$ Where $h(t)=(g_T\star c)(t)=\int_{-\infty}^\infty g_T(\tau)c(t-\tau)d\tau$ and $z(t)$ is the AWGN. The signal demodulator output is: $$y(t)=\sum_{n=0}^\infty I_nx(t-nT)+\nu(t)$$ Where $x(t)=(g_T\star c\star g_R)(t)$ and $\nu(t)=(z\star g_R)(t)$ is the filtered noise. The received filtered output is sampled at a rate of $1/T$, resulting in: $$y(kT)=\sum_{n=0}^\infty I_nx((k-n)T)+\nu(kT)=I_kx(0)+\underbrace{\sum_{n=0,n\neq k}^\infty I_nx((k-n)T)}_{\text{ISI}}+\nu(kT)$$ If the pulse shaping functions can be designed such that the overall response $x$ is a sinc function, then sampling at integer multiples of the period recovers the original symbols without ISI.
The Nyquist Pulse-Shaping Criterion states that a necessary and sufficient condition for $x(t)$ to satisfy $$x(kT)=\begin{cases}1,&k=0\\0,&k\neq0\end{cases}$$ is that its frequency response $X(f)$ satisfies: $$\sum_{m=-\infty}^{\infty}X\left(f+\frac{m}{T}\right)=T$$ That is, the sum of all shifted versions of the frequency response is flat. As a result, zero ISI cannot be achieved when $R>2W$, with $R$ being the symbol transmission rate and $W$ being the baseband channel bandwidth. When $R=2W$, the only choice for $X(f)$ is the rectangular pulse shape, corresponding to the sinc pulse in the time domain. When $R<2W$, there are many choices for $X(f)$ satisfying the criterion. To avoid ISI, the maximum symbol rate is $R=2W$, making the shortest signalling interval $T=\frac{1}{2W}$.
A popular choice when $R<2W$ is the raised cosine spectrum: $$X_{rc}(f)=\begin{cases}T,&0\leq|f|\leq\frac{1-\beta}{2T}\\\frac{T}{2}\left(1+\cos\left[\frac{\pi T}{\beta}\left(|f|-\frac{1-\beta}{2T}\right)\right]\right),&\frac{1-\beta}{2T}<|f|\leq\frac{1+\beta}{2T}\\0,&|f|>\frac{1+\beta}{2T}\end{cases}$$ Where $\beta\in[0,1]$ is the roll-off factor and the occupied bandwidth is given by $(1+\beta)\frac{1}{2T}$. In the time domain, the signal corresponding to the raised cosine spectrum is: $$x_{rc}(t)=\text{sinc}\left(\frac{t}{T}\right)\frac{\cos(\pi\beta t/T)}{1-4\beta^2t^2/T^2}$$ For $\beta>0$, the side lobes are smaller than the side lobes of the sinc pulse ($\beta=0$). Smaller side lobes are better where there are timing errors because they lead to smaller ISI components. We call $2W$ the Nyquist frequency and see that $\beta=0$ corresponds to a symbol rate at the Nyquist frequency, with the rectangular spectrum of the sinc pulse. For a larger value of $\beta$ we need bandwidth beyond the Nyquist frequency: $$W=\frac{1+\beta}{2T}$$ We can say that $\beta=0.5$ involves an excess bandwidth of $50\%$, and similarly $\beta=1$ involves an excess bandwidth of $100\%$. Thus choosing $\beta$ is a tradeoff between robustness against timing errors and transmission speed.
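A sketch verifying that the raised cosine pulse keeps the zero-ISI sampling property; the removable singularity at $t=\pm\frac{T}{2\beta}$ is filled in with its limit $\frac{\pi}{4}\text{sinc}\left(\frac{1}{2\beta}\right)$ (the parameter values are arbitrary):

```python
import numpy as np

def raised_cosine(t, T, beta):
    # x_rc(t) = sinc(t/T) * cos(pi*beta*t/T) / (1 - 4 beta^2 t^2 / T^2)
    t = np.asarray(t, dtype=float)
    num = np.sinc(t / T) * np.cos(np.pi * beta * t / T)
    den = 1 - (2 * beta * t / T) ** 2
    singular = np.abs(den) < 1e-12
    # At t = +-T/(2*beta) both numerator and denominator vanish;
    # L'Hopital gives the limit (pi/4) * sinc(1/(2*beta)).
    return np.where(singular,
                    (np.pi / 4) * np.sinc(1 / (2 * beta)),
                    num / np.where(singular, 1.0, den))

T, beta = 1.0, 0.5
k = np.arange(-4, 5)
vals = raised_cosine(k * T, T, beta)

# Still 1 at t = 0 and 0 at every other multiple of T
assert abs(vals[k == 0][0] - 1.0) < 1e-9
assert np.all(np.abs(vals[k != 0]) < 1e-9)
```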
A transmitted signal is $X(f)=G_T(f)C(f)G_R(f)$, which needs to satisfy the Nyquist criterion for zero ISI. The raised cosine spectrum is one possible choice for $X(f)$. $G_T(f)$ determines the transmitted pulse shape, $C(f)$ is the channel (which we cannot control) and $G_R(f)$ is the receive filter. We can design the filters to compensate for the channel at the transmitter: $$|G_T(f)|=\frac{|X(f)|^\frac{1}{2}}{|C(f)|}$$ $$|G_R(f)|=|X(f)|^\frac{1}{2}$$ Alternatively we can design to compensate for the channel at both the transmitter and receiver: $$|G_T(f)|=\frac{|X(f)|^\frac{1}{2}}{|C(f)|^\frac{1}{2}}$$ $$|G_R(f)|=\frac{|X(f)|^\frac{1}{2}}{|C(f)|^\frac{1}{2}}$$ The second design is optimal in terms of error probability for Gaussian noise. Both designs use $P(f)=X(f)^\frac{1}{2}$; for the raised cosine spectrum this is the Square Root Raised Cosine (SRRC), often abbreviated to RRC. For baseband channels, the Nyquist criterion says that zero ISI can be achieved if the symbol transmission rate is $R\leq2W$, where $W$ is the channel bandwidth. For bandpass channels with bandwidth $W$, the previous analysis still holds if the signals are replaced by their baseband equivalents. In this case the Nyquist criterion says that zero ISI can be achieved if $R\leq W$ (due to the different ways of defining bandwidth).
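A small numerical sketch of the split-compensation design: with $|G_T(f)|=|G_R(f)|=\sqrt{|X(f)|/|C(f)|}$, the end-to-end magnitude response recovers the target Nyquist spectrum $X(f)$. The channel magnitude below is a made-up example, nonzero over the band:

```python
import numpy as np

T = 1.0
f = np.linspace(-0.9 / T, 0.9 / T, 181)

# Raised cosine with beta = 1 reduces to X(f) = T cos^2(pi f T / 2) on |f| <= 1/T
X = T * np.cos(np.pi * f * T / 2) ** 2 * (np.abs(f) <= 1 / T)
C = 1 / (1 + (f * T) ** 2)     # hypothetical channel magnitude |C(f)|

GT = np.sqrt(X / C)            # transmit filter magnitude
GR = np.sqrt(X / C)            # receive filter magnitude

# The cascade |G_T||C||G_R| equals the target spectrum |X|
assert np.allclose(GT * C * GR, X)
```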
For bandpass channels, the Nyquist frequency equals the channel bandwidth $W$, so $\beta=0$ corresponds to a symbol rate at the Nyquist frequency with the rectangular spectrum of the sinc pulse. For a larger $\beta$ we need bandwidth beyond the Nyquist frequency, hence for bandpass channels we have: $$W=\frac{1+\beta}{T}$$ Again, choosing the value of $\beta$ is a trade-off between robustness against timing errors and transmission speed.
An eye diagram is constructed by placing all the symbols into the same window of width $T$. At position 0, all the symbols resolve to 0 or 1, but vary in between integer positions. A low $\beta$ causes a wide spread of traces, whereas a high $\beta$ clusters them more. As such, it is easy to see the effect of timing errors on the diagram. The plot carves out a region resembling an eye, giving the plot its name. As the eye closes, ISI increases; as it opens, ISI decreases. Other diagnostics are the noise margin (eye opening) and sensitivity to timing error (eye width). Channel distortions lead to a smaller eye opening width and a smaller eye opening at $t=0$. Eye diagrams can easily be extended to multiple levels.
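A sketch of how the eye-diagram data is built (parameters and the BPSK source are illustrative assumptions): an oversampled, sinc-shaped waveform is sliced into windows of one symbol period, and with no channel distortion every trace passes through $\pm1$ at the sampling instant.

```python
import numpy as np

rng = np.random.default_rng(0)
sps = 16                                    # samples per symbol
nsym = 200
symbols = rng.choice([-1.0, 1.0], nsym)     # BPSK symbols

# beta = 0 raised cosine (sinc) pulse, truncated to +-8 symbol periods
t = np.arange(-8 * sps, 8 * sps + 1) / sps
pulse = np.sinc(t)

# Upsample by sps and pulse-shape
upsampled = np.kron(symbols, np.r_[1.0, np.zeros(sps - 1)])
waveform = np.convolve(upsampled, pulse)

offset = 8 * sps                            # delay of the pulse-shaping filter
# One window per middle symbol, centred on its sampling instant
eye = np.array([waveform[offset + k * sps - sps // 2: offset + k * sps + sps // 2]
                for k in range(20, 180)])

centre = eye[:, sps // 2]                   # values at the sampling instants
assert np.allclose(np.abs(centre), 1.0, atol=1e-6)
```

Plotting all rows of `eye` over one window of $T$ produces the eye diagram itself.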
====== Equalisation ======
Channel equalisers are useful for detecting data in the presence of ISI in many types of channels. The sampled output of a receive filter is: $$y_k=I_kx_0+\sum_{n=0,n\neq k}^\infty I_nx_{k-n}+\nu_k$$

If the channel memory is $L$, then the ML sequence detector simplifies to a product of likelihood values: $$(\hat{I}_0,\dots,\hat{I}_N)=\arg\max_{(i_0,\dots,i_N)}\prod_{k=0}^N p(u_k|I_k=i_k,I_{k-1}=i_{k-1},\dots,I_{k-L}=i_{k-L})$$
===== Viterbi Algorithm =====

The Viterbi algorithm finds the shortest (or longest) path in a trellis. The main idea is to incrementally calculate the shortest path length along the way.
We can construct a trellis where the path lengths are the log-likelihood values $\ln(p(u_k|I_k=i_k,I_{k-1}=i_{k-1},\dots,I_{k-L}=i_{k-L}))$, so the ML sequence estimate corresponds to the longest path through the trellis, which the Viterbi algorithm finds without enumerating every candidate sequence.
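A compact sketch of the Viterbi recursion (illustrative, not the notes' notation): dynamic programming over stages, keeping only the best path into each state. Swapping `min` for `max`, with log-likelihood branch metrics, gives ML sequence detection.

```python
def viterbi(n_states, n_stages, branch):
    # branch(t, p, s): cost of moving from state p to state s at stage t
    cost = [0.0] * n_states        # shortest path length ending in each state
    back = []                      # backpointers for traceback
    for t in range(n_stages):
        new_cost, choice = [], []
        for s in range(n_states):
            prev = min(range(n_states), key=lambda p: cost[p] + branch(t, p, s))
            new_cost.append(cost[prev] + branch(t, prev, s))
            choice.append(prev)
        back.append(choice)
        cost = new_cost
    s = min(range(n_states), key=lambda q: cost[q])   # cheapest final state
    path = [s]
    for choice in reversed(back):  # walk the backpointers to recover the path
        s = choice[s]
        path.append(s)
    return path[::-1], min(cost)

# Toy trellis with two states, where staying in state 0 is free
path, total = viterbi(2, 3, lambda t, p, s: 0.0 if p == s == 0 else 1.0)
assert path == [0, 0, 0, 0] and total == 0.0
```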
===== Linear equalisation =====
+ | |||
A linear equaliser tries to invert the FIR model of the channel. Zero-forcing equalisation chooses the equaliser taps so that the convolution of the channel response with the equaliser response is a (possibly delayed) unit impulse, forcing the ISI terms to zero; the equaliser output is then a delayed copy of the transmitted symbols plus filtered noise.
+ | |||
Given input $u_k=f_0I_k+\sum_{n=1}^Lf_nI_{k-n}+\eta_k$, a linear equaliser with $2K+1$ taps forms the estimate $\hat{I}_k=\sum_{j=-K}^Kc_ju_{k-j}$. Zero forcing chooses the taps $c_j$ so that the combined response $q_n=\sum_jc_jf_{n-j}$ equals 1 at $n=0$ and 0 at the other lags within reach of the filter.
+ | |||
As the value of $K$ increases, the error probability decreases before levelling out. A large amount of noise causes the probability to level out at a smaller value of $K$. An infinite number of filter taps would be able to invert the channel filter exactly, but in the presence of noise the amplification of noise by many taps damages the signal.
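A sketch of finite-length zero forcing via least squares (the channel taps and filter length are made-up examples): build the convolution matrix of the channel $f$ and solve for taps $c$ so that $f*c$ is as close as possible to a delayed unit impulse.

```python
import numpy as np

f = np.array([0.1, 1.0, 0.25])          # example channel, f_1 is the main tap
K = 5                                   # taps c_{-K} .. c_K
n_taps = 2 * K + 1

# Convolution matrix: column j holds f shifted down by j rows
F = np.zeros((len(f) + n_taps - 1, n_taps))
for j in range(n_taps):
    F[j:j + len(f), j] = f

target = np.zeros(F.shape[0])
target[K + 1] = 1.0                     # delta at equaliser delay + main-tap delay
c, *_ = np.linalg.lstsq(F, target, rcond=None)

q = np.convolve(f, c)                   # combined channel + equaliser response
assert abs(q[K + 1] - 1.0) < 1e-2       # main tap approximately 1
assert np.all(np.abs(np.delete(q, K + 1)) < 1e-2)   # residual ISI is small
```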
+ | |||
===== Minimum Mean Squared Error =====
+ | |||
Another linear equaliser is the Minimum Mean Squared Error (MMSE) equaliser, which minimises the MSE of the equaliser output, defined as: $$J(c)=E[|I_k-\hat{I}_k|^2]=E\left[\left|I_k-\sum_{j=-K}^Kc_ju_{k-j}\right|^2\right]$$ When the noise is small for all frequencies, the MMSE solution approaches the zero-forcing solution; unlike zero forcing, it balances residual ISI against noise enhancement instead of inverting the channel exactly.
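A sketch of the MMSE solution via the normal equations $Rc=p$, where $R$ is the autocorrelation matrix of the received samples and $p$ the cross-correlation with the desired symbol. Here both are estimated from simulated BPSK data through a made-up channel (all parameters are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
f = np.array([0.1, 1.0, 0.25])               # example channel, main tap f_1
I = rng.choice([-1.0, 1.0], 50_000)          # BPSK symbols
u = np.convolve(I, f)[:len(I)] + 0.1 * rng.standard_normal(len(I))

K = 5
# Data matrix whose rows are [u_{k+K}, ..., u_{k-K}]
rows = np.array([u[k - K:k + K + 1][::-1] for k in range(K + 1, len(u) - K)])
# Desired symbol: I_k delayed by the channel's main-tap delay (1 here)
d = I[K:len(u) - K - 1]

R = rows.T @ rows / len(rows)                # sample autocorrelation matrix
p = rows.T @ d / len(rows)                   # sample cross-correlation vector
c = np.linalg.solve(R, p)                    # MMSE taps

I_hat = rows @ c
assert np.mean(np.sign(I_hat) != d) < 0.01   # almost all symbols recovered
```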
notes/elen90057.1633310542.txt.gz · Last modified: 2023/05/30 22:32 (external edit)