ELEN30012 - Signals and Systems

Fundamental concepts

Signal definitions

  • Continuous-time signals map \(\mathbb{R}\to\mathbb{R}\); the signal is denoted \(x(t)\).
  • Discrete-time signals map \(\mathbb{Z}\to\mathbb{R}\); the signal is denoted \(x[n]\).
  • The unit step function \(u: \mathbb{R}\to\mathbb{R}\) is defined as

\[u(t)=\begin{cases}1&t\geq 0\\0&t<0\end{cases}\]

  • The unit ramp function \(r: \mathbb{R}\to\mathbb{R}\) is defined as

\[r(t)=\begin{cases}t&t\geq 0\\0&t<0\end{cases}\]

  • The Dirac delta function \(\delta:\mathbb{R}\to\mathbb{R}\cup\{\infty\}\) is defined as the function with the properties:
    1. \(\forall t \in \mathbb{R}\setminus\{0\}, \delta(t)=0\)
    2. \(\forall \epsilon>0\):

\[\int_{-\epsilon}^{\epsilon}\delta(\lambda)d\lambda=1\] It is also of note that \(u(t)=\int_{-\infty}^{t}\delta(\lambda)d\lambda, \forall t\neq 0\). The delta function also has a sifting property, meaning: \[\int_{-\infty}^{\infty}f(\lambda)\delta(\lambda-t_0)d\lambda=f(t_0)\]

  • A signal is of finite duration if there exist constants \(a\) and \(b\) such that \(x(t)=0\) for all \(t<a\) and all \(t>b\). We can then say the signal has duration \(d=b-a\).
  • Rectangular pulse signal of width \(\tau\) is

\[p_\tau (t)=\begin{cases}1&\frac{-\tau}{2}\leq t\leq\frac{\tau}{2}\\0&\text{otherwise}\end{cases}\]

  • Triangular pulse signal of width \(\tau\) is

\[\Lambda_\tau (t)=\begin{cases}1-\frac{2\lvert t\rvert}{\tau}&\frac{-\tau}{2}\leq t\leq\frac{\tau}{2}\\0&\text{otherwise}\end{cases}\]

  • A signal is periodic if \(\exists T>0\) such that

\[\forall t, x(t)=x(t+T)\] If \(T\) is the smallest number with this property, then \(T\) is the fundamental period of the signal. The sum of two periodic functions is periodic if and only if \(\frac{T_1}{T_2}\) is a rational number. Periodic signals can be constructed from finite-duration signals using shift-and-add summations with \(d\) greater than the finite signal's duration. \[y(t)=\sum_{k=-\infty}^{\infty} x(t-kd),\forall t\in \mathbb{R}\]

  • There are also discrete versions of the unit step and ramp functions.
  • The discrete version of the Dirac delta function is the Kronecker delta function.

The sifting property of the delta function also applies to the discrete Kronecker delta function. \[\sum_{i=-\infty}^\infty x[i]\delta[n-i]=x[n]\] The Kronecker delta is referred to as the unit pulse function; the Dirac delta function is referred to as the impulse function.

  • Discrete periodic signals satisfy an analogous definition to their continuous counterparts.

\[x[n]=x[n+L]\] If \(L\) is the least integer with this property, then \(L\) is the fundamental period. The sum of periodic signals has a period equal to the least common multiple of its constituents' periods. \[L_0=lcm(L_1,L_2)\] A discrete sinusoid, \(x[n]=A\cos[\Omega n+\phi]\), is periodic if and only if \(\frac{\Omega}{2\pi}\in\mathbb{Q}\).

  • A continuous signal can be transformed into a discrete one by sampling at a uniform interval.

Sampling at an insufficient frequency can cause the apparent frequency of the sampled signal to be lower than the true frequency. This is known as aliasing. The Shannon-Nyquist sampling theorem states that the sampling frequency must be at least twice the highest frequency present in the signal to avoid aliasing.
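
As a quick numerical illustration (a sketch, not part of the original notes; the frequencies are arbitrary choices), sampling a 9 Hz cosine at 10 Hz produces exactly the same samples as a 1 Hz cosine:

<code python>
# Sketch: sampling a 9 Hz cosine at 10 Hz makes it indistinguishable from a
# 1 Hz cosine, illustrating aliasing (10 Hz is below the Nyquist rate 18 Hz).
import numpy as np

f_signal = 9.0      # true frequency (Hz)
f_sample = 10.0     # sampling frequency (Hz)
n = np.arange(20)   # sample indices
t = n / f_sample    # sampling instants

x_true = np.cos(2 * np.pi * f_signal * t)   # samples of the 9 Hz signal
x_alias = np.cos(2 * np.pi * 1.0 * t)       # samples of a 1 Hz signal

print(np.allclose(x_true, x_alias))  # True: the two sets of samples coincide
</code>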

Systems definitions

Systems convert input signals into output signals. A system's behaviour consists of all input-output pairs \((x,y)\), called trajectories, that satisfy the laws of the system. Continuous-time systems operate on continuous-time signals. Discrete-time systems operate on discrete-time signals.

  • The zero-input response, natural response or unforced response is the output when there is no input; it depends only on the initial conditions.
  • If the input is a step signal and the initial condition is zero, the output is called the step response.
  • If the input is an impulse signal and the initial condition is zero, the output is called the impulse response.
  • The input is sometimes called the forcing term, or driving function, since it is usually the input that causes the system to produce an output.
  • A system is additive if the sum of any two trajectories is also a trajectory in the system.
  • A system is homogeneous if the scalar multiple of a trajectory is also a trajectory in the system.
  • A system is linear if it is both additive and homogeneous.

The linearity theorem states that a system is linear if and only if for trajectories \((x_1,y_1)\), \((x_2,y_2)\) and arbitrary real numbers \(a\) and \(b\), \((ax_1+bx_2,ay_1+by_2)\) is also a trajectory in the system.

  • A system is time-invariant if for any trajectory \((x(t),y(t))\) and any constant \(T\in\mathbb{R}\), the pair \((x(t-T),y(t-T))\) is also a trajectory of the system.
  • A system is causal if, for any time \(t_1\), the output response \(y(t_1)\) resulting from the input \(x(t)\) does not depend on the values of the input \(x(t)\) for \(t>t_1\).

Non-causal systems with time as the independent variable are impossible to build. Non-causal systems arise in image processing, where space is the independent variable.

  • Memory-less systems are those whose output at a given time only depends on the input at that time.
  • Systems with memory have an output at time \(t_1\) that depends on the input at times \(t<t_1\).

Time domain models of systems

N-th order difference equations

The N-th order causal linear time-invariant input/output difference equation is: \[y[n]+\sum^N_{i=1}a_iy[n-i]=\sum_{i=0}^Nb_ix[n-i]\] Here, \(x[n]\) and \(y[n]\) are the input and output, and the coefficients \(a_i\) and \(b_i\) are assumed to be real constants.

One way these can be solved is through recursion. This requires the N initial conditions \(y[-1],y[-2],\ldots,y[-N]\), together with the input values \(x[-1],x[-2],\ldots,x[-N]\). First we set \(n=0\) and obtain \(y[0]\) by solving: \[y[0]=-\sum_{i=1}^Na_iy[-i]+\sum_{i=0}^Nb_ix[-i]\] This is repeated for \(n=1,2,\ldots\); for example, at \(n=N\): \[y[N]=-\sum_{i=1}^Na_iy[N-i]+\sum_{i=0}^Nb_ix[N-i]\]
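
A minimal sketch of this recursion in Python (the first-order equation, coefficients and initial condition below are assumed for illustration):

<code python>
# Sketch: solving y[n] - 0.5*y[n-1] = x[n] by the recursion above,
# with initial condition y[-1] = 2 and a unit-step input.
import numpy as np

a = [-0.5]           # a_1 (coefficient of y[n-1])
b = [1.0]            # b_0 (coefficient of x[n]); higher b_i taken as zero
y_init = [2.0]       # y[-1]
N_steps = 10

x = np.ones(N_steps)            # unit-step input: x[n] = 1 for n >= 0
y = np.zeros(N_steps)
y_past = list(y_init)           # storage for y[n-1]

for n in range(N_steps):
    y[n] = -a[0] * y_past[0] + b[0] * x[n]   # y[n] = -a_1 y[n-1] + b_0 x[n]
    y_past[0] = y[n]

print(y)
</code>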

Unit pulse response

The unit pulse response, denoted \(h[n]\), is defined as the output of the N-th order difference equation when the input is \(x[n]=\delta[n]\) and the initial conditions are zero.

Convolution

The convolution of two discrete-time signals \(x\) and \(y\) is \[(x\star y)[n]=\sum_{i=-\infty}^\infty x[i]y[n-i]\] We can define the delay of a signal as: \[Delay_i(y[n])=y[n-i]\] Writing the input to an LTI system as a sum of delayed pulses, linearity and time invariance give the output as the corresponding sum of delayed pulse responses: \[\sum_{i=-\infty}^\infty x[i]Delay_i(\delta)\to\sum_{i=-\infty}^\infty x[i]Delay_i(h)\] \[\sum_{i=-\infty}^{\infty}x[i]Delay_i(h[n])=\sum_{i=-\infty}^{\infty}x[i]h[n-i]=(x\star h)[n]\] For finite-duration signals, with \(x\) nonzero only on \([n_x,N_x]\) and \(y\) nonzero only on \([n_y,N_y]\): \[(x\star y)[n]=\begin{cases}0,&n<n_x+n_y\\\sum_{i=n_x}^{N_x}x[i]y[n-i],&n_x+n_y\leq n\leq N_x+N_y\\0,&n>N_x+N_y\\\end{cases}\]

We can define an N-th order causal linear time-invariant input/output differential equation as \[\frac{d^Ny}{dt^N}+\sum_{i=0}^{N-1}a_i\frac{d^iy}{dt^i}=\sum_{i=0}^Nb_i\frac{d^ix}{dt^i},t\geq0\] The impulse response is the output when \(x(t)=\delta(t)\) and \(y(0)=y'(0)=...=y^{(N-1)}(0)=0\).

The continuous convolution is \[(x\star v)(t)=\int_{-\infty}^\infty x(\lambda)v(t-\lambda)d\lambda\]
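
A short numerical sketch of the discrete convolution sum defined above (the example signals are arbitrary, and both are assumed to start at \(n=0\)):

<code python>
# Sketch: the discrete convolution sum agrees with numpy's built-in convolution.
import numpy as np

x = np.array([1.0, 2.0, 3.0])      # x[0], x[1], x[2]
h = np.array([1.0, -1.0])          # h[0], h[1]

# direct evaluation of (x * h)[n] = sum_i x[i] h[n-i]
y_manual = np.zeros(len(x) + len(h) - 1)
for n in range(len(y_manual)):
    for i in range(len(x)):
        if 0 <= n - i < len(h):
            y_manual[n] += x[i] * h[n - i]

y_numpy = np.convolve(x, h)
print(np.allclose(y_manual, y_numpy))  # True
</code>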

Euler approximations

We can define the Euler approximation of the first derivative as: \[\left.\frac{dy}{dt}\right\vert_{t=nT}\approx\frac{y(nT+T)-y(nT)}{T}\] Likewise the second derivative approximation is: \[\left.\frac{d^2y}{dt^2}\right\vert_{t=nT}\approx\frac{y(nT+2T)-2y(nT+T)+y(nT)}{T^2}\] This lets us rewrite a differential equation (here \(\frac{dy}{dt}+ay(t)=bx(t)\)) as a difference equation, using \(T\) as our step size. \[y[n+1]+(aT-1)y[n]=bTx[n]\] For an initial condition \(y_0\) and zero input, this difference equation gives: \[y[n]=(1-aT)^ny_0\] We know that the exact solution is: \[y(t)=e^{-at}y(0)\] These agree, with the approximation replacing \(e^{-aT}\) by the first two terms of its Taylor expansion, \(1-aT\), which is appropriate when \(T\) is small enough that terms of order \(T^2\) are negligible.
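
A small sketch comparing the Euler-approximated difference equation with the exact exponential solution (the values of \(a\), \(y_0\) and \(T\) are assumed for illustration):

<code python>
# Sketch: forward-Euler discretisation of dy/dt + a*y = 0 versus the exact
# solution y(t) = exp(-a*t)*y(0); the error shrinks as the step T shrinks.
import numpy as np

a, y0 = 2.0, 1.0
for T in (0.1, 0.01):
    n = np.arange(50)
    y_euler = (1 - a * T) ** n * y0          # y[n] = (1 - aT)^n y_0
    y_exact = np.exp(-a * n * T) * y0        # y(nT) = e^{-a n T} y(0)
    print(T, np.max(np.abs(y_euler - y_exact)))
</code>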

Fourier analysis

Consider a periodic signal with fundamental period \(T\) and fundamental frequency \(\omega_0=\frac{2\pi}{T}\). We can then write its trigonometric Fourier series as: \[x(t)=a_0+\sum_{k=1}^\infty(a_k\cos(k\omega_0t)+b_k\sin(k\omega_0t))\] The coefficients \(a_0\), \(a_k\) and \(b_k\) are known as Fourier coefficients and can be calculated by Euler's formulae: \[a_0=\frac{1}{T}\int_0^Tx(t)dt\] \[a_k=\frac{2}{T}\int_0^Tx(t)\cos(k\omega_0t)dt\] \[b_k=\frac{2}{T}\int_0^Tx(t)\sin(k\omega_0t)dt\] The interval of integration is irrelevant as long as it spans one full period.

A signal can be represented by a Fourier series if it satisfies the Dirichlet conditions:

  1. x is absolutely integrable over one period, i.e.

\[\int_0^T\lvert x(t)\rvert dt<\infty\]

  2. x has only finitely many maxima and minima over one period
  3. x has only finitely many points of discontinuity over one period

We can define the finite Fourier series as: \[x_N(t)=a_0+\sum_{k=1}^N(a_k\cos(k\omega_0t)+b_k\sin(k\omega_0t))\] This serves as an approximation of the signal, as \[\lim_{N\to\infty}\frac{1}{T}\int_0^T\lvert x(t)-x_N(t)\rvert^2dt=0\] We can say that \(x_N\) converges to \(x\) in the \(L_2\) norm, because the mean-squared error converges to zero. We can use the finite Fourier series to approximate the value of the function at certain points. If the function is continuous at \(t_0\), then \[\lim_{N\to\infty}x_N(t_0)=x(t_0)\] If the function is discontinuous at \(t_0\), then \[\lim_{N\to\infty}x_N(t_0)=\frac{1}{2}(x(t_0^-)+x(t_0^+))\]
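
A brief sketch of these convergence statements using the partial sums of a square-wave Fourier series (the particular square wave and its coefficients are a standard example, not taken from these notes):

<code python>
# Sketch: partial sums x_N of the Fourier series of a square wave with period
# 2*pi, x(t) = 1 on (0, pi) and -1 on (pi, 2*pi).  At a continuity point the
# partial sums approach x; at the jump t = 0 they approach the midpoint 0.
import numpy as np

def x_N(t, N):
    # odd square wave: a_0 = a_k = 0, b_k = 4/(k*pi) for odd k, 0 for even k
    total = np.zeros_like(np.asarray(t, dtype=float))
    for k in range(1, N + 1, 2):
        total += 4 / (k * np.pi) * np.sin(k * t)
    return total

for N in (5, 51, 501):
    print(N, x_N(np.pi / 2, N), x_N(0.0, N))   # -> 1 and 0 respectively
</code>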

A function is even if \[x(t)=x(-t),\forall t\in\mathbb{R}\] A function is odd if \[x(t)=-x(-t),\forall t\in\mathbb{R}\] For an even periodic function with period \(T=2L\), we can represent it by a Fourier cosine series \[x(t)=a_0+\sum_{k=1}^\infty a_k\cos(k\omega_0t)\] where \(a_0=\frac{1}{L}\int_0^Lx(t)dt\) and \(a_k=\frac{2}{L}\int_0^Lx(t)\cos(k\omega_0t)dt\). For an odd periodic function with period \(T=2L\), we can represent it by a Fourier sine series \[x(t)=\sum_{k=1}^\infty b_k\sin(k\omega_0t)\] where \(b_k=\frac{2}{L}\int_0^Lx(t)\sin(k\omega_0t)dt\).

We can also express any trigonometric Fourier series in cosine phase form \[x(t)=a_0+\sum_{k=1}^\infty(a_k\cos(k\omega_0t)+b_k\sin(k\omega_0t))=a_0+\sum_{k=1}^\infty A_k\cos(k\omega_0t+\theta_k)\] where \(A_k=\sqrt{a_k^2+b_k^2}\) and \(\theta_k=\begin{cases}\tan^{-1}(-b_k/a_k)&a_k\geq 0\\\pi+\tan^{-1}(-b_k/a_k)&a_k<0\end{cases}\) for \(k=1,2,...\).

We can also represent a Fourier series in complex form. \[x(t)=\sum_{k=-\infty}^\infty c_ke^{jk\omega_0t}\] The complex Fourier coefficients \(c_k\) can be calculated from \[c_k=\frac{1}{T}\int_0^Tx(t)e^{-jk\omega_0t}dt, k=0,\pm1,\pm2,...\] We can integrate over any interval of length \(T\). We can notice that \(c_k=\overline{c}_{-k}\), meaning that \(c_k\) and \(c_{-k}\) are complex conjugates. The relations between the complex and trigonometric coefficients are as follows: \[c_0=a_0\] \[c_k=\frac{1}{2}(a_k-jb_k)\] \[c_{-k}=\frac{1}{2}(a_k+jb_k)\] We can get an amplitude spectrum from \(\lvert c_k\rvert\) and a phase spectrum from \(\angle{c_k}\). The amplitude spectrum is an even function of \(k\) and the phase spectrum is an odd function of \(k\).

Fourier transform

Continuous

A Fourier transform is the generalisation of a Fourier series to any signal, allowing the analysis of non-periodic signals. The Fourier transform is: \[X(\omega)=\int_{-\infty}^\infty x(t)e^{-j\omega t}dt\] The variable \(\omega\in\mathbb{R}\) is the frequency variable. Lower case letters denote time-domain signals, and upper case letters denote their Fourier transforms. The Fourier transform produces a complex-valued function, with amplitude \(\lvert X(\omega)\rvert\) and phase \(\angle X(\omega)\). The amplitude and phase spectra generalise those defined for periodic signals.

The inverse Fourier transform is given by: \[x(t)=\frac{1}{2\pi}\int_{-\infty}^\infty X(\omega)e^{j\omega t}d\omega\] Any absolutely integrable signal has a Fourier transform. If \(\lvert X(\omega)\rvert\) is large, that frequency component makes up a large part of the signal. We can put the Fourier transform into rectangular form by splitting up the real and imaginary parts \[R(\omega)=\int_{-\infty}^{\infty}x(t)\cos(\omega t)dt\] \[I(\omega)=-\int_{-\infty}^{\infty}x(t)\sin(\omega t)dt\] \[X(\omega)=R(\omega)+jI(\omega)\] The Fourier transform of an even signal is \[X(\omega)=R(\omega)=2\int_{0}^\infty x(t)\cos(\omega t)dt\] The Fourier transform of an odd signal is \[X(\omega)=jI(\omega)=-j2\int_{0}^\infty x(t)\sin(\omega t)dt\]

Special functions

We can define a new function which is commonly used in signal analysis \[\operatorname{sinc}(a\omega)=\frac{\sin(a\pi\omega)}{a\pi\omega}\] Using this function we can find the Fourier transform of a rectangular pulse of width \(\tau\) to be \[P_\tau(\omega)=\frac{2}{\omega}\sin\left(\frac{\omega\tau}{2}\right)=\tau\operatorname{sinc}\left(\frac{\tau\omega}{2\pi}\right)\] We can also find the Fourier transform of a triangular pulse of width \(\tau\) to be \[T_\tau(\omega)=\frac{8}{\tau\omega^2}\sin^2\left(\frac{\tau\omega}{4}\right)=\frac{\tau}{2}\operatorname{sinc}^2\left(\frac{\tau\omega}{4\pi}\right)\]
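
A quick numerical check of the rectangular-pulse transform above (a sketch; \(\tau\) and the frequency grid are arbitrary, and numpy's convention \(\operatorname{sinc}(x)=\sin(\pi x)/(\pi x)\) matches the definition used here):

<code python>
# Sketch: the Fourier transform of a width-tau rectangular pulse, computed by
# numerical integration over its support, matches tau*sinc(tau*omega/(2*pi)).
import numpy as np
from scipy.integrate import quad

tau = 2.0
omega = np.linspace(-20, 20, 9)

# the pulse is even, so X(omega) is the real integral of cos(omega*t) over the support
numeric = np.array([quad(lambda t: np.cos(w * t), -tau / 2, tau / 2)[0] for w in omega])
closed_form = tau * np.sinc(tau * omega / (2 * np.pi))

print(np.allclose(numeric, closed_form))  # True
</code>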

Properties

The Fourier transform is a linear transform. Time shifting a signal changes the transform's phase, but not its magnitude. Time scaling by a factor \(a\) gives \(x(at)\leftrightarrow\frac{1}{\lvert a\rvert}X\left(\frac{\omega}{a}\right)\): compressing a signal in time expands its transform in frequency and scales its magnitude. Flipping the signal flips the transform. We can modulate a signal by multiplying it by a sinusoid, which splits the transform into two copies shifted by \(\pm\) the modulating frequency, each at half amplitude. The transform of the convolution of two signals is the product of their transforms.

Signals like cos and sin are not absolutely integrable, and thus do not have Fourier transforms in the ordinary sense. However, by allowing impulses in the frequency domain they can be given generalised transforms. Using the sifting property, we can find the transform of the Dirac delta function: \[\delta(t)\leftrightarrow 1\] \[1\leftrightarrow 2\pi\delta(\omega)\] Using the modulation theorem: \[x(t)e^{j\omega_0t}\leftrightarrow X(\omega-\omega_0)\] We can find that \[e^{j\omega_0t}\leftrightarrow 2\pi\delta(\omega-\omega_0)\] \[\cos(\omega_0t)\leftrightarrow \pi[\delta(\omega+\omega_0)+\delta(\omega-\omega_0)]\] \[\sin(\omega_0t)\leftrightarrow j\pi[\delta(\omega+\omega_0)-\delta(\omega-\omega_0)]\]

Discrete Time Fourier Transform

A Fourier transform also exists for discrete-time signals; in practice, signals processed digitally are discrete due to sampling. The transform is: \[X(\Omega)=\sum_{n=-\infty}^\infty x[n]e^{-j\Omega n}\] The inverse discrete-time Fourier transform is given by \[x[n]=\frac{1}{2\pi}\int_0^{2\pi}X(\Omega)e^{j\Omega n}d\Omega\] The DTFT is a periodic function with period \(2\pi\). As a result, we can integrate over any interval of length \(2\pi\) to find the inverse transform. A discrete-time signal has a DTFT if it is absolutely summable over \(\mathbb{Z}\). \[\sum_{n=-\infty}^\infty \lvert x[n]\rvert<\infty\] Again the transform of the delta function is 1. \[\delta[n]\leftrightarrow 1\] The transform can be expressed as the sum of a real and imaginary part (rectangular form), or as a magnitude and angle (polar form). For an even signal, the transform is strictly the real part of the rectangular form. Likewise, for an odd signal, the transform is strictly the imaginary part of the rectangular form. The discrete-time transform of the rectangular pulse is: \[P_L(\Omega)=\frac{\sin((q+1/2)\Omega)}{\sin(\Omega/2)}\] Where \(q=(L-1)/2\) and \(L\) is the width of the pulse.

The DTFT is a linear transform. We can time delay a signal by \(q\) samples by multiplying its transform by \(e^{-j\Omega q}\). Signals can also be flipped, modulated and convolved as in the continuous case.

Discrete Fourier Transform

The N-point discrete Fourier transform is \[X_k=\sum_{n=0}^{L-1}x[n]e^{-j2\pi kn/N}\] Where \(k=0,1,2,...,N-1\) and \(L\leq N\) is the record length of \(x\). Since \(x\) is finite, \(X_k\) always exists. \(X_k\) is a function of the discrete variable \(k\) and takes \(N\) complex values \(X_0,...,X_{N-1}\). The DFT is a periodic function with period \(N\). The inverse transform can be constructed by: \[x[n]=\frac{1}{N}\sum_{k=0}^{N-1}X_ke^{j2\pi kn/N}\]

The DFT is a finite approximation of the DTFT. \[X_k=X\left(\frac{2\pi k}{N}\right)\] It is a sampling of the DTFT at a finite number of points. As such, increasing N causes the DFT to approach the DTFT.
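
A small sketch verifying that the DFT values are samples of the DTFT at \(\Omega=2\pi k/N\) (the record and the choice \(N=8\) are arbitrary):

<code python>
# Sketch: the N-point DFT of a finite record equals the DTFT sampled at
# Omega = 2*pi*k/N, checked against np.fft.fft for a short signal.
import numpy as np

x = np.array([1.0, 2.0, 0.5, -1.0])   # record of length L = 4
N = 8                                  # DFT length (zero-padded, L <= N)

dft = np.fft.fft(x, n=N)

k = np.arange(N)
Omega = 2 * np.pi * k / N
n = np.arange(len(x))
dtft = np.array([np.sum(x * np.exp(-1j * w * n)) for w in Omega])

print(np.allclose(dft, dtft))  # True
</code>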

Fourier analysis of systems

We can represent a system in the time domain or the frequency domain; its impulse response is \(h(t)\) and its frequency response is \(H(\omega)\), the Fourier transform of \(h(t)\). We can find the output from: \[Y(\omega)=H(\omega)V(\omega)\] The output amplitude spectrum is the product of the frequency response magnitude and the input amplitude spectrum, and the output phase spectrum is the sum of the frequency response phase and the input phase spectrum. Given a periodic input expressed as a Fourier series and the frequency response, we can find the output coefficients from the corresponding products. \[A_k^y=A^v_k\lvert H(k\omega_0)\rvert\] \[\theta_k^y=\theta^v_k+\angle H(k\omega_0)\] In the complex form, it's a straight product of the terms. \[c_k^y=c_k^vH(k\omega_0)\] We can do the same for a non-periodic input that has a Fourier transform, recovering the output by taking the inverse transform of \(Y(\omega)\).

Filters

Filters allow certain signals to pass through and block others. An ideal filter will pass all frequencies within its pass band and block all frequencies outside it. The pass band is the set of frequencies that a filter allows to pass, and the stop band is the range of frequencies which are attenuated. The width of the pass band is the bandwidth. The ideal low-pass filter has a sinc-shaped impulse response, which is non-causal in time, so it cannot be realised.

An LTI system is called a linear phase system if its frequency response can be written as: \[H(\omega)=K(\omega)e^{-j\omega t_d}\] Where \(K(\omega)\) is a real-valued function and \(t_d\in\mathbb{R}\) is a positive constant. The phase response is a linear function of frequency, so every frequency component is delayed by the same time \(t_d\) and the system does not suffer from phase distortion.

Response of systems

We recall that from a system's impulse response and the input, we can find the output via convolution. \[y[n]=(v\star h)[n]=\sum_{i=-\infty}^{\infty}h[i]v[n-i]\] This assumes \(h\) is absolutely summable, which means its DTFT always exists. We then remember that convolution becomes multiplication under the Fourier transform, so we get: \[Y(\Omega)=H(\Omega)V(\Omega)\] With this we are able to represent the system in time and frequency domains. The output amplitude spectrum is the product of the amplitudes of the input and response, and the output phase spectrum is the sum of the phases of the input and response.

If we cascade systems, the transform of the overall impulse response is the product of the individual systems' transforms. \[H(\Omega)=H_1(\Omega)H_2(\Omega)\] Two systems are inverse systems if the product of their transforms is 1, or equivalently the convolution of their responses is the delta function. \[H(\Omega)=H_1(\Omega)H_2(\Omega)=1\] \[(h_1\star h_2)[n]=\delta[n]\]

Laplace transform

For any continuous-time signal we define the (one-sided) Laplace transform as: \[X(s)=\int_0^\infty x(t)e^{-st}dt\] This only depends on the value of the signal for \(t\geq 0\). It is particularly useful for analysing systems given in terms of differential equations with specified initial conditions. There is a region of convergence, being the set of values of s for which the transform exists. The region of convergence has the following form: \[RoC(x)=\{s\in\mathbb{C}:Re(s)>c\}\] Where c is a real number. The Fourier transform is the special case of the Laplace transform where \(s=j\omega\), provided the imaginary axis lies in the region of convergence. \[X(\omega)=\left.X(s)\right\rvert_{s=j\omega}\] The inverse Laplace transform is \[x(t)=\frac{1}{j2\pi}\int_{c-j\infty}^{c+j\infty}X(s)e^{st}ds\]

Properties of the Laplace transform

The Laplace transform is linear: \[ax_1(t)+bx_2(t)\leftrightarrow aX_1(s)+bX_2(s)\] It can be time shifted: \[x(t-c)u(t-c)\leftrightarrow X(s)e^{-cs}\] It can be time scaled: \[x(at)\leftrightarrow\frac{1}{a}X\left(\frac{s}{a}\right)\] The transform can also be modulated: \[x(t)e^{at}\leftrightarrow X(s-a)\] \[t^Nx(t)\leftrightarrow(-1)^N\frac{d^N}{ds^N}X(s)\] The transform of the delta function is 1. \[\delta(t)\leftrightarrow 1\] The transform of the convolution of two signals is the product of their transforms. The derivative of a signal transforms as: \[x'(t)\leftrightarrow sX(s)-x(0^-)\] \[x''(t)\leftrightarrow s^2X(s)-sx(0^-)-x'(0^-)\] The integral theorem states: \[\int_0^t x(\lambda)d\lambda\leftrightarrow \frac{X(s)}{s}\]

Existence and roots of the transform

The path of integration must lie in the region in which the Laplace transform converges. A rational function is a function of a complex variable which can be expressed as a ratio of two polynomials with real coefficients. A proper rational function is one whose numerator has order less than its denominator. A real polynomial \(A(s)\) has a root \(p\in\mathbb{C}\) if \(A(p)=0\). We define the multiplicity of a root as the largest integer \(m\) such that: \[A(s)=(s-p)^mH(s)\] Where \(H(s)\) is a real polynomial with \(H(p)\neq0\). The fundamental theorem of algebra states that every real polynomial of degree \(N\geq1\) has a factorisation into linear factors in \(\mathbb{C}\). \[A(s)=a_N(s-p_1)(s-p_2)...(s-p_N)\] Where \(a_N\in\mathbb{R}\) and \(p_1,p_2,...,p_N\in\mathbb{C}\). The complex roots of a real polynomial occur in conjugate pairs. The roots \(p_i\) of the denominator are the poles of \(X(s)\). A proper rational function whose denominator has \(N\) distinct poles (each of multiplicity 1) can be expressed as a partial fraction expansion: \[X(s)=\frac{c_1}{s-p_1}+\frac{c_2}{s-p_2}+...+\frac{c_N}{s-p_N}\] Where the constants \(c_i\) are given by \[c_i=[(s-p_i)X(s)]_{s=p_i}\] The constants \(c_i\) are called the residues of \(X\); they are real or complex depending on whether the corresponding pole is real or complex, and the residues at conjugate poles are complex conjugates of each other. We can take the inverse Laplace transform to get the time-domain signal: \[x(t)=c_1e^{p_1t}+c_2e^{p_2t}+...+c_Ne^{p_Nt}\] For complex \(c\) and \(p=\sigma+j\omega\), we can obtain a real expression for \(x(t)\) using: \[ce^{pt}+\overline{c}e^{\overline{p}t}=2\lvert c\rvert e^{\sigma t}\cos(\omega t+\angle{c})\]
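
A minimal sketch of the residue calculation using scipy (the rational function is an assumed example with two distinct real poles):

<code python>
# Sketch: residues of X(s) = (s + 3)/(s^2 + 3s + 2) = (s + 3)/((s + 1)(s + 2)).
# By hand, the residue at p = -1 is 2 and at p = -2 is -1, so
# x(t) = 2*exp(-t) - exp(-2t).
from scipy.signal import residue

num = [1, 3]          # s + 3
den = [1, 3, 2]       # s^2 + 3s + 2
r, p, k = residue(num, den)
print(r, p, k)        # residues 2 and -1 at poles -1 and -2 (ordering may differ)
</code>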

When there are repeated poles, the partial fraction expansion is: \[X(s)=\frac{c_1}{s-p_1}+\frac{c_2}{(s-p_1)^2}+...+\frac{c_r}{(s-p_1)^r}+\frac{c_{r+1}}{s-p_{r+1}}+...+\frac{c_N}{s-p_N}\] Where the residues are given by \[c_{r-i}=\frac{1}{i!}\left[\frac{d^i}{ds^i}[(s-p_1)^rX(s)]\right]_{s=p_1}\] \[c_i=[(s-p_i)X(s)]_{s=p_i}\]

Z-transform

For discrete signals, we can define an analogue of the Laplace transform, the z-transform: \[X(z)=\sum_{n=0}^\infty x[n]z^{-n}\] Like the Laplace transform, the z-transform has a region of convergence defined as the values of z for which the sum exists, i.e. \[RoC(x)=\{z\in\mathbb{C}\backslash\{0\}:x[n]z^{-n}\text{ is absolutely summable}\}\] \[RoC(x)=\{z\in\mathbb{C}:\lvert z\rvert>c\}\] If the unit circle \(\lvert z\rvert=1\) lies in the region of convergence of a signal, then the DTFT can be obtained from \[X(\Omega)=\left.X(z)\right\rvert_{z=e^{j\Omega}}\]

The inverse z-transform is \[x[n]=\frac{1}{j2\pi}\int_CX(z)z^{n-1}dz\] where \(C\) is any closed contour within the RoC of \(x\) that encircles the origin.

Properties

The z-transform is linear. \[ax_1[n]+bx_2[n]\leftrightarrow aX_1(z)+bX_2(z)\] It can be right-shifted (samples at negative time are not captured by the one-sided transform). \[x[n-q]u[n-q]\leftrightarrow X(z)z^{-q}\] It can be modulated \[x[n]a^n\leftrightarrow X\left(\frac{z}{a}\right)\] \[x[n]\cos(\Omega n)\leftrightarrow\frac{1}{2}[X(ze^{j\Omega})+X(ze^{-j\Omega})]\] \[nx[n]\leftrightarrow -z\frac{d}{dz}X(z)\] The transform of the convolution of two signals is the product of their transforms \[(x\star v)[n]\leftrightarrow X(z)V(z)\] The left time-shift property is \[x[n+q]\leftrightarrow z^qX(z)-x[0]z^q-x[1]z^{q-1}-...-x[q-1]z\]

Partial fractions

When finding the partial fraction expansion, we first divide by z, to give: \[\frac{X(z)}{z}=\frac{c_0}{z}+\frac{c_1}{z-p_1}+\frac{c_2}{z-p_2}+...+\frac{c_N}{z-p_N}\] Where \(c_0=X(0)\) and \[c_i=\left[(z-p_i)\frac{X(z)}{z}\right]_{z=p_i}\] Here X does not need to be a proper rational function, unlike for the Laplace transform. Multiplying by z gives: \[X(z)=c_0+\frac{c_1z}{z-p_1}+\frac{c_2z}{z-p_2}+...+\frac{c_Nz}{z-p_N}\] Taking the inverse transform of each term gives: \[x[n]=c_0\delta[n]+c_1p_1^n+c_2p_2^n+...+c_Np_N^n\] For complex poles \(p=\sigma e^{j\Omega}\), we get \[cp^n+\overline{c}\overline{p}^n=2\lvert c\rvert\sigma^n\cos(\Omega n+\angle c)\]
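
A small symbolic sketch of this divide-by-z method using sympy (the transform \(X(z)\) below is an assumed example with two distinct real poles):

<code python>
# Sketch: inverting X(z) = z/((z - 1/2)(z - 1/5)) by dividing by z,
# expanding in partial fractions, and inverting each c*z/(z - p) term to c*p**n.
import sympy as sp

z, n = sp.symbols('z n')
X = z / ((z - sp.Rational(1, 2)) * (z - sp.Rational(1, 5)))

expansion = sp.apart(X / z, z)
print(expansion)    # c_1/(z - 1/2) + c_2/(z - 1/5) with c_1 = 10/3, c_2 = -10/3

# each term c*z/(z - p) inverts to c*p**n, so
x_n = sp.Rational(10, 3) * (sp.Rational(1, 2)**n - sp.Rational(1, 5)**n)
print([x_n.subs(n, i) for i in range(5)])   # 0, 1, 7/10, 39/100, ...
</code>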

State spaces

Fundamentals

A state vector for a system is a vector of the form: \[x(t)=\begin{bmatrix}x_1(t)\\x_2(t)\\\vdots\\x_N(t)\end{bmatrix}\text{ or }x[n]=\begin{bmatrix}x_1[n]\\x_2[n]\\\vdots\\x_N[n]\end{bmatrix}\] State equations for linear time-invariant systems are: \[\dot{x}(t)=Ax(t)+Bv(t)\text{ or }x[n+1]=Ax[n]+Bv[n]\] Where \(\dot{x}\) and \(x[n+1]\) are state vectors, A is an \(N\times N\) matrix and B is an \(N\times 1\) vector for a scalar input; for vector inputs the dimensions of B change accordingly.

Controller canonical form

We can create a state space model for the continuous-time system as \[\frac{dy^N}{dt^N}+a_{N-1}\frac{dy^{N-1}}{dt^{N-1}}+...+a_1\frac{dy}{dt}+a_0y(t)=c_0v(t)\]

  1. We define a state vector with state variables

\[x_1(t)=\frac{1}{c_0}y(t),x_2(t)=\dot{x}_1(t),...,x_N(t)=\dot{x}_{N-1}(t)\] \[y(t)=c_0x_1(t)\]

  2. The state equations are

\[\dot{x}_1(t)=x_2(t)\] \[\dot{x}_2(t)=x_3(t)\] \[\dot{x}_N(t)=-a_0x_1(t)-a_1x_2(t)-...-a_{N-1}x_N(t)+v(t)\] \[y(t)=c_0x_1(t)\] We can write these in matrix form to obtain the state space representation. \[\dot{x}(t)=\begin{bmatrix}0&1&0&\cdots&0\\0&0&1&\cdots&0\\\vdots&\vdots&\vdots&\ddots&\vdots\\0&0&0&\cdots&1\\-a_0&-a_1&-a_2&\cdots&-a_{N-1}\end{bmatrix}x(t)+\begin{bmatrix}0\\0\\\vdots\\0\\1\end{bmatrix}v(t)\] \[y(t)=\begin{bmatrix}c_0&0&0&\cdots&0\end{bmatrix}x(t)\] This is controller canonical form; the state space representation of a system is not unique.

Similarly we can obtain a state space model for the discrete-time system: \[y[n+N]+\sum_{i=0}^{N-1}a_iy[n+i]=c_0v[n]\] We can define the state vector as \[x_1[n]=\frac{1}{c_0}y[n],x_2[n]=x_1[n+1],...,x_N[n]=x_{N-1}[n+1]\] \[y[n]=c_0x_1[n]\] Then the state equations are similar to the above.
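
A minimal sketch constructing the controller-canonical \(A\), \(B\) and \(C\) matrices for an assumed third-order example (\(a_0=4\), \(a_1=3\), \(a_2=2\), \(c_0=5\)):

<code python>
# Sketch: controller canonical form matrices for
# d^3y/dt^3 + 2 d^2y/dt^2 + 3 dy/dt + 4 y = 5 v(t).
import numpy as np

a = np.array([4.0, 3.0, 2.0])   # a_0, a_1, ..., a_{N-1}
c0 = 5.0
N = len(a)

A = np.zeros((N, N))
A[:-1, 1:] = np.eye(N - 1)      # ones on the super-diagonal
A[-1, :] = -a                   # last row: -a_0, -a_1, ..., -a_{N-1}

B = np.zeros((N, 1))
B[-1, 0] = 1.0                  # input enters only the last state equation

C = np.zeros((1, N))
C[0, 0] = c0                    # y = c_0 x_1

print(A)
print(B.T, C)
</code>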

A multiple-input multiple-output (MIMO) system is a continuous-time system of the state space form: \[\dot{x}(t)=Ax(t)+Bv(t)\] \[y(t)=Cx(t)+Dv(t)\] Systems where the input \(v\) and output \(y\) are scalar are called single-input single-output (SISO) systems. We can similarly describe a discrete-time system.

A continuous time system of the form: \[\dot{x}=Ax(t)+Bv(t)\] \[y(t)=Cx(t)+Dv(t)\] Has the solution \[x(t)=e^{At}x(0)+\int_0^te^{A(t-\lambda)}Bv(\lambda)d\lambda\] And output \[y(t)=Ce^{At}x(0)+\int_0^tCe^{A(t-\lambda)}Bv(\lambda)d\lambda+Dv(t)\] The exponential matrix \(e^{At}\) is the state-transition matrix of the system. The matrix exponential is defined by the matrix power series: \[e^{At}=I+At+\frac{A^2t^2}{2!}+\frac{A^3t^3}{3!}+\frac{A^4t^4}{4!}+...\] Where \(I\) is the NxN identity matrix. The matrix exponential has the following properties:

  • \(e^{A(t+s)}=e^{At}e^{As}\)
  • The inverse of \(e^{At}\) is \(e^{-At}\)
  • The time derivative of \(e^{At}\) is \(\frac{d}{dt}e^{At}=Ae^{At}=e^{At}A\)
  • The matrix exponential of a diagonal matrix is the diagonal matrix whose entries are the exponentials of the original diagonal entries
  • For two square matrices \(A\) and \(\overline{A}\), if there is an invertible matrix \(P\) such that \(\overline{A}=PAP^{-1}\), then \(A\) and \(\overline{A}\) have the same eigenvalues and are similar matrices
  • If \(\overline{A}=\Lambda\) is a diagonal matrix, we can say \(A\) is diagonalizable as \(\Lambda=PAP^{-1}\)

We can exploit this to compute the exponential easily as \(e^{At}=P^{-1}e^{\Lambda t}P\). For the discrete-time variant, sums are used instead of integrals.
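
A short sketch computing a state-transition matrix with scipy and checking it against the diagonalisation shortcut (the matrix \(A\) and time \(t\) are assumed for illustration):

<code python>
# Sketch: compute e^{At} directly with scipy and via diagonalisation.
# numpy returns A = V diag(eigvals) V^{-1}, i.e. P = V^{-1} in the notes' convention.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])          # eigenvalues -1 and -2
t = 0.5

direct = expm(A * t)

eigvals, V = np.linalg.eig(A)
via_diag = V @ np.diag(np.exp(eigvals * t)) @ np.linalg.inv(V)

print(np.allclose(direct, via_diag))  # True
</code>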

Integrator realisations

An integrator realisation is a diagram that represents a continuous-time state-space system in terms of integrators, summers and scalar multipliers. To build an integrator realisation, we:

  1. Construct an integrator for each state variable where the output is \(x_i\). Hence the input to the integrator is \(\dot{x}_i\).
  2. Put a summer in front of each integrator and feed into each summer the scalar multiples of the state variables according to the i-th state equation: \(\dot{x}_i=A_ix+B_iv\)
  3. Put scalar multiples of the state variables into a summer to realise the output equation \(y_i=C_ix+D_iv\)

A unit delay realisation is the discrete-time analogue of an integrator realisation, using unit delays instead of integrators.

Equivalent systems

For a given state vector, we can transform it to a new state vector: \[\overline{x}(t)=Px(t)\] Where P is an invertible \(N\times N\) matrix, which is a coordinate transformation matrix. This requires us to transform our state matrices from \((A,B,C,D)\) to \((\bar{A},\bar{B},\bar{C},\bar{D})\), where
\[\bar{A}=PAP^{-1}\] \[\bar{B}=PB\] \[\bar{C}=CP^{-1}\] \[\bar{D}=D\] From this we get a state representation \[\dot{\bar{x}}=\bar{A}\bar{x}(t)+\bar{B}v(t)\] \[y(t)=\bar{C}\bar{x}(t)+\bar{D}v(t)\] Which is equivalent to the original state representation \[\dot{x}=Ax(t)+Bv(t)\] \[y(t)=Cx(t)+Dv(t)\] And likewise for discrete systems. We can say that the state matrices are equivalent. We can then choose a P matrix which diagonalizes the A matrix in order to simplify the calculation.

We can express the zero-input solution and response as: \[x_{zi}(t)=e^{At}x(0)\] \[y_{zi}(t)=Ce^{At}x(0)\] And we can express the zero-state solution and response as: \[x_{zs}(t)=\int_0^te^{A(t-\lambda)}Bv(\lambda)d\lambda\] \[y_{zs}(t)=\int_0^tCe^{A(t-\lambda)}Bv(\lambda)d\lambda+Dv(t)\] This gives the overall response as: \[y(t)=y_{zi}(t)+y_{zs}(t)\]

Transfer functions

An LTI system can also be described by the N-th order input/output differential equation \[\frac{d^Ny(t)}{dt^N}+\sum_{i=0}^{N-1}a_i\frac{d^iy(t)}{dt^i}=\sum_{i=0}^Mb_i\frac{d^iv(t)}{dt^i}\] Where \(M<N\) and the coefficients \(a_i\) and \(b_i\) are real numbers. Assuming zero initial conditions, \(v^{(i)}(0^-)=0\) for \(i=0,\ldots,M-1\) and \(y^{(i)}(0^-)=0\) for \(i=0,\ldots,N-1\), taking the Laplace transform of both sides gives: \[(s^N+a_{N-1}s^{N-1}+...+a_1s+a_0)Y(s)=(b_Ms^M+b_{M-1}s^{M-1}+...+b_1s+b_0)V(s)\] This gives the input-output transfer function, relating the input and output of the system: \[H(s)=\frac{Y(s)}{V(s)}=\frac{b_Ms^M+b_{M-1}s^{M-1}+...+b_1s+b_0}{s^N+a_{N-1}s^{N-1}+...+a_1s+a_0}\] We call the numerator and denominator polynomials \(B(s)\) and \(A(s)\) respectively. The transfer function is used to find the output as: \[Y(s)=H(s)V(s)\] Note that the transfer function assumes zero initial conditions and is therefore only useful for obtaining the zero-state response of the system.

Stability

A signal converges to zero if \(\lvert x(t)\rvert \to0\) as \(t\to \infty\). A signal is bounded if there exists a constant \(c\geq 0\) such that \(\lvert x(t)\rvert\leq c,\forall t\geq0\). A signal is unbounded or divergent if \(\lvert x(t)\rvert\to\infty\) as \(t\to\infty\). An LTI system is said to be bounded-input bounded-output (BIBO) stable if bounded inputs lead to bounded outputs. An LTI system is BIBO stable if and only if its unit impulse response \(h\) is absolutely integrable: \[\int_0^\infty \lvert h(t)\rvert dt<\infty\] An LTI system is said to be marginally stable if its unit impulse response \(h\) is bounded but not absolutely integrable. If the impulse response is unbounded, then the system is said to be unstable, meaning a bounded input can produce an unbounded output. Marginal stability means that there exists at least one bounded input signal that yields an unbounded output. A system is stable if all poles of its transfer function have real part less than zero. A system is marginally stable if all poles of the transfer function have real part less than or equal to zero, every pole with real part equal to zero is non-repeated, and at least one such pole exists.

In discrete time, a system is BIBO stable if bounded inputs lead to bounded outputs, which holds if and only if the unit pulse response is absolutely summable: \[\sum_{n=0}^\infty\lvert h[n]\rvert<\infty\] An LTI system is marginally stable if its unit pulse response is bounded but not absolutely summable, and unstable if the unit pulse response is unbounded. A discrete-time system is stable if all poles of its transfer function have magnitude less than 1. It is marginally stable if all poles have magnitude less than or equal to 1, every pole with magnitude equal to 1 is non-repeated, and at least one such pole exists.

We can find the step response of a continuous-time system with transfer function \(H(s)=B(s)/A(s)\); for a step input, \(V(s)=1/s\): \[Y(s)=\frac{B(s)V(s)}{A(s)}=\frac{B(s)}{sA(s)}\] We can introduce a polynomial in \(s\), \(E\), such that: \[Y(s)=\frac{E(s)}{A(s)}+\frac{H(0)}{s}\] We can then get the step response as: \[y(t)=y_1(t)+H(0)\] Where \(y_1(t)=\mathcal{L}^{-1}\{E(s)/A(s)\}\). For stable systems, \(y_1(t)\to0\); it is the transient response, and \(H(0)\) is the steady-state response.
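
A small sketch of this step-response decomposition for an assumed first-order system \(H(s)=2/(s+2)\), whose steady-state value is \(H(0)=1\):

<code python>
# Sketch: step response of H(s) = 2/(s + 2); the transient decays and the
# response settles at the steady-state value H(0) = 1.
import numpy as np
from scipy import signal

num, den = [2.0], [1.0, 2.0]           # H(s) = 2/(s + 2)
system = signal.TransferFunction(num, den)

t, y = signal.step(system, T=np.linspace(0, 5, 500))
print(y[-1])          # ~ 1.0 = H(0)
</code>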

Damping of responses

If we consider the second order transfer function: \[H(s)=\frac{k}{s^2+2\zeta\omega_ns+\omega_n^2}\] We have \(\zeta\) and \(\omega_n\), being the damping ratio and natural frequency respectively. If both are greater than 0, the system is stable. We can express the poles of the system as: \[p_1,p_2=-\zeta\omega_n\pm\omega_n\sqrt{\zeta^2-1}\] If \(\zeta>1\), the poles are real and distinct, making the system overdamped. If \(\zeta=1\), the poles are real and repeated, making the system critically damped. If \(\zeta<1\), the poles are complex conjugate pairs, making the system underdamped.

For an overdamped system, we have a step response transform of: \[Y(s)=\frac{k}{s(s-p_1)(s-p_2)}\] Resulting in \[y(t)=\frac{k}{p_1p_2}(k_1e^{p_1t}+k_2e^{p_2t}+1),t\geq0\] Here: \[y_{tr}(t)=\frac{k}{p_1p_2}(k_1e^{p_1t}+k_2e^{p_2t})\] \[y_{ss}(t)=\frac{k}{p_1p_2}\] If the system is critically damped, the transform of the step response is: \[Y(s)=\frac{k}{s(s+\omega_n)^2}\] Resulting in: \[y(t)=\frac{k}{\omega_n^2}(1-(1+\omega_nt)e^{-\omega_nt}),t\geq0\] Here: \[y_{tr}(t)=-\frac{k}{\omega_n^2}(1+\omega_nt)e^{-\omega_nt}\] \[y_{ss}(t)=\frac{k}{\omega_n^2}\] If the system is underdamped, with damped natural frequency \(\omega_d=\omega_n\sqrt{1-\zeta^2}\) and \(\phi=\cos^{-1}\zeta\): \[Y(s)=\frac{k}{s((s+\zeta\omega_n)^2+\omega_d^2)}\] \[y(t)=\frac{k}{\omega_n^2}\left[1-\frac{\omega_n}{\omega_d}e^{-\zeta\omega_nt}\sin(\omega_dt+\phi)\right],t\geq0\] \[y_{ss}(t)=\frac{k}{\omega_n^2}\] \[y_{tr}(t)=-\frac{k}{\omega_n^2}\frac{\omega_n}{\omega_d}e^{-\zeta\omega_nt}\sin(\omega_dt+\phi)\]

This only applies to transfer functions with constant numerators. A non-constant numerator has zeros, which alter the shape of the transient response. Systems with zeros in the right half-plane will exhibit undershoot in their transient response.

Sinusoidal response

Above we looked at the step response of a system, but we can also have sinusoidal inputs. If we have an input \(v(t)=\cos(\omega_0t)u(t)\), we have an output: \[Y(s)=\frac{B(s)V(s)}{A(s)}=\frac{sB(s)}{A(s)(s^2+\omega_0^2)}\] We can define a polynomial \(\gamma\) such that: \[Y(s)=\frac{\gamma(s)}{A(s)}+\frac{c}{s-j\omega_0}+\frac{\bar{c}}{s+j\omega_0}\] The system's sinusoidal response is: \[y(t)=y_1(t)+\lvert H(j\omega_0)\rvert\cos(\omega_0t+\angle H(j\omega_0)),t\geq0\] Where \(y_1(t)=\mathcal{L}^{-1}(\gamma(s)/A(s))\). For stable systems, the transient response \(y_1(t)\to0\) as \(t\to\infty\), leaving the steady-state response as the cosine term. The steady-state response has the same frequency as the input, but its magnitude and phase are shifted by \(H(j\omega_0)\), as with the Fourier transform. In a stable system, an input of \(v(t)=\cos(\omega_0t)\) applied for all time produces the steady-state response without a transient. Marginally stable systems can resonate: if the input frequency coincides with a pole on the imaginary axis, the poles of \(Y(s)\) become repeated and the output grows without bound. This also applies to discrete systems.

More general state systems

To allow a system which has derivatives on the input as well as the output: \[\frac{d^Ny}{dt^N}+\sum_{i=0}^{N-1}a_i\frac{d^iy}{dt^i}=\sum_{i=0}^{N-1}c_i\frac{d^iv}{dt^i}\] We need a state vector of length N. If we have a transfer function, \(H(s)=\frac{Y(s)}{V(s)}\), we can define polynomials A and C such that: \[C(s)=c_{N-1}s^{N-1}+...+c_1s+c_0\] \[A(s)=s^N+a_{N-1}s^{N-1}+...+a_1s+a_0\] We can then construct the transfer function as \(H(s)=\frac{C(s)}{A(s)}\). We then introduce a signal w such that \(C(s)W(s)=Y(s)\): \[c_0w(t)+c_1\frac{dw}{dt}+...+c_{N-1}\frac{d^{N-1}w}{dt^{N-1}}=y(t)\] We can define the state variables as: \[x_i(t)=\frac{d^{i-1}w}{dt^{i-1}},i=1,2,...,N\] \[x_{i+1}(t)=\frac{dx_i}{dt},i=1,2,...,N-1\] \[y(t)=c_0x_1(t)+c_1x_2(t)+...+c_{N-1}x_N(t)\] \[V(s)=\frac{Y(s)}{H(s)}=\frac{A(s)Y(s)}{C(s)}=A(s)W(s)\] \[v(t)=a_0w(t)+a_1\frac{dw}{dt}+...+a_{N-1}\frac{d^{N-1}w}{dt^{N-1}}+\frac{d^Nw}{dt^N}=a_0x_1(t)+a_1x_2(t)+...+a_{N-1}x_N(t)+\dot{x}_N(t)\] This lets us build the A, B and C matrices to represent the system in controller canonical form. Here the C matrix may have non-zero elements in every position. If the input derivatives go up to order \(N\), a D matrix also appears, which complicates the calculation.

For a discrete system described by the difference equation: \[y[n+N]+\sum_{i=0}^{N-1}a_iy[n+i]=\sum_{i=0}^{N-1}c_iv[n+i],n\geq-N\] And having a transfer function \(H(z)=\frac{Y(z)}{V(z)}\), we can define real polynomials: \[C(z)=c_{N-1}z^{N-1}+...+c_1z+c_0\] \[A(z)=z^N+a_{N-1}z^{N-1}+...+a_1z+a_0\] Such that \(H(z)=\frac{C(z)}{A(z)}\). We introduce the variable \(w\) such that \(C(z)W(z)=Y(z)\). \[c_{N-1}w[n+N-1]+...+c_1w[n+1]+c_0w[n]=y[n]\] \[x_i[n]=w[n+i-1],i=1,...,N\] \[x_i[n+1]=x_{i+1}[n],i=1,...,N-1\] We can show that \(V(z)=A(z)W(z)\) and hence: \[v[n]=a_0x_1[n]+a_1x_2[n]+...+a_{N-1}x_N[n]+x_N[n+1]\] This lets us write the discrete system in controller canonical form, with standard A and B matrices and a C matrix with values in every position.

For a system in state space form, we can take the Laplace transform. The state equation becomes: \[sX(s)=AX(s)+BV(s)\] \[X(s)=(sI-A)^{-1}BV(s)\] And the output becomes: \[Y(s)=CX(s)\] \[Y(s)=[C(sI-A)^{-1}B]V(s)\] This allows us to express the transfer function in the s domain as: \[H(s)=C(sI-A)^{-1}B\] And the equivalent in the z domain applies. The poles of an LTI system are the values of \(s\) such that: \[\det(sI-A)=0\] That is, the poles of the system are the eigenvalues of the state matrix A. As a result, the system is stable if the eigenvalues lie in the open left half of the complex plane (continuous time) or inside the unit circle (discrete time).
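
A brief numerical sketch checking that the eigenvalues of \(A\) coincide with the poles of the transfer function (the state matrices are an assumed second-order example):

<code python>
# Sketch: poles of H(s) = C(sI - A)^{-1}B match the eigenvalues of A,
# and all eigenvalues in the open left half-plane imply stability.
import numpy as np
from scipy import signal

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

eigenvalues = np.linalg.eigvals(A)
num, den = signal.ss2tf(A, B, C, D)
poles = np.roots(den)

print(np.sort(eigenvalues), np.sort(poles))   # both ~ [-2, -1]
print(np.all(eigenvalues.real < 0))           # True: the system is stable
</code>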
