To find the arrival and departure angles, we can apply the phase condition to a test point near the pole or zero for which we want to compute the arrival/departure angle. $$\angle(s_0-\beta_1)-\angle(s_0-\alpha_2)+\dots=(2l+1)\pi,\quad l\in\mathbb{Z}$$

====== Nyquist stability criterion ======

Moving clockwise around a closed contour in the s-plane, the angle from a zero inside the contour changes by $-2\pi$ rad; for a zero outside the contour, there is no net change in angle. For two zeros, the phase change of $F(s)$ is $-4\pi$ if both are enclosed, $-2\pi$ if only one is enclosed, or $0$ if none are. Moving from the s-plane to the F-plane, each enclosed zero causes a clockwise wrap of $F(s)$ around the origin. Poles have a similar effect, but in the opposite direction to a zero: each enclosed pole contributes a phase change of $+2\pi$. For a rational function, as $s$ traverses a given contour $\mathcal{C}_s$, the phase of $F(s)$ along the image contour $\mathcal{C}_F$ is: $$\angle F(s)=\sum_{i=1}^m\angle(s-c_i)-\sum_{j=1}^n\angle(s-p_j)$$ That is, the number of clockwise encirclements of the origin by $F(s)$, as $s$ travels clockwise along the contour, equals the number of enclosed zeros less the number of enclosed poles. This is Cauchy's argument principle: $$N=Z-P$$ where $N$ is the number of clockwise encirclements of the origin, $Z$ is the number of enclosed zeros and $P$ is the number of enclosed poles.
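
As a sanity check on the argument principle, the sketch below numerically counts encirclements of the origin for a made-up $F(s)$ with two zeros and one pole inside a clockwise circular contour (all values illustrative).

<code python>
# Numerical check of Cauchy's argument principle: as s traverses a
# clockwise contour, F(s) encircles the origin clockwise N = Z - P times.
import numpy as np

zeros = [-1.0, 0.5 + 0.5j]    # both inside the radius-2 contour below
poles = [0.2]                 # also inside

theta = np.linspace(0.0, 2.0 * np.pi, 20001)
s = 2.0 * np.exp(-1j * theta)            # clockwise circle of radius 2

F = np.ones_like(s)
for z in zeros:
    F *= s - z
for p in poles:
    F /= s - p

# Winding number about the origin from the accumulated (unwrapped) phase.
phase = np.unwrap(np.angle(F))
N_cw = -(phase[-1] - phase[0]) / (2.0 * np.pi)
print(f"clockwise encirclements N = {N_cw:.2f}")   # ~ Z - P = 2 - 1 = 1
</code>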

Given the closed-loop transfer function: $$T_0(s)=\frac{\Lambda_0(s)}{1+\Lambda_0(s)},\quad\Lambda_0(s)=C(s)G_0(s)$$ The closed-loop poles are the zeros of $F(s)=1+\Lambda_0(s)$, and the open-loop poles (the poles of $\Lambda_0(s)$) are the poles of $F(s)$. Assuming $\Lambda_0(s)$ is strictly proper, so that: $$\lim_{|s|\to\infty}F(s)=1$$ we use the Nyquist D contour, which encloses the whole right half plane, i.e. the values for which $\operatorname{Re}(s)>0$. This contour consists of a vertical path along the imaginary axis and a clockwise semicircle of infinite radius. In the F-plane, the whole semicircular arc maps to the single point $(1,j0)$, so only the imaginary axis needs to be mapped. The plot in the F-plane is symmetric about the real axis. The convention is to plot the open-loop transfer function $\Lambda_0(s)$ instead of $F(s)$, which simply shifts the plot one unit to the left. Given the number of unstable poles of $\Lambda_0(s)$, counting the encirclements of the critical point $(-1,0)$ by the open-loop frequency response then gives the number of unstable closed-loop poles via the Cauchy argument principle.

The Nyquist stability criterion states that the closed-loop system is stable iff the Nyquist plot of $\Lambda_0(s)$ encircles the point $(-1,0)$ counterclockwise exactly $P$ times, where $P$ is the number of unstable open-loop poles, as $s$ traverses the D contour in the clockwise direction. This is an alternative to the root locus and the Routh-Hurwitz criterion, and it can deal with systems with non-rational transfer functions and unstable open-loop transfer functions. To see how changing the gain affects the stability of the system, consider: $$1+K\Lambda_0(s)=0$$ We then check the encirclements of the new critical point $\left(-\frac{1}{K},j0\right)$.
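
As a minimal numerical version of this test, the sketch below counts the net encirclements of $-1$ by $K\Lambda_0(j\omega)$ for the hypothetical stable open loop $\Lambda_0(s)=\frac{1}{(s+1)^3}$ (so $P=0$); the stability boundary is at $K=8$, where the plot passes through the critical point.

<code python>
# Sketch: gain-dependent Nyquist test for L0(s) = 1/(s+1)^3 (P = 0).
import numpy as np

w = np.linspace(-1e3, 1e3, 2_000_001)
L0 = 1.0 / (1j * w + 1.0) ** 3

for K in (2.0, 10.0):
    # Net winding of 1 + K*L0(jw) about the origin, i.e. of K*L0 about -1.
    # The infinite arc of the D contour maps to a point and adds nothing.
    phase = np.unwrap(np.angle(1.0 + K * L0))
    N_cw = -(phase[-1] - phase[0]) / (2.0 * np.pi)
    print(f"K = {K:4.1f}: clockwise encirclements of -1 = {N_cw:+.2f}")
# Z = N + P = N here: K = 2 gives 0 (stable), K = 10 gives 2 unstable
# closed-loop poles (the boundary is K = 8).
</code>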

The effect of a time delay is best visualised on a Nyquist plot, since $e^{-s\tau}$ is not rational. The delay causes the Nyquist plot to spiral, and as a result the closed loop can alternate between stable and unstable as the gain changes. The larger the delay, the more tightly the plot spirals.
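
The sketch below traces this spiral for an illustrative loop $\Lambda_0(s)=\frac{e^{-s\tau}}{s+1}$ (not a system from the notes) and lists the first few negative real axis crossings; each crossing magnitude gives a candidate critical gain.

<code python>
# Sketch: the spiral caused by a delay. L0(s) = exp(-s*tau)/(s+1) crosses
# the negative real axis infinitely often; each crossing sets a candidate
# critical gain -1/Re(L0).
import numpy as np

tau = 2.0                                  # illustrative delay
w = np.linspace(1e-3, 30.0, 300001)
L = np.exp(-1j * w * tau) / (1j * w + 1.0)

re, im = L.real, L.imag
cross = np.where((np.sign(im[:-1]) != np.sign(im[1:])) & (re[:-1] < 0.0))[0]
for i in cross[:4]:
    print(f"w ~ {w[i]:6.3f}: Re = {re[i]:+.4f}, critical gain ~ {-1.0 / re[i]:.2f}")
# Successive crossings are roughly 2*pi/tau apart in frequency, so a
# larger delay winds the spiral more tightly.
</code>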

When there are poles on the imaginary axis, we need to create infinitesimally small detours around those poles when constructing the Nyquist contour. This produces a divergent Nyquist plot, whose branches are connected by an arc at infinity: each infinitesimal $180^\circ$ detour in the s-plane maps to a $180^\circ$ arc of infinite radius in the Nyquist plot (per pole order), and for a detour that keeps the pole outside the contour this connecting arc is clockwise.

A single pole at the origin (e.g. $\Lambda_0(s)=\frac{1}{s}$) gives a vertical plot along the imaginary axis whose branches at $\omega=0^+$ and $\omega=0^-$ are connected by the arc at infinity; the critical point is then never encircled, so the closed loop is stable for any positive gain. A purely imaginary pole pair, e.g. $\Lambda_0(j\omega)=\frac{1}{1-\omega^2}$, has a Nyquist plot that passes through the critical point and is marginally/critically stable. The plot starts at $(1,j0)$ at $\omega=0$, moves to $(+\infty,j0)$ as $\omega\to 1^-$, jumps to $(-\infty,j0)$ at $\omega=1^+$, then goes to $0$ as $\omega\to\infty$. From $\omega\to-\infty$, the plot moves from $0$ to $(-\infty,j0)$ as $\omega\to-1^-$, then jumps to $(+\infty,j0)$ at $\omega=-1^+$ and returns to $(1,j0)$ as $\omega\to0$.

===== Margins =====

As the Nyquist plot passes close to the critical point, the output has sustained oscillations. It therefore makes sense to measure how far the Nyquist plot is from the critical point:

  * **Gain margin**: the real-valued increase in gain needed for the plot to cross the critical point
  * **Phase margin**: the angle the Nyquist plot must be rotated clockwise to cross the critical point
  * **Sensitivity peak**: the reciprocal of the radius of the circle around the critical point that just touches the Nyquist plot

As rules of thumb, $>30^\circ$ is a good phase margin, $>15\,\text{dB}$ is a good gain margin and $<4$ is a good sensitivity peak.

The phase margin is the angle from the negative real axis to the point on the Nyquist plot where $|\Lambda_0(j\omega)|=1$, i.e. the rotation that would make the Nyquist plot intersect the critical point. The gain margin is measured where the Nyquist plot intersects the negative real axis ($-180^\circ$ of phase): it is the reciprocal of the distance from the origin to that intersection, and it measures the extra gain the system could tolerate before hitting the critical point. The sensitivity peak is the reciprocal of the minimum distance $\eta$ from the critical point to the Nyquist plot: $$\frac{1}{\eta}=\frac{1}{\min_\omega|1+\Lambda_0(j\omega)|}=\max_\omega|S_0(j\omega)|$$ The phase and gain margins are used together as a design guide; the sensitivity peak can be used alone.
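
A sketch of reading all three margins off a sampled frequency response, for the hypothetical loop $\Lambda_0(s)=\frac{1}{s(s+1)^2}$:

<code python>
# Sketch: PM, GM and sensitivity peak for L0(s) = 1/(s(s+1)^2).
import numpy as np

w = np.logspace(-2, 2, 200001)
s = 1j * w
L = 1.0 / (s * (s + 1.0) ** 2)

mag = np.abs(L)
ph = np.unwrap(np.angle(L))              # radians, starts near -pi/2

# Phase margin: PM = 180deg + angle(L) at the gain crossover |L| = 1.
i_gc = np.argmin(np.abs(mag - 1.0))
pm_deg = 180.0 + np.degrees(ph[i_gc])

# Gain margin: GM = -20 log10 |L| at the phase crossover angle(L) = -180deg.
i_pc = np.argmin(np.abs(ph + np.pi))
gm_db = -20.0 * np.log10(mag[i_pc])

# Sensitivity peak: max over frequency of |S0| = 1/|1 + L|.
sp = np.max(1.0 / np.abs(1.0 + L))

print(f"PM ~ {pm_deg:.1f} deg")          # ~21 deg
print(f"GM ~ {gm_db:.1f} dB")            # ~6 dB
print(f"SP ~ {sp:.2f}")
# Guaranteed PM from the SP relation in the next paragraph:
print(f"PM bound ~ {np.degrees(2.0 * np.arcsin(1.0 / (2.0 * sp))):.1f} deg")
</code>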

The phase margin $\phi$ and sensitivity function are related at the gain crossover frequency $\omega_c$ by: $$|1+\Lambda_0(j\omega_c)|=2\sin\left(\frac{\phi}{2}\right)$$ $$\frac{1}{2\sin\left(\frac{\phi}{2}\right)}=|S_0(j\omega_c)|\leq\max_\omega|S_0(j\omega)|=SP$$ $$\frac{1}{SP}\leq 2\sin\left(\frac{\phi}{2}\right)$$ So as the sensitivity peak falls (the Nyquist plot moves away from the critical point), the guaranteed phase margin increases.

The gain margin and sensitivity peak are related by noting that the Nyquist plot crosses the negative real axis at a point $a$ with $|a|\leq 1-\eta$: $$|a|\leq 1-\eta\implies GM=-20\log_{10}|a|\geq-20\log_{10}(1-\eta)$$ So the larger $\eta$ (the smaller the sensitivity peak), the larger the guaranteed gain margin.

On the Bode plot, the gain margin is the distance below $0\,\text{dB}$ on the magnitude plot at the frequency where the phase is $-180^\circ$. The phase margin is the distance of the phase above $-180^\circ$ at the frequency where the magnitude is $0\,\text{dB}$. The gain margin can be infinite if the only intersection with the negative real axis is at the origin. The phase margin may be taken as $360^\circ$ if no rotation of the Nyquist plot will make it intersect the critical point.

A small sensitivity peak guarantees good gain and phase margins, but the converse is not true. Where there are multiple phase crossovers, we take the gain margin at the crossing closest to the critical point; likewise, for multiple gain crossovers, we take the phase margin at the crossing closest to the critical point.

====== Robustness to model uncertainty ======

The sensitivity function describes how disturbances and model uncertainty affect the system. As all models are approximate, there will be some level of uncertainty inherent in the model, and it is useful to understand how the model deviates from the real system. Robustness analysis needs to characterise the amount of mismatch that does not destroy stability/performance. For LTI modelling errors, we consider:

  * Additive modelling error model: $R=R_0+R_\epsilon$
  * Multiplicative modelling error model: $R=R_0(1+R_\Delta)$

We can note that $R_\epsilon=R_0R_\Delta$. The multiplicative form is often more convenient when working with transfer functions. Applied to the plant, $G=G_0+G_\epsilon=G_0(1+G_\Delta)$. The modelling errors can have different frequency response characteristics to the modelled transfer function.

At low frequencies the additive modelling error may peak, making the additive form more informative there; at high frequencies the multiplicative error typically saturates, making it the more useful description in that range.

The small gain theorem allows us to analyse systems where the model isn't perfect. Supposing $C(s)$ internally stabilises $G_0(s)$, we can state conditions under which $C(s)$ also stabilises the true plant. We assume that $\Lambda_0=G_0C$ and $\Lambda=GC$ have the same number of RHP poles. The small gain theorem states that the actual system is internally stable if, for all $\omega$: $$|G_\epsilon(j\omega)|\frac{|C(j\omega)|}{|1+\Lambda_0(j\omega)|}<1$$ or equivalently, in multiplicative form: $$|G_\Delta(j\omega)|\frac{|\Lambda_0(j\omega)|}{|1+\Lambda_0(j\omega)|}<1\iff|G_\Delta(j\omega) T_0(j\omega)|<1$$ That is, the true system's Nyquist plot lies within a distance $|G_\Delta\Lambda_0|$ of the model's Nyquist plot, and hence cannot change the number of encirclements of the critical point.
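
A minimal numerical version of the multiplicative test, with a hypothetical nominal plant $G_0=\frac{1}{s+1}$, controller $C=5$ and an unmodelled fast pole, so $G=\frac{G_0}{0.1s+1}$ and $G_\Delta=\frac{-0.1s}{0.1s+1}$:

<code python>
# Sketch of the small gain test |G_Delta(jw) T0(jw)| < 1 for all w.
import numpy as np

w = np.logspace(-2, 4, 100001)
s = 1j * w

G0 = 1.0 / (s + 1.0)                     # nominal plant
C = 5.0                                  # static controller
L0 = C * G0
T0 = L0 / (1.0 + L0)                     # nominal complementary sensitivity
G_delta = -0.1 * s / (0.1 * s + 1.0)     # multiplicative modelling error

test = np.max(np.abs(G_delta * T0))
print(f"max |G_Delta T0| = {test:.3f}")  # < 1 => robust stability guaranteed
</code>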

The small gain theorem only provides a sufficient condition, not a necessary one, so a violation does not necessarily mean the system is unstable. It is hard to obtain a less conservative criterion in general, since we may only have a frequency-dependent bound on the magnitude error $|G_\Delta(j\omega)|$ without any phase information.

The modelling error can also be propagated through the sensitivity functions. $$G_\Delta(s)=\frac{G(s)-G_0(s)}{G_0(s)}$$ The actual sensitivity functions are given by: $$S(s)=S_0(s)S_\Delta(s)$$ $$T(s)=T_0(s)(1+G_\Delta(s))S_\Delta(s)$$ $$S_i(s)=S_{i0}(s)(1+G_\Delta(s))S_\Delta(s)$$ $$S_u(s)=S_{u0}(s)S_\Delta(s)$$ where the error sensitivity is: $$S_\Delta(s)=\frac{1}{1+T_0(s)G_\Delta(s)}$$ We want $S_\Delta(s)\approx1$ over all frequencies of interest, which is roughly true if $|T_0(j\omega)G_\Delta(j\omega)|\ll1$. Model uncertainty thus limits the achievable closed-loop bandwidth.

===== Frequency based controller design =====

In design, we should consider all four sensitivity functions, but this can be cumbersome. Under certain conditions these functions can be approximated using the open-loop gain alone. We can place bounds on the sensitivity functions: $$\frac{1}{|\Lambda_0(j\omega)|+1}\leq|S_0(j\omega)|\leq\frac{1}{|\Lambda_0(j\omega)|-1}$$ $$\frac{|\Lambda_0(j\omega)|}{|\Lambda_0(j\omega)|+1}\leq|T_0(j\omega)|\leq\frac{|\Lambda_0(j\omega)|}{|\Lambda_0(j\omega)|-1}$$ $$\frac{1}{|C(j\omega)|}\frac{|\Lambda_0(j\omega)|}{|\Lambda_0(j\omega)|+1}\leq|S_{i0}(j\omega)|\leq\frac{1}{|C(j\omega)|}\frac{|\Lambda_0(j\omega)|}{|\Lambda_0(j\omega)|-1}$$ $$\frac{1}{|G_0(j\omega)|}\frac{|\Lambda_0(j\omega)|}{|\Lambda_0(j\omega)|+1}\leq|S_{u0}(j\omega)|\leq\frac{1}{|G_0(j\omega)|}\frac{|\Lambda_0(j\omega)|}{|\Lambda_0(j\omega)|-1}$$ (the upper bounds holding where $|\Lambda_0(j\omega)|>1$).

For frequencies where $|\Lambda_0(j\omega)|\gg1$: $$|S_0(j\omega)|\ll1,\quad|T_0(j\omega)|\approx 1,\quad|S_{i0}(j\omega)|\approx\frac{1}{|C(j\omega)|},\quad|S_{u0}(j\omega)|\approx\frac{1}{|G_0(j\omega)|}$$ Similarly, for frequencies where $|\Lambda_0(j\omega)|\ll1$: $$|S_0(j\omega)|\approx1,\quad|T_0(j\omega)|\ll1,\quad|S_{i0}(j\omega)|\approx|G_0(j\omega)|,\quad|S_{u0}(j\omega)|\approx|C(j\omega)|$$ Around the crossover frequency, where $|\Lambda_0(j\omega)|\approx 1$, the loop gain magnitude does not constrain the sensitivity functions, which can do anything. The steady state error to a step is the DC value of the sensitivity function: $$e_{ss}=S_0(0)$$ The low frequency region affects tracking and the steady state value; the region around the crossover affects transients and stability; the high frequency region affects robustness and noise rejection.
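
The bounds are easy to check numerically; a sketch for the hypothetical loop $\Lambda_0(s)=\frac{10}{s(s+1)}$ (upper bound taken only where $|\Lambda_0|>1$):

<code python>
# Sketch: verifying the loop-gain bounds on |S0| for L0(s) = 10/(s(s+1)).
import numpy as np

w = np.logspace(-2, 2, 100001)
s = 1j * w
L = 10.0 / (s * (s + 1.0))

S0 = 1.0 / np.abs(1.0 + L)
lo = 1.0 / (np.abs(L) + 1.0)
hi = np.where(np.abs(L) > 1.0, 1.0 / (np.abs(L) - 1.0), np.inf)

print("lower bound holds:", bool(np.all(S0 >= lo - 1e-12)))
print("upper bound holds:", bool(np.all(S0 <= hi + 1e-12)))
# The integrator in L0 forces S0 -> 0 at DC: zero steady-state step error.
print(f"|S0| at the lowest grid frequency: {S0[0]:.2e}")
</code>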

In the low frequency region, in order to have zero steady state error to a step, the low frequency magnitude must lie above a line of slope $-20\,\text{dB/decade}$ (i.e. the loop contains an integrator). In order to have good tracking, the low frequency gain needs to be larger than some $M\gg1$.

The crossover frequency affects how close the Nyquist plot of the open loop gain gets to the critical point: the smaller the PM or GM, the stronger the oscillations in the transient response. The rise time and crossover frequency are related by: $$\omega_c=\omega_n\sqrt{\sqrt{1+4\psi^4}-2\psi^2}$$ $$M_f=\pi+\angle\Lambda(j\omega_c)=\arctan\frac{2\psi}{\sqrt{\sqrt{1+4\psi^4}-2\psi^2}}$$ where $\omega_c$ is the crossover frequency, $\omega_n\geq\frac{1.8}{t_r}$ is set by the required rise time, $M_f$ is the phase margin and $\psi$ is the damping ratio. The phase margin is inversely related to the overshoot $M_p$. Unless the open loop is exactly $\omega_n^2/(s(s+2\psi\omega_n))$, these formulas are approximate. The approximations are good provided the open loop has only one crossover, the slope around the crossover is between $-20$ and $-40\,\text{dB/decade}$, and the phase there is between $-90^\circ$ and $-180^\circ$.
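
A quick check of these formulas for assumed values $\psi=0.5$, $\omega_n=2$:

<code python>
# Sketch: crossover and PM formulas for L(s) = wn^2/(s(s + 2*psi*wn)).
import numpy as np

psi, wn = 0.5, 2.0                       # assumed damping ratio and wn
root = np.sqrt(np.sqrt(1.0 + 4.0 * psi**4) - 2.0 * psi**2)
wc = wn * root                           # predicted gain crossover
pm = np.degrees(np.arctan(2.0 * psi / root))

s = 1j * wc
L = wn**2 / (s * (s + 2.0 * psi * wn))
print(f"|L(j wc)| = {abs(L):.4f} (should be 1)")
print(f"PM formula: {pm:.1f} deg, direct: {180.0 + np.degrees(np.angle(L)):.1f} deg")
# psi = 0.5 gives PM ~ 51.8 deg (close to the PM ~ 100*psi rule of thumb).
</code>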

In order to have good rejection of measurement noise and high frequency modelling errors, we need the open loop gain to be small at high frequencies.

====== Frequency domain compensation ======

===== Basic compensators =====

The basic elements that can be used in compensation are:

  * Lead control (PD special case)
  * Lag control (PI special case)
  * Lead-lag control (PID special case)

===== Lag compensator =====

A lag compensator has a transfer function given by: $$C(s)=\frac{K(\tau_zs+1)}{(\tau_ps+1)}$$ where $0<\tau_z<\tau_p$, $C(0)=K$ and $C(\infty)=K\frac{\tau_z}{\tau_p}$. $C(s)$ has a real pole and a real zero, both stable, with the pole closer to the origin than the zero. It is a lag compensator because its phase is negative. A lag compensator increases the low frequency gain; this improves the steady state error but reduces the phase margin.

A PI compensator is a special case of a lag compensator: $$C(s)=\frac{K(\tau_zs+1)}{s}$$ where $0<\tau_z$, $C(0)=\infty$ and $C(\infty)=K\tau_z$.

A lag compensator is advantageous because most plants have an excess of phase at low frequencies. If we choose the corner frequencies of the lag compensator to be small, we can improve the DC gain without affecting the PM too much. If the compensator's phase lag peaks too close to the crossover, it can erode the margins and cause instability. Alternatively, the lag compensator can be used to reduce the gain at high frequencies, improving the phase margin as the gain crossover frequency is reduced; this doesn't change the phase crossover frequency, so the gain margin also improves.

We can rewrite the controller as: $$C(s)=\alpha\frac{Ts+1}{\alpha Ts+1},\quad\alpha>1$$ This reduces the number of free parameters. When designing the compensator, we generally place the corner frequencies (pole and zero) about a decade or more below the intended crossover frequency.
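
For example (illustrative numbers), with $\alpha=10$ and the compensator zero a decade below an intended crossover at $\omega_c=1$ rad/s, the DC gain rises by a factor of $\alpha$ while little phase lag remains at $\omega_c$:

<code python>
# Sketch: lag compensator C(s) = alpha*(T*s + 1)/(alpha*T*s + 1).
import numpy as np

alpha, T = 10.0, 10.0        # zero at 1/T = 0.1 = wc/10, pole at 0.01
wc = 1.0

s = 1j * wc
C = alpha * (T * s + 1.0) / (alpha * T * s + 1.0)
print(f"DC gain = {alpha}, |C(j wc)| = {abs(C):.3f}")       # ~1 at crossover
print(f"phase at wc: {np.degrees(np.angle(C)):.1f} deg")    # ~ -5 deg of lag
</code>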

The closed loop bandwidth is the highest frequency up to which we still have good tracking: $$20\log_{10}|T(j\omega)|\geq -3\,\text{dB},\quad\forall\omega\in[0,\omega_{BW}]$$ It is typically assumed that $\omega_{BW}\in[\omega_c,2\omega_c]$, based on the second order approximation of the closed loop transfer function.

===== Lead compensator =====

A lead compensator has a transfer function given by: $$C(s)=\frac{K(\tau_zs+1)}{(\tau_ps+1)}$$ where $0<\tau_p<\tau_z$, $C(0)=K$ and $C(\infty)=K\frac{\tau_z}{\tau_p}$. This is the same form as the lag compensator, except the zero is closer to the origin than the pole. It is a lead compensator because its phase is positive. The phase peak is at $\omega=\frac{1}{\sqrt{\tau_p\tau_z}}$ and has value $\phi_{\max}=\arcsin\frac{\tau_z-\tau_p}{\tau_z+\tau_p}$.

We can rewrite the controller as: $$C(s)=K\frac{Ts+1}{\alpha Ts+1}$$ where $\tau_z=T$, $\tau_p=\alpha T=\alpha\tau_z$ and $\alpha\in(0,1)$. This gives: $$\omega_{\max}=\frac{1}{T\sqrt{\alpha}}$$ $$\alpha=\frac{1-\sin\phi_{\max}}{1+\sin\phi_{\max}}$$
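
These two formulas are enough to size a lead compensator; a sketch with assumed targets $\phi_{\max}=40^\circ$ at $\omega_{\max}=3$ rad/s:

<code python>
# Sketch: sizing a lead compensator C(s) = (T*s + 1)/(alpha*T*s + 1).
import numpy as np

phi_max = np.radians(40.0)   # desired phase boost
w_max = 3.0                  # place the peak at the intended crossover

alpha = (1.0 - np.sin(phi_max)) / (1.0 + np.sin(phi_max))
T = 1.0 / (w_max * np.sqrt(alpha))

s = 1j * w_max
C = (T * s + 1.0) / (alpha * T * s + 1.0)
print(f"alpha = {alpha:.3f}, T = {T:.3f}")
print(f"phase at w_max: {np.degrees(np.angle(C)):.1f} deg")  # ~40 deg
</code>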

Applying a compensator shifts the crossover frequency, so multiple design iterations may be required. The lead compensator can be used to increase the phase at the crossover frequency, increasing the PM.

===== Lead-lag compensator =====

Lead-lag compensation combines a lead and a lag compensator. Its transfer function is given by: $$C(s)=\frac{K(\tau_{z2}s+1)}{(\tau_{p2}s+1)}\frac{(\tau_{z1}s+1)}{(\tau_{p1}s+1)}$$ where $0<\underbrace{\tau_{p1}<\tau_{z1}}_{\text{lead}}<\underbrace{\tau_{z2}<\tau_{p2}}_{\text{lag}}$.

This is a cascade of a lead and a lag compensator, although other combinations are possible. In frequency, the lag acts first (at lower frequencies), then the lead (at higher frequencies).

PID control is a special case where $\tau_{p2}\to\infty$ and $\tau_{p1}\to0$: $$C(s)=\frac{K(\tau_{z2}s+1)(\tau_{z1}s+1)}{s}=\underbrace{K(\tau_{z1}+\tau_{z2})}_{P}+\underbrace{\frac{K}{s}}_{I}+\underbrace{K\tau_{z2}\tau_{z1}s}_{D}$$ PID controllers are not proper, so often a fast pole is introduced to make them proper (e.g. $\frac{1}{1+\epsilon s}$ with $\epsilon\approx0$).
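
A quick numeric check of this decomposition with made-up constants:

<code python>
# Sketch: K(tz2*s + 1)(tz1*s + 1)/s equals P + I/s + D*s.
import numpy as np

K, tz1, tz2 = 2.0, 0.5, 3.0
P, I, D = K * (tz1 + tz2), K, K * tz1 * tz2

for wk in (0.1, 1.0, 10.0):
    s = 1j * wk
    lhs = K * (tz2 * s + 1.0) * (tz1 * s + 1.0) / s
    rhs = P + I / s + D * s
    print(f"w = {wk:5.1f}: |lhs - rhs| = {abs(lhs - rhs):.2e}")  # ~0
</code>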

To simplify the controller, we introduce $\beta$, where $20\log_{10}\beta$ is the size of the rise (lead pair) and fall (lag pair) in the magnitude plot. This fixes the high frequency gain to equal the DC gain. The controller becomes: $$C(s)=K_c\frac{(T_1s+1)(T_2s+1)}{(\frac{T_1}{\beta}s+1)(\beta T_2s+1)},\quad\beta>1$$

We define the velocity constant $K_v$, which determines the steady state error to a ramp input: $$K_v=\lim_{s\to0}sC(s)G_0(s)$$ For ramp reference inputs, the steady state error is the reciprocal of $K_v$: $$e_{ss}=\frac{1}{K_v}$$ Lag and lead-lag controllers create a dominant slow real pole, which leads to a small but slowly decaying tail in the step response.
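
A numerical sketch with an assumed PI controller $C(s)=\frac{2(0.5s+1)}{s}$ and plant $G_0(s)=\frac{1}{s+1}$:

<code python>
# Sketch: velocity constant Kv = lim_{s->0} s*C(s)*G0(s).
s = 1e-8                     # numerical stand-in for s -> 0
C = 2.0 * (0.5 * s + 1.0) / s
G0 = 1.0 / (s + 1.0)
Kv = s * C * G0
print(f"Kv ~ {Kv:.4f}, ramp-tracking error e_ss = 1/Kv ~ {1.0 / Kv:.4f}")
</code>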

If a plant has a pole or zero that is undesirable (e.g. slow), we can use a pre-compensator to cancel it, and then a lead/lag compensator to achieve the performance specifications. Do not cancel an unstable pole or zero, as this destroys internal stability.

===== Comparison =====

Lead compensators achieve specs largely by virtue of their phase response, adding positive phase in the mid-frequency region to boost the PM. Lag compensators achieve specs by their magnitude response: amplification at low frequencies and attenuation at high frequencies. Lead-lag compensators can combine the benefits of both schemes.

Lead pros:

  * Higher crossover frequency
    * Higher bandwidth
    * Faster system, with reduced settling and rise times

Lead cons:

  * Higher bandwidth may not be desirable if measurement noise is present, or if there are high frequency modelling uncertainties
  * Increases the control sensitivity function, causing larger control signals
    * Increased power consumption
    * Increased cost
    * Actuator saturation and non-linear effects

Lag pros:

  * Can improve DC gain and reduce steady state errors
  * May improve PM by reducing the gain at higher frequencies
  * High frequency noise may be attenuated without reducing DC gain

Lag cons:

  * Typically yields a lower crossover frequency, lower bandwidth and slower response
  * The pole-zero pair is close to the origin, which typically induces a very long tail in the transient response

Lead-lag pros:

  * If improvements in both steady state and transient response are required, a lead-lag compensator can exploit the advantages of both

A large number of practical problems can be solved using these compensators. More complex problems sometimes require compensators with different zero-pole configurations, and some problems are more easily tackled using optimal control techniques.

====== Fundamental limitations ======

There are fundamental limits to what we can achieve by compensation. One important constraint is that reducing sensitivity at some frequencies causes larger sensitivity at others; the Bode sensitivity integral captures this. Given an open loop transfer function with no poles on the imaginary axis or in the RHP, and supposing that the relative degree of the open loop is $\geq 2$: $$\int_0^\infty \ln|S_0(j\omega)|d\omega=0$$ meaning that reducing the sensitivity at one frequency increases it at another (the waterbed effect). This also holds when there are open loop poles on the imaginary axis; the integral becomes positive when there are open loop RHP poles. Sensitivity peaking is made worse when there is an upper bound on the loop gain crossover frequency.
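
A numerical check of the integral for the hypothetical loop $\Lambda_0(s)=\frac{1}{(s+1)^2}$ (stable, relative degree 2):

<code python>
# Sketch: Bode sensitivity integral, int_0^inf ln|S0(jw)| dw = 0, for
# L0(s) = 1/(s+1)^2 (stable open loop, relative degree 2).
import numpy as np

w = np.logspace(-4, 4, 2_000_001)
s = 1j * w
L0 = 1.0 / (s + 1.0) ** 2
y = np.log(1.0 / np.abs(1.0 + L0))       # ln |S0(jw)|

integral = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(w))   # trapezoid rule
print(f"integral of ln|S0| ~ {integral:.4f}")   # ~0: the waterbed effect
</code>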

The internal model principle states that to track/reject a class of signals, the controller must contain a model of those signals, i.e. it must be able to regenerate them. In order to achieve asymptotically perfect reference tracking and disturbance rejection, the controller must take the form: $$C(s)=\frac{B(s)}{A(s)\Gamma(s)}$$ where $\Gamma(s)$ is the generating (characteristic) polynomial of the reference and disturbance signals. If we know the reference and disturbance characteristics and can form $\Gamma(s)$, then we only need to find $A(s)$ and $B(s)$ so that the closed loop system is internally stable.
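
For step references, $\Gamma(s)=s$, so the controller contains an integrator; a minimal check (with an assumed plant and controller) that this forces $S_0(0)=0$:

<code python>
# Sketch: internal model principle for steps. C(s) = (s + 2)/s contains
# the model 1/Gamma(s) = 1/s, so S0(0) = 0 and steps are tracked exactly.
s = 1e-9                     # evaluate near s = 0
G0 = 1.0 / (s + 1.0)         # assumed plant
C = (s + 2.0) / s            # assumed stabilising controller
S0 = 1.0 / (1.0 + C * G0)
print(f"S0(0) ~ {abs(S0):.2e}")   # ~0: zero steady-state tracking error
</code>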