To find the arrival and departure angles, we can apply the phase condition to a test point near the pole or zero for which we want to compute the arrival/departure angle.

====== Nyquist criterion ======

Moving around a closed contour in the s-plane (clockwise), the angle from a zero inside the contour to the moving point changes by $-2\pi$ rad; for a zero outside the contour, a full traversal produces no net change in the angle of $F(s)$. For two zeros, the phase change can be $-4\pi$ with both enclosed, $-2\pi$ with only one enclosed, or $0$ with none enclosed. When moving from the s-plane to the F-plane, each enclosed zero causes a clockwise wrap of $F(s)$ around the origin. Poles have a similar effect, but in the opposite direction: each enclosed pole gives a phase change of $2\pi$, a counter-clockwise wrap. For a rational function, as $s$ traverses a given clockwise contour $\mathcal{C}_s$, the image $F(s)$ encircles the origin clockwise $Z-P$ times, where $Z$ and $P$ are the numbers of enclosed zeros and poles (Cauchy's argument principle).

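As a quick numerical illustration of the argument principle (a sketch with assumed example zeros and poles, not taken from the notes), we can count encirclements of the origin by tracking the unwrapped phase of $F(s)$ along a contour:

```python
import numpy as np

def clockwise_encirclements(f, radius=2.0, n=20000):
    """Count clockwise encirclements of the origin by f(s) as s
    traverses a clockwise circle of the given radius."""
    theta = np.linspace(2 * np.pi, 0, n)      # decreasing angle = clockwise
    F = f(radius * np.exp(1j * theta))
    dphi = np.unwrap(np.angle(F))             # continuous phase along the contour
    ccw = (dphi[-1] - dphi[0]) / (2 * np.pi)  # net counter-clockwise wraps
    return -round(ccw)

# Zeros at 0.5 and -1 (inside |s| = 2), pole at 1 (inside): Z - P = 1
F = lambda s: (s - 0.5) * (s + 1) / (s - 1)
print(clockwise_encirclements(F))  # 1
```

Moving the pole outside the contour (say to $s=3$) changes the count to $Z-P=2$, matching the argument principle.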
Given the closed loop transfer function $$T_0(s)=\frac{\Lambda_0(s)}{1+\Lambda_0(s)}$$ the closed loop poles are the zeros of $F(s)=1+\Lambda_0(s)$, while the poles of $F(s)$ are the open loop poles. Applying the argument principle to $F(s)$ therefore lets us count closed loop RHP poles from the open loop frequency response.

The Nyquist stability criterion states that the closed loop system is stable iff the Nyquist plot of $\Lambda_0(s)$ encircles the point $(-1,0)$ counterclockwise exactly $P$ times as $s$ traverses the D contour clockwise, where $P$ is the number of open loop RHP poles (so for a stable open loop, no encirclements). This is an alternative to the root locus and Routh-Hurwitz criteria: it can deal with non-rational transfer functions and with unstable open loop transfer functions. To see how changing the gain affects stability, note that $$1+K\Lambda_0(s)=0\iff\Lambda_0(s)=-\frac{1}{K}$$ so we can simply check the encirclements of the new critical point $\left(-\frac{1}{K},0\right)$ by the same Nyquist plot.

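To make the criterion concrete, a small numerical sketch with an assumed example loop (not from the notes): for $\Lambda_0(s)=\frac{K}{(s+1)^3}$, Routh-Hurwitz gives closed loop stability iff $K<8$, and counting encirclements of $-1$ agrees:

```python
import numpy as np

def ccw_encirclements_of_minus1(L, wmax=1000.0, n=2000001):
    """Counter-clockwise encirclements of -1 by L(jw), w from -wmax to wmax.
    The D-contour closure arc contributes nothing for strictly proper L."""
    w = np.linspace(-wmax, wmax, n)
    F = 1.0 + L(1j * w)          # encircling -1 by L == encircling 0 by 1 + L
    phi = np.unwrap(np.angle(F))
    return round((phi[-1] - phi[0]) / (2 * np.pi))

L = lambda K: (lambda s: K / (s + 1) ** 3)
print(ccw_encirclements_of_minus1(L(2)))   # 0: stable (no open-loop RHP poles)
print(ccw_encirclements_of_minus1(L(10)))  # -2: two closed-loop RHP poles
```

With no open loop RHP poles ($P=0$), stability requires zero net encirclements; the $-2$ at $K=10$ reveals two closed loop RHP poles.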
A time delay can only be properly visualised in the Nyquist plot, since $e^{-sT}$ is not rational. The delay causes the plot to spiral in the F-plane, and as a consequence the closed loop alternates between stable and unstable as the gain changes. The larger the delay, the faster the phase winds with frequency and the tighter the spiral.

When there are poles on the imaginary axis, we need to create infinitesimally small detours around those poles when creating the Nyquist contour. This creates a divergent Nyquist plot, which can be closed by connecting the branch that runs off to infinity on one side of the pole to the branch on the other side. Each infinitesimal detour around a simple pole translates to a $180^\circ$ arc at infinity in the Nyquist plot, traversed clockwise.

A pole at the origin gives branches running to infinity at $\omega=0^+$ and $\omega=0^-$, joined by the clockwise arc at infinity; if the resulting plot never encircles the critical point, the closed loop is stable for any positive gain. A pair of purely imaginary poles, e.g. $\Lambda_0(s)=\frac{1}{s^2+1}$ so that $\Lambda_0(j\omega)=\frac{1}{1-\omega^2}$, gives a Nyquist plot that passes through the critical point, and the closed loop is marginally stable.

===== Margins =====

As the Nyquist plot passes close to the critical point, the output exhibits strong, lightly damped oscillations. It makes sense then to measure how far the Nyquist plot is from the critical point:

* **Gain margin** The factor by which the loop gain must be increased for the plot to reach the critical point
* **Phase margin** The angle the Nyquist plot must be rotated clockwise to reach the critical point
* **Sensitivity peak** The reciprocal of the radius of the circle around the critical point that just touches the Nyquist plot

The gain margin is measured at the phase crossover frequency $\omega_g$, where $\angle\Lambda_0(j\omega_g)=-180^\circ$: $$GM=\frac{1}{|\Lambda_0(j\omega_g)|}$$

The phase margin is the angle from the negative real axis to the point on the Nyquist plot where $|\Lambda_0(j\omega)|=1$. At the gain crossover frequency $\omega_c$, where $|\Lambda_0(j\omega_c)|=1$: $$PM=180^\circ+\angle\Lambda_0(j\omega_c)$$

The phase margin and sensitivity function are related by: $$|1+\Lambda_0(j\omega_c)|=2\sin\left(\frac{\phi}{2}\right)$$ $$\frac{1}{2\sin\left(\frac{\phi}{2}\right)}=|S_0(j\omega_c)|\leq\max_\omega|S_0(j\omega)|=SP$$ $$\frac{1}{SP}\leq 2\sin\left(\frac{\phi}{2}\right)$$ As the sensitivity peak falls (the Nyquist plot stays farther from the critical point), the guaranteed lower bound on the phase margin, $\phi\geq2\arcsin\left(\frac{1}{2SP}\right)$, increases.

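The first identity can be sanity checked numerically: at gain crossover the loop gain is a unit-magnitude point $e^{j(\phi-\pi)}$, where $\phi$ is the phase margin:

```python
import numpy as np

def crossover_distance(phi):
    """|1 + Lambda_0(j w_c)| when the phase margin is phi (radians)."""
    lam = np.exp(1j * (phi - np.pi))  # unit-magnitude loop gain at crossover
    return abs(1 + lam)

# |1 + Lambda_0(j w_c)| = 2 sin(phi / 2) for any phase margin phi
for phi in np.linspace(0.1, np.pi, 50):
    assert np.isclose(crossover_distance(phi), 2 * np.sin(phi / 2))
print(crossover_distance(np.pi / 3))  # approximately 1: a 60 degree PM puts the plot at distance 1
```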
The gain margin and sensitivity peak are related as follows. Every point of the Nyquist plot is at distance at least $\frac{1}{SP}$ from the critical point, so if the plot crosses the negative real axis at $-\eta$ (giving $GM=\frac{1}{\eta}$): $$1-\eta\geq\frac{1}{SP}$$ $$GM=\frac{1}{\eta}\geq\frac{SP}{SP-1}$$ A small sensitivity peak therefore also guarantees a good gain margin.

The gain margin can be seen in the Bode plot as the distance below $0\text{dB}$ on the magnitude plot at the frequency where the phase is $-180^\circ$. The phase margin is the distance of the phase above $-180^\circ$ at the frequency where the magnitude is $0\text{dB}$. The gain margin can be infinite if the only intersection with the negative real axis is at the origin. The phase margin may be taken as $360^\circ$ if no rotation of the Nyquist plot will make it intersect the critical point.

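As a sketch of reading the margins from sampled frequency-response data (the example loop $\Lambda_0(s)=\frac{4}{(s+1)^3}$ is assumed, not from the notes):

```python
import numpy as np

w = np.logspace(-2, 2, 200000)
L = 4 / (1j * w + 1) ** 3           # example loop gain on the jw axis
mag = np.abs(L)
phase = np.unwrap(np.angle(L))

ig = np.argmax(phase <= -np.pi)     # phase crossover: angle reaches -180 degrees
GM = 1 / mag[ig]                    # gain margin as a multiplicative factor

ic = np.argmax(mag <= 1)            # gain crossover: |L| falls to 1 (0 dB)
PM = np.degrees(np.pi + phase[ic])  # phase margin in degrees

print(GM, PM)
```

For this loop the phase crossover is at $\omega_g=\sqrt{3}$ with $|\Lambda_0|=\frac{1}{2}$, so $GM=2$, consistent with the Routh bound $K<8$ for $K=4$.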
A small sensitivity peak guarantees good gain and phase margins, but the converse is not true. Where there are multiple phase crossovers, we quote the gain margin for the crossing closest to the critical point; likewise, for multiple gain crossovers, we quote the phase margin for the crossing closest to the critical point.

====== Robustness to model uncertainty ======

The sensitivity function describes how disturbances and model uncertainty affect the system. As all models are approximate, a controller designed on the nominal model must also cope with the mismatch between the model and the true plant. Two common descriptions of the modelling error are:

* Additive modelling error: $R=R_0+R_\epsilon$
* Multiplicative modelling error: $R=R_0(1+R_\Delta)$

We can note that $R_\epsilon=R_0R_\Delta$. The multiplicative form is often the more convenient when working with transfer functions. The modelling errors can have different frequency response characteristics from the modelled transfer function.

The additive modelling error is typically largest at low frequencies, where it is the more useful description. The multiplicative (relative) error typically grows with frequency and saturates at high frequencies, where it is the more useful description.

The small gain theorem allows us to analyse systems where the model isn't perfect. Suppose $C(s)$ internally stabilises the nominal plant $G_0(s)$; we want conditions under which $C(s)$ also stabilises the true plant. We assume that $\Lambda_0=G_0C$ and $\Lambda=GC$ have the same number of RHP poles. The small gain theorem states that the actual closed loop is internally stable if, for all $\omega$: $$|G_\epsilon(j\omega)|\frac{|C(j\omega)|}{|1+\Lambda_0(j\omega)|}<1$$ or equivalently, in multiplicative form: $$|G_\Delta(j\omega)||T_0(j\omega)|<1$$

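A sketch of the multiplicative small gain test on a frequency grid (the systems are assumed examples, not from the notes): nominal plant $G_0=\frac{1}{s+1}$, controller $C=5$, and a true plant with an unmodelled fast pole:

```python
import numpy as np

w = np.logspace(-3, 4, 100000)
s = 1j * w
G0 = 1 / (s + 1)                     # nominal plant
C = 5.0                              # stabilising controller (assumed)
G = 1 / ((s + 1) * (0.05 * s + 1))   # true plant with unmodelled fast pole

T0 = G0 * C / (1 + G0 * C)           # nominal complementary sensitivity
G_delta = (G - G0) / G0              # multiplicative modelling error

# Small gain test: robustly stable if |G_delta T0| < 1 at every frequency
robust = bool(np.all(np.abs(G_delta * T0) < 1))
print(robust)
```

Here the peak of $|G_\Delta T_0|$ stays well below 1, so the nominal design tolerates the unmodelled dynamics.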
The small gain theorem only provides a sufficient condition, not a necessary one, so a violation does not mean the system is unstable. It is hard to obtain a less conservative criterion in general, since we may only have a frequency dependent bound on the magnitude error $|G_\Delta(j\omega)|$ without any phase information.

The modelling error can also be propagated to the sensitivity functions. With $$G_\Delta(s)=\frac{G(s)-G_0(s)}{G_0(s)}$$ the actual sensitivity functions are given by: $$S(s)=S_0(s)S_\Delta(s)$$ $$T(s)=T_0(s)(1+G_\Delta(s))S_\Delta(s)$$ $$S_i(s)=S_{i0}(s)(1+G_\Delta(s))S_\Delta(s)$$ $$S_u(s)=S_{u0}(s)S_\Delta(s)$$ where the error sensitivity is: $$S_\Delta(s)=\frac{1}{1+T_0(s)G_\Delta(s)}$$ We want $S_\Delta(s)\approx1$ over all frequencies of interest. This is roughly true if $|T_0(j\omega)G_\Delta(j\omega)|\ll1$ at those frequencies.

===== Frequency based controller design =====

In designing, we should consider all 4 sensitivity functions, but this can be cumbersome. Under certain conditions these functions can be approximated using the open loop gain. We can place the following bounds on the sensitivity functions (the upper bounds holding where $|\Lambda_0(j\omega)|>1$): $$\frac{1}{|\Lambda_0(j\omega)|+1}\leq|S_0(j\omega)|\leq\frac{1}{|\Lambda_0(j\omega)|-1}$$ $$\frac{|\Lambda_0(j\omega)|}{|\Lambda_0(j\omega)|+1}\leq|T_0(j\omega)|\leq\frac{|\Lambda_0(j\omega)|}{|\Lambda_0(j\omega)|-1}$$ $$\frac{1}{|C(j\omega)|}\frac{|\Lambda_0(j\omega)|}{|\Lambda_0(j\omega)|+1}\leq|S_{i0}(j\omega)|\leq\frac{1}{|C(j\omega)|}\frac{|\Lambda_0(j\omega)|}{|\Lambda_0(j\omega)|-1}$$ $$\frac{1}{|G_0(j\omega)|}\frac{|\Lambda_0(j\omega)|}{|\Lambda_0(j\omega)|+1}\leq|S_{u0}(j\omega)|\leq\frac{1}{|G_0(j\omega)|}\frac{|\Lambda_0(j\omega)|}{|\Lambda_0(j\omega)|-1}$$ For frequencies where $|\Lambda_0(j\omega)|\gg1$, $|S_0|\approx\frac{1}{|\Lambda_0|}$ and $|T_0|\approx1$; where $|\Lambda_0(j\omega)|\ll1$, $|S_0|\approx1$ and $|T_0|\approx|\Lambda_0|$. This lets us shape the open loop gain directly instead of working with the closed loop functions.

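A numerical spot check of the first pair of bounds (the loop $\Lambda_0=\frac{10}{(s+1)^2}$ is an assumed example, not from the notes):

```python
import numpy as np

w = np.logspace(-2, 2, 10000)
L = 10 / (1j * w + 1) ** 2
Lmag = np.abs(L)
S0 = np.abs(1 / (1 + L))             # nominal sensitivity magnitude

# Lower bound holds everywhere; upper bound only where |L| > 1
mask = Lmag > 1
assert np.all(S0 >= 1 / (Lmag + 1) - 1e-12)
assert np.all(S0[mask] <= 1 / (Lmag[mask] - 1) + 1e-12)
```

The bounds follow from the triangle inequality $\big||\Lambda_0|-1\big|\leq|1+\Lambda_0|\leq|\Lambda_0|+1$.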
In the low frequency region, in order to have zero steady state error, the open loop gain must lie above a line of slope $-20\text{dB/decade}$ at low frequencies; that is, the loop must contain an integrator so that the DC gain is unbounded.

The crossover frequency affects how close the Nyquist plot of the open loop gain gets to the critical point. The smaller the PM or GM, the stronger the oscillations in the transient response. The rise time and crossover frequency are related. For the standard second order loop: $$\omega_c=\omega_n\sqrt{\sqrt{1+4\psi^4}-2\psi^2}$$ $$M_f=\pi+\angle\Lambda(j\omega_c)=\arctan\frac{2\psi}{\sqrt{\sqrt{1+4\psi^4}-2\psi^2}}$$ Here $\omega_c$ is the crossover frequency, $\omega_n\geq\frac{1.8}{t_r}$ relates to the rise time $t_r$, $M_f$ is the phase margin and $\psi$ is the damping ratio. The phase margin is inversely related to the overshoot $M_p$. Unless the open loop is exactly $\Lambda(s)=\frac{\omega_n^2}{s(s+2\psi\omega_n)}$, these relations are only approximations.

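These second order relations can be verified directly for the loop $\Lambda(s)=\frac{\omega_n^2}{s(s+2\psi\omega_n)}$ with example values $\omega_n=1$, $\psi=0.5$:

```python
import numpy as np

wn, psi = 1.0, 0.5
wc = wn * np.sqrt(np.sqrt(1 + 4 * psi ** 4) - 2 * psi ** 2)

s = 1j * wc
Lam = wn ** 2 / (s * (s + 2 * psi * wn))

assert np.isclose(abs(Lam), 1.0)              # wc really is the gain crossover
Mf = np.arctan(2 * psi / np.sqrt(np.sqrt(1 + 4 * psi ** 4) - 2 * psi ** 2))
assert np.isclose(np.pi + np.angle(Lam), Mf)  # phase margin formula checks out
print(np.degrees(Mf))  # about 51.8 degrees, matching the rule of thumb PM ~ 100 psi
```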
In order to have good disturbance and noise rejection, the loop gain should be large at low frequencies, where disturbances are concentrated, and small at high frequencies, where measurement noise dominates.

====== Frequency domain compensation ======

===== Basic compensators =====

The basic elements that can be used in compensation are:

* Lead control (PD special case)
* Lag control (PI special case)
* Lead-lag control (PID special case)

===== Lag compensator =====

A lag compensator has a transfer function given by: $$C(s)=\frac{K(\tau_zs+1)}{(\tau_ps+1)}$$ where $0<\tau_z<\tau_p$, so the pole is slower than the zero and the phase $\angle C(j\omega)$ is negative (a phase lag).

A PI compensator is a special case of a lag compensator with the pole moved to the origin: $$C(s)=\frac{K(\tau_zs+1)}{s}$$ where $\tau_z>0$. The integrator gives unbounded DC gain, and hence zero steady state error to a step.

A lag compensator is advantageous as most plants have an excess of phase at low frequencies. If we choose the corner frequencies of the lag compensator to be small, we can improve the DC gain without affecting the PM too much. If the compensator introduces too much phase lag near crossover, it can erode the margins and cause instability. Alternatively, the lag compensator can be used to reduce the gain at high frequencies, lowering the crossover frequency to a region where more phase is available.

We can rewrite the controller as: $$C(s)=\alpha\frac{Ts+1}{\alpha Ts+1},\quad\alpha>1$$ where $\tau_z=T$ and $\tau_p=\alpha T$. The DC gain is $\alpha$ and the high frequency gain is 1.

The closed loop bandwidth is the largest frequency up to which we still have good tracking: $$20\log_{10}|T(j\omega)|\geq-3\text{dB},\quad\forall\omega\leq\omega_{BW}$$

===== Lead compensator =====

A lead compensator has a transfer function given by: $$C(s)=\frac{K(\tau_zs+1)}{(\tau_ps+1)}$$ where $0<\tau_p<\tau_z$, so the zero is slower than the pole and the phase $\angle C(j\omega)$ is positive (a phase lead).

We can rewrite the controller as: $$C(s)=K\frac{Ts+1}{\alpha Ts+1}$$ where $\tau_z=T$, $\tau_p=\alpha T$ and $\alpha\in(0,1)$. The maximum phase lead occurs at $\omega_m=\frac{1}{T\sqrt{\alpha}}$ (the geometric mean of the corner frequencies) and satisfies $$\sin\phi_{max}=\frac{1-\alpha}{1+\alpha}$$

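The peak-phase facts can be checked numerically (example values $T=1$, $\alpha=0.1$ assumed):

```python
import numpy as np

# For C(s) = (T s + 1)/(a T s + 1) with a in (0, 1), the peak phase occurs
# at w_m = 1/(T sqrt(a)) and satisfies sin(phi_max) = (1 - a)/(1 + a).
T, a = 1.0, 0.1
w = np.logspace(-3, 3, 200000)
phase = np.angle((1j * w * T + 1) / (1j * w * a * T + 1))

i = np.argmax(phase)                   # sampled location of the peak phase
wm = 1 / (T * np.sqrt(a))              # predicted peak frequency
phi_max = np.arcsin((1 - a) / (1 + a)) # predicted peak phase

assert np.isclose(w[i], wm, rtol=1e-2)
assert np.isclose(phase[i], phi_max, rtol=1e-4)
print(np.degrees(phi_max))  # about 54.9 degrees for a = 0.1
```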
Applying a compensator can shift the crossover frequency, so multiple iterations may be required when finding the controller. The lead compensator can be used to increase the phase at the crossover frequency, increasing the PM.

===== Lead-lag compensator =====

Lead-lag compensation combines both a lead and a lag compensator. Its transfer function is given by: $$C(s)=\frac{K(\tau_{z2}s+1)}{(\tau_{p2}s+1)}\frac{(\tau_{z1}s+1)}{(\tau_{p1}s+1)}$$ where $0<\tau_{p1}<\tau_{z1}$ (the lead part) and $0<\tau_{z2}<\tau_{p2}$ (the lag part).

This is a cascade of a lead and a lag compensator, combining the low frequency gain of the lag with the mid frequency phase boost of the lead.

PID control is a special case obtained in the limit $\tau_{p2}\to\infty$ (with the gain rescaled accordingly) and $\tau_{p1}\to0$: $$C(s)=\frac{K(\tau_{z2}s+1)(\tau_{z1}s+1)}{s}=\underbrace{K(\tau_{z1}+\tau_{z2})}_{P}+\underbrace{\frac{K}{s}}_{I}+\underbrace{K\tau_{z2}\tau_{z1}s}_{D}$$ PID controllers are not strictly proper, so often a fast pole is added to the derivative term to make the controller proper (e.g. a factor $\frac{1}{1+\epsilon s}$ with small $\epsilon>0$).

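The PID expansion above can be checked numerically at a few complex frequencies (the constants are assumed example values):

```python
import numpy as np

# Factored form K (tz2 s + 1)(tz1 s + 1)/s equals P + I/s + D s with
# P = K (tz1 + tz2), I = K, D = K tz1 tz2.
K, tz1, tz2 = 2.0, 0.5, 3.0
P, I, D = K * (tz1 + tz2), K, K * tz1 * tz2

for s in [0.1 + 0.3j, 1 - 2j, 5j]:
    lhs = K * (tz2 * s + 1) * (tz1 * s + 1) / s
    rhs = P + I / s + D * s
    assert np.isclose(lhs, rhs)
```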
To simplify the controller, we introduce $\beta>1$, where $\log\beta$ is the rise and fall spacing of the magnitude. This fixes the high frequency gain to be the same as the DC gain. The controller becomes: $$C(s)=K_c\frac{(T_1s+1)(T_2s+1)}{(\frac{T_1}{\beta}s+1)(\beta T_2s+1)},\quad\beta>1$$

We define the velocity constant $K_v$, which characterises the steady state response to a ramp input: $$K_v=\lim_{s\to0}sC(s)G_0(s)$$ For a unit ramp reference, the steady state error is the reciprocal of $K_v$: $$\epsilon_{ss}=\frac{1}{K_v}$$ A lag or lead-lag controller creates a dominant slow real pole, which leads to a small tail in the step response.

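A quick check of the $K_v$ relation via the final value theorem (the loop $C(s)G_0(s)=\frac{4}{s(s+2)}$ is an assumed example, not from the notes):

```python
import numpy as np

# Kv = lim s->0 s C G0 = 4/2 = 2, and the ramp-tracking error
# e_ss = lim s->0 s * (1/s^2) * S0(s) should equal 1/Kv = 0.5.
L = lambda s: 4 / (s * (s + 2))
s = 1e-8                               # small s approximates the limit
Kv = s * L(s)
e_ss = s * (1 / s ** 2) * (1 / (1 + L(s)))

print(Kv, e_ss)  # approximately 2 and 0.5
```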
If a plant has a pole or zero that is undesirable (e.g. slow), we can use a pre-compensator to cancel it, and then a lead/lag compensator to achieve the performance specifications. Do not cancel an unstable pole or zero, as this destroys internal stability.

===== Comparison =====

Lead compensators achieve specs largely by virtue of their phase response, adding positive phase in the mid-frequency region to boost the PM. Lag compensators achieve specs by their magnitude response: amplification at low frequencies and attenuation at high frequencies. Lead-lag compensators can combine the benefits of both schemes.

Lead pros:

* Higher crossover frequency
* Higher bandwidth
* Faster system, with reduced settling and rise times

Lead cons:

* Higher bandwidth may not be desirable if measurement noise is present, or if there are high frequency modelling uncertainties
* Increases the control sensitivity function, causing larger control signals
* Increased power consumption
* Increased cost
* Actuator saturation and non-linear effects

Lag pros:

* Can improve DC gain and reduce steady state errors
* May improve PM by reducing the gain at higher frequencies
* High frequency noise may be attenuated without reducing DC gain

Lag cons:

* Typically yields a lower crossover frequency, lower bandwidth and slower response
* The zero-pole pair is close to the origin, which typically induces a very long tail in the transient response

Lead-lag pros:

* If improvements in both steady state error and transient response are required, a lead-lag compensator can exploit the advantages of both compensators

A large number of practical problems can be solved using these compensators. More complex problems sometimes require compensators with different zero-pole configurations. Some problems are more easily tackled using optimal control techniques.

====== Fundamental limitations ======

There are fundamental limits to what we can achieve by compensation. One important constraint is that reducing sensitivity at low frequencies causes large sensitivity at high frequencies. The Bode sensitivity integral captures this. Given a stable closed loop whose open loop transfer function has no poles on the imaginary axis or in the RHP, and supposing that the relative degree of the open loop is $\geq2$: $$\int_0^\infty\ln|S_0(j\omega)|d\omega=0$$ Reducing the sensitivity at one frequency must therefore increase it at another. This also holds when there are open loop poles on the imaginary axis, and the integral becomes positive when there are open loop RHP poles. A peaking of sensitivity becomes unavoidable when there is an upper bound on the loop gain crossover frequency.

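The integral can be checked numerically for an assumed example loop (not from the notes): $\Lambda_0(s)=\frac{1}{(s+1)^3}$ has relative degree 3, no RHP or imaginary-axis poles, and a stable closed loop, so the integral should vanish:

```python
import numpy as np

w = np.linspace(0.0, 200.0, 2000001)
L = 1 / (1j * w + 1) ** 3
lnS = np.log(np.abs(1 / (1 + L)))                         # ln |S0(jw)|
integral = np.sum((lnS[1:] + lnS[:-1]) / 2 * np.diff(w))  # trapezoidal rule

print(integral)  # close to zero; the truncated tail beyond w = 200 is ~1e-5
```

The negative area below 0 dB at low frequencies is exactly balanced by the region where $|S_0|>1$, the waterbed effect in action.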
The internal model principle states that to track or reject a class of reference or disturbance signals with zero steady state error, the open loop must contain a model of the generator of those signals (for example, an integrator for step references or disturbances).