It is said that one picture is worth a thousand words – so let it be two thousand:
A dynamic system is one whose behavior, described by certain quantities, changes in time. We may be interested only in the end result (called the steady state), but if we really want to talk about system dynamics, then the end result is only one of the elements to be considered, and the main emphasis is on how the steady state is reached (or why there is no steady state).
Let’s imagine that we are dealing with a simple lifting mechanism, which after switching on should lift something to a given height (note: the example is simplified as much as possible, so that you can easily imagine what is shown in the figure below). In the simplest model (don’t worry, there will be no model equations for this example) the input (control) is the force to be applied, and the output is the height to which we managed to lift our weight. Obviously, the greater the weight, the greater the force needed. The device is constructed in such a way that after switching it on (applying a constant force at the input), after some (preferably short) time the mechanism reaches some constant height and its position stops changing (it settles – hence the name “steady state”).
The first question to ask is, “How much force should we use?” Let us assume that we can answer this question. However, this is not enough to declare success. Depending on what components were used to construct the lifting mechanism, the position may change in time in different ways while still eventually reaching the set height we require (see figure below).
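To make this concrete without writing down any equations for the real device, here is a tiny simulation of a stand-in model – a generic mass–spring–damper chosen purely for illustration, not the actual lifting mechanism. The same constant force and the same final height, but three different damping values produce three differently shaped responses, roughly in the spirit of the figure.

```python
import numpy as np

def simulate(m, c, k, force, dt=0.001, t_end=20.0):
    """Euler integration of m*x'' + c*x' + k*x = force (a generic stand-in model)."""
    n = int(t_end / dt)
    x, v = 0.0, 0.0                      # position and velocity start at zero
    trajectory = np.empty(n)
    for i in range(n):
        a = (force - c * v - k * x) / m  # acceleration from the force balance
        v += a * dt
        x += v * dt
        trajectory[i] = x
    return trajectory

# Same mass, stiffness and force -> the same steady-state position (force / k = 1.0),
# but three damping values give three differently shaped responses.
for c in (1.0, 4.0, 10.0):
    traj = simulate(m=1.0, c=c, k=4.0, force=4.0)
    print(f"c = {c}: maximum = {traj.max():.2f}, final position = {traj[-1]:.3f}")
```

The lightly damped case overshoots and oscillates around the final height, while the heavily damped one creeps up to it slowly – yet all three end up in the same place.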
Here we come to the first reason for learning system dynamics – acquiring the ability to predict how a system will behave in response to a particular input and using this knowledge in system design (including the selection of components that make up the system). We can look at many features of such systems; for a start, let us focus on the (perhaps) most obvious ones:
- How fast will the system reach the steady state (this still needs to be defined – e.g. after how long does the output stay within ±5% of the steady-state value)?
- Will there be oscillations in the system, or will the system reach the steady state without oscillations (aperiodic behavior)?
- How large is the overshoot of the response? (All three features are illustrated with a short simulation sketch right after this list.)
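These three quantities can be read straight off a simulated response. The sketch below reuses the simulate() helper from the previous snippet and applies the informal definitions from the list (the ±5% band for settling time, the peak above the steady-state value for overshoot); it is only an illustration, not a general-purpose implementation.

```python
import numpy as np

def step_metrics(y, dt, tol=0.05):
    """Settling time (tol band), percentage overshoot and a crude oscillation check."""
    y_ss = y[-1]                                    # take the last sample as the steady-state value
    band = tol * abs(y_ss)
    outside = np.where(np.abs(y - y_ss) > band)[0]  # samples still outside the +/- tol band
    settling_time = (outside[-1] + 1) * dt if outside.size else 0.0
    overshoot = max(0.0, (y.max() - y_ss) / abs(y_ss) * 100.0)
    oscillatory = overshoot > 0.0                   # crude: did the output ever exceed the steady state?
    return settling_time, overshoot, oscillatory

dt = 0.001
for c in (1.0, 4.0, 10.0):
    y = simulate(m=1.0, c=c, k=4.0, force=4.0, dt=dt)
    ts, ov, osc = step_metrics(y, dt)
    print(f"c = {c}: settling time = {ts:.1f} s, overshoot = {ov:.0f}%, oscillatory: {osc}")
```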
Of course, you can find answers to these questions by trial and error – but it’s probably better to do some calculations and simulations first, and only then build a prototype of the system that you would like to sell for a lot of money… especially since the trial-and-error method may prove to be very expensive…
Once we know how the system responds to a particular form of input, we can move on to the next step – how to shape the control so that the system behaves exactly as we expect it to. For example, if we are dealing with a robot that has to move an element horizontally from one place to another, the question is: how to control the motors that move the individual parts of the robot? And if we want to charge the battery of an electric car – how to change the voltage (current) of the charging device to do it in the shortest time?
The problem is that even if we designed and constructed the system ourselves, we only know its parameters with limited accuracy. Moreover, as I will show in subsequent posts, any model that allows us to answer the questions asked above is always a simplified description of reality. In addition, the system behavior may also be affected by disturbances. Therefore, instead of an open-loop structure (as in Fig. 1), a closed-loop control system is preferred – one that uses information from the measuring devices to achieve the desired effect:
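To get a rough feel for why the feedback loop helps, here is a minimal sketch: a toy first-order plant with a simple integral controller, chosen purely for illustration (it is not the system from the figure). The open-loop input is computed from the nominal parameters and misses the target when the real plant differs from the nominal one; the closed loop uses the measured output and still ends up at the target.

```python
def run(a_true, b_true, target, use_feedback, ki=2.0, dt=0.001, t_end=10.0):
    """Toy first-order plant y' = -a*y + b*u, driven open-loop or with integral feedback."""
    a_nom, b_nom = 1.0, 1.0                       # the parameters we *think* the plant has
    y, error_integral = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        if use_feedback:
            error_integral += (target - y) * dt   # accumulate the measured error
            u = ki * error_integral               # simple integral controller
        else:
            u = a_nom * target / b_nom            # constant input computed from nominal parameters
        y += (-a_true * y + b_true * u) * dt      # Euler step of the *true* plant
    return y

# The true plant differs from the nominal one (a = 1.5 instead of 1.0):
print("open loop  :", round(run(1.5, 1.0, target=1.0, use_feedback=False), 3))
print("closed loop:", round(run(1.5, 1.0, target=1.0, use_feedback=True), 3))
```

The point is not this particular controller (that topic will come later), but the fact that measuring the output lets the loop compensate for parameters that are not what we assumed.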
Automatic control – here we come (well, in some future post, at least). That’s life as well:
If you liked the post and think it might be worth your friends reading it – please recommend it :).