amatic · 3 months ago
There is a mistake right in the beginning, not sure how it affects the conclusions yet. The variables given are S, a system variable (some kind of disturbance); Z, the outcome (a controlled variable); and R, the action of a controller. The causal relations between them are: S affects Z, S affects R, and R affects Z.

> The archetypal example for this is something like a thermostat. The variable S represents random external temperature fluctuations. The regulator R is the thermostat, which measures these fluctuations and takes an action (such as putting on heating or air conditioning) based on the information it takes in. The outcome Z is the resulting temperature of the room, which depends both on the action taken by the regulator, and the external temperature.

The problem here is that the regulator R does not measure the external temperature. It just measures the controlled variable, the room temperature Z, so there should be a causal arrow from Z to R as well, and the arrow from S to R does not exist.
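
For concreteness, here is a minimal sketch (my own toy numbers, not from the article) of the difference between the two wirings: in the article's diagram R is a function of S, while in a plain thermostat R is a function of Z only, so it can only react after Z has already drifted.

    SETPOINT = 20.0

    def feedforward_r(s):          # R as a function of S (the article's arrow S -> R)
        return -s                  # can cancel the disturbance before Z moves

    def feedback_r(z):             # R as a function of Z (arrow Z -> R, thermostat-style)
        return SETPOINT - z        # reacts only to the drift it has already seen

    z_ff = z_fb = SETPOINT
    for s in [0.0, 3.0, 3.0, -2.0, -2.0]:        # external temperature fluctuations S
        z_ff = SETPOINT + s + feedforward_r(s)   # stays at the setpoint every step
        r = feedback_r(z_fb)                     # computed from the previous step's Z
        z_fb = SETPOINT + s + r                  # lags the disturbance by one step
        print(round(z_ff, 1), round(z_fb, 1))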

masfuerte · 3 months ago
> The problem here is that the regulator R does not measure external temperature.

Domestic thermostats typically don't, but some heating control systems do.

analog31 · 3 months ago
I wonder if the theorem is another way of showing how hard control is without feedback. And I can't quite figure out if it addresses dynamic systems as opposed to static ones.
stanislavzza · 3 months ago
This is pedantic, but I don't like the formulation of entropy as a sum of p log(1/p). I think of the log of p as the information of a single event, for which log base 1/2 gives the answer in bits. This makes the negative sign unnecessary, and technically all these formulas should specify that the base of the log is > 1. Everything is cleaner with log base 1/2 (instead of, e.g., the equivalent negative log base 2). This comes up in log likelihood all the time too. I guess it's a prejudice against fractional bases.
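
To make that concrete (an example distribution I made up, not from the article), the three forms agree, but only the base-1/2 version needs no minus sign:

    import math

    p = [0.5, 0.25, 0.125, 0.125]                     # example distribution

    h1 = sum(pi * math.log(1 / pi, 2) for pi in p)    # sum of p * log2(1/p)
    h2 = -sum(pi * math.log(pi, 2) for pi in p)       # the usual -sum p log2(p)
    h3 = sum(pi * math.log(pi, 0.5) for pi in p)      # log base 1/2, no sign flip

    print(h1, h2, h3)                                 # all 1.75 bits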
PeterStuer · 3 months ago
To me this feels a bit too theoretical. The reason a real regulator has an implicit or explicit model of the relation between S and Z is time.

Z.t is influenced by S.[<t] and R.[<t]: the current state of Z is the result of the time series of S up to that point and the time series of R up to that point.

Think of each arrow as taking 1 time quantum. Even if you assume R itself takes 0 processing time, R can only affect Z after S has already had its effect.

So S.t affects Z.t+1 and is observed by R at t+1, and the regulatory signal from the resulting output of R will only affect Z.t+2, at the same time that S.t+1 is already affecting it.

If R has no implicit or explicit model of the S-Z relation, meaning it cannot sufficiently predict dZ from dS, it cannot modulate dR, its own compensation, to avoid over- or under-compensating.

In practice you see this in self-reinforcing feedback loops in naive regulators: an initial small perturbation gets overcompensated, so the result is a slightly larger perturbation that gets overcompensated again, until the system is oscillating completely out of control.
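
A rough sketch of that failure mode (gain, delay, and numbers are mine, just to show the shape): a regulator that only sees Z one step late and reacts with too much gain turns a single small perturbation into a growing oscillation.

    SETPOINT = 0.0
    GAIN = 2.5                    # too aggressive: each correction overshoots the error

    z = 0.0
    pending_r = 0.0               # action computed from Z at t, applied at t+1
    for t in range(8):
        s = 1.0 if t == 0 else 0.0          # a single small perturbation at t = 0
        z = z + s + pending_r               # Z.t+1 depends on S.t and R's delayed action
        pending_r = GAIN * (SETPOINT - z)   # overcompensating reaction to the new error
        print(t, round(z, 3))               # 1.0, -1.5, 2.25, -3.375, ... grows each swing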