Today I read a paper titled “Distributed Control by Lagrangian Steepest Descent.”
The abstract is:
Often adaptive, distributed control can be viewed as an iterated game between independent players.
The coupling between the players’ mixed strategies, arising as the system evolves from one instant to the next, is determined by the system designer.
Information theory tells us that the most likely joint strategy of the players, given a value of the expectation of the overall control objective function, is the minimizer of a Lagrangian function of the joint strategy.
So the goal of the system designer is to speed evolution of the joint strategy to that Lagrangian-minimizing point, lower the expected value of the control objective function, and repeat.
Here we elaborate the theory of algorithms that do this using local descent procedures, and that thereby achieve efficient, adaptive, distributed control.