Friday, August 17, 2018
Decision making and Control
The IEEE has held conferences on decision and control for nearly sixty years, but despite this I have not seen a detailed explanation of the differences between decision making and control. Decision making and control theory each have a vast array of technical literature and are mature fields in their own right. In this post I will try to separate out some aspects of decision making from control: what the similarities are, what the differences are, and where opportunities for synergy exist.
Decision making
Decision making involves selecting an action from a set of alternatives; examples include A/B testing and game playing. In some cases the decision-making step of action selection is separated from the execution step of applying that action. The decision is made based on the information available to the decision maker, typically to meet some objectives of the decision maker, with some criteria against which the different available decisions can be evaluated. Decision making is a continuous process involving interaction with the environment.
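As a minimal sketch of that evaluation step (the alternatives, criteria and weights below are invented for illustration), the decision reduces to scoring each available action against the decision maker's objectives and picking the best:

```python
# Minimal decision-making sketch: score each alternative against
# weighted criteria and select the highest-scoring action.
# The alternatives, criteria and weights are illustrative only.

alternatives = {
    "variant_a": {"conversion": 0.12, "cost": 0.30},
    "variant_b": {"conversion": 0.15, "cost": 0.55},
}

# Objectives of the decision maker, expressed as criterion weights
# (positive = desirable, negative = undesirable).
weights = {"conversion": 1.0, "cost": -0.5}

def score(criteria):
    """Evaluate one alternative against the weighted objectives."""
    return sum(weights[name] * value for name, value in criteria.items())

# The decision step: select the best alternative given the
# information currently available to the decision maker.
decision = max(alternatives, key=lambda a: score(alternatives[a]))
print(decision)  # -> variant_a under these illustrative numbers
```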
Decision making in games such as Chess and Go is not time critical. This is not to imply that an agent has unlimited time to make and enact a decision; many variants impose time limits on moves. The state of the system, the board position of the pieces, is however stable: the positions of the pieces do not change until a decision is made and a move played. There are no disturbance inputs other than the moves of the other player.
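Because the position stays put while the agent deliberates, a game player can search ahead from the current state to whatever depth time allows. A toy negamax search over a hypothetical game interface (the moves, apply, evaluate and is_terminal methods are assumed here, not taken from any particular engine) illustrates this:

```python
def negamax(state, depth, game):
    """Search the game tree from a static state; the position does not
    change while we deliberate, so successors can be explored at leisure."""
    if depth == 0 or game.is_terminal(state):
        return game.evaluate(state)  # static evaluation, mover's viewpoint
    best = float("-inf")
    for move in game.moves(state):
        child = game.apply(state, move)        # hypothetical successor state
        best = max(best, -negamax(child, depth - 1, game))
    return best
```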
Artificial intelligence systems have been attempting to automate the decision-making process for many years. Early work involved so-called expert systems, which tried to replicate the results of skilled experts. More recently, reinforcement learning systems have been applied to develop expert-level game playing through self-play on games such as Checkers, Backgammon, Chess, Atari games and, most recently, Go with DeepMind's AlphaGo.
Some of the reasons games are used are listed below.
- Board games like Chess and Go have no dynamics, or at best quasi-static dynamics.
- Video games typically have extremely simplified dynamics - you push a button to jump and you jump instantaneously, no delays.
- Games have a small number of methods of interaction - a few push buttons for most Atari-style games (such as left, right and fire). Output is similarly limited to a screen and audio (although I have not seen audio used to help game-playing AI).
- Games are designed to provide almost instantaneous rewards back to the player, in the form of a score, which is exactly what the learning algorithms require for their training.
- Games are repeatable: you can try multiple ideas from the exact same state by replaying the game (see the sketch after this list).
- Games conveniently sidestep the elephant-in-the-room challenge of where rewards come from in the real world.
- Games are inherently stable.
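Several of these properties - near-instantaneous scores as rewards, a small discrete action set, and repeatable resets to the same start state - are exactly what a standard reinforcement learning training loop consumes. A generic sketch, assuming the common reset/step environment interface rather than any specific library:

```python
import random

def run_episode(env, policy, actions):
    """One episode: the game hands back a score change (reward) after every
    action, and env.reset() replays the game from the exact same start."""
    state = env.reset()             # repeatability: identical start state
    total_reward, done = 0.0, False
    while not done:
        action = policy(state, actions)         # small discrete action set
        state, reward, done = env.step(action)  # near-instantaneous reward
        total_reward += reward
    return total_reward

def random_policy(state, actions):
    return random.choice(actions)   # e.g. ["left", "right", "fire"]
```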
Control Systems
Control systems are designed to use feedback to change the behavior of dynamical systems: enhancing stability, minimizing the effects of disturbances, and achieving a desired system performance. Examples of control systems range from simple temperature control to industrial process control and aircraft autopilots.
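A minimal sketch of such a loop, here a first-order temperature process under proportional feedback (the plant model, gain and disturbance are invented for illustration), shows the measure-compare-counteract cycle:

```python
# Toy closed loop: proportional control of a first-order thermal process.
# Plant model, gain and disturbance are illustrative, not from any
# particular system.
setpoint, temp = 20.0, 15.0   # desired and initial temperature (deg C)
kp, dt = 2.0, 0.1             # proportional gain, time step (s)

for k in range(200):
    disturbance = -0.5                      # constant heat loss
    error = setpoint - temp                 # compare to desired value
    heater = kp * error                     # feedback control action
    # first-order plant: temperature responds to heater and disturbance
    temp += dt * (-0.2 * temp + heater + disturbance)

print(round(temp, 2))  # settles near 17.95: stable, but with an offset
```

Note the steady-state offset that pure proportional control leaves behind; removing it is one motivation for the integral term in the PID controller sketched below.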
The control challenge is so well hidden that the decision-making problem of playing chess (a problem now solved by AI) was long considered harder than the problem of physically picking up and moving the pieces to the correct positions.
From a controls perspective, the decision-making process is one of the steps that has to be taken at every time step at which a control action is to be applied. The process of deciding what control action to apply is determined algorithmically, each iteration, by many factors such as the available state information and the magnitude of the error from the desired value. Interestingly, partial observability leads to the necessity of using a control law that is a function of the history of observations - which traditional controllers such as PID already are.
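A PID controller makes this point concrete: its output at each step depends not only on the current error but on the accumulated error history (the integral term) and the recent rate of change (the derivative term). A minimal sketch:

```python
class PID:
    """Textbook PID control law. The integral term makes the output a
    function of the entire observation history, not just the latest error."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0       # running summary of the error history
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt             # history of observations
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)
```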
The time constraints for decision-making problems are generally not well defined. Let's contrast this with what happens in a control system. Control systems are generally time critical. Timely application of the control action matters because the underlying dynamics of the process being controlled are typically continuously changing and constantly subjected to disturbances that the control system must counteract to maintain stability. Delays, either in receiving and processing information from sensors or in applying the controller output, can lead to performance degradation and system instability.
Early reinforcement learning systems treated the pole-balancing problem as a decision-making problem, with a minimal number of actions and a discretized state space, the value function or Q function being implemented as a lookup table (or an equivalent compact representation of one). Although time-optimal control often leads to bang-bang control solutions, these have been seen as problematic in practice: bang-bang control can cause high wear in mechanical systems from the sudden control changes applied, and can lead to hunting and chattering behavior. There is also a limit on the control performance that is achievable. This is very different from a typical control engineering solution, where we look to minimize error using continuous actions and continuous states, although computer control typically necessitates discrete-time application of the control. Adaptive dynamic programming bridges the two, using reinforcement-learning-style updates to realize online adaptive optimal control.
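The two framings differ most visibly in how the action is produced. A sketch of both, using the 162-box cart-pole discretization from the Barto, Sutton and Anderson paper referenced below for the decision-making side, and illustrative (not tuned) state-feedback gains for the control side:

```python
import numpy as np

# --- Decision-making framing: discretized states, bang-bang actions ---
n_bins, actions = 162, [-10.0, +10.0]   # 162-box cart-pole discretization
Q = np.zeros((n_bins, len(actions)))    # value function as a lookup table

def rl_action(box):
    """Pick one of two full-force actions from the table (bang-bang)."""
    return actions[int(np.argmax(Q[box]))]

# --- Control framing: continuous state, continuous action ---
gains = np.array([-1.0, -2.0, 20.0, 3.0])   # illustrative feedback gains

def control_action(state):
    """state = [x, x_dot, theta, theta_dot]; smooth force from feedback."""
    return float(gains @ np.asarray(state))
```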
Game Theory and Control
One area that sits at the interface between control and decision making is game theory. Decision making is an interactive process, and game theory considers the consequences of many plays of a game. A control system can be viewed as a decision-making system operating in an adversarial environment; the control system in this context is an intelligent, rational decision maker designed to produce a desired effect. Here, game theory can be viewed as the study of conflict and cooperation between the controller, the environment, and what each is trying to achieve.
One control approach, robust control, assumes the environment is an adversary trying to prevent the control system from meeting its objectives. Alternative perspectives that can be approached through game theory include teams of controllers and distributed control. The interface between game theory and control is still a very active area of research.
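The robust control stance can be phrased as a two-player zero-sum matrix game: the controller picks a row, the adversarial environment picks a column, and the controller plays the security strategy that minimizes its worst-case cost. A toy computation with an invented cost matrix:

```python
import numpy as np

# Rows: controller actions; columns: adversarial disturbances.
# Entries are costs to the controller (illustrative numbers only).
cost = np.array([
    [1.0, 4.0, 2.0],
    [3.0, 1.5, 2.5],
    [5.0, 0.5, 1.0],
])

worst_case = cost.max(axis=1)              # adversary's best reply per row
robust_action = int(worst_case.argmin())   # minimize the worst-case cost
print(robust_action, worst_case[robust_action])  # -> 1 3.0
```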
For further information:
Barto, Andrew G., Richard S. Sutton, and Charles W. Anderson. "Neuronlike adaptive elements that can solve difficult learning control problems." IEEE Transactions on Systems, Man, and Cybernetics SMC-13 (1983): 834-846.
Marden, Jason R., and Jeff S. Shamma. "Game Theory and Control." Annual Review of Control, Robotics, and Autonomous Systems 1 (2018): 105-134. https://www.annualreviews.org/doi/abs/10.1146/annurev-control-060117-105102
Special Issue on Game Theory in Control, IEEE Control Systems Magazine, February 2017. https://ieeexplore.ieee.org/xpl/tocresult.jsp?isnumber=7823053
-------------------------------------------------------------------------------------------
Some Points:
- Control and decision making come from different backgrounds
- Decision making and control systems have both approached similar problems from different perspectives
- Control systems are time critical
- Optimal control systems can be written as open-loop or closed-loop
- Adaptive dynamic programming systems are online adaptive optimal control systems.