Rule-based controls (RBC) are control systems that operate on a set of predefined rules or conditions. These rules guide the behavior of the system: the controller responds to changes in its environment by checking which rule's conditions hold and applying the corresponding action. RBC is widely used in automation and control systems to perform a specific task, or set of tasks, in a consistent and predictable manner. Because no model of the system is required, rule-based controllers can scale easily across different cases. However, they are difficult to tune optimally: fixed rules are not adaptable enough to capture complex environment dynamics.
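As a minimal sketch of the idea, consider a hypothetical rule-based thermostat: the setpoint and hysteresis band below are illustrative assumptions, not values from the text.

```python
def rule_based_thermostat(temperature, heating_on, setpoint=21.0, band=0.5):
    """Decide whether to run the heater using fixed, predefined rules.

    The rules never change at runtime: the controller simply maps the
    current measurement to an action via if/else conditions.
    """
    if temperature < setpoint - band:
        return True          # too cold: switch heating on
    if temperature > setpoint + band:
        return False         # too warm: switch heating off
    return heating_on        # inside the dead band: keep the current state
```

The hysteresis band illustrates the tuning problem mentioned above: its width is fixed by hand, and no single value is optimal for every room or season.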
Model predictive control (MPC) is a control technique that uses a predetermined model of a system to predict its future behavior and optimizes control actions over a prediction horizon to achieve a desired output. The model can be either physics-based or a machine learning model that is calibrated or trained on historical data collected from the system. MPC can capture complex environment dynamics and achieve excellent results, but its performance depends on the accuracy of the predetermined model, which can be expensive to obtain.
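The loop can be sketched as follows, assuming a toy linear thermal model and a brute-force search over a small discrete action set; the model coefficients, horizon, and cost weights are illustrative assumptions, not a production MPC formulation.

```python
import itertools

def predict(temp, action, a=0.9, b=2.0, ambient=10.0):
    """One-step predetermined model: exponential decay toward the
    ambient temperature plus a heating term (illustrative coefficients)."""
    return a * temp + (1 - a) * ambient + b * action

def mpc_action(temp, setpoint=21.0, horizon=3, actions=(0.0, 0.5, 1.0)):
    """Enumerate action sequences over the horizon, simulate each with
    the model, and return the first action of the cheapest sequence
    (quadratic tracking error plus a small energy penalty)."""
    best_cost, best_first = float("inf"), actions[0]
    for seq in itertools.product(actions, repeat=horizon):
        t, cost = temp, 0.0
        for u in seq:
            t = predict(t, u)
            cost += (t - setpoint) ** 2 + 0.01 * u
        if cost < best_cost:
            best_cost, best_first = cost, seq[0]
    return best_first
```

In practice the optimization is solved with a proper solver rather than enumeration, and only the first action is applied before re-planning at the next step (receding horizon). The sketch also shows the stated weakness: if `predict` is inaccurate, every evaluated cost, and hence the chosen action, is wrong.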
Whereas MPC relies on a predetermined model, reinforcement learning (RL) control learns from experience which actions to take. An agent is trained by interacting with its environment to maximize a reward signal: it is not told which actions to take, but must discover which actions yield the most reward by trying them. Because RL continues learning from real-time data, it adapts continuously to the controlled environment, making it suitable for large-scale, complex, or uncertain settings.
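A minimal sketch of this learn-by-trying loop is tabular Q-learning on a toy corridor environment; the corridor task, learning rate, and exploration rate are illustrative assumptions.

```python
import random

def q_learning(n_states=5, episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Learn action values on a corridor: start at state 0, actions are
    0 (left) / 1 (right), and reward +1 arrives only at the last state.

    The agent is never told the right action; it estimates values purely
    from the rewards its own trials produce.
    """
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]   # q[state][action]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy: mostly exploit current estimates, sometimes explore
            a = rng.randrange(2) if rng.random() < eps else max((0, 1), key=lambda x: q[s][x])
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # temporal-difference update toward reward plus discounted future value
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q
```

After training, the greedy policy at every non-terminal state is "move right", discovered entirely from interaction rather than from a model of the corridor.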