[TOC]
↗ Mathematical Modeling & Abstraction
↗ Complex System Science & Systems Theory
↗ Operations Research (OR) ↗ Mathematical Optimization (Programming)
↗ Game Theory & Decision Making in Multi-Agents Environments ↗ Reinforcement Learning (RL) & Sequential Decision Making
The Map of Control Theory
https://engineeringmedia.com/map-of-control
==Control theory is a field of control engineering and applied mathematics that deals with the control of dynamical systems.== The aim is to develop a model or algorithm governing the application of system inputs to drive the system to a desired state, while minimizing any delay, overshoot, or steady-state error and ensuring a level of control stability; often with the aim to achieve a degree of optimality.
To do this, a controller with the requisite corrective behavior is required. This controller monitors the controlled process variable (PV) and compares it with the reference or set point (SP). The difference between the actual and desired values of the process variable, called the error signal or SP-PV error, is applied as feedback to generate a control action that brings the controlled process variable to the same value as the set point. Other aspects which are also studied are controllability and observability. Control theory is used in control system engineering to design automation that has revolutionized manufacturing, aircraft, communications and other industries, and created new fields such as robotics.
Extensive use is usually made of a diagrammatic style known as the block diagram. In it the transfer function, also known as the system function or network function, is a mathematical model of the relation between the input and output based on the differential equations describing the system.
Control theory dates from the 19th century, when the theoretical basis for the operation of governors was first described by James Clerk Maxwell. Control theory was further advanced by Edward Routh in 1874, Charles Sturm, and, in 1895, Adolf Hurwitz, who all contributed to the establishment of control stability criteria; and from 1922 onwards by Nicolas Minorsky's development of PID control theory. Although the most direct application of mathematical control theory is its use in control systems engineering (dealing with process control systems for robotics and industry), control theory is routinely applied to problems in both the natural and behavioral sciences. ==As the general theory of feedback systems, control theory is useful wherever feedback occurs, making it important to fields like economics, operations research, and the life sciences.==
🤖 GPT-5.0 https://chatgpt.com/share/69724455-df38-800f-9f0f-172dcad200d1
Control theory is a branch of engineering and applied mathematics that studies how to model, analyze, and design systems that automatically regulate their behavior to achieve a desired goal.
At its core, control theory answers questions like:
- How can we make a system behave in a desired way?
- How do we keep a system stable despite disturbances or uncertainty?
- How can we optimize performance while respecting physical constraints?
Core Idea (Intuition)
A control system continuously:
- Measures the system’s current state (via sensors)
- Compares it to a desired target (the reference)
- Computes an action (the control input)
- Applies the action to reduce the error
This feedback loop is the heart of control theory.
Example:
A thermostat measures room temperature and turns heating on/off to maintain a set temperature.
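That thermostat is a bang-bang (on/off) controller. A minimal sketch with a hysteresis band, using an invented one-line thermal model (all constants are illustrative, not from the source):

```python
def thermostat_step(temp, setpoint, heating_on, band=0.5):
    """Bang-bang control with hysteresis: heater on below setpoint - band,
    off above setpoint + band, unchanged inside the band."""
    if temp < setpoint - band:
        return True
    if temp > setpoint + band:
        return False
    return heating_on  # inside the band: keep the previous state

def simulate(setpoint=20.0, steps=200, dt=1.0):
    temp, heating = 15.0, False
    history = []
    for _ in range(steps):
        heating = thermostat_step(temp, setpoint, heating)
        # toy thermal model: heater adds heat, room leaks toward 10 °C outside
        temp += dt * ((1.0 if heating else 0.0) - 0.05 * (temp - 10.0))
        history.append(temp)
    return history

temps = simulate()
```

The hysteresis band keeps the heater from chattering on and off at every sample; the temperature settles into a small oscillation around the set point instead of converging exactly, which is the characteristic behavior of on/off control.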
Key Concepts
System Modeling
Systems are represented using:
- Differential equations
- State-space models
- Transfer functions
These models describe how inputs affect outputs over time.
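For instance, a state-space model x' = Ax + Bu, y = Cx can be simulated with plain Euler integration. The mass-spring-damper numbers below are illustrative, not from the source:

```python
def simulate_state_space(A, B, C, u, x0, dt=0.01, steps=1000):
    """Euler-integrate x' = Ax + Bu for a constant input u; output y = Cx."""
    n = len(x0)
    x = list(x0)
    ys = []
    for _ in range(steps):
        dx = [sum(A[i][j] * x[j] for j in range(n)) + B[i] * u
              for i in range(n)]
        x = [x[i] + dt * dx[i] for i in range(n)]
        ys.append(sum(C[j] * x[j] for j in range(n)))
    return ys

# Mass-spring-damper (mass 1, spring k=2, damping c=0.5):
# states are position and velocity, input is a constant force.
A = [[0.0, 1.0], [-2.0, -0.5]]
B = [0.0, 1.0]
C = [1.0, 0.0]          # observe position only
ys = simulate_state_space(A, B, C, u=1.0, x0=[0.0, 0.0])
```

With a unit force the position settles near u/k = 0.5, after the underdamped oscillation predicted by the eigenvalues of A.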
Feedback and Stability
- Feedback: Using the system’s output to influence its input
- Stability: Ensuring the system does not diverge or oscillate uncontrollably
A stable system returns to equilibrium after a disturbance.
Controllers
Common controllers include:
- PID (Proportional–Integral–Derivative) controllers, widely used in industry for their simplicity and robustness
- State feedback controllers
- Optimal controllers (e.g., LQR)
- Adaptive and robust controllers
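A minimal textbook PID implementation, driving an invented first-order plant (the gains and plant are arbitrary example values, not from the source):

```python
class PID:
    """Textbook PID: u = Kp*e + Ki*integral(e) + Kd*de/dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a first-order plant x' = -x + u to the set point 1.0.
dt = 0.01
pid = PID(kp=2.0, ki=1.0, kd=0.1, dt=dt)
x = 0.0
for _ in range(2000):
    u = pid.update(1.0, x)
    x += dt * (-x + u)
```

The integral term is what removes the steady-state offset here; with Ki = 0 the plant would settle below the set point.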
Performance & Robustness
Control theory balances:
- Speed vs. overshoot
- Accuracy vs. energy usage
- Performance vs. uncertainty/noise
Major Branches of Control Theory
- Classical Control: Frequency-domain methods (Bode plots, Nyquist)
- Modern Control: State-space, optimal control
- Robust Control: Performance under model uncertainty
- Adaptive Control: Controllers that learn/update parameters
- Nonlinear Control: For systems that are not linear
- Stochastic Control: Systems with randomness/noise
#cybernetics #control_theory
🤖 Google search AI mode
Cybernetics and control theory are ==deeply intertwined fields studying regulatory systems==. Cybernetics focuses broadly on communication and control in animals and machines (feedback loops, self-regulation), while control theory provides the mathematical tools for engineering systems to achieve goals; both examine how systems maintain stability and function, often through concepts like negative feedback. Cybernetics offers a holistic framework, while control theory provides concrete methods for engineering stable, goal-directed behavior in technology, influencing AI, robotics, and systems science.
Cybernetics
- Definition: The science of communication and control, studying feedback mechanisms in living organisms, machines, and societies.
- Key Concepts: Information, feedback loops, self-organization, adaptation, and the "circular causality" between systems and their environment.
- Focus: Understanding the underlying principles of goal-directed behavior, treating systems in terms of function rather than physical parts.
Control Theory
- Definition: A branch of engineering and mathematics that designs algorithms and models to influence the behavior of dynamical systems.
- Key Concepts: Feedback controllers (closed-loop systems), stability, optimization, and robust/adaptive control.
- Focus: Developing practical methods (like PID controllers) to make systems (mechanical, electrical, etc.) behave as desired, often using mathematical rigor.
Relationship and Overlap
- Foundation: Cybernetics laid the conceptual groundwork (feedback, information flow) that control theory later formalized mathematically.
- Shared Goal: Both aim to understand and engineer systems that self-regulate or achieve goals, whether biological (a thermostat, a cell) or artificial (a robot, a chemical plant).
- Modern Intersection: Control theory provides tools (e.g., adaptive control) used in cybernetic applications, while cybernetic thinking informs complex-systems work in AI and cybersecurity.
In essence, cybernetics asks how systems regulate themselves, and control theory provides the mathematical how-to for building those regulators.
🔗 https://en.wikipedia.org/wiki/Control_theory 🔗 https://en.wikipedia.org/wiki/Control_loop#Open-loop_and_closed-loop
Fundamentally, there are two types of control loop: open-loop control (feedforward), and closed-loop control (feedback).
- In open-loop control, the control action from the controller is independent of the "process output" (or "controlled process variable"). A good example is a central heating boiler controlled only by a timer, so that heat is applied for a constant time regardless of the temperature of the building. The control action is the switching on/off of the boiler; the variable we actually care about is the building temperature, but it is not what is being controlled, because the timer gives only open-loop control of the boiler, not closed-loop control of the temperature.
- In closed loop control, the control action from the controller is dependent on the process output. In the case of the boiler analogy, this would include a thermostat to monitor the building temperature, and thereby feed back a signal to ensure the controller maintains the building at the temperature set on the thermostat. A closed loop controller therefore has a feedback loop which ensures the controller exerts a control action to give a process output the same as the "reference input" or "set point". For this reason, closed loop controllers are also called feedback controllers.
The definition of a closed loop control system according to the British Standards Institution is "a control system possessing monitoring feedback, the deviation signal formed as a result of this feedback being used to control the action of a final control element in such a way as to tend to reduce the deviation to zero."
Likewise; "A Feedback Control System is a system which tends to maintain a prescribed relationship of one system variable to another by comparing functions of these variables and using the difference as a means of control."
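The boiler contrast above can be made concrete with a toy thermal model (all constants are invented for illustration): a fixed, timer-style heater power is tuned for one outside temperature, while proportional feedback automatically compensates when the disturbance changes.

```python
def simulate_room(controller, steps=500, dt=0.1, outside=5.0, setpoint=21.0):
    """Room cools toward `outside`; heater power is set by `controller`."""
    temp = 15.0
    for _ in range(steps):
        power = controller(temp, setpoint)
        temp += dt * (power - 0.2 * (temp - outside))
    return temp

open_loop   = lambda temp, sp: 3.0                        # fixed "timer" power
closed_loop = lambda temp, sp: max(0.0, 5.0 * (sp - temp))  # proportional feedback

# A cold snap: the outside temperature drops from the nominal 5 °C to -5 °C.
t_open   = simulate_room(open_loop,   outside=-5.0)
t_closed = simulate_room(closed_loop, outside=-5.0)
```

The open-loop room settles far below the set point because the fixed power cannot react to the disturbance; the feedback room stays close to it. Proportional-only feedback still leaves a small steady-state droop, which is exactly what the integral term in a PID controller removes.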
🔗 https://en.wikipedia.org/wiki/Control_theory#Classical_control_theory
Every control system must first guarantee the stability of the closed-loop behavior. For linear systems, this can be obtained by directly placing the poles. Nonlinear control systems use specific theories (normally based on Aleksandr Lyapunov's theory) to ensure stability without regard to the inner dynamics of the system. The ability to fulfill different specifications depends on the model considered and the control strategy chosen.
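Pole placement is easy to demonstrate on the double integrator (x1' = x2, x2' = u): with full state feedback u = -k1*x1 - k2*x2 the closed-loop characteristic polynomial is s^2 + k2*s + k1, so the gains can be read directly from the desired poles. A minimal sketch (the chosen poles and the Euler check are arbitrary):

```python
def place_double_integrator(p1, p2):
    """Pole placement for x1' = x2, x2' = u under u = -k1*x1 - k2*x2.
    Matching s^2 + k2*s + k1 to (s - p1)(s - p2) gives the gains."""
    k1 = p1 * p2
    k2 = -(p1 + p2)
    return k1, k2

k1, k2 = place_double_integrator(-2.0, -3.0)

# Verify by simulation: the state should decay to the origin.
x1, x2, dt = 1.0, 0.0, 0.001
for _ in range(10000):
    u = -k1 * x1 - k2 * x2
    x1, x2 = x1 + dt * x2, x2 + dt * u
```

For general linear systems the same idea goes through Ackermann's formula or numerical routines, but the gain-matching step is identical in spirit.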
List of the main control techniques
- Optimal control is a particular control technique in which the control signal optimizes a certain "cost index": for example, in the case of a satellite, the jet thrusts needed to bring it to the desired trajectory while consuming the least amount of fuel. Two optimal control design methods have been widely used in industrial applications, as it has been shown they can guarantee closed-loop stability. These are Model Predictive Control (MPC) and linear-quadratic-Gaussian control (LQG). The first can more explicitly take into account constraints on the signals in the system, which is an important feature in many industrial processes. However, the "optimal control" structure in MPC is only a means to achieve such a result, as it does not optimize a true performance index of the closed-loop control system. Together with PID controllers, MPC systems are the most widely used control technique in process control.
- Robust control deals explicitly with uncertainty in its approach to controller design. Controllers designed using robust control methods tend to be able to cope with small differences between the true system and the nominal model used for design. The early methods of Bode and others were fairly robust; the state-space methods invented in the 1960s and 1970s were sometimes found to lack robustness. Examples of modern robust control techniques include H-infinity loop-shaping developed by Duncan McFarlane and Keith Glover, Sliding mode control (SMC) developed by Vadim Utkin, and safe protocols designed for control of large heterogeneous populations of electric loads in Smart Power Grid applications. Robust methods aim to achieve robust performance and/or stability in the presence of small modeling errors.
- Stochastic control deals with control design with uncertainty in the model. In typical stochastic control problems, it is assumed that there exist random noise and disturbances in the model and the controller, and the control design must take into account these random deviations.
- Adaptive control uses on-line identification of the process parameters, or modification of controller gains, to obtain strong robustness properties. Adaptive control was first applied in the aerospace industry in the 1950s, and has found particular success in that field.
- A hierarchical control system is a type of control system in which a set of devices and governing software is arranged in a hierarchical tree. When the links in the tree are implemented by a computer network, then that hierarchical control system is also a form of networked control system.
- Intelligent control uses various AI computing approaches like artificial neural networks, Bayesian probability, fuzzy logic, machine learning, evolutionary computation and genetic algorithms or a combination of these methods, such as neuro-fuzzy algorithms, to control a dynamic system.
- Self-organized criticality control may be defined as attempts to interfere in the processes by which the self-organized system dissipates energy.
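To make the optimal-control entry above concrete: in the simplest scalar discrete-time case x+ = a*x + b*u, the LQR gain can be found by iterating the Riccati recursion to a fixed point. This is an illustrative sketch, not an industrial solver:

```python
def lqr_scalar(a, b, q, r, iters=200):
    """Scalar discrete-time LQR: iterate the Riccati recursion
    p <- q + a^2*p - (a*b*p)^2 / (r + b^2*p) to a fixed point,
    then return the gain k for u = -k*x minimizing sum(q*x^2 + r*u^2)."""
    p = q
    for _ in range(iters):
        p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
    k = a * b * p / (r + b * b * p)
    return k, p

# An open-loop unstable system (a > 1) stabilized by the LQR gain;
# the closed-loop pole a - b*k ends up inside the unit circle.
k, p = lqr_scalar(a=1.1, b=1.0, q=1.0, r=1.0)
```

The matrix version replaces each scalar operation with its matrix counterpart (the discrete algebraic Riccati equation); production code would use a dedicated solver rather than naive iteration.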
[!links] ↗ Artificial Intelligence ↗ Statistical (Data-Driven) Learning & Machine Learning (ML) ↗ Artificial Neural Networks (ANN) & Deep Learning Methods ↗ Neural Network Models
Intelligent control is a class of control techniques that use various artificial intelligence computing approaches like neural networks, Bayesian probability, fuzzy logic, machine learning, reinforcement learning, evolutionary computation and genetic algorithms. Intelligent control can be divided into the following major sub-domains:
- Neural network control
- Machine learning control
- Reinforcement learning
- Bayesian control
- Fuzzy control
- Neuro-fuzzy control
- Expert systems
- Genetic control
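As a toy illustration of one of these sub-domains, here is a Mamdani-style fuzzy controller that maps a temperature error to a heater power through triangular membership functions and weighted-average defuzzification (all sets, rules, and numbers are invented for the example):

```python
def triangular(x, a, b, c):
    """Membership degree of x in a triangular fuzzy set with corners (a, b, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_heater_power(error):
    """Tiny fuzzy controller for a heater (error = setpoint - temperature).
    Rules: error negative -> power 0.0
           error zero     -> power 0.5
           error positive -> power 1.0
    Defuzzify with a weighted average of the rule outputs."""
    mu_neg  = triangular(error, -10.0, -5.0, 0.0)
    mu_zero = triangular(error,  -2.0,  0.0, 2.0)
    mu_pos  = triangular(error,   0.0,  5.0, 10.0)
    total = mu_neg + mu_zero + mu_pos
    if total == 0.0:
        return 0.0 if error < 0 else 1.0   # saturate outside all sets
    return (mu_neg * 0.0 + mu_zero * 0.5 + mu_pos * 1.0) / total
```

Between the rule peaks the output blends smoothly (e.g., a small positive error yields a power between 0.5 and 1.0), which is the point of fuzzy control: encoding rules of thumb without an explicit plant model.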
[!links] ↗ Mathematical Modeling & Abstraction
Strictly speaking, control theory has not died. It has merely, and gradually, lost its dominant position in the drive to make engineering systems intelligent. In the mid-twentieth century, control theory was the cornerstone of automation. From the earliest PID regulation, through state-space methods, to optimal control, robust control, and H∞ theory, the engineering systems of virtually every era, whether in aerospace, manufacturing, or power, were built on its theoretical framework.
In the twenty-first century, however, the situation changed fundamentally. The dimensionality and nonlinearity of complex systems grew exponentially; sensors, networks, and information systems wove themselves into vast, high-dimensional coupled webs; at the same time, big data and machine learning rose to prominence, bringing new paradigms for modeling and decision-making.
Against this backdrop, the classical paradigm of traditional control theory, "analytical solutions plus rigorous modeling plus mathematical provability," began to show its limits. It assumes the system can be described precisely: a model must be built before a controller can be designed. But real-world complex systems often cannot be fully modeled; nonlinearity, time variation, uncertainty, and environmental disturbances make precise modeling a luxury.
As a result, control theory has been gradually marginalized on the stage of the intelligent era. This is not because control theory is wrong, but because it is trapped in its own mathematical elegance: its perfect reasoning and rigorous proofs make it hard to stretch into the real, chaotic world.
One problem with control is that it really branches deep into many different subfields. A controller for chemical engineering is very different from one for high-speed machining or a robot manipulator. The latter, which is my specialty, also requires a good understanding of robot kinematics and dynamics modeling. On top of that, advanced robot controllers often use vision or other sensory inputs, which is a huge field in its own right and frequently intersects with machine learning or AI. Then there are teleoperated devices, which have their own particulars around communication latency, the stability of force feedback, and so on.
Control theory is fundamentally an applied science. In my view, the fundamentals are extremely useful and give you a very powerful engineering toolset for analyzing a large class of hard problems. But those skills inevitably have to be applied in a specific context, which requires a whole other body of knowledge. As I see it, control theory lets you enter many different application domains with a very strong mathematical foundation, but pure control theory by itself will most likely not be what you do for your entire career.
Getting started: undergraduate core courses
The four pillars of math: calculus, linear algebra, probability theory, and complex variables with integral transforms
For linear algebra, definitely watch the MIT OpenCourseWare lectures (easy to find on Bilibili). Really, watch them; otherwise you will end up like me, studying for years without even knowing what an eigenvalue is.
Principles of Automatic Control: the two main Chinese textbooks are by Hu Shousong and Huang Jiaying. Hu Shousong's is one large volume; Huang Jiaying's comes in two volumes, is more detailed, and in my opinion is the better of the two. Neither covers stability theory particularly well, though; for that, see the treatment in Applied Nonlinear Control.
If you are self-studying, the books alone are very hard to follow and many concepts are poorly explained. I strongly recommend the video lectures: search Bilibili for Lu Jingchao's automatic control course, plus DR_CAN's series on automatic control theory and on modeling and analysis of dynamic systems.