PythonLinearNonLinearControl is a library implementing linear and nonlinear control theory in Python. The implemented algorithms and their requirements are summarized below:
Algorithm | Use Linear Model | Use Nonlinear Model | Need Gradient (Hamiltonian) | Need Gradient (Model) | Need Hessian (Model) |
---|---|---|---|---|---|
Linear Model Predictive Control (MPC) | ✓ | x | x | x | x |
Cross Entropy Method (CEM) | ✓ | ✓ | x | x | x |
Model Predictive Path Integral Control of Nagabandi, A. (MPPI) | ✓ | ✓ | x | x | x |
Model Predictive Path Integral Control of Williams, G. (MPPIWilliams) | ✓ | ✓ | x | x | x |
Random Shooting Method (Random) | ✓ | ✓ | x | x | x |
Iterative LQR (iLQR) | x | ✓ | x | ✓ | x |
Differential Dynamic Programming (DDP) | x | ✓ | x | ✓ | ✓ |
Unconstrained Nonlinear Model Predictive Control (NMPC) | x | ✓ | ✓ | x | x |
Constrained Nonlinear Model Predictive Control CGMRES (NMPC-CGMRES) | x | ✓ | ✓ | x | x |
Constrained Nonlinear Model Predictive Control Newton (NMPC-Newton) | x | ✓ | x | x | x |
"Need Gradient" means that you have to implement the gradient of the model or the gradient of hamiltonian.
This library is also easy to extend to your own applications.
The following algorithms are implemented in PythonLinearNonlinearControl (a short sketch of the sampling-based approach follows the list):
- Linear Model Predictive Control (MPC)
  - Ref: Maciejowski, J. M. (2002). Predictive control: with constraints.
- Cross Entropy Method (CEM)
  - Ref: Chua, K., Calandra, R., McAllister, R., & Levine, S. (2018). Deep reinforcement learning in a handful of trials using probabilistic dynamics models. In Advances in Neural Information Processing Systems (pp. 4754-4765).
- Model Predictive Path Integral Control of Nagabandi, A. (MPPI)
  - Ref: Nagabandi, A., Konolige, K., Levine, S., & Kumar, V. (2019). Deep Dynamics Models for Learning Dexterous Manipulation. arXiv preprint arXiv:1909.11652.
- Model Predictive Path Integral Control of Williams, G. (MPPIWilliams)
  - Ref: Williams, G., Wagener, N., Goldfain, B., Drews, P., Rehg, J. M., Boots, B., & Theodorou, E. A. (2017, May). Information theoretic MPC for model-based reinforcement learning. In 2017 IEEE International Conference on Robotics and Automation (ICRA) (pp. 1714-1721). IEEE.
- Random Shooting Method (Random)
  - Ref: Chua, K., Calandra, R., McAllister, R., & Levine, S. (2018). Deep reinforcement learning in a handful of trials using probabilistic dynamics models. In Advances in Neural Information Processing Systems (pp. 4754-4765).
- Iterative LQR (iLQR)
  - Ref: Tassa, Y., Erez, T., & Todorov, E. (2012, October). Synthesis and stabilization of complex behaviors through online trajectory optimization. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (pp. 4906-4913). IEEE. See also Study Wolf and https://github.com/anassinator/ilqr.
- Differential Dynamic Programming (DDP)
  - Ref: Tassa, Y., Erez, T., & Todorov, E. (2012, October). Synthesis and stabilization of complex behaviors through online trajectory optimization. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (pp. 4906-4913). IEEE. See also Study Wolf and https://github.com/anassinator/ilqr.
- Unconstrained Nonlinear Model Predictive Control (NMPC)
  - Ref: Ohtsuka, T., & Fujii, H. A. (1997). Real-time optimization algorithm for nonlinear receding-horizon control. Automatica, 33(6), 1147-1154.
- Constrained Nonlinear Model Predictive Control -CGMRES- (NMPC-CGMRES)
  - Ref: Ohtsuka, T., & Fujii, H. A. (1997). Real-time optimization algorithm for nonlinear receding-horizon control. Automatica, 33(6), 1147-1154.
- Constrained Nonlinear Model Predictive Control -Newton- (NMPC-Newton)
  - Ref: Ohtsuka, T., & Fujii, H. A. (1997). Real-time optimization algorithm for nonlinear receding-horizon control. Automatica, 33(6), 1147-1154.
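To give a feel for the sampling-based methods (CEM, MPPI, Random), here is a minimal sketch of the Cross Entropy Method for a generic discrete-time model. The function and its signature are illustrative assumptions for this sketch, not this library's API:

```python
import numpy as np

def cem_plan(model_step, stage_cost, x0, horizon=20, n_samples=500,
             n_elite=50, n_iters=5, input_size=2):
    """Plan an input sequence with the Cross Entropy Method.

    model_step : callable (x, u) -> next state
    stage_cost : callable (x, u) -> scalar cost
    Illustrative sketch only; not PythonLinearNonlinearControl's API.
    """
    mean = np.zeros((horizon, input_size))
    std = np.ones((horizon, input_size))

    for _ in range(n_iters):
        # Sample candidate input sequences around the current mean
        samples = mean + std * np.random.randn(n_samples, horizon, input_size)

        # Roll out each candidate through the model and accumulate its cost
        scores = np.zeros(n_samples)
        for i in range(n_samples):
            x = x0.copy()
            for t in range(horizon):
                scores[i] += stage_cost(x, samples[i, t])
                x = model_step(x, samples[i, t])

        # Refit the sampling distribution to the lowest-cost (elite) samples
        elite = samples[np.argsort(scores)[:n_elite]]
        mean, std = elite.mean(axis=0), elite.std(axis=0)

    # In MPC fashion, apply mean[0] and re-plan at the next step
    return mean
```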
This library also provides several environments, summarized below.

Name | Linear | Nonlinear | State Size | Input Size |
---|---|---|---|---|
First Order Lag System | ✓ | x | 4 | 2 |
Two wheeled System (Constant Goal) | x | ✓ | 3 | 2 |
Two wheeled System (Moving Goal) (Coming soon) | x | ✓ | 3 | 2 |
Cartpole (Swing up) | x | ✓ | 4 | 1 |
All states and inputs of the environments are continuous. Note that the algorithms for linear models can also be applied to nonlinear environments if you linearize the model of the nonlinear environment, for example as sketched below.
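For example, the two-wheeled system above (state [x, y, θ], input [v, ω]) can be linearized around its current heading and speed, and the resulting A and B matrices handed to the linear MPC. This is a generic illustration under an assumed Euler discretization, not code taken from the library:

```python
import numpy as np

def linearized_two_wheeled(theta, v, dt=0.01):
    """Jacobians A = df/dx, B = df/du of the Euler-discretized model
       x[k+1] = x[k] + dt * [v*cos(theta), v*sin(theta), omega]
    evaluated at the operating point (theta, v)."""
    A = np.eye(3)
    A[0, 2] = -dt * v * np.sin(theta)  # d(x-position)/d(theta)
    A[1, 2] = dt * v * np.cos(theta)   # d(y-position)/d(theta)
    B = dt * np.array([[np.cos(theta), 0.0],
                       [np.sin(theta), 0.0],
                       [0.0,           1.0]])
    return A, B
```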
You can learn more about the environments in Environments.md.
To install this package, run:

    python setup.py install

or

    pip install .

To install it in editable (development) mode, run:

    python setup.py develop

or

    pip install -e .
You can run the experiments as follows:

    python scripts/simple_run.py --env first-order_lag --controller CEM

Figures and animations are saved in the ./result folder.
When we design control systems, we need a Model, a Planner, a Controller, and a Runner, as shown in the figure. Note that the Model and the Environment are different things. As mentioned above, the algorithms for linear models can be applied to nonlinear environments if you linearize the model of the nonlinear environment. You could also use a neural network or any other nonlinear function as the model, although this library cannot handle that yet.
Model is the system model. For instance, if the model is linear, it should have the form x[k+1] = Ax[k] + Bu[k].
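As a concrete illustration, a minimal linear model in NumPy might look like this; the class and method names are assumptions for this sketch, not the library's exact interface:

```python
import numpy as np

class LinearModel:
    """Minimal linear system model: x[k+1] = A x[k] + B u[k].
    Class and method names are illustrative, not the library's API."""

    def __init__(self, A, B):
        self.A = np.asarray(A)
        self.B = np.asarray(B)

    def predict_next_state(self, x, u):
        # One-step prediction of the state
        return self.A @ x + self.B @ u
```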
If you use a gradient-based control method, you should preferably implement the gradients of the model; otherwise the controllers fall back to numerical gradients, roughly as sketched below.
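A numerical model gradient can be obtained with central differences; this generic sketch shows the idea (the library's actual implementation may differ):

```python
import numpy as np

def numeric_model_gradient(f, x, u, eps=1e-5):
    """Approximate the Jacobian df/dx of x[k+1] = f(x, u) with
    central differences -- the kind of fallback a controller can
    use when analytic model gradients are not implemented."""
    n = len(x)
    jac = np.zeros((n, n))
    for i in range(n):
        d = np.zeros(n)
        d[i] = eps
        jac[:, i] = (f(x + d, u) - f(x - d, u)) / (2.0 * eps)
    return jac
```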
Planner generates the goal states.
Controller calculates the optimal inputs using the model and the chosen algorithm.
Runner runs the simulation. A minimal sketch of how these four components fit together follows.
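To make the four roles concrete, here is a minimal, runnable closed-loop sketch in plain NumPy. All names here (ConstantGoalPlanner, RandomShootingController, obtain_sol, and so on) are placeholders invented for this illustration, not the library's actual classes:

```python
import numpy as np

# Model: x[k+1] = A x[k] + B u[k] (same form as the sketch above)
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])

def model_step(x, u):
    return A @ x + B @ u

class ConstantGoalPlanner:
    """Planner: repeats one goal state over the whole horizon."""
    def __init__(self, g_x, horizon):
        self.g_xs = np.tile(g_x, (horizon, 1))
    def plan(self, x):
        return self.g_xs

class RandomShootingController:
    """Controller: samples input sequences, rolls them through the
    model, and applies the first input of the cheapest sequence
    (the Random method from the table above)."""
    def __init__(self, horizon=10, n_samples=200, input_size=1):
        self.horizon = horizon
        self.n_samples = n_samples
        self.input_size = input_size
    def obtain_sol(self, x0, g_xs):
        best_u, best_cost = None, np.inf
        for _ in range(self.n_samples):
            us = np.random.uniform(-1.0, 1.0, (self.horizon, self.input_size))
            x, cost = x0.copy(), 0.0
            for t in range(self.horizon):
                x = model_step(x, us[t])
                cost += np.sum((x - g_xs[t]) ** 2)
            if cost < best_cost:
                best_u, best_cost = us[0], cost
        return best_u

# Runner: simulate the closed loop (here the environment is the model itself).
planner = ConstantGoalPlanner(np.array([1.0, 0.0]), horizon=10)
controller = RandomShootingController()
x = np.zeros(2)
for _ in range(50):
    u = controller.obtain_sol(x, planner.plan(x))
    x = model_step(x, u)
print("final state:", x)
```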
Please see each script for more details.
If you are interested in the old version of this library, which was not a library but just a collection of examples, please see v1.0.
Coming soon!!
This library depends on:
- numpy
- matplotlib
- cvxopt
- scipy
If you use this library in your work, please cite it as follows:

@Misc{PythonLinearNonLinearControl,
author = {Shunichi Sekiguchi},
title = {PythonLinearNonlinearControl},
note = "\url{https://github.com/Shunichi09/PythonLinearNonlinearControl}",
}