Super fast introduction to CasADi
Simple optimization
This very brief example shows how to use the CasADi framework to solve a very simple optimization problem in which we just want to minimize a scalar function.
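Written out, the problem we are about to pose (the constraint $w \ge 0$ is added below) is

$\min_{w \ge 0}\; p(w) = 0.2\, e^{0.2 w}\,(\sin w + 5 \cos w)$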
import casadi.* % Load the CasADi MATLAB interface
opti = Opti(); % Create an Opti stack describing the optimization problem
w = opti.variable(1); % Define the optimization variable as a scalar
p = 0.2*exp(0.2*w).*(sin(w)+5*cos(w)); % Just a function we want to minimize
wplot = linspace(0, 20, 500); % Plotting grid (range chosen for illustration)
p_plot = 0.2*exp(0.2.*wplot).*(sin(wplot)+5.*cos(wplot));
plot(wplot, p_plot)
title("Function we are trying to minimize")
Here we set up the optimization problem
opti.minimize(p); % Cost function
opti.subject_to(w>=0); % Constraint
opti.solver('ipopt'); % Solver used to actually solve the optimization
opti.set_initial(w, 5); % Set initial value. Note that this may affect the result (try to set it at 15)
sol = opti.solve() % Solve
This is Ipopt version 3.12.3, running with linear solver mumps.
NOTE: Other linear solvers might be more efficient (see Ipopt documentation).
Number of nonzeros in equality constraint Jacobian...: 0
Number of nonzeros in inequality constraint Jacobian.: 3
Number of nonzeros in Lagrangian Hessian.............: 1
Total number of variables............................: 1
variables with only lower bounds: 0
variables with lower and upper bounds: 0
variables with only upper bounds: 0
Total number of equality constraints.................: 0
Total number of inequality constraints...............: 3
inequality constraints with only lower bounds: 3
inequality constraints with lower and upper bounds: 0
inequality constraints with only upper bounds: 0
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
0 2.4974848e-001 0.00e+000 4.73e-002 -1.0 0.00e+000 - 0.00e+000 0.00e+000 0
1 -1.8592821e+000 0.00e+000 1.97e+000 -1.0 1.88e+000 - 1.00e+000 1.00e+000f 1
2 -2.0278340e+000 0.00e+000 1.22e-001 -1.0 4.39e-001 - 9.34e-001 1.00e+000f 1
3 -2.0284580e+000 0.00e+000 2.16e-004 -2.5 2.30e-002 - 1.00e+000 1.00e+000f 1
4 -2.0284597e+000 0.00e+000 6.13e-007 -3.8 1.21e-003 - 1.00e+000 1.00e+000f 1
5 -2.0284597e+000 0.00e+000 1.54e-009 -5.7 6.04e-005 - 1.00e+000 1.00e+000f 1
6 -2.0284597e+000 0.00e+000 2.32e-013 -8.6 7.43e-007 - 1.00e+000 1.00e+000f 1
Number of Iterations....: 6
(scaled) (unscaled)
Objective...............: -2.0284597366900035e+000 -2.0284597366900035e+000
Dual infeasibility......: 2.3219712033736032e-013 2.3219712033736032e-013
Constraint violation....: 0.0000000000000000e+000 0.0000000000000000e+000
Complementarity.........: 2.5062909859343483e-009 2.5062909859343483e-009
Overall NLP error.......: 2.5062909859343483e-009 2.5062909859343483e-009
Number of objective function evaluations = 7
Number of objective gradient evaluations = 7
Number of equality constraint evaluations = 0
Number of inequality constraint evaluations = 7
Number of equality constraint Jacobian evaluations = 0
Number of inequality constraint Jacobian evaluations = 7
Number of Lagrangian Hessian evaluations = 6
Total CPU secs in IPOPT (w/o function evaluations) = 0.015
Total CPU secs in NLP function evaluations = 0.001
EXIT: Optimal Solution Found.
solver : t_proc (avg) t_wall (avg) n_eval
nlp_f | 0 ( 0) 0 ( 0) 7
nlp_g | 0 ( 0) 0 ( 0) 7
nlp_grad_f | 0 ( 0) 0 ( 0) 8
nlp_hess_l | 0 ( 0) 0 ( 0) 6
nlp_jac_g | 1.00ms (125.00us) 996.00us (124.50us) 8
total | 18.00ms ( 18.00ms) 18.20ms ( 18.20ms) 1
sol =
Opti {
instance #2144
#variables: 1 (nx = 1)
#parameters: 0 (np = 0)
#constraints: 3 (ng = 3)
CasADi solver allocated.
CasADi solver was called: Solve_Succeeded
}
Now display the solution
wstar = sol.value(w) % Parameters that minimize the cost
pstar = sol.value(p) % Value of the cost function at wstar
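As the comment on set_initial suggests, this function has many local minima, so the result depends on the initial guess. A quick experiment (sol15, wstar15 and pstar15 are just illustrative names):
opti.set_initial(w, 15); % Restart the same problem from a different initial guess
sol15 = opti.solve(); % Ipopt may now converge to a different local minimum
wstar15 = sol15.value(w)
pstar15 = sol15.value(p)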
Simulate the forced response of a damped oscillator
Here we do a bit of practice with CasADi symbols; this will then be used as the basis for a more interesting optimization.
The system under analysis is shown in the figure: a mass m attached to a spring k and a damper c, driven by an external force. The equation of motion is written using CasADi symbols. Note that, since this is a linear dynamical system, the differential equation could be solved analytically with the Laplace transform; we do not do so here, simply to show how CasADi can be used.
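In formulas, with position $x_1 = x$ and velocity $x_2 = \dot{x}$, the equation of motion and its first-order (state-space) form are

$m\ddot{x} + c\dot{x} + kx = U \quad\Longleftrightarrow\quad \dot{x}_1 = x_2,\qquad \dot{x}_2 = \frac{-k x_1 - c x_2 + U}{m}$

which is exactly the ode defined in code below.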
m=1; k=5; c=0.7; dt=0.01; % Oscillator parameters and integration timestep
x1 = SX.sym('x1') % Define a variable x1 as a symbol (position)
x2 = SX.sym('x2') % Define a variable x2 as a symbol (velocity)
U = SX.sym('F') % Define a variable F as a symbol (force)
X = [x1;x2]; % state vector
The dynamics are linear, so the Laplace transform could be used; again, this is just an example to show the CasADi tools
ode = [x2; (-k*x1 - c*x2 + U)/m] % dynamics of the system
ode = [x2, (((-5*x1)-(0.7*x2))+F)]
Now we need a mapping from the state and control to the time derivative of the state, which we will integrate to obtain the future state of the system
% Now we can create a CasADi Function, which basically
% just maps the arguments X and U to the output ode
f = Function('f', {X, U}, {ode}) % This could have been the kinematics model of a robot
f = f:(i0[2],i1)->(o0[2]) SXFunction
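Since f is now a regular CasADi Function, we can already evaluate it numerically. For instance, at unit displacement, zero velocity and zero force:
xdot = full(f([1; 0], 0)) % Returns [0; -5]: velocity is 0, acceleration is (-k*1 - c*0 + 0)/m = -5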
Now we can simulate the motion of the system. To do so we need to integrate the dynamics.
There are different ways to do so. In this case we will use a simple explicit Euler integration, but CasADi also implements dedicated integrator functions with many options (a sketch follows the Euler simulation below).
% If we want to get the state at the next timestep we have to integrate
Now simulate the system for 10 seconds
T = 10; N = T/dt; t = 0:dt:T; % 10 seconds of simulation, i.e. N = 1000 Euler steps
x0 = [1; 0]; % Initial condition: unit displacement, zero velocity
x_int = cat(2, x0, zeros(2, N)); % Storage for the simulated trajectory
for i = 1:N % Explicit Euler: x(k+1) = x(k) + f(x(k), u(k))*dt, here with u = 0 (free response)
    x_int(:, i+1) = x_int(:, i) + full(f(x_int(:, i), 0)*dt);
end
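For comparison, here is a minimal sketch of CasADi's dedicated integrator interface with the CVODES backend, assuming the classic CasADi 3.x signature where the integration horizon is passed via the 'tf' option (newer versions take t0 and tf as extra arguments instead):
dae = struct('x', X, 'p', U, 'ode', ode); % Continuous-time dynamics packaged as a DAE struct
intg = integrator('intg', 'cvodes', dae, struct('tf', dt)); % Integrates over one interval of length dt
res = intg('x0', [1; 0], 'p', 0); % One step of the free response from x0 = [1; 0]
x_next = full(res.xf) % State after dt seconds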
Optimal control for the damped oscillator
Now some more interesting stuff.
We move the mass away from its equilibrium (i.e. we stretch the spring) and we ask: "How should an external force behave in order to stabilize the system, i.e. bring it to zero position and velocity, as fast as possible?"
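Written as a discrete-time optimal control problem, what we are about to transcribe is

$\min_{x,\,u} \sum_{i=1}^{N} \left( w_x\, x_{1,i}^2 + w_u\, u_i^2 \right) \Delta t \quad \text{s.t.}\quad x_{i+1} = x_i + f(x_i, u_i)\,\Delta t,\qquad x_1 = x_0$

where the equality constraints are the Euler-discretized dynamics from the previous section.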
opti = Opti()
opti =
Opti {
instance #2143
#variables: 0 (nx = 0)
#parameters: 0 (np = 0)
#constraints: 0 (ng = 0)
CasADi solver needs updating.
}
xs = opti.variable(2, N+1); % State trajectory (one column per timestep)
us = opti.variable(1, N); % Control trajectory
wx = 10; wu = 0.1; % Weights of the cost function
cost = 0; % Running cost, accumulated in the loop below
opti.subject_to(xs(:, 1) == x0); % Initial condition: start at some displacement x0
for i=1:N % This is a sum over all the timesteps
    cost = cost + ( wx*sumsqr(xs(1, i)) + wu*sumsqr(us(i)))*dt; % Cost function is zero only at equilibrium
    opti.subject_to(xs(:, i+1) == xs(:, i) + f(xs(:, i), us(i))*dt); % Impose continuity constraints (basically the dynamics)
end
opti.minimize(cost) % Minimize the cost
opti.solve()
This is Ipopt version 3.12.3, running with linear solver mumps.
NOTE: Other linear solvers might be more efficient (see Ipopt documentation).
Number of nonzeros in equality constraint Jacobian...: 7002
Number of nonzeros in inequality constraint Jacobian.: 0
Number of nonzeros in Lagrangian Hessian.............: 2000
Total number of variables............................: 3002
variables with only lower bounds: 0
variables with lower and upper bounds: 0
variables with only upper bounds: 0
Total number of equality constraints.................: 2002
Total number of inequality constraints...............: 0
inequality constraints with only lower bounds: 0
inequality constraints with lower and upper bounds: 0
inequality constraints with only upper bounds: 0
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
0 0.0000000e+000 1.00e+000 0.00e+000 -1.0 0.00e+000 - 0.00e+000 0.00e+000 0
1 3.7666261e+000 2.22e-016 1.14e-015 -1.7 6.02e+000 - 1.00e+000 1.00e+000h 1
Number of Iterations....: 1
(scaled) (unscaled)
Objective...............: 3.7666261069103832e+000 3.7666261069103832e+000
Dual infeasibility......: 1.1379786002407855e-015 1.1379786002407855e-015
Constraint violation....: 2.2204460492503131e-016 2.2204460492503131e-016
Complementarity.........: 0.0000000000000000e+000 0.0000000000000000e+000
Overall NLP error.......: 1.1379786002407855e-015 1.1379786002407855e-015
Number of objective function evaluations = 2
Number of objective gradient evaluations = 2
Number of equality constraint evaluations = 2
Number of inequality constraint evaluations = 0
Number of equality constraint Jacobian evaluations = 2
Number of inequality constraint Jacobian evaluations = 0
Number of Lagrangian Hessian evaluations = 1
Total CPU secs in IPOPT (w/o function evaluations) = 0.068
Total CPU secs in NLP function evaluations = 0.009
EXIT: Optimal Solution Found.
solver : t_proc (avg) t_wall (avg) n_eval
nlp_f | 1.00ms (500.00us) 1.00ms (500.00us) 2
nlp_g | 2.00ms ( 1.00ms) 1.56ms (782.00us) 2
nlp_grad_f | 2.00ms (666.67us) 2.56ms (854.67us) 3
nlp_hess_l | 1.00ms ( 1.00ms) 1.01ms ( 1.01ms) 1
nlp_jac_g | 6.00ms ( 2.00ms) 6.03ms ( 2.01ms) 3
total | 79.00ms ( 79.00ms) 78.74ms ( 78.74ms) 1
ans =
Opti {
instance #2143
#variables: 2 (nx = 3002)
#parameters: 0 (np = 0)
#constraints: 1001 (ng = 2002)
CasADi solver allocated.
CasADi solver was called: Solve_Succeeded
}
Now that the optimization has been solved, let's check the results
xsol = opti.value(xs)
1.0000 1.0000 0.9989 0.9967 0.9935 0.9894 0.9843 0.9783 0.9714 0.9637 0.9552 0.9459 0.9359 0.9253 0.9140 0.9021 0.8896 0.8766 0.8631 0.8490 0.8346 0.8197 0.8045 0.7889 0.7730 0.7568 0.7403 0.7236 0.7067 0.6896 0.6724 0.6550 0.6375 0.6199 0.6023 0.5846 0.5669 0.5491 0.5314 0.5138 0.4962 0.4787 0.4612 0.4439 0.4267 0.4096 0.3927 0.3759 0.3593 0.3429
0 -0.1102 -0.2164 -0.3186 -0.4168 -0.5112 -0.6016 -0.6882 -0.7710 -0.8501 -0.9254 -0.9971 -1.0651 -1.1296 -1.1905 -1.2481 -1.3022 -1.3529 -1.4004 -1.4447 -1.4858 -1.5239 -1.5589 -1.5910 -1.6201 -1.6465 -1.6702 -1.6911 -1.7095 -1.7253 -1.7387 -1.7497 -1.7584 -1.7648 -1.7691 -1.7712 -1.7714 -1.7696 -1.7659 -1.7603 -1.7531 -1.7441 -1.7336 -1.7215 -1.7079 -1.6930 -1.6767 -1.6591 -1.6403 -1.6204
usol = opti.value(us)
-6.0178 -5.6951 -5.3775 -5.0652 -4.7583 -4.4569 -4.1613 -3.8714 -3.5875 -3.3096 -3.0378 -2.7723 -2.5130 -2.2601 -2.0137 -1.7736 -1.5401 -1.3130 -1.0926 -0.8786 -0.6713 -0.4704 -0.2762 -0.0884 0.0928 0.2676 0.4359 0.5978 0.7533 0.9026 1.0456 1.1824 1.3131 1.4378 1.5565 1.6693 1.7763 1.8775 1.9732 2.0633 2.1479 2.2272 2.3013 2.3702 2.4341 2.4931 2.5472 2.5966 2.6415 2.6818
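As a sanity check (x_chk is just an illustrative name), rolling the optimal controls through the same Euler dynamics must reproduce the optimized trajectory, since that discretization was imposed as an equality constraint:
x_chk = xsol(:, 1); % Start from the optimized initial state
for i = 1:N
    x_chk = x_chk + full(f(x_chk, usol(i)))*dt; % Same explicit Euler step used in the constraints
end
norm(x_chk - xsol(:, end)) % Should be numerically zero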
First, compare the free and the controlled position trajectories:
plot(t, x_int(1, :), 'b--') % Free response (no control)
hold on
plot(t, xsol(1, :), 'r') % Optimally controlled response
hold off
title("Difference between free and forced response")
legend("x(t) Free", "x(t) Controlled")
This is the control action that drives the system in that way:
plot(t(1:end-1), usol) % One control value per timestep (N values)
title("Optimized control action")