This package is divided into three parts:
The function hierarchy is an abstract hierarchy: it defines meta-functions in the sense that the user has to derive from these classes to define their own concrete functions.
```cpp
struct MyLinearFunction : public LinearFunction
{
  // FIXME: some code is missing
};

struct MyDerivableFunction : public DerivableFunction
{
  // FIXME: some code is missing
};

// etc.
```
Users can define as many functions as they want as long as they respect the constraints defined by the classes they inherit from:
It is important to note that all of these functions are functions from R^n to R^m. Such a function f has to be considered as m functions f_1, ..., f_m with f_i : R^n -> R and f(x) = (f_1(x), ..., f_m(x)). In particular, this means that in the gradient/Hessian methods, the second (integer) argument selects the component f_i whose gradient/Hessian is requested.
This avoids useless computation when the whole Jacobian is not needed. It also avoids resorting to a tensor to store the Hessians, as this structure is particularly costly and is not built into Eigen (the matrix library the package relies on).
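To make the per-component gradient access concrete, here is a minimal, self-contained sketch. It is not the package's actual API (plain `std::vector` stands in for Eigen types, and the names are hypothetical), but it shows the design: the gradient method takes an index and computes only one row of the Jacobian.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Simplified stand-in for the package's derivable-function concept:
// a function f : R^n -> R^m whose gradients are queried per component.
struct DerivableFunctionSketch
{
  virtual ~DerivableFunctionSketch () {}
  virtual std::vector<double> operator() (const std::vector<double>& x) const = 0;
  // Gradient of the i-th component f_i : R^n -> R only.
  virtual std::vector<double> gradient (const std::vector<double>& x,
                                        std::size_t functionId) const = 0;
};

// f(x, y) = (x + y, x * y), i.e. m = 2 components over R^2.
struct MyDerivableFunctionSketch : DerivableFunctionSketch
{
  std::vector<double> operator() (const std::vector<double>& x) const override
  {
    return {x[0] + x[1], x[0] * x[1]};
  }

  std::vector<double> gradient (const std::vector<double>& x,
                                std::size_t functionId) const override
  {
    // Only the requested row of the Jacobian is computed:
    // the full m x n matrix is never materialized.
    if (functionId == 0)
      return {1., 1.};   // grad f_0 = (1, 1)
    return {x[1], x[0]}; // grad f_1 = (y, x)
  }
};
```

Calling `gradient (x, 1)` returns only the gradient of f_1, never the whole Jacobian.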
This set of meta-functions is completed by two generic implementations of quadratic and linear functions, NumericQuadraticFunction and NumericLinearFunction respectively. These are real functions: the user is meant to instantiate them, not derive from them.
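To illustrate the difference between deriving from a meta-function and instantiating a ready-made one, here is a hypothetical analogue of a numeric quadratic function (again with plain `std::vector` instead of the package's Eigen-based types): the user supplies the coefficients and gets a usable function, with no subclassing involved.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical analogue of a ready-made quadratic function
// f(x) = 1/2 x^T A x + b^T x: instantiated with its coefficients,
// not derived from.
struct NumericQuadraticSketch
{
  std::vector<std::vector<double> > A; // symmetric n x n matrix
  std::vector<double> b;               // n vector

  double operator() (const std::vector<double>& x) const
  {
    double res = 0.;
    for (std::size_t i = 0; i < x.size (); ++i)
      {
        res += b[i] * x[i];
        for (std::size_t j = 0; j < x.size (); ++j)
          res += .5 * A[i][j] * x[i] * x[j];
      }
    return res;
  }
};
```

For instance, with A = 2I and b = (1, 0), evaluating at (1, 1) yields 1/2 * 4 + 1 = 3.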
The problem class defines a complete optimization problem. It is composed of:
The problem class is templated by two parameters:
Those two parameters define a class of problems. For instance, a QP solver solves problems where the objective function is quadratic and the constraints are linear.
Each class of problem matches a set of solvers designed to solve exactly those problems.
The copy constructor of the Problem class allows the problem to be made more generic by replacing F or C with a more general type.
Examples:
That is, you can always go higher in the function hierarchy, but you can never go deeper.
However, this transformation (as with any loss of static type information) forgets problem-specific information and may slow down the process. For example, solving a linear problem as a nonlinear one is slower than solving it directly with methods specific to linear problems.
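The "higher but never deeper" rule can be sketched with a converting copy constructor that only compiles when moving up the hierarchy. This is a hypothetical illustration, not the package's actual Problem class (a single template parameter is used here instead of the F/C pair, for brevity):

```cpp
#include <cassert>
#include <type_traits>

// Minimal stand-ins for the function hierarchy.
struct Function {};
struct DerivableFunction : Function {};
struct LinearFunction : DerivableFunction {};
struct QuadraticFunction : DerivableFunction {};

// Hypothetical Problem<F> sketch: the converting copy constructor
// only compiles when F is an ancestor of F2 ("going higher" only).
template <typename F>
struct ProblemSketch
{
  ProblemSketch () {}

  template <typename F2>
  ProblemSketch (const ProblemSketch<F2>&)
  {
    static_assert (std::is_base_of<F, F2>::value,
                   "can only convert to a more general problem type");
  }
};
```

With these stand-ins, copying a `ProblemSketch<LinearFunction>` into a `ProblemSketch<DerivableFunction>` compiles, while the reverse direction triggers the `static_assert`.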
The solver hierarchy is divided into three parts:
The generic solver is an abstract class which is the root of the hierarchy. All the solvers have to inherit from this class directly or indirectly.
It only defines three methods:

- solve (pure virtual)
- getMinimum
- reset
The solve method has to fill the protected attribute result_. It is implemented by the bridges at the bottom of the hierarchy. By default, result_ contains an instance of the No_Solution class; it has to be changed to an instance of Result, ResultWithWarnings or SolverError.
- No_Solution: represents the fact that solve has never been called. A user should never retrieve an instance of No_Solution.
- Result, ResultWithWarnings: represent a valid result. If ResultWithWarnings is used, some non-critical problems may have occurred; they are detailed in the object (minor numerical instabilities, for instance).
- SolverError: indicates that the optimization has failed.

The solve method should rarely be called by the user directly. Instead, the user calls getMinimum, which calls solve if required. If getMinimum is called several times, the problem is only solved once.
The reset method should be called when one wants to manually force the recomputation of the solution.
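The lazy-solving behavior described above can be sketched as follows. The real result types are the classes listed earlier (No_Solution, Result, etc.); this hypothetical stand-in uses plain strings and a call counter purely to make the caching visible:

```cpp
#include <cassert>
#include <string>

// Sketch of the lazy solve/getMinimum/reset contract (hypothetical types).
struct GenericSolverSketch
{
  GenericSolverSketch () : result_ ("No_Solution"), solveCalls_ (0) {}

  const std::string& getMinimum ()
  {
    if (result_ == "No_Solution")
      solve ();                 // solve only if required
    return result_;
  }

  void reset ()
  {
    result_ = "No_Solution";    // force recomputation on the next call
  }

  int solveCalls () const { return solveCalls_; }

protected:
  void solve ()
  {
    ++solveCalls_;
    result_ = "Result";         // a bridge would fill a real result here
  }

  std::string result_;          // No_Solution until solve () succeeds
  int solveCalls_;
};
```

Repeated calls to getMinimum trigger a single solve; calling reset discards the cached result so the next getMinimum solves again.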
The Solver class is parameterized by the same two template parameters as the Problem class. It defines an abstract solver for a class of optimization problems.

For instance, a QP solver should inherit from Solver<QuadraticFunction, LinearFunction>.
This level of the hierarchy integrates the notion of Problem; hence, the problem method is defined to give access to the problem. Once a solver has been created with a problem, the problem can never be modified again.
The leaves of the hierarchy are bridges. The role of a bridge is to convert the generic representation of the problem into the solver's representation. It might also emulate some missing features of the back-end solver.
A bridge mainly implements the solve method. It may also expose the underlying solver's internal mechanisms for fine tuning.
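The conversion role of a bridge can be sketched as follows. Everything here is hypothetical: a toy "back-end" with its own raw-array representation stands in for a real third-party solver, and the bridge's job is only to translate between the two representations around the solve call.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical back-end solver with its own problem representation:
// it minimizes c^T x over the box [-1, 1]^n, taking raw arrays.
namespace backend
{
  inline std::vector<double> minimizeLinear (const double* c, std::size_t n)
  {
    // Toy rule: pick -1 or +1 per coordinate depending on the sign of c.
    std::vector<double> x (n);
    for (std::size_t i = 0; i < n; ++i)
      x[i] = (c[i] > 0.) ? -1. : 1.;
    return x;
  }
}

// Sketch of a bridge: translate the package-side representation
// (here a std::vector of coefficients) into the back-end's raw-array
// representation, call it, and return the result.
struct LinearBridgeSketch
{
  explicit LinearBridgeSketch (const std::vector<double>& c) : c_ (c) {}

  std::vector<double> solve () const
  {
    // The conversion happens here: generic representation -> back-end's.
    return backend::minimizeLinear (c_.data (), c_.size ());
  }

  std::vector<double> c_;
};
```

A real bridge would additionally map the back-end's status codes onto Result, ResultWithWarnings or SolverError, and could emulate features the back-end lacks.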