roboptim::GenericFiniteDifferenceGradient< T, FdgPolicy > Class Template Reference

Automatically compute a gradient with finite differences. More...

#include <roboptim/core/finite-difference-gradient.hh>

Inheritance diagram for roboptim::GenericFiniteDifferenceGradient< T, FdgPolicy >:
roboptim::GenericDifferentiableFunction< T > roboptim::GenericFunction< T >


Public Member Functions

 ROBOPTIM_DIFFERENTIABLE_FUNCTION_FWD_TYPEDEFS_ (GenericDifferentiableFunction< T >)
 GenericFiniteDifferenceGradient (const GenericFunction< T > &f, value_type e=finiteDifferenceEpsilon)
 Instantiate a finite-difference gradient.
 ~GenericFiniteDifferenceGradient ()

Protected Member Functions

virtual void impl_compute (result_ref, const_argument_ref) const
 Function evaluation.
virtual void impl_gradient (gradient_ref, const_argument_ref argument, size_type=0) const
 Gradient evaluation.
virtual void impl_jacobian (jacobian_ref jacobian, const_argument_ref argument) const
 Jacobian evaluation.

Protected Attributes

const GenericFunction< T > & adaptee_
 Reference to the wrapped function.
const value_type epsilon_
argument_t xEps_

Detailed Description

template<typename T, typename FdgPolicy>
class roboptim::GenericFiniteDifferenceGradient< T, FdgPolicy >

Automatically compute a gradient with finite differences.

Finite differences are a method for numerically approximating a function's gradient. In RobOptim this is particularly useful to avoid having to derive and implement an analytical gradient by hand.

This class takes a Function as input and wraps it into a differentiable function.

The one-dimensional formula is:

\[f'(x) \approx \frac{f(x+\epsilon)-f(x)}{\epsilon}\]

where $\epsilon$ is a constant provided to the class constructor.
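For illustration, here is the same formula applied coordinate-wise in plain C++, independent of RobOptim (the forwardDiff helper and its default epsilon are purely illustrative, not part of the library):

    #include <cstddef>
    #include <functional>
    #include <iostream>
    #include <vector>

    // Forward-difference approximation of the i-th partial derivative:
    //   df/dx_i (x) ~ (f(x + eps * e_i) - f(x)) / eps
    double forwardDiff (const std::function<double (const std::vector<double>&)>& f,
                        std::vector<double> x, std::size_t i, double eps = 1e-8)
    {
      const double fx = f (x);
      x[i] += eps;  // perturb the i-th coordinate only
      return (f (x) - fx) / eps;
    }

    int main ()
    {
      // f(x) = x_0^2 + 3 x_1: exact partials at (2, 1) are 4 and 3.
      auto f = [] (const std::vector<double>& x)
        { return x[0] * x[0] + 3. * x[1]; };
      std::cout << forwardDiff (f, {2., 1.}, 0) << ", "
                << forwardDiff (f, {2., 1.}, 1) << std::endl;
    }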

Examples:
finite-difference-gradient.cc.
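
A minimal usage sketch in the same spirit (assuming RobOptim's Eigen-based dense typedefs and the default FdgPolicy; the AbsValue function is illustrative):

    #include <cmath>
    #include <iostream>

    #include <roboptim/core/function.hh>
    #include <roboptim/core/finite-difference-gradient.hh>

    // Illustrative non-differentiable function: f(x) = |x_0| + x_1^2.
    struct AbsValue : public roboptim::Function
    {
      AbsValue () : roboptim::Function (2, 1, "|x_0| + x_1^2")
      {}

      void impl_compute (result_ref result, const_argument_ref x) const
      {
        result[0] = std::fabs (x[0]) + x[1] * x[1];
      }
    };

    int main ()
    {
      AbsValue f;

      // Wrap f: values are forwarded to f, gradients and jacobians are
      // approximated by finite differences. A custom epsilon may be
      // passed as the second constructor argument, e.g. (f, 1e-6).
      roboptim::GenericFiniteDifferenceGradient<roboptim::EigenMatrixDense>
        fd (f);

      roboptim::Function::argument_t x (2);
      x << 1., 2.;
      std::cout << fd (x) << std::endl;             // function value
      std::cout << fd.gradient (x, 0) << std::endl; // approximated gradient
    }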

Constructor & Destructor Documentation

template<typename T, typename FdgPolicy>
roboptim::GenericFiniteDifferenceGradient< T, FdgPolicy >::GenericFiniteDifferenceGradient (const GenericFunction< T >& f, value_type e = finiteDifferenceEpsilon)

Instantiate a finite-difference gradient.

Instantiate a differentiable function that wraps a non-differentiable function and computes its gradient automatically using finite differences.

Parameters:
    f  function that will be wrapped
    e  epsilon used in the finite difference computation
template<typename T , typename FdgPolicy >
roboptim::GenericFiniteDifferenceGradient< T, FdgPolicy >::~GenericFiniteDifferenceGradient ( )

Member Function Documentation

template<typename T , typename FdgPolicy >
void roboptim::GenericFiniteDifferenceGradient< T, FdgPolicy >::impl_compute (result_ref result, const_argument_ref argument) const [protected, virtual]

Function evaluation.

Evaluate the function; this has to be implemented in concrete classes.

Warning:
Do not call this function directly, call operator()(result_ref, const_argument_ref) const instead.
Parameters:
    result    result will be stored in this vector
    argument  point at which the function will be evaluated

Implements roboptim::GenericFunction< T >.

template<typename T , typename FdgPolicy >
void roboptim::GenericFiniteDifferenceGradient< T, FdgPolicy >::impl_gradient (gradient_ref gradient, const_argument_ref argument, size_type functionId = 0) const [protected, virtual]

Gradient evaluation.

Compute the gradient; this has to be implemented in concrete classes. The gradient is computed for a specific sub-function whose id is passed through the functionId argument.

Warning:
Do not call this function directly, call gradient instead.
Parameters:
    gradient    gradient will be stored in this argument
    argument    point where the gradient will be computed
    functionId  id of the evaluated function in the split representation

Implements roboptim::GenericDifferentiableFunction< T >.

template<typename T , typename FdgPolicy >
void roboptim::GenericFiniteDifferenceGradient< T, FdgPolicy >::impl_jacobian (jacobian_ref jacobian, const_argument_ref arg) const [protected, virtual]

Jacobian evaluation.

Compute the jacobian; this can be overridden by concrete classes. The default behavior is to compute the jacobian from the gradient.

Warning:
Do not call this function directly, call jacobian instead.
Parameters:
    jacobian  jacobian will be stored in this argument
    arg       point where the jacobian will be computed

Reimplemented from roboptim::GenericDifferentiableFunction< T >.
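
The idea behind that default, sketched with plain Eigen (illustrative code, not the library's actual implementation):

    #include <functional>
    #include <Eigen/Core>

    // Assemble an m x n jacobian row by row: row i is the gradient of
    // the i-th sub-function, mirroring how the default impl_jacobian
    // delegates each row to the gradient computation.
    Eigen::MatrixXd jacobianFromGradients
    (const std::function<Eigen::VectorXd (const Eigen::VectorXd&, int)>& gradient,
     const Eigen::VectorXd& x, int outputSize)
    {
      Eigen::MatrixXd jac (outputSize, x.size ());
      for (int i = 0; i < outputSize; ++i)
        jac.row (i) = gradient (x, i).transpose ();
      return jac;
    }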


Member Data Documentation

template<typename T, typename FdgPolicy>
const GenericFunction<T>& roboptim::GenericFiniteDifferenceGradient< T, FdgPolicy >::adaptee_ [protected]

Reference to the wrapped function.

template<typename T, typename FdgPolicy>
const value_type roboptim::GenericFiniteDifferenceGradient< T, FdgPolicy >::epsilon_ [protected]

Epsilon used in the finite difference computation.
template<typename T, typename FdgPolicy>
argument_t roboptim::GenericFiniteDifferenceGradient< T, FdgPolicy >::xEps_ [mutable, protected]

Buffer storing the perturbed argument during finite difference computation.