
Commit

Variety of changes to documentation and example code.
1. Update version history.
2. Minor changes to the tutorial to reflect the bounds constrained
   problem.
3. Added static factory methods to SnavelyReprojectionError and
   SnavelyReprojectionErrorWithQuaternions.
4. Removed the mention of the relative gradient tolerance from
   types.h, as it no longer describes the solver's behaviour.

Change-Id: I8de386e5278a008c84ef2d3290d2c4351417a9f1
sandwichmaker committed Apr 29, 2014
1 parent 658407d commit b166806
Showing 5 changed files with 180 additions and 32 deletions.
38 changes: 22 additions & 16 deletions docs/source/tutorial.rst
@@ -7,17 +7,21 @@
========
Tutorial
========
Ceres solves robustified non-linear least squares problems of the form

.. math:: \frac{1}{2}\sum_{i} \rho_i\left(\left\|f_i\left(x_{i_1}, ... ,x_{i_k}\right)\right\|^2\right).
:label: ceresproblem
Ceres solves robustified non-linear bounds constrained least squares
problems of the form

.. math:: :label: ceresproblem

\min_{\mathbf{x}} &\quad \frac{1}{2}\sum_{i} \rho_i\left(\left\|f_i\left(x_{i_1}, ... ,x_{i_k}\right)\right\|^2\right) \\
\text{s.t.} &\quad l_j \le x_j \le u_j

Problems of this form come up in a broad range of areas across
science and engineering - from `fitting curves`_ in statistics, to
constructing `3D models from photographs`_ in computer vision.

.. _fitting curves: http://en.wikipedia.org/wiki/Nonlinear_regression
.. _3D model from photographs: http://en.wikipedia.org/wiki/Bundle_adjustment
.. _3D models from photographs: http://en.wikipedia.org/wiki/Bundle_adjustment

In this chapter we will learn how to solve :eq:`ceresproblem` using
Ceres Solver. Full working code for all the examples described in this
@@ -34,13 +38,16 @@ problems small groups of scalars occur together. For example the three
components of a translation vector and the four components of the
quaternion that define the pose of a camera. We refer to such a group
of small scalars as a ``ParameterBlock``. Of course a
``ParameterBlock`` can just be a single parameter.
``ParameterBlock`` can just be a single parameter. :math:`l_j` and
:math:`u_j` are bounds on the parameter block :math:`x_j`.
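
For instance, bounds on individual coordinates of a parameter block
can be set through ``Problem::SetParameterLowerBound`` and
``Problem::SetParameterUpperBound``. The following is a minimal
sketch (not part of the tutorial text being changed here); it assumes
``cost_function`` has been constructed as in the rest of this
chapter:

.. code-block:: c++

   double x[3] = {1.0, 2.0, 3.0};
   ceres::Problem problem;
   problem.AddResidualBlock(cost_function, NULL, x);

   // Constrain the first coordinate of x to the interval [0, 10];
   // the remaining coordinates stay unconstrained.
   problem.SetParameterLowerBound(x, 0, 0.0);
   problem.SetParameterUpperBound(x, 0, 10.0);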

:math:`\rho_i` is a :class:`LossFunction`. A :class:`LossFunction` is
a scalar function that is used to reduce the influence of outliers on
the solution of non-linear least squares problems. As a special case,
when :math:`\rho_i(x) = x`, i.e., the identity function, we get the
more familiar `non-linear least squares problem
the solution of non-linear least squares problems.

As a special case, when :math:`\rho_i(x) = x`, i.e., the identity
function, and :math:`l_j = -\infty` and :math:`u_j = \infty` we get
the more familiar `non-linear least squares problem
<http://en.wikipedia.org/wiki/Non-linear_least_squares>`_.

.. math:: \frac{1}{2}\sum_{i} \left\|f_i\left(x_{i_1}, ... ,x_{i_k}\right)\right\|^2.
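
In code, the loss function is supplied when the residual block is
added to the problem. For example, using the built-in ``HuberLoss``
(a sketch; ``cost_function`` and the parameter block ``x`` are
assumed to be defined as elsewhere in this chapter):

.. code-block:: c++

   // Passing NULL instead of a loss function object gives the plain
   // (non-robustified) squared loss.
   problem.AddResidualBlock(cost_function, new ceres::HuberLoss(1.0), x);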
@@ -75,10 +82,10 @@ function :math:`f(x) = 10 - x`:

The important thing to note here is that ``operator()`` is a templated
method, which assumes that all its inputs and outputs are of some type
``T``. The reason for using templates here is because Ceres will call
``CostFunctor::operator<T>()``, with ``T=double`` when just the
residual is needed, and with a special type ``T=Jet`` when the
Jacobians are needed. In :ref:`section-derivatives` we discuss the
``T``. The use of templating here allows Ceres to call
``CostFunctor::operator<T>()``, with ``T=double`` when just the value
of the residual is needed, and with a special type ``T=Jet`` when the
Jacobians are needed. In :ref:`section-derivatives` we will discuss the
various ways of supplying derivatives to Ceres in more detail.
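
For context, the functor for :math:`f(x) = 10 - x` defined just above
this paragraph in the full tutorial (elided from this diff) looks
roughly like this, and it is the templated ``operator()`` that Ceres
instantiates with ``T=double`` or ``T=Jet``:

.. code-block:: c++

   struct CostFunctor {
     template <typename T>
     bool operator()(const T* const x, T* residual) const {
       residual[0] = T(10.0) - x[0];
       return true;
     }
   };

   // Automatic differentiation: 1 residual, 1-dimensional parameter block.
   ceres::CostFunction* cost_function =
       new ceres::AutoDiffCostFunction<CostFunctor, 1, 1>(new CostFunctor);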

Once we have a way of computing the residual function, it is now time
@@ -642,10 +649,9 @@ as follows:
ceres::Problem problem;
for (int i = 0; i < bal_problem.num_observations(); ++i) {
ceres::CostFunction* cost_function =
new ceres::AutoDiffCostFunction<SnavelyReprojectionError, 2, 9, 3>(
new SnavelyReprojectionError(
bal_problem.observations()[2 * i + 0],
bal_problem.observations()[2 * i + 1]));
SnavelyReprojectionError::Create(
bal_problem.observations()[2 * i + 0],
bal_problem.observations()[2 * i + 1]);
problem.AddResidualBlock(cost_function,
NULL /* squared loss */,
bal_problem.mutable_camera_for_observation(i),
134 changes: 131 additions & 3 deletions docs/source/version_history.rst
@@ -4,19 +4,147 @@
Releases
========

HEAD
====

New Features
------------

#. Parameters can now have upper and/or lower bounds when using the
trust region minimizer.
#. Problems in which the sparsity structure of the Jacobian changes
over the course of the optimization can now be solved much more
efficiently. (Richard Stebbing)
#. Improved support for Microsoft Visual C++ including the ability to
build and ship DLLs. (Björn Piltz, Alex Stewart and Sergey
Sharybin)
#. Simpler and more informative solver termination type
reporting. (See below for more details)
#. New `website <http://www.ceres-solver.org>`_ based entirely on
Sphinx.
#. ``AutoDiffLocalParameterization`` allows the use of automatic
   differentiation for defining ``LocalParameterization`` objects
   (Alex Stewart). A brief sketch of its use follows this list.
#. LBFGS is faster due to fewer memory copies.
#. Parameter blocks are no longer restricted to be less than 32k in
   size; they can now be up to 2G in size.
#. Faster ``SPARSE_NORMAL_CHOLESKY`` solver when using ``CX_SPARSE``
as the sparse linear algebra library.
#. ``Problem::IsParameterBlockPresent`` can be used to check if a
parameter block is already present in the problem.
#. ``Problem::GetParameterization`` can be used to access the
   parameterization associated with a parameter block.
#. Added the (2,4,9) template specialization for
   ``PartitionedMatrixView`` and ``SchurEliminator``.
#. An example demonstrating the use of
   ``DynamicAutoDiffCostFunction``. (Joydeep Biswas)
#. An example demonstrating the use of dynamic sparsity. (Richard
   Stebbing)
#. An example from Blender demonstrating the use of a custom
``IterationCallback``. (Sergey Sharybin)
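
As a brief sketch of the new ``AutoDiffLocalParameterization`` (not
part of this release note; the plus functor and block sizes are made
up for illustration, and a problem with a three-dimensional parameter
block ``x`` is assumed), a parameterization that only lets the first
coordinate of the block vary could be written as:

.. code-block:: c++

   struct SubsetPlus {
     // Plus(x, delta): only the first coordinate moves, so the
     // tangent (local) size is 1 while the ambient size is 3.
     template <typename T>
     bool operator()(const T* x, const T* delta, T* x_plus_delta) const {
       x_plus_delta[0] = x[0] + delta[0];
       x_plus_delta[1] = x[1];
       x_plus_delta[2] = x[2];
       return true;
     }
   };

   problem.SetParameterization(
       x, new ceres::AutoDiffLocalParameterization<SubsetPlus, 3, 1>());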


Backward Incompatible API Changes
---------------------------------

#. ``Solver::Options::linear_solver_ordering`` used to be a naked
   pointer that Ceres took ownership of. This is error-prone behaviour
   that leads to problems when copying the ``Solver::Options`` struct
   around. It has been replaced with a ``shared_ptr`` to handle
   ownership correctly across copies (see the sketch after this
   list).

#. The enum used for reporting the termination/convergence status of
the solver has been renamed from ``SolverTerminationType`` to
``TerminationType``.

The enum values have also changed. ``FUNCTION_TOLERANCE``,
``GRADIENT_TOLERANCE`` and ``PARAMETER_TOLERANCE`` have all been
replaced by ``CONVERGENCE``.

``NUMERICAL_FAILURE`` has been replaced by ``FAILURE``.

``USER_ABORT`` has been renamed to ``USER_FAILURE``.

Further, ``Solver::Summary::error`` has been renamed to
``Solver::Summary::message``. It contains a more detailed
explanation of why the solver terminated.

#. ``Solver::Options::gradient_tolerance`` used to be a relative
   gradient tolerance, i.e., the solver converged when

   .. math::
      \|g(x)\|_\infty < \text{gradient_tolerance} * \|g(x_0)\|_\infty

   where :math:`g(x)` is the gradient of the objective function at
   :math:`x` and :math:`x_0` is the parameter vector at the start of
   the optimization.

This has changed to an absolute tolerance, i.e., the solver
converges when

.. math::
      \|g(x)\|_\infty < \text{gradient_tolerance}

#. Ceres cannot be built without the line search minimizer
anymore. Thus the preprocessor define
``CERES_NO_LINE_SEARCH_MINIMIZER`` has been removed.
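
Taken together, the new API can be exercised roughly as follows (a
sketch, not part of this release note; it assumes a ``ceres::Problem
problem`` and a parameter block ``double* x`` that has already been
added to it):

.. code-block:: c++

   ceres::Solver::Options options;

   // linear_solver_ordering is now a shared_ptr, so copying the
   // Options struct is safe.
   options.linear_solver_ordering.reset(new ceres::ParameterBlockOrdering);
   options.linear_solver_ordering->AddElementToGroup(x, 0);

   // gradient_tolerance is now an absolute bound on the max-norm of
   // the gradient rather than a fraction of its initial value.
   options.gradient_tolerance = 1e-10;

   ceres::Solver::Summary summary;
   ceres::Solve(options, &problem, &summary);

   // The tolerance-specific enum values are gone; convergence for any
   // reason is reported as CONVERGENCE, with the details available in
   // summary.message (formerly summary.error).
   if (summary.termination_type == ceres::CONVERGENCE) {
     std::cout << summary.message << "\n";
   }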

Bug Fixes
---------

#. Variety of code cleanups, optimizations and bug fixes to the line
search minimizer code (Alex Stewart)
#. Fixed ``BlockSparseMatrix::Transpose`` when the matrix has row and
column blocks. (Richard Bowen)
#. Better error checking when ``Problem::RemoveResidualBlock`` is
called. (Alex Stewart)
#. Fixed a memory leak in ``SchurComplementSolver``.
#. Added ``epsilon()`` method to ``NumTraits<ceres::Jet<T, N> >``. (Filippo
Basso)
#. Fixed a bug in ``CompressedRowSparseMatrix::AppendRows`` and
   ``DeleteRows``.
#. Handle empty problems consistently.
#. Restore the state of the ``Problem`` after a call to
``Problem::Evaluate``. (Stefan Leutenegger)
#. Better error checking and reporting for linear solvers.
#. Use explicit formula to solve quadratic polynomials instead of the
eigenvalue solver.
#. Fix constant parameter handling in inner iterations (Mikael
Persson).
#. SuiteSparse errors do not cause a fatal crash anymore.
#. Fix ``corrector_test.cc``.
#. Relax the requirements on loss function derivatives.
#. Minor bugfix to ``logging.h``. (Scott Ettinger)
#. Updated ``gmock`` and ``gtest`` to the latest upstream version.
#. Fix build breakage on old versions of SuiteSparse.
#. Fixed build issues related to Clang / LLVM 3.4 (Johannes
Schönberger)
#. ``METIS_FOUND`` was never being set. Changed the ``CMake`` logic to
   set it in the same manner as the other ``_FOUND`` definitions.
   (Andreas Franek)
#. Variety of bug fixes to the ``CMake`` build system (Alex Stewart)
#. Removed fictitious shared library target from the NDK build.
#. Solver::Options now uses ``shared_ptr`` to handle ownership of
``Solver::Options::linear_solver_ordering`` and
``Solver::Options::inner_iteration_ordering``. As a consequence the
``NDK`` build now depends on ``libc++`` from the ``LLVM`` project.
#. Variety of lint cleanups (William Rucklidge & Jim Roseborough)
#. Various internal cleanups.


1.8.0
=====

New Features
------------
#. Significantly improved ``CMake`` files with better robustness,
   dependency checking and GUI support. (Alex Stewart)
#. ``DynamicNumericDiffCostFunction`` for numerically differentiated
cost functions whose sizing is determined at run time.
#. Added ``DynamicNumericDiffCostFunction`` for numerically
differentiated cost functions whose sizing is determined at run
time.
#. ``NumericDiffCostFunction`` now supports a dynamic number of
residuals just like ``AutoDiffCostFunction``.
#. ``Problem`` exposes more of its structure in its API.
#. Faster Automatic differentiation (Tim Langlois)
#. Faster automatic differentiation (Tim Langlois)
#. Added the commonly occurring ``2_d_d`` template specialization for
the Schur Eliminator.
#. Faster ``ITERATIVE_SCHUR`` solver using template specializations.
20 changes: 8 additions & 12 deletions examples/bundle_adjuster.cc
@@ -263,18 +263,14 @@ void BuildProblem(BALProblem* bal_problem, Problem* problem) {
CostFunction* cost_function;
// Each Residual block takes a point and a camera as input and
// outputs a 2 dimensional residual.
if (FLAGS_use_quaternions) {
cost_function = new AutoDiffCostFunction<
SnavelyReprojectionErrorWithQuaternions, 2, 4, 6, 3>(
new SnavelyReprojectionErrorWithQuaternions(
observations[2 * i + 0],
observations[2 * i + 1]));
} else {
cost_function =
new AutoDiffCostFunction<SnavelyReprojectionError, 2, 9, 3>(
new SnavelyReprojectionError(observations[2 * i + 0],
observations[2 * i + 1]));
}
cost_function =
(FLAGS_use_quaternions)
? SnavelyReprojectionErrorWithQuaternions::Create(
observations[2 * i + 0],
observations[2 * i + 1])
: SnavelyReprojectionError::Create(
observations[2 * i + 0],
observations[2 * i + 1]);

// If enabled use Huber's loss function.
LossFunction* loss_function = FLAGS_robustify ? new HuberLoss(1.0) : NULL;
18 changes: 18 additions & 0 deletions examples/snavely_reprojection_error.h
@@ -91,6 +91,14 @@ struct SnavelyReprojectionError {
return true;
}

// Factory to hide the construction of the CostFunction object from
// the client code.
static ceres::CostFunction* Create(const double observed_x,
const double observed_y) {
return (new ceres::AutoDiffCostFunction<SnavelyReprojectionError, 2, 9, 3>(
new SnavelyReprojectionError(observed_x, observed_y)));
}

double observed_x;
double observed_y;
};
@@ -146,6 +154,16 @@ struct SnavelyReprojectionErrorWithQuaternions {
return true;
}

// Factory to hide the construction of the CostFunction object from
// the client code.
static ceres::CostFunction* Create(const double observed_x,
const double observed_y) {
return (new ceres::AutoDiffCostFunction<
SnavelyReprojectionErrorWithQuaternions, 2, 4, 6, 3>(
new SnavelyReprojectionErrorWithQuaternions(observed_x,
observed_y)));
}

double observed_x;
double observed_y;
};
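
With these factories in place, client code no longer needs to spell
out the ``AutoDiffCostFunction`` template arguments, as in the
updated ``bundle_adjuster.cc`` above. A minimal usage sketch
(``observed_x``, ``observed_y``, ``camera`` and ``point`` are assumed
to be supplied by the caller):

  ceres::CostFunction* cost_function =
      SnavelyReprojectionError::Create(observed_x, observed_y);
  problem.AddResidualBlock(cost_function, NULL /* squared loss */,
                           camera, point);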
2 changes: 1 addition & 1 deletion include/ceres/types.h
@@ -305,7 +305,7 @@ enum TerminationType {
// by the user was satisfied.
//
// 1. (new_cost - old_cost) < function_tolerance * old_cost;
// 2. max_i |gradient_i| < gradient_tolerance * max_i|initial_gradient_i|
// 2. max_i |gradient_i| < gradient_tolerance
// 3. |step|_2 <= parameter_tolerance * ( |x|_2 + parameter_tolerance)
//
// The user's parameter blocks will be updated with the solution.