[MRG][DOC] Fixes almost all warnings in the docs (#338)
* Update API names, stop using deprecated html4 writer

* Fix a lot of warnings. Add Methods doctree

* More warnings solved

* Fix docs dependencies

* New style for Example Code and References

* Add all Methods to all classes in docstrings, in alphabetical order

* Add MetricTransformer and MahalanobisMixin to auto-docs

* Delete unused vars in docs. Use single quotes

* Fix indentation

* Use GitHub Actions CI instead of old Travis CI

* Reference lists are now numbered

* Remove Example Code body almost everywhere

* Removed Methods directive. Kept warnings

* Deprecated directive is now red, as in sklearn
mvargas33 authored Nov 17, 2021
1 parent a797635 commit 4e0c444
Showing 16 changed files with 175 additions and 117 deletions.
36 changes: 36 additions & 0 deletions doc/_static/css/styles.css
@@ -0,0 +1,36 @@
.hatnote {
border-color: #e1e4e5 ;
border-style: solid ;
border-width: 1px ;
font-size: x-small ;
font-style: italic ;
margin-left: auto ;
margin-right: auto ;
margin-bottom: 24px;
padding: 12px;
}
.hatnote-gray {
background-color: #f5f5f5
}
.hatnote li {
list-style-type: square;
margin-left: 12px !important;
}
.hatnote ul {
list-style-type: square;
margin-left: 0px !important;
margin-bottom: 0px !important;
}
.deprecated {
color: #b94a48;
background-color: #F3E5E5;
border-color: #eed3d7;
margin-top: 0.5rem;
padding: 0.5rem;
border-radius: 0.5rem;
margin-bottom: 0.5rem;
}

.deprecated p {
margin-bottom: 0 !important;
}
11 changes: 2 additions & 9 deletions doc/conf.py
@@ -38,9 +38,6 @@
html_static_path = ['_static']
htmlhelp_basename = 'metric-learndoc'

# Option to only need single backticks to refer to symbols
default_role = 'any'

# Option to hide doctests comments in the documentation (like # doctest:
# +NORMALIZE_WHITESPACE for instance)
trim_doctest_flags = True
@@ -67,10 +64,6 @@
# generate autosummary even if no references
autosummary_generate = True

# Switch to old behavior with html4, for a good display of references,
# as described in https://github.com/sphinx-doc/sphinx/issues/6705
html4_writer = True


# Temporary work-around for spacing problem between parameter and parameter
# type in the doc, see https://github.com/numpy/numpydoc/issues/215. The bug
@@ -79,8 +72,8 @@
# In an ideal world, this would get fixed in this PR:
# https://github.com/readthedocs/sphinx_rtd_theme/pull/747/files
def setup(app):
app.add_javascript('js/copybutton.js')
app.add_stylesheet("basic.css")
app.add_js_file('js/copybutton.js')
app.add_css_file('css/styles.css')


# Remove matplotlib agg warnings from generated doc when using plt.show
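
A note on the hunk above: Sphinx renamed `add_javascript`/`add_stylesheet` to `add_js_file`/`add_css_file` (the old names were deprecated in 1.8 and removed in Sphinx 4), which is what this change tracks. A minimal sketch of the resulting hook; the `css/styles.css` path matches the new stylesheet added by this commit:

    # doc/conf.py (sketch): register the copy-button script and the new stylesheet
    def setup(app):
        # new-style Sphinx API; replaces add_javascript()/add_stylesheet()
        app.add_js_file('js/copybutton.js')
        app.add_css_file('css/styles.css')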
6 changes: 3 additions & 3 deletions doc/index.rst
@@ -1,6 +1,6 @@
metric-learn: Metric Learning in Python
=======================================
|Travis-CI Build Status| |License| |PyPI version| |Code coverage|
|GitHub Actions Build Status| |License| |PyPI version| |Code coverage|

`metric-learn <https://github.com/scikit-learn-contrib/metric-learn>`_
contains efficient Python implementations of several popular supervised and
@@ -57,8 +57,8 @@ Documentation outline

:ref:`genindex` | :ref:`search`

.. |Travis-CI Build Status| image:: https://api.travis-ci.org/scikit-learn-contrib/metric-learn.svg?branch=master
:target: https://travis-ci.org/scikit-learn-contrib/metric-learn
.. |GitHub Actions Build Status| image:: https://github.com/scikit-learn-contrib/metric-learn/workflows/CI/badge.svg
:target: https://github.com/scikit-learn-contrib/metric-learn/actions?query=event%3Apush+branch%3Amaster
.. |PyPI version| image:: https://badge.fury.io/py/metric-learn.svg
:target: http://badge.fury.io/py/metric-learn
.. |License| image:: http://img.shields.io/:license-mit-blue.svg?style=flat
2 changes: 2 additions & 0 deletions doc/metric_learn.rst
@@ -13,6 +13,8 @@ Base Classes

metric_learn.Constraints
metric_learn.base_metric.BaseMetricLearner
metric_learn.base_metric.MetricTransformer
metric_learn.base_metric.MahalanobisMixin
metric_learn.base_metric._PairsClassifierMixin
metric_learn.base_metric._TripletsClassifierMixin
metric_learn.base_metric._QuadrupletsClassifierMixin
56 changes: 29 additions & 27 deletions doc/supervised.rst
@@ -152,7 +152,7 @@ neighbors (with same labels) of :math:`\mathbf{x}_{i}`, :math:`y_{ij}=0`
indicates :math:`\mathbf{x}_{i}, \mathbf{x}_{j}` belong to different classes,
:math:`[\cdot]_+=\max(0, \cdot)` is the Hinge loss.

.. topic:: Example Code:
.. rubric:: Example Code

::

@@ -167,15 +167,15 @@ indicates :math:`\mathbf{x}_{i}, \mathbf{x}_{j}` belong to different classes,
lmnn = LMNN(k=5, learn_rate=1e-6)
lmnn.fit(X, Y, verbose=False)

.. topic:: References:
.. rubric:: References

.. [1] Weinberger et al. `Distance Metric Learning for Large Margin
Nearest Neighbor Classification
<http://jmlr.csail.mit.edu/papers/volume10/weinberger09a/weinberger09a.pdf>`_.
JMLR 2009

.. [2] `Wikipedia entry on Large Margin Nearest Neighbor <https://en.wikipedia.org/wiki/Large_margin_nearest_neighbor>`_
.. container:: hatnote hatnote-gray

[1]. Weinberger et al. `Distance Metric Learning for Large Margin Nearest Neighbor Classification <http://jmlr.csail.mit.edu/papers/volume10/weinberger09a/weinberger09a.pdf>`_. JMLR 2009.

[2]. `Wikipedia entry on Large Margin Nearest Neighbor <https://en.wikipedia.org/wiki/Large_margin_nearest_neighbor>`_.
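
The diff only shows the tail of the LMNN snippet. A plausible complete version, assuming the iris-based setup used by the other examples in these docs (here `verbose` is passed to the constructor rather than to `fit`, as an assumption):

    from sklearn.datasets import load_iris
    from metric_learn import LMNN

    iris_data = load_iris()
    X = iris_data['data']    # (150, 4) feature matrix
    Y = iris_data['target']  # class labels

    # pull the k same-class neighbors closer; push differently-labeled
    # impostors out of the margin via the hinge loss defined above
    lmnn = LMNN(k=5, learn_rate=1e-6, verbose=False)
    lmnn.fit(X, Y)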


.. _nca:

@@ -216,7 +216,7 @@ the sum of probability of being correctly classified:
\mathbf{L} = \text{argmax}\sum_i p_i
.. topic:: Example Code:
.. rubric:: Example Code

::

@@ -231,13 +231,14 @@ the sum of probability of being correctly classified:
nca = NCA(max_iter=1000)
nca.fit(X, Y)

.. topic:: References:
.. rubric:: References


.. container:: hatnote hatnote-gray

.. [1] Goldberger et al.
`Neighbourhood Components Analysis <https://papers.nips.cc/paper/2566-neighbourhood-components-analysis.pdf>`_.
NIPS 2005
[1]. Goldberger et al. `Neighbourhood Components Analysis <https://papers.nips.cc/paper/2566-neighbourhood-components-analysis.pdf>`_. NIPS 2005.

.. [2] `Wikipedia entry on Neighborhood Components Analysis <https://en.wikipedia.org/wiki/Neighbourhood_components_analysis>`_
[2]. `Wikipedia entry on Neighborhood Components Analysis <https://en.wikipedia.org/wiki/Neighbourhood_components_analysis>`_.
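
Likewise for NCA, a runnable sketch assuming the same iris setup (only the two NCA lines shown in the hunk come from the docs themselves):

    from sklearn.datasets import load_iris
    from metric_learn import NCA

    iris_data = load_iris()
    X = iris_data['data']
    Y = iris_data['target']

    # maximizes the expected leave-one-out accuracy of a stochastic
    # nearest-neighbor classifier under the learned transformation L
    nca = NCA(max_iter=1000)
    nca.fit(X, Y)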


.. _lfda:
@@ -289,7 +290,7 @@ nearby data pairs in the same class are made close and the data pairs in
different classes are separated from each other; far apart data pairs in the
same class are not imposed to be close.

.. topic:: Example Code:
.. rubric:: Example Code

::

@@ -309,15 +310,14 @@

To work around this, fit instances of this class to data once, then keep the instance around to do transformations.

.. topic:: References:
.. rubric:: References

.. [1] Sugiyama. `Dimensionality Reduction of Multimodal Labeled Data by Local
Fisher Discriminant Analysis <http://www.jmlr.org/papers/volume8/sugiyama07b/sugiyama07b.pdf>`_.
JMLR 2007

.. [2] Tang. `Local Fisher Discriminant Analysis on Beer Style Clustering
<https://gastrograph.com/resources/whitepapers/local-fisher
-discriminant-analysis-on-beer-style-clustering.html#>`_.
.. container:: hatnote hatnote-gray

[1]. Sugiyama. `Dimensionality Reduction of Multimodal Labeled Data by Local Fisher Discriminant Analysis <http://www.jmlr.org/papers/volume8/sugiyama07b/sugiyama07b.pdf>`_. JMLR 2007.

[2]. Tang. `Local Fisher Discriminant Analysis on Beer Style Clustering <https://gastrograph.com/resources/whitepapers/local-fisher-discriminant-analysis-on-beer-style-clustering.html#>`_.
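
The LFDA example block is cut off entirely by the diff context. A minimal usage sketch; the constructor arguments here are illustrative assumptions, not taken from the docs:

    from sklearn.datasets import load_iris
    from metric_learn import LFDA

    iris_data = load_iris()
    X = iris_data['data']
    Y = iris_data['target']

    # keep nearby same-class pairs close while separating different-class pairs
    lfda = LFDA(k=2, n_components=2)
    lfda.fit(X, Y)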

.. _mlkr:

@@ -363,7 +363,7 @@ calculating a weighted average of all the training samples:
\hat{y}_i = \frac{\sum_{j\neq i}y_jk_{ij}}{\sum_{j\neq i}k_{ij}}
.. topic:: Example Code:
.. rubric:: Example Code

::

@@ -377,10 +377,12 @@ calculating a weighted average of all the training samples:
mlkr = MLKR()
mlkr.fit(X, Y)

.. topic:: References:
.. rubric:: References


.. container:: hatnote hatnote-gray

.. [1] Weinberger et al. `Metric Learning for Kernel Regression <http://proceedings.mlr.
press/v2/weinberger07a/weinberger07a.pdf>`_. AISTATS 2007
[1]. Weinberger et al. `Metric Learning for Kernel Regression <http://proceedings.mlr.press/v2/weinberger07a/weinberger07a.pdf>`_. AISTATS 2007.
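
A runnable sketch around the two MLKR lines shown in the hunk, again assuming the iris setup; MLKR treats the target as a continuous value to regress:

    from sklearn.datasets import load_iris
    from metric_learn import MLKR

    iris_data = load_iris()
    X = iris_data['data']
    Y = iris_data['target']  # used as a regression target here

    # learns A by minimizing the leave-one-out kernel-regression error above
    mlkr = MLKR()
    mlkr.fit(X, Y)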


.. _supervised_version:
@@ -417,7 +419,7 @@ quadruplets, where for each quadruplet the two first points are from the same
class, and the two last points are from a different class (so indeed the two
last points should be less similar than the two first points).

.. topic:: Example Code:
.. rubric:: Example Code

::

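The example under this rubric is truncated by the diff. As an illustration of the quadruplet construction described above, a sketch using LSML_Supervised (the quadruplet learner in metric-learn); the choice of learner and the `num_constraints` value are assumptions, not taken from the docs:

    from sklearn.datasets import load_iris
    from metric_learn import LSML_Supervised

    iris_data = load_iris()
    X = iris_data['data']
    Y = iris_data['target']

    # quadruplets (same-class pair, different-class pair) are sampled from the
    # labels automatically and passed to the underlying quadruplet learner
    lsml = LSML_Supervised(num_constraints=200)
    lsml.fit(X, Y)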
9 changes: 6 additions & 3 deletions doc/unsupervised.rst
@@ -20,7 +20,7 @@ It can be used for ZCA whitening of the data (see the Wikipedia page of
`whitening transformation <https://en.wikipedia.org/wiki/\
Whitening_transformation>`_).

.. topic:: Example Code:
.. rubric:: Example Code

::

@@ -32,6 +32,9 @@ Whitening_transformation>`_).
cov = Covariance().fit(iris)
x = cov.transform(iris)

.. topic:: References:
.. rubric:: References

.. [1] On the Generalized Distance in Statistics, P.C.Mahalanobis, 1936

.. container:: hatnote hatnote-gray

[1]. On the Generalized Distance in Statistics, P.C.Mahalanobis, 1936.
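
To round out the truncated Covariance snippet, a plausible complete version; the `iris` setup line is an assumption based on the surrounding docs:

    from sklearn.datasets import load_iris
    from metric_learn import Covariance

    iris = load_iris()['data']

    # no labels needed: the learned "metric" is the inverse covariance of the
    # data, so transform() performs a Mahalanobis / ZCA-style whitening
    cov = Covariance().fit(iris)
    x = cov.transform(iris)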