From 87bd4682a9038c3886b0e9bd272586746659c4cd Mon Sep 17 00:00:00 2001
From: Nathan Glenn
Date: Sun, 12 Jul 2015 17:54:40 +0900
Subject: [PATCH] differentiate algorithm labels that share the same name

Prefix the labels of the 5 algorithm listings with the name of their
algorithm so that they are no longer all the same (`alg:train`). This
fixes incorrect cross-references caused by `multiply defined` labels.
---
 book/a_immune/airs.tex            | 4 ++--
 book/a_neural/backpropagation.tex | 4 ++--
 book/a_neural/lvq.tex             | 4 ++--
 book/a_neural/perceptron.tex      | 4 ++--
 book/a_neural/som.tex             | 4 ++--
 5 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/book/a_immune/airs.tex b/book/a_immune/airs.tex
index f4e94ba7..aa147c78 100644
--- a/book/a_immune/airs.tex
+++ b/book/a_immune/airs.tex
@@ -55,7 +55,7 @@ \subsection{Strategy}
 % The algorithmic procedure summarizes the specifics of realizing a strategy as a systemized and parameterized computation. It outlines how the algorithm is organized in terms of the data structures and representations. The procedure may be described in terms of software engineering and computer science artifacts such as Pseudocode, design diagrams, and relevant mathematical equations.
 \subsection{Procedure}
 % What is the computational recipe for a technique?
-Algorithm~\ref{alg:train} provides a high-level pseudocode for preparing memory cell vectors using the Artificial Immune Recognition System, specifically the canonical AIRS2.
+Algorithm~\ref{alg:airs_train} provides a high-level pseudocode for preparing memory cell vectors using the Artificial Immune Recognition System, specifically the canonical AIRS2.
 An affinity (distance) measure between input patterns must be defined. For real-valued vectors, this is commonly the Euclidean distance:
 \begin{equation}
@@ -134,7 +134,7 @@ \subsection{Procedure}
 \Return{\MemoryPool}\;
 % end
 \caption{Pseudocode for AIRS2.}
- \label{alg:train}
+ \label{alg:airs_train}
 \end{algorithm}
 % Heuristics: Usage guidelines
diff --git a/book/a_neural/backpropagation.tex b/book/a_neural/backpropagation.tex
index 18e74705..72c5c931 100644
--- a/book/a_neural/backpropagation.tex
+++ b/book/a_neural/backpropagation.tex
@@ -60,7 +60,7 @@ \subsection{Procedure}
 The Back-propagation algorithm is a method for training the weights in a multi-layer feed-forward neural network. As such, it requires a network structure to be defined of one or more layers where one layer is fully connected to the next layer. A standard network structure is one input layer, one hidden layer, and one output layer. The method is primarily concerned with adapting the weights to the calculated error in the presence of input patterns, and the method is applied backward from the network output layer through to the input layer.
 % What is the computational recipe for a technique?
-Algorithm~\ref{alg:train} provides a high-level pseudocode for preparing a network using the Back-propagation training method. A weight is initialized for each input plus an additional weight for a fixed bias constant input that is almost always set to 1.0. The activation of a single neuron to a given input pattern is calculated as follows:
+Algorithm~\ref{alg:backpropagation_train} provides a high-level pseudocode for preparing a network using the Back-propagation training method. A weight is initialized for each input plus an additional weight for a fixed bias constant input that is almost always set to 1.0. The activation of a single neuron to a given input pattern is calculated as follows:
 \begin{equation}
 activation = \bigg(\sum_{k=1}^{n} w_{k} \times x_{ki}\bigg) + w_{bias} \times 1.0
 \end{equation}
@@ -137,7 +137,7 @@ \subsection{Procedure}
 \Return{\Network}\;
 % end
 \caption{Pseudocode for Back-propagation.}
- \label{alg:train}
+ \label{alg:backpropagation_train}
 \end{algorithm}
 % Heuristics: Usage guidelines
diff --git a/book/a_neural/lvq.tex b/book/a_neural/lvq.tex
index 35260d0b..7f13388e 100644
--- a/book/a_neural/lvq.tex
+++ b/book/a_neural/lvq.tex
@@ -57,7 +57,7 @@ \subsection{Procedure}
 Vector Quantization is a technique from signal processing where density functions are approximated with prototype vectors for applications such as compression. Learning Vector Quantization is similar in principle, although the prototype vectors are learned through a supervised winner-take-all method.
 % What is the computational recipe for a technique?
-Algorithm~\ref{alg:train} provides a high-level pseudocode for preparing codebook vectors using the Learning Vector Quantization method.
+Algorithm~\ref{alg:lvq_train} provides a high-level pseudocode for preparing codebook vectors using the Learning Vector Quantization method.
 Codebook vectors are initialized to small floating point values, or sampled from an available dataset. The Best Matching Unit (BMU) is the codebook vector from the pool that has the minimum distance to an input vector.
 A distance measure between input patterns must be defined. For real-valued vectors, this is commonly the Euclidean distance:
 \begin{equation}
@@ -111,7 +111,7 @@ \subsection{Procedure}
 \Return{\CodebookVectors}\;
 % end
 \caption{Pseudocode for LVQ1.}
- \label{alg:train}
+ \label{alg:lvq_train}
 \end{algorithm}
 % Heuristics: Usage guidelines
diff --git a/book/a_neural/perceptron.tex b/book/a_neural/perceptron.tex
index 767eab3c..19b3385f 100644
--- a/book/a_neural/perceptron.tex
+++ b/book/a_neural/perceptron.tex
@@ -56,7 +56,7 @@ \subsection{Procedure}
 The Perceptron is comprised of a data structure (weights) and separate procedures for training and applying the structure. The structure is really just a vector of weights (one for each expected input) and a bias term.
 % What is the computational recipe for a technique?
-Algorithm~\ref{alg:train} provides a pseudocode for training the Perceptron. A weight is initialized for each input plus an additional weight for a fixed bias constant input that is almost always set to 1.0. The activation of the network to a given input pattern is calculated as follows:
+Algorithm~\ref{alg:perceptron_train} provides a pseudocode for training the Perceptron. A weight is initialized for each input plus an additional weight for a fixed bias constant input that is almost always set to 1.0. The activation of the network to a given input pattern is calculated as follows:
 \begin{equation}
 activation \leftarrow \sum_{k=1}^{n}\big( w_{k} \times x_{ki}\big) + w_{bias} \times 1.0
 \end{equation}
@@ -105,7 +105,7 @@ \subsection{Procedure}
 \Return{\Weights}\;
 % end
 \caption{Pseudocode for the Perceptron.}
- \label{alg:train}
+ \label{alg:perceptron_train}
 \end{algorithm}
 % Heuristics: Usage guidelines
diff --git a/book/a_neural/som.tex b/book/a_neural/som.tex
index 5f9c2752..9c9ab1bc 100644
--- a/book/a_neural/som.tex
+++ b/book/a_neural/som.tex
@@ -59,7 +59,7 @@ \subsection{Procedure}
 The Self-Organizing map is comprised of a collection of codebook vectors connected together in a topological arrangement, typically a one dimensional line or a two dimensional grid. The codebook vectors themselves represent prototypes (points) within the domain, whereas the topological structure imposes an ordering between the vectors during the training process. The result is a low dimensional projection or approximation of the problem domain which may be visualized, or from which clusters may be extracted.
 % What is the computational recipe for a technique?
-Algorithm~\ref{alg:train} provides a high-level pseudocode for preparing codebook vectors using the Self-Organizing Map method.
+Algorithm~\ref{alg:som_train} provides a high-level pseudocode for preparing codebook vectors using the Self-Organizing Map method.
 Codebook vectors are initialized to small floating point values, or sampled from the domain. The Best Matching Unit (BMU) is the codebook vector from the pool that has the minimum distance to an input vector.
 A distance measure between input patterns must be defined. For real-valued vectors, this is commonly the Euclidean distance:
 \begin{equation}
@@ -130,7 +130,7 @@ \subsection{Procedure}
 \Return{\CodebookVectors}\;
 % end
 \caption{Pseudocode for the SOM.}
- \label{alg:train}
+ \label{alg:som_train}
 \end{algorithm}
 % Heuristics: Usage guidelines
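To see the problem the patch fixes in isolation: when several chapters define the same label, LaTeX warns "Label `alg:train' multiply defined" and every Algorithm~\ref{alg:train} resolves to whichever listing was defined last. Below is a minimal, self-contained LaTeX sketch of the renamed-label pattern; it is not taken from the book sources, it assumes the algorithm2e package (which provides \caption, \label, and \KwRet inside the algorithm environment), and the listing bodies are placeholders.

% Minimal sketch (illustrative only, not from the book sources).
% Assumes algorithm2e; placeholder bodies stand in for the real pseudocode.
\documentclass{article}
\usepackage[ruled]{algorithm2e}
\begin{document}

Algorithm~\ref{alg:perceptron_train} trains the Perceptron, and
Algorithm~\ref{alg:som_train} prepares the SOM codebook vectors; with the
old shared label \texttt{alg:train}, both references would have pointed at
the same, last-defined listing.

\begin{algorithm}[ht]
  \KwRet{Weights}\;            % placeholder body
  \caption{Pseudocode for the Perceptron.}
  \label{alg:perceptron_train} % was alg:train
\end{algorithm}

\begin{algorithm}[ht]
  \KwRet{CodebookVectors}\;    % placeholder body
  \caption{Pseudocode for the SOM.}
  \label{alg:som_train}        % was alg:train
\end{algorithm}

\end{document}

The patch keeps the existing `_train` suffix and prefixes each label with its algorithm's name (`airs_`, `backpropagation_`, `lvq_`, `perceptron_`, `som_`), so label names stay predictable across chapters.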