Commit
deploy: 9dcef3f
punkduckable committed Oct 28, 2024
1 parent 1059daf commit 361b63d
Showing 19 changed files with 746 additions and 218 deletions.
Binary file modified .doctrees/autoapi/lasdi/gp/index.doctree
Binary file not shown.
Binary file modified .doctrees/autoapi/lasdi/inputs/index.doctree
Binary file not shown.
Binary file modified .doctrees/autoapi/lasdi/latent_space/index.doctree
Binary file not shown.
Binary file modified .doctrees/autoapi/lasdi/param/index.doctree
Binary file not shown.
Binary file modified .doctrees/environment.pickle
Binary file not shown.
Binary file modified .doctrees/index.doctree
Binary file not shown.
69 changes: 59 additions & 10 deletions _sources/autoapi/lasdi/gp/index.rst.txt
@@ -17,27 +17,76 @@ Functions
Module Contents
---------------

.. py:function:: fit_gps(X, Y)
.. py:function:: fit_gps(X: numpy.ndarray, Y: numpy.ndarray) -> list[sklearn.gaussian_process.GaussianProcessRegressor]
Trains a GP for each column of Y. If Y has shape N x k, then we train k GP regressors. In this
case, we assume that X has shape N x M. Thus, the input to each GP lies in \mathbb{R}^M. For
each k, we train a GP whose i'th training input is the i'th row of X and whose corresponding
target is the i,k component of Y. Thus, we return a list of k GP regressor objects, the k'th
one of which makes predictions for the k'th coefficient in the latent dynamics.

Trains each GP given the interpolation dataset.
X: (n_train, n_param) numpy 2d array
Y: (n_train, n_coef) numpy 2d array
We assume the target coefficients are independent of each other.
gp_dictionnary is a dictionary containing the trained GPs (as sklearn objects)



.. py:function:: eval_gp(gp_dictionnary, param_grid)
-----------------------------------------------------------------------------------------------
:Parameters: * **X** (*A 2d numpy array of shape (n_train, input_dim), where n_train is the number of training examples and input_dim is the number of components in each input (e.g., the number of parameters).*)
             * **Y** (*A 2d numpy array of shape (n_train, n_coef), where n_train is the number of training examples and n_coef is the number of coefficients in the latent dynamics.*)

-----------------------------------------------------------------------------------------------
:returns: *A list of trained GP regressor objects. If Y has k columns, then the returned list has k elements. Its i'th element holds a trained GP regressor object whose training inputs are the rows of X and whose corresponding target values are the elements of the i'th column of Y.*

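To make the contract above concrete, here is a minimal sketch of training one GP per column of Y with scikit-learn. This is an illustration, not the library's actual implementation; `fit_gps_sketch`, the kernel choice, and the toy data are all assumptions:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def fit_gps_sketch(X: np.ndarray, Y: np.ndarray) -> list:
    # Train one GP regressor per column of Y (one per latent-dynamics coefficient).
    gp_list = []
    for k in range(Y.shape[1]):
        gp = GaussianProcessRegressor(kernel=RBF(), normalize_y=True)
        gp.fit(X, Y[:, k])  # inputs: rows of X; targets: k'th column of Y
        gp_list.append(gp)
    return gp_list

# Toy interpolation dataset: n_train = 5, n_param = 2, n_coef = 3.
rng = np.random.default_rng(0)
X = rng.random((5, 2))
Y = rng.random((5, 3))
gp_list = fit_gps_sketch(X, Y)
print(len(gp_list))  # 3 -- one trained regressor per coefficient
```
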

.. py:function:: eval_gp(gp_list: list[sklearn.gaussian_process.GaussianProcessRegressor], param_grid: numpy.ndarray) -> tuple
Computes each GP's predictive mean and standard deviation at each point of the parameter space grid.



.. py:function:: sample_coefs(gp_dictionnary, param, n_samples)
-----------------------------------------------------------------------------------------------
:Parameters: * **gp_list** (*A list of trained GP regressor objects. The i'th element of this list is a GP regressor object that predicts the i'th coefficient.*)
             * **param_grid** (*A 2d numpy.ndarray object of shape (number of parameter combinations, number of parameters). The i,j element of this array specifies the value of the j'th parameter in the i'th combination of parameters. We use this as the testing set for the GP evaluation.*)

-----------------------------------------------------------------------------------------------
:returns: *A two element tuple. Both elements are 2d numpy arrays of shape (number of parameter combinations, number of coefficients), holding the predicted means and standard deviations, respectively. Thus, the i,j element of the first return variable holds the predicted mean of the j'th coefficient in the latent dynamics at the i'th combination of parameter values. Likewise, the i,j element of the second return variable holds the standard deviation of the predicted distribution for the j'th coefficient in the latent dynamics at the i'th combination of parameter values.*

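The shape contract described above can be sketched as follows. Again, `eval_gp_sketch` and the toy data are assumptions for illustration, not the actual implementation:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def eval_gp_sketch(gp_list, param_grid):
    # Evaluate every GP at every parameter combination in the grid.
    n_points, n_coef = param_grid.shape[0], len(gp_list)
    pred_mean = np.zeros((n_points, n_coef))
    pred_std = np.zeros((n_points, n_coef))
    for k, gp in enumerate(gp_list):
        pred_mean[:, k], pred_std[:, k] = gp.predict(param_grid, return_std=True)
    return pred_mean, pred_std

# Fit toy GPs, then evaluate them on a small testing grid of 4 combinations.
rng = np.random.default_rng(1)
X, Y = rng.random((5, 2)), rng.random((5, 3))
gp_list = [GaussianProcessRegressor().fit(X, Y[:, k]) for k in range(3)]
mean, std = eval_gp_sketch(gp_list, rng.random((4, 2)))
print(mean.shape, std.shape)  # (4, 3) (4, 3)
```
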

.. py:function:: sample_coefs(gp_list: list[sklearn.gaussian_process.GaussianProcessRegressor], param: numpy.ndarray, n_samples: int)
Generates sets of ODE (SINDy) coefficients sampled from the predictive distribution for those
coefficients at the specified parameter value (param). Specifically, for the k'th SINDy
coefficient, we draw n_samples samples of the predictive distribution for the k'th coefficient
when param is the parameter.



Generates sample sets of ODEs for one given parameter.
coef_samples is a list of length n_samples, where each term is a matrix of SINDy coefficients sampled from the GP predictive
distributions
-----------------------------------------------------------------------------------------------
:Parameters: * **gp_list** (*A list of trained GP regressor objects. The i'th element of this list is a GP regressor object that predicts the i'th coefficient.*)
             * **param** (*A combination of parameter values, i.e., a single test example. We evaluate each GP in gp_list at this parameter value (getting a prediction for each coefficient).*)
             * **n_samples** (*The number of samples of the predicted latent dynamics used to build the ensemble of fom predictions. N_s in the paper.*)

-----------------------------------------------------------------------------------------------
:returns: *A 2d numpy ndarray object called coef_samples. It has shape (n_samples, n_coef), where n_coef is the number of coefficients (the length of gp_list). The i,j element of this array is the i'th sample of the j'th SINDy coefficient.*

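A sketch of the sampling step described above, assuming each coefficient's predictive distribution is a normal distribution parameterized by the GP's predicted mean and standard deviation (`sample_coefs_sketch` is a hypothetical stand-in, not the actual implementation):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def sample_coefs_sketch(gp_list, param, n_samples):
    # Draw n_samples from each coefficient's predictive (normal) distribution.
    n_coef = len(gp_list)
    coef_samples = np.zeros((n_samples, n_coef))
    rng = np.random.default_rng()
    for k, gp in enumerate(gp_list):
        # param is a single test example; reshape it into a 1-row testing set.
        mean, std = gp.predict(param.reshape(1, -1), return_std=True)
        coef_samples[:, k] = rng.normal(mean[0], std[0], n_samples)
    return coef_samples

rng = np.random.default_rng(2)
X, Y = rng.random((5, 2)), rng.random((5, 3))
gp_list = [GaussianProcessRegressor().fit(X, Y[:, k]) for k in range(3)]
samples = sample_coefs_sketch(gp_list, param=rng.random(2), n_samples=10)
print(samples.shape)  # (10, 3)
```
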

61 changes: 43 additions & 18 deletions _sources/autoapi/lasdi/inputs/index.rst.txt
@@ -20,46 +20,71 @@ Classes
lasdi.inputs.InputParser


Functions
---------

.. autoapisummary::

lasdi.inputs.getDictFromList


Module Contents
---------------

.. py:data:: verbose
:type: bool
:value: False


.. py:class:: InputParser(dict, name='')
.. py:class:: InputParser(dict: InputParser.__init__.dict, name: str = '')
An InputParser object acts as a wrapper around a dictionary of settings. Thus, each setting is
a key and the corresponding value is the setting's value. Because one setting may itself be
a dictionary (we often group settings; each group has a name but several constituent settings),
the underlying dictionary is structured as a sequence of nested dictionaries. This class allows
the user to select a specific setting from that structure by specifying (via a list of strings)
where in that nested structure the desired setting lives.


.. py:attribute:: dict_
:type: dict
:value: None



.. py:attribute:: name
:type: str
:value: ''



.. py:method:: getInput(keys, fallback=None, datatype=None)
.. py:method:: getInput(keys: list, fallback=None, datatype=None)
An InputParser object acts as a wrapper around a dictionary of settings. That is, self.dict_
is structured as a nested family of dictionaries. Each setting corresponds to a key in
self.dict_. The setting's value is the corresponding value in self.dict_. In many cases,
a particular setting may be nested within others. That is, a setting's value may itself be
another dictionary housing various sub-settings. This function allows us to fetch a
specific setting from this nested structure.

Specifically, we specify a list of strings. keys[0] should be a key in self.dict_.
If so, we set val = self.dict_[keys[0]]. If there are more keys, then val should be a
dictionary and keys[1] should be a key in this dictionary. In this case, we replace val
with val[keys[1]] and so on. This continues until we have exhausted all keys. There is one
important exception:

If at some point in the process, there are more keys but val is not a dictionary, or if
there are more keys and val is a dictionary but the next key is not a key in that
dictionary, then we return the fallback value. If no fallback value was provided, we
raise an error.

Find the value corresponding to the list of keys.
If the specified keys do not exist, use the fallback value.
If no fallback value exists, raise an error.
If the datatype is specified, enforce that the output value has the right datatype.


-------------------------------------------------------------------------------------------
:Parameters: * **keys** (*A list of keys we want to fetch from self.dict_. keys[0] should be a key in self.dict_. If so, we set val = self.dict_[keys[0]]. If there are more keys, then val should be a dictionary and keys[1] should be a key in this dictionary. In this case, we replace val with val[keys[1]] and so on. This continues until we have exhausted all keys.*)
             * **fallback** (*A sort of default value. If at some point, val is not a dictionary (and there are more keys) or val is a dictionary but the next key is not a valid key in that dictionary, then we return the fallback value.*)
             * **datatype** (*If not None, then we require that the final val has this datatype. If the final val does not have the desired datatype, we raise an exception.*)

.. py:function:: getDictFromList(list_, inputDict)
-------------------------------------------------------------------------------------------
:rtype: The final value of val, obtained by the process described above.

Get a dict with {key: val} from a list of dicts.
NOTE: this returns only the first such item in the list,
even if the list has more than one dict with {key: val}.

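The lookup-with-fallback behavior that getInput describes can be sketched as a plain nested-dictionary walk. The function name, the sentinel handling, and the toy config below are assumptions for illustration, not the class's actual code:

```python
_NO_FALLBACK = object()  # sentinel: distinguishes "no fallback given" from fallback=None

def get_input_sketch(settings: dict, keys: list, fallback=_NO_FALLBACK, datatype=None):
    val = settings
    for key in keys:
        # If val is not a dict, or the next key is missing, fall back (or fail).
        if not isinstance(val, dict) or key not in val:
            if fallback is _NO_FALLBACK:
                raise RuntimeError(f"key path {keys} not found and no fallback given")
            return fallback
        val = val[key]
    # Optionally enforce that the final value has the requested datatype.
    if datatype is not None and not isinstance(val, datatype):
        raise TypeError(f"expected {datatype}, got {type(val)}")
    return val

config = {"latent_space": {"n_z": 5}}
print(get_input_sketch(config, ["latent_space", "n_z"], datatype=int))       # 5
print(get_input_sketch(config, ["latent_space", "missing"], fallback=0.1))   # 0.1
```
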

99 changes: 86 additions & 13 deletions _sources/autoapi/lasdi/latent_space/index.rst.txt
@@ -34,63 +34,103 @@ Module Contents

.. py:data:: act_dict
.. py:function:: initial_condition_latent(param_grid, physics, autoencoder)
.. py:function:: initial_condition_latent(param_grid: numpy.ndarray, physics: lasdi.physics.Physics, autoencoder: torch.nn.Module) -> list[numpy.ndarray]
Outputs the initial condition in the latent space: Z0 = encoder(U0)
This function maps a set of initial conditions for the fom to initial conditions for the
latent space dynamics. Specifically, we take in a set of possible parameter values. For each
set of parameter values, we recover the fom IC (from physics), then map this fom IC to a
latent space IC (by encoding it using the autoencoder). We do this for each parameter
combination and then return a list housing the latent space ICs.


-----------------------------------------------------------------------------------------------
:Parameters: * **param_grid** (*A 2d numpy.ndarray object of shape (number of parameter combinations) x (number of parameters).*)
             * **physics** (*A "Physics" object that stores the datasets for each parameter combination.*)
             * **autoencoder** (*The actual autoencoder object that we use to map the ICs into the latent space.*)

.. py:class:: MultiLayerPerceptron(layer_sizes, act_type='sigmoid', reshape_index=None, reshape_shape=None, threshold=0.1, value=0.0, num_heads=1)
-----------------------------------------------------------------------------------------------
:returns: *A list of numpy ndarray objects whose i'th element holds the latent space initial condition for the i'th set of parameters in param_grid. That is, if we let U0_i denote the fom IC for the i'th set of parameters, then the i'th element of the returned list is Z0_i = encoder(U0_i).*

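The mapping described above can be sketched with plain callables standing in for the Physics object and the torch encoder. Every name below is a hypothetical stand-in; the real function works with a torch.nn.Module autoencoder:

```python
import numpy as np

def initial_condition_latent_sketch(param_grid, fom_ic, encoder):
    # For each parameter combination: recover the fom IC, then encode it.
    return [encoder(fom_ic(param_grid[i])) for i in range(param_grid.shape[0])]

# Toy stand-ins: the fom IC is a 4-vector, the "encoder" a linear map to 2 latent dims.
rng = np.random.default_rng(3)
W = rng.random((2, 4))
Z0 = initial_condition_latent_sketch(
    param_grid=rng.random((3, 2)),
    fom_ic=lambda p: np.ones(4) * p.sum(),
    encoder=lambda u0: W @ u0,
)
print(len(Z0), Z0[0].shape)  # 3 (2,)
```
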

.. py:class:: MultiLayerPerceptron(layer_sizes: list[int], act_type: str = 'sigmoid', reshape_index: int = None, reshape_shape: tuple[int] = None, threshold: float = 0.1, value: float = 0.0)
Bases: :py:obj:`torch.nn.Module`


.. py:attribute:: n_layers
:type: int


.. py:attribute:: layer_sizes
:type: list[int]


.. py:attribute:: fcs
.. py:attribute:: layers
:type: list[torch.nn.Module]
:value: []



.. py:attribute:: reshape_index
:type: int


.. py:attribute:: reshape_shape
:type: list[int]


.. py:attribute:: act_type
:type: str


.. py:method:: forward(x: torch.Tensor) -> torch.Tensor
This function defines the forward pass through self.


-------------------------------------------------------------------------------------------
:Parameters: * **x** (*A tensor holding a batch of inputs. We pass this tensor through the network's layers and then return the result. If self.reshape_index == 0 and self.reshape_shape has k elements, then the final k elements of x's shape must match self.reshape_shape.*)

.. py:attribute:: use_multihead
:value: False
-------------------------------------------------------------------------------------------
:returns: *The image of x under the network's layers. If self.reshape_index == -1 and self.reshape_shape has k elements, then we reshape the output so that the final k elements of its shape match those of self.reshape_shape.*



.. py:method:: forward(x)
.. py:method:: init_weight() -> None
This function initializes the weight matrices and bias vectors in self's layers.

.. py:method:: apply_attention(x, act_idx)

-------------------------------------------------------------------------------------------
:Parameters: **None!**

.. py:method:: init_weight()
-------------------------------------------------------------------------------------------
:rtype: Nothing!


.. py:class:: Autoencoder(physics, config)

.. py:class:: Autoencoder(physics: lasdi.physics.Physics, config: dict)
Bases: :py:obj:`torch.nn.Module`


.. py:attribute:: qgrid_size
:type: list[int]


.. py:attribute:: space_dim
:type: numpy.ndarray


.. py:attribute:: n_z
:type: int


.. py:attribute:: encoder
@@ -99,12 +139,45 @@ Module Contents
.. py:attribute:: decoder
.. py:method:: forward(x)
.. py:method:: forward(x: torch.Tensor) -> torch.Tensor
This function defines the forward pass through self.


-------------------------------------------------------------------------------------------
:Parameters: * **x** (*A tensor holding a batch of inputs. We pass this tensor through the encoder + decoder and then return the result.*)

-------------------------------------------------------------------------------------------
:rtype: The image of x under the encoder + decoder.



.. py:method:: export() -> dict
This function extracts self's parameters and returns them in a dictionary.


-------------------------------------------------------------------------------------------
:Parameters: **None!**

-------------------------------------------------------------------------------------------
:rtype: A dictionary housing self's state dictionary.



.. py:method:: load(dict_: dict) -> None
This function loads self's state dictionary.


.. py:method:: export()
-------------------------------------------------------------------------------------------
:Parameters: * **dict_** (*This should be a dictionary with the key "autoencoder_param" whose corresponding value is the state dictionary of an autoencoder which has the same architecture (i.e., layer sizes) as self.*)

-------------------------------------------------------------------------------------------
:rtype: Nothing!

.. py:method:: load(dict_)


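Taken together, export and load amount to a round-trip through a dictionary keyed by "autoencoder_param". A minimal sketch with a stand-in model (TinyModel is hypothetical; the real class wraps a torch state dictionary):

```python
class TinyModel:
    # Stand-in for an autoencoder whose "state dictionary" is a plain dict.
    def __init__(self):
        self.state = {"w": [1.0, 2.0]}

    def export(self) -> dict:
        # Wrap the state under the "autoencoder_param" key, as the docs describe.
        return {"autoencoder_param": dict(self.state)}

    def load(self, dict_: dict) -> None:
        # Restore the state from a dictionary produced by export().
        self.state = dict(dict_["autoencoder_param"])

src, dst = TinyModel(), TinyModel()
src.state["w"] = [3.0, 4.0]
dst.load(src.export())
print(dst.state["w"])  # [3.0, 4.0]
```
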
0 comments on commit 361b63d
