diff --git a/.doctrees/autoapi/lasdi/gp/index.doctree b/.doctrees/autoapi/lasdi/gp/index.doctree index abe324f..f0b3452 100644 Binary files a/.doctrees/autoapi/lasdi/gp/index.doctree and b/.doctrees/autoapi/lasdi/gp/index.doctree differ diff --git a/.doctrees/autoapi/lasdi/inputs/index.doctree b/.doctrees/autoapi/lasdi/inputs/index.doctree index 51d6628..cc3295c 100644 Binary files a/.doctrees/autoapi/lasdi/inputs/index.doctree and b/.doctrees/autoapi/lasdi/inputs/index.doctree differ diff --git a/.doctrees/autoapi/lasdi/param/index.doctree b/.doctrees/autoapi/lasdi/param/index.doctree index c99c893..5cd3abf 100644 Binary files a/.doctrees/autoapi/lasdi/param/index.doctree and b/.doctrees/autoapi/lasdi/param/index.doctree differ diff --git a/.doctrees/autoapi/lasdi/timing/index.doctree b/.doctrees/autoapi/lasdi/timing/index.doctree index a87ade8..c0a0c34 100644 Binary files a/.doctrees/autoapi/lasdi/timing/index.doctree and b/.doctrees/autoapi/lasdi/timing/index.doctree differ diff --git a/.doctrees/environment.pickle b/.doctrees/environment.pickle index 9079d0a..8ce5ac0 100644 Binary files a/.doctrees/environment.pickle and b/.doctrees/environment.pickle differ diff --git a/.doctrees/index.doctree b/.doctrees/index.doctree index c05adb2..5dacf96 100644 Binary files a/.doctrees/index.doctree and b/.doctrees/index.doctree differ diff --git a/_sources/autoapi/lasdi/gp/index.rst.txt b/_sources/autoapi/lasdi/gp/index.rst.txt index 7cedd0b..ebedee2 100644 --- a/_sources/autoapi/lasdi/gp/index.rst.txt +++ b/_sources/autoapi/lasdi/gp/index.rst.txt @@ -17,76 +17,27 @@ Functions Module Contents --------------- -.. py:function:: fit_gps(X: numpy.ndarray, Y: numpy.ndarray) -> list[sklearn.gaussian_process.GaussianProcessRegressor] - - Trains a GP for each column of Y. If Y has shape N x k, then we train k GP regressors. In this - case, we assume that X has shape N x M. Thus, the Input to the GP is in \mathbb{R}^M. For each - k, we train a GP where the i'th row of X is the input and the i,k component of Y is the - corresponding target. Thus, we return a list of k GP Regressor objects, the k'th one of which - makes predictions for the k'th coefficient in the latent dynamics. +.. py:function:: fit_gps(X, Y) + Trains each GP given the interpolation dataset. + X: (n_train, n_param) numpy 2d array + Y: (n_train, n_coef) numpy 2d array We assume each target coefficient is independent with each other. + gp_dictionnary is a dataset containing the trained GPs (as sklearn objects) - ----------------------------------------------------------------------------------------------- - :Parameters: * **X** (*A 2d numpy array of shape (n_train, input_dim), where n_train is the number of training*) - * **examples and input_dim is the number of components in each input (e.g., the number of** - * **parameters)** - * **Y** (*A 2d numpy array of shape (n_train, n_coef), where n_train is the number of training*) - * **examples and n_coef is the number of coefficients in the latent dynamics.** - - ----------------------------------------------------------------------------------------------- - :returns: * *A list of trained GP regressor objects. If Y has k columns, then the returned list has k* - * *elements. It's i'th element holds a trained GP regressor object whose training inputs are the* - * *columns of X and whose corresponding target values are the elements of the i'th column of Y.* - - -.. py:function:: eval_gp(gp_list: list[sklearn.gaussian_process.GaussianProcessRegressor], param_grid: numpy.ndarray) -> tuple +.. 
py:function:: eval_gp(gp_dictionnary, param_grid) Computes the GPs predictive mean and standard deviation for points of the parameter space grid - ----------------------------------------------------------------------------------------------- - :Parameters: * **gp_list** (*a list of trained GP regressor objects. The number of elements in this list should*) - * **match the number of columns in param_grid. The i'th element of this list is a GP regressor** - * **object that predicts the i'th coefficient.** - * **param_grid** (*A 2d numpy.ndarray object of shape (number of parameter combination, number of*) - * **parameters). The i,j element of this array specifies the value of the j'th parameter in the** - * **i'th combination of parameters. We use this as the testing set for the GP evaluation.** - - ----------------------------------------------------------------------------------------------- - :returns: * *A two element tuple. Both are 2d numpy arrays of shape (number of parameter combinations,* - * *number of coefficients). The two arrays hold the predicted means and std's for each parameter* - * *at each training example, respectively.* - * *Thus, the i,j element of the first return variable holds the predicted mean of the j'th* - * *coefficient in the latent dynamics at the i'th training example. Likewise, the i,j element of* - * *the second return variable holds the standard deviation in the predicted distribution for the* - * *j'th coefficient in the latent dynamics at the i'th combination of parameter values.* - - -.. py:function:: sample_coefs(gp_list: list[sklearn.gaussian_process.GaussianProcessRegressor], param: numpy.ndarray, n_samples: int) - - Generates sets of ODE (SINDy) coefficients sampled from the predictive distribution for those - coefficients at the specified parameter value (parma). Specifically, for the k'th SINDy - coefficient, we draw n_samples samples of the predictive distribution for the k'th coefficient - when param is the parameter. - - +.. py:function:: sample_coefs(gp_dictionnary, param, n_samples) - ----------------------------------------------------------------------------------------------- - :Parameters: * **gp_list** (*a list of trained GP regressor objects. The number of elements in this list should*) - * **match the number of columns in param_grid. The i'th element of this list is a GP regressor** - * **object that predicts the i'th coefficient.** - * **param** (*A combination of parameter values. i.e., a single test example. We evaluate each GP in*) - * **the gp_list at this parameter value (getting a prediction for each coefficient).** - * **n_samples** (*Number of samples of the predicted latent dynamics used to build ensemble of fom*) - * **predictions. N_s in the paper.** + Generates sample sets of ODEs for one given parameter. + coef_samples is a list of length n_samples, where each terms is a matrix of SINDy coefficients sampled from the GP predictive + distributions - ----------------------------------------------------------------------------------------------- - :returns: * *A 2d numpy ndarray object called coef_samples. It has shape (n_samples, n_coef), where n_coef* - * *is the number of coefficients (length of gp_list). 
The i,j element of this list is the i'th* - * *sample of the j'th SINDy coefficient.* diff --git a/_sources/autoapi/lasdi/inputs/index.rst.txt b/_sources/autoapi/lasdi/inputs/index.rst.txt index 9d6839f..e9a51ca 100644 --- a/_sources/autoapi/lasdi/inputs/index.rst.txt +++ b/_sources/autoapi/lasdi/inputs/index.rst.txt @@ -20,71 +20,46 @@ Classes lasdi.inputs.InputParser +Functions +--------- + +.. autoapisummary:: + + lasdi.inputs.getDictFromList + + Module Contents --------------- .. py:data:: verbose - :type: bool :value: False -.. py:class:: InputParser(dict: InputParser.__init__.dict, name: str = '') - - A InputParser objects acts as a wrapper around a dictionary of settings. Thus, each setting is - a key and the corresponding value is the setting's value. Because one setting may itself be - a dictionary (we often group settings; each group has a name but several constituent settings), - the underlying dictionary is structured as a sequence of nested dictionaries. This class allows - the user to select a specific setting from that structure by specifying (via a list of strings) - where in that nested structure the desired setting lives. - +.. py:class:: InputParser(dict, name='') .. py:attribute:: dict_ - :type: dict :value: None .. py:attribute:: name - :type: str :value: '' - .. py:method:: getInput(keys: list, fallback=None, datatype=None) - - A InputParser object acts as a wrapper around a dictionary of settings. That is, self.dict_ - is structured as a nested family of dictionaries. Each setting corresponds to a key in - self.dict_. The setting's value is the corresponding value in self.dict_. In many cases, - a particular setting may be nested within others. That is, a setting's value may itself be - another dictionary housing various sub-settings. This function allows us to fetch a - specific setting from this nested structure. - - Specifically, we specify a list of strings. keys[0] should be a key in self.dict_ - If so, we set val = self.dict_[keys[0]]. If there are more keys, then val should be a - dictionary and keys[1] should be a key in this dictionary. In this case, we replace val - with val[key[1]] and so on. This continues until we have exhausted all keys. There is one - important exception: - - If at some point in the process, there are more keys but val is not a dictionary, or if - there are more keys and val is a dictionary but the next key is not a key in that - dictionary, then we return the fallback value. If the fallback value does not exist, - returns an error. + .. py:method:: getInput(keys, fallback=None, datatype=None) + Find the value corresponding to the list of keys. + If the specified keys do not exist, use the fallback value. + If the fallback value does not exist, returns an error. + If the datatype is specified, enforce the output value has the right datatype. - ------------------------------------------------------------------------------------------- - :Parameters: * **keys** (*A list of keys we want to fetch from self.dict. keys[0] should be a key in self.dict_*) - * **If so, we set val = self.dict_[keys[0]]. If there are more keys, then val should be a** - * **dictionary and keys[1] should be a key in this dictionary. In this case, we replace val** - * **with val[key[1]] and so on. This continues until we have exhausted all keys.** - * **fallback** (*A sort of default value. 
If at some point, val is not a dictionary (and there are*) - * **more keys) or val is a dictionary but the next key is not a valid key in that dictionary,** - * **then we return the fallback value.** - * **datatype** (*If not None, then we require that the final val has this datatype. If the final*) - * **val does not have the desired datatype, we raise an exception.** - ------------------------------------------------------------------------------------------- - :rtype: The final val value as outlined by the process described above. +.. py:function:: getDictFromList(list_, inputDict) + get a dict with {key: val} from a list of dicts + NOTE: it returns only the first item in the list, + even if the list has more than one dict with {key: val}. diff --git a/_sources/autoapi/lasdi/param/index.rst.txt b/_sources/autoapi/lasdi/param/index.rst.txt index 5dfda82..dfc9691 100644 --- a/_sources/autoapi/lasdi/param/index.rst.txt +++ b/_sources/autoapi/lasdi/param/index.rst.txt @@ -32,167 +32,64 @@ Functions Module Contents --------------- -.. py:function:: get_1dspace_from_list(param_dict: dict) -> tuple[int, numpy.ndarray] - - This function generates the parameter range (set of possible parameter values) for a parameter - that uses the list type test space. That is, "test_space_type" should be a key for the - parameter dictionary and the corresponding value should be "list". The parameter dictionary - should also have a "list" key whose value is a list of the possible parameter values. - - We parse this list and turn it into a numpy ndarray. - - - ----------------------------------------------------------------------------------------------- - :Parameters: * **param_dict** (*A dictionary specifying one of the parameters. We should fetch this from the*) - * **configuration yaml file. It must have a "list" key whose corresponding value is a list of** - * **floats.** - - ----------------------------------------------------------------------------------------------- - :returns: * **Two arguments** (*Nx and paramRange. paramRange is a 1d numpy ndarray (whose ith value is the*) - * *i'th element of param_dict["list"]). Nx is the length of paramRange.* - - -.. py:function:: create_uniform_1dspace(param_dict: dict) -> tuple[int, numpy.ndarray] - - This function generates the parameter range (set of possible parameter values) for a parameter - that uses the uniform type test space. That is, "test_space_type" should be a key for the - parameter dictionary and the corresponding value should be "uniform". The parameter dictionary - should also have the following keys: - "min" - "max" - "sample_size" - "log_scale" - "min" and "max" specify the minimum and maximum value of the parameter, respectively. - "sample_size" specifies the number of parameter values we generate. Finally, log_scale, if - true, specifies if we should use a uniform or logarithmic spacing between samples of the - parameter. - - The values corresponding to "min" and "max" should be floats while the values corresponding to - "sample_size" and "log_scale" should be an int and a bool, respectively. - - - ----------------------------------------------------------------------------------------------- - :Parameters: * **param_dict** (*A dictionary specifying one of the parameters. We should fetch this from the*) - * **configuration yaml file. 
It must have a "min", "max", "sample_size", and "log_scale"** - * **keys (see above).** - - ----------------------------------------------------------------------------------------------- - :returns: * **Two arguments** (*Nx and paramRange. paramRange is a 1d numpy ndarray (whose ith value is the*) - * *i'th possible value of the parameter. Thus, paramRange[0] = param_dict["min"] and* - * *paramRange[-1] = param_dict["max"]). Nx is the length of paramRange or, equivalently* - * *param_dict["sample_size"].* +.. py:function:: get_1dspace_from_list(config) +.. py:function:: create_uniform_1dspace(config) .. py:data:: getParam1DSpace - :type: dict[str, callable] -.. py:class:: ParameterSpace(config: dict) +.. py:class:: ParameterSpace(config) .. py:attribute:: param_list - :type: list[dict] :value: [] .. py:attribute:: param_name - :type: list[str] :value: [] .. py:attribute:: n_param - :type: int :value: 0 .. py:attribute:: train_space - :type: numpy.ndarray :value: None .. py:attribute:: test_space - :type: numpy.ndarray :value: None .. py:attribute:: n_init - :type: int :value: 0 .. py:attribute:: test_grid_sizes - :type: list[int] :value: [] .. py:attribute:: test_meshgrid - :type: tuple[numpy.ndarray] :value: None - .. py:method:: n_train() -> int - - Returns the number of combinations of parameters in the training set. - - - - .. py:method:: n_test() -> int - - Returns the number of combinations of parameters in the testing set. - - - - .. py:method:: createInitialTrainSpace(param_list: list[dict]) -> numpy.ndarray - - Sets up a grid of parameter values to train at. Note that we only use the min and max value - of each parameter when setting up this grid. - - - ------------------------------------------------------------------------------------------- - :Parameters: * **param_list** (*A list of parameter dictionaries. Each entry should be a dictionary with the*) - * **following keys** -- - - - name - - min - - max - - ------------------------------------------------------------------------------------------- - :returns: * *A 2d array of shape ((2)^k, k), where k is the number of parameters (k == len(param_list)).* - * *The i'th column is the flattened i'th mesh_grid array we when we create a mesh grid using* - * *the min and max value of each parameter as the argument. See "createHyperMeshGrid" for* - * *details.* - * *Specifically, we return exactly what "createHyperGridSpace" returns. See the doc-string* - * *for that function for further details.* + .. py:method:: n_train() + .. py:method:: n_test() - .. py:method:: createTestGridSpace(param_list: list[dict]) -> tuple[list[int], tuple[numpy.ndarray], numpy.ndarray] - This function sets up a grid of parameter values to test at. + .. py:method:: createInitialTrainSpace(param_list) - ------------------------------------------------------------------------------------------- - :Parameters: * **param_list** (*A list of parameter dictionaries. Each dictionary should either use the*) - * **"uniform" or "list" format. 
See create_uniform_1dspace and get_1dspace_from_list,** - * **respectively.** - - ------------------------------------------------------------------------------------------- - :returns: * *A three element tuple.* - * *The first is a list whose i'th element specifies the number of distinct values of the i'th* - * *parameter we consider (this is the length of the i'th element of "paramRanges" below).* - * *The second is a a tuple of k numpy ndarrays (where k = len(param_list)), the i'th one of* - * *which is a k-dimensional array with shape (N0, ... , N{k - 1}), where Ni =* - * *param_list[i].size whose i(0), ... , i(k - 1) element specifies the value of the i'th* - * *parameter in the i(0), ... , i(k - 1)'th unique combination of parameter values.* - * *The third one is a 2d array of parameter values. It has shape (M, k), where* - * *M = \prod_{i = 0}^{k - 1} param_list[i].size.* - + .. py:method:: createTestGridSpace(param_list) .. py:method:: getParameter(param_vector) @@ -202,110 +99,35 @@ Module Contents - .. py:method:: createHyperMeshGrid(param_ranges: list[numpy.ndarray]) -> tuple[numpy.ndarray] - - This function generates arrays of parameter values. Specifically, if there are k - parameters (param_ranges has k elements), then we return k k-d arrays, the i'th one of - which is a k-dimensional array whose i(0), ... , i(k - 1) element specifies the value of - the i'th variable in the i(0), ... , i(k - 1)'th unique combination of parameter values. - - - ------------------------------------------------------------------------------------------- - :Parameters: * **param_ranges** (*list of numpy 1d arrays, each corresponding to 1d parameter grid space. The*) - * **i'th element of this list should be a 2-element numpy.ndarray object housing the max and** - * **min value for the i'th parameter. The list size should equal the number of parameters.** - - ------------------------------------------------------------------------------------------- - :returns: * *the "paramSpaces" tuple. This is a tuple of numpy ndarray objects, the i'th one of which* - * *gives the grid of parameter values for the i'th parameter. Specifically, if there are* - * *k parameters and if param_range[i].size = Ni, then the j'th return array has shape* - * *(N0, ... , N{k - 1}) and the i(0), ... , i(k - 1) element of this array houses the i(j)'th* - * *value of the j'th parameter.* - * *Thus, if there are k parameters, the returned tuple has k elements, each one of* - * *which is an array of shape (N0, ... , N{k - 1}).* - - - - .. py:method:: createHyperGridSpace(mesh_grids: tuple[numpy.ndarray]) -> numpy.ndarray - - Flattens the mesh_grid numpy.ndarray objects returned by createHyperMeshGrid and combines - them into a single 2d array of shape (grid size, number of parameters) (see below). - - - ------------------------------------------------------------------------------------------- - :Parameters: * **mesh_grids** (*tuple of numpy nd arrays, corresponding to each parameter. This should ALWAYS*) - * **be the output of the "CreateHyperMeshGrid" function. See the return section of that** - * **function's docstring for details.** - - ------------------------------------------------------------------------------------------- - :returns: * *The param_grid. This is a 2d numpy.ndarray object of shape (grid size, number of* - * *parameters). If each element of mesh_grids is a numpy.ndarray object of shape (N(1), ... ,* - * *N(k)) (k parameters), then (grid size) = N(1)*N(2)*...*N(k) and (number of parameters) = k.* - - - - .. 
py:method:: appendTrainSpace(param: numpy.ndarray) -> None - - Adds a new parameter to self's train space attribute. - - - ------------------------------------------------------------------------------------------- - :Parameters: * **param** (*A 1d numpy ndarray object. It should have shape (n_param) and should hold a*) - * **parameter value that we want to add to the training set.** - - ------------------------------------------------------------------------------------------- - :rtype: Nothing! - - - - .. py:method:: export() -> dict - - This function packages the testing/training examples into a dictionary, which it returns. - + .. py:method:: createHyperMeshGrid(param_ranges) - ------------------------------------------------------------------------------------------- - :Parameters: * **None!** - * **-------------------------------------------------------------------------------------------** + param_ranges: list of numpy 1d arrays, each corresponding to 1d parameter grid space. + The list size is equal to the number of parameters. - :returns: * *A dictionary with 4 keys. Below is a list of the keys with a short description of each* - * *corresponding value.* -- train_space: self.train_space, a 2d array of shape (n_train, n_param) whose i,j element - holds the value of the j'th parameter in the i'th training case. + Output: paramSpaces + - tuple of numpy nd arrays, corresponding to each parameter. + Dimension of the array equals to the number of parameters - test_space: self.test_space, a 2d array of shape (n_test, n_param) whose i,j element - holds the value of the j'th parameter in the i'th testing case. - test_grid_sizes: A list whose i'th element specifies how many distinct parameter values - we use for the i'th parameter. - test_meshgrid: a tuple of n_param numpy.ndarray array objects whose i'th element is a - n_param-dimensional array whose i(1), i(2), ... , i(n_param) element holds the value of - the i'th parameter in the i(1), ... , i(n_param) combination of parameter values in the - testing test. + .. py:method:: createHyperGridSpace(mesh_grids) - n_init: The number of combinations of training parameters in the training set. + mesh_grids: tuple of numpy nd arrays, corresponding to each parameter. + Dimension of the array equals to the number of parameters + Output: param_grid + - numpy 2d array of size (grid size x number of parameters). + grid size is the size of a numpy nd array. - .. py:method:: load(dict_: dict) -> None - This function builds a parameter space object from a dictionary. This dictionary should - be one that was returned by th export method, or a loaded copy of a dictionary that was - returned by the export method. + .. py:method:: appendTrainSpace(param) - ------------------------------------------------------------------------------------------- - :Parameters: * **dict_** (*This should be a dictionary with the following keys:*) -- - - train_space - - test_space - - test_grid_sizes - - test_meshgrid - - n_init - * **This dictionary should have been returned by the export method. We use the values in this** - * **dictionary to set up self.** + .. py:method:: export() - ------------------------------------------------------------------------------------------- - :rtype: Nothing! + .. 
py:method:: load(dict_) diff --git a/_sources/autoapi/lasdi/timing/index.rst.txt b/_sources/autoapi/lasdi/timing/index.rst.txt index 29846d6..2f919f9 100644 --- a/_sources/autoapi/lasdi/timing/index.rst.txt +++ b/_sources/autoapi/lasdi/timing/index.rst.txt @@ -107,30 +107,7 @@ Module Contents .. py:method:: export() - Export the list of jobs and their number of calls and total time - into a dictionary. - - Note: - All jobs must be ended before calling this method. - - Returns: - :obj:`dict` that contains "names", "calls", and "times" as keys - - .. py:method:: load(dict_) - Load the list of jobs and their number of calls and total time - from a dictionary. - - Args: - `dict_` (:obj:`dict`): Dictionary that contains the list of jobs and their calls and times. - - Note: - :obj:`dict_['names']`, :obj:`dict_['calls']` and :obj:`dict_['times']` must have the same size. - - Returns: - Does not return a value - - diff --git a/_sources/index.rst.txt b/_sources/index.rst.txt index cb633a2..69cf38c 100644 --- a/_sources/index.rst.txt +++ b/_sources/index.rst.txt @@ -18,10 +18,7 @@ It also supports parametric interpolation of latent dynamics according to uncert References =================== -<<<<<<< HEAD * Bonneville, Christophe, Xiaolong He, April Tran, Jun Sur Park, William Fries, Daniel A. Messenger, Siu Wun Cheung et al. "A Comprehensive Review of Latent Space Dynamics Identification Algorithms for Intrusive and Non-Intrusive Reduced-Order-Modeling." arXiv preprint arXiv:2403.10748 (2024). -======= ->>>>>>> 9269e9cb2d85c993efecb52c726f2f1ff657d487 * Fries, William D., Xiaolong He, and Youngsoo Choi. "LaSDI: Parametric latent space dynamics identification." Computer Methods in Applied Mechanics and Engineering 399 (2022): 115436. * He, Xiaolong, Youngsoo Choi, William D. Fries, Jonathan L. Belof, and Jiun-Shyan Chen. "gLaSDI: Parametric physics-informed greedy latent space dynamics identification." Journal of Computational Physics 489 (2023): 112267. * Tran, April, Xiaolong He, Daniel A. Messenger, Youngsoo Choi, and David M. Bortz. "Weak-form latent space dynamics identification." Computer Methods in Applied Mechanics and Engineering 427 (2024): 116998. diff --git a/autoapi/lasdi/gp/index.html b/autoapi/lasdi/gp/index.html index 7a4efa3..ff468ef 100644 --- a/autoapi/lasdi/gp/index.html +++ b/autoapi/lasdi/gp/index.html @@ -100,14 +100,14 @@
 fit_gps(X, Y)
-Trains a GP for each column of Y. If Y has shape N x k, then we train k GP regressors.
+Trains each GP given the interpolation dataset.
 eval_gp(gp_dictionnary, param_grid)
 Computes the GPs predictive mean and standard deviation for points of the parameter space grid
 sample_coefs(gp_dictionnary, param, n_samples)
-Generates sets of ODE (SINDy) coefficients sampled from the predictive distribution for those coefficients.
+Generates sample sets of ODEs for one given parameter.
-Trains a GP for each column of Y. If Y has shape N x k, then we train k GP regressors. In this
-case, we assume that X has shape N x M. Thus, the input to the GP is in \mathbb{R}^M. For each
-k, we train a GP where the i'th row of X is the input and the i,k component of Y is the
-corresponding target. Thus, we return a list of k GP regressor objects, the k'th one of which
-makes predictions for the k'th coefficient in the latent dynamics.
-We assume each target coefficient is independent of the others.
-Parameters: X, a 2d numpy array of shape (n_train, input_dim), where n_train is the number of training
-examples and input_dim is the number of components in each input (e.g., the number of parameters);
-Y, a 2d numpy array of shape (n_train, n_coef), where n_coef is the number of coefficients in the
-latent dynamics.
-Returns: a list of trained GP regressor objects, one per column of Y; the i'th one is trained on the
-columns of X and the corresponding target values in the i'th column of Y.
+lasdi.gp.fit_gps(X, Y)
+Trains each GP given the interpolation dataset.
+X: (n_train, n_param) numpy 2d array
+Y: (n_train, n_coef) numpy 2d array
+We assume each target coefficient is independent of the others.
+gp_dictionnary is a dataset containing the trained GPs (as sklearn objects).
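A minimal sketch of the per-coefficient training described above, assuming scikit-learn with an RBF kernel (the kernel actually used by the package may differ)::

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    def fit_gps_sketch(X: np.ndarray, Y: np.ndarray) -> list:
        """Fit one GP per column of Y; the i'th row of X is paired with Y[i, k]."""
        if Y.ndim == 1:
            Y = Y.reshape(-1, 1)
        gp_list = []
        for k in range(Y.shape[1]):
            gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
            gp.fit(X, Y[:, k])
            gp_list.append(gp)
        return gp_list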
 Computes the GPs' predictive mean and standard deviation for points of the parameter space grid.
-Parameters: gp_list, a list of trained GP regressor objects, the i'th of which predicts the i'th
-coefficient; param_grid, a 2d numpy.ndarray of shape (number of parameter combinations, number of
-parameters) whose i,j element gives the value of the j'th parameter in the i'th combination, used as the
-testing set for the GP evaluation.
-Returns: a two-element tuple of 2d numpy arrays of shape (number of parameter combinations, number of
-coefficients) holding, respectively, the predicted mean and standard deviation of each coefficient in the
-latent dynamics at each combination of parameter values.
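A minimal sketch of evaluating the trained GPs on a parameter grid (same scikit-learn assumptions as above)::

    import numpy as np

    def eval_gp_sketch(gp_list, param_grid: np.ndarray):
        """Return (mean, std), each of shape (n_points, n_coef)."""
        mean = np.zeros((param_grid.shape[0], len(gp_list)))
        std = np.zeros_like(mean)
        for k, gp in enumerate(gp_list):
            mean[:, k], std[:, k] = gp.predict(param_grid, return_std=True)
        return mean, std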
-Generates sets of ODE (SINDy) coefficients sampled from the predictive distribution for those
-coefficients at the specified parameter value (param). Specifically, for the k'th SINDy coefficient,
-we draw n_samples samples of the predictive distribution for the k'th coefficient when param is the
-parameter.
-Parameters: gp_list, a list of trained GP regressor objects, the i'th of which predicts the i'th
-coefficient; param, a single combination of parameter values (i.e., one test example) at which we
-evaluate each GP; n_samples, the number of samples of the predicted latent dynamics used to build the
-ensemble of FOM predictions (N_s in the paper).
-Returns: a 2d numpy ndarray, coef_samples, of shape (n_samples, n_coef), where n_coef is the number of
-coefficients (the length of gp_list); its i,j element is the i'th sample of the j'th SINDy coefficient.
+lasdi.gp.sample_coefs(gp_dictionnary, param, n_samples)
+Generates sample sets of ODEs for one given parameter.
+coef_samples is a list of length n_samples, where each term is a matrix of SINDy coefficients sampled
+from the GP predictive distributions.
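A minimal sketch of drawing coefficient samples at a single parameter point, assuming each coefficient's predictive distribution is an independent normal::

    import numpy as np

    def sample_coefs_sketch(gp_list, param: np.ndarray, n_samples: int) -> np.ndarray:
        """Return an (n_samples, n_coef) array of sampled SINDy coefficients."""
        param = np.atleast_2d(param)
        coef_samples = np.zeros((n_samples, len(gp_list)))
        for k, gp in enumerate(gp_list):
            m, s = gp.predict(param, return_std=True)
            coef_samples[:, k] = np.random.normal(m[0], s[0], n_samples)
        return coef_samples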
 InputParser(dict, name='')
-An InputParser object acts as a wrapper around a dictionary of settings; each setting is a key and the
-corresponding value is the setting's value.
+getDictFromList(list_, inputDict)
+Get a dict with {key: val} from a list of dicts.
-An InputParser object acts as a wrapper around a dictionary of settings: each setting is a key and the
-corresponding value is the setting's value. Because one setting may itself be a dictionary (settings are
-often grouped; each group has a name but several constituent settings), the underlying dictionary is
-structured as a sequence of nested dictionaries. This class allows the user to select a specific setting
-from that structure by specifying, via a list of strings, where in the nested structure the desired
-setting lives.
-getInput(keys, fallback=None, datatype=None): fetches a setting from the nested dictionaries in
-self.dict_. keys[0] should be a key in self.dict_, in which case we set val = self.dict_[keys[0]]; if
-there are more keys, val should be a dictionary and keys[1] a key in it, so we replace val with
-val[keys[1]], and so on until all keys are exhausted. If at some point there are more keys but val is not
-a dictionary, or the next key is not a key in that dictionary, the fallback value is returned; if no
-fallback exists, an error is raised. If datatype is not None, the final val must have that datatype,
-otherwise an exception is raised. Returns the final val obtained by this process.
+getInput(keys, fallback=None, datatype=None)
+Find the value corresponding to the list of keys. If the specified keys do not exist, use the fallback
+value. If the fallback value does not exist, an error is raised. If the datatype is specified, enforce
+that the output value has the right datatype.
+getDictFromList(list_, inputDict)
+Get a dict with {key: val} from a list of dicts.
+NOTE: it returns only the first item in the list, even if the list has more than one dict with {key: val}.
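A minimal sketch of the two lookups described above; the function names and exact error types are illustrative assumptions, not the package's API::

    def get_input_sketch(settings: dict, keys: list, fallback=None, datatype=None):
        """Walk nested dictionaries by keys, falling back or raising as described."""
        val = settings
        for key in keys:
            if isinstance(val, dict) and key in val:
                val = val[key]
            elif fallback is not None:
                return fallback
            else:
                raise RuntimeError("Input keys %s are not found." % keys)
        if datatype is not None and not isinstance(val, datatype):
            raise TypeError("Input %s does not have datatype %s." % (keys, datatype))
        return val

    def get_dict_from_list_sketch(list_, input_dict):
        """Return the first dict in list_ that contains every {key: val} in input_dict."""
        for d in list_:
            if all(k in d and d[k] == v for k, v in input_dict.items()):
                return d
        return None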
 get_1dspace_from_list(config)
-This function generates the parameter range (set of possible parameter values) for a parameter that uses
-the list type test space.
 create_uniform_1dspace(config)
-This function generates the parameter range (set of possible parameter values) for a parameter that uses
-the uniform type test space.
-get_1dspace_from_list(param_dict) generates the parameter range (set of possible parameter values) for a
-parameter that uses the list type test space: "test_space_type" should be a key of the parameter
-dictionary with value "list", and the dictionary should also have a "list" key whose value is a list of
-the possible parameter values. We parse this list and turn it into a numpy ndarray.
-Returns: Nx and paramRange, where paramRange is a 1d numpy ndarray whose i'th value is the i'th element
-of param_dict["list"], and Nx is the length of paramRange.
-create_uniform_1dspace(param_dict) generates the parameter range for a parameter that uses the uniform
-type test space: "test_space_type" should have the value "uniform", and the dictionary should also have
-"min", "max", "sample_size", and "log_scale" keys. "min" and "max" (floats) specify the minimum and
-maximum value of the parameter, "sample_size" (an int) specifies the number of parameter values we
-generate, and "log_scale" (a bool) selects uniform or logarithmic spacing between samples.
-Returns: Nx and paramRange, where paramRange is a 1d numpy ndarray with paramRange[0] = param_dict["min"]
-and paramRange[-1] = param_dict["max"], and Nx is the length of paramRange, i.e. param_dict["sample_size"].
+get_1dspace_from_list(config)
+create_uniform_1dspace(config)
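A minimal sketch of the two 1d test-space builders described above, assuming numpy and the documented keys::

    import numpy as np

    def get_1dspace_from_list_sketch(param_dict):
        param_range = np.array(param_dict["list"])
        return param_range.size, param_range

    def create_uniform_1dspace_sketch(param_dict):
        lo, hi, n = param_dict["min"], param_dict["max"], param_dict["sample_size"]
        if param_dict["log_scale"]:
            param_range = np.exp(np.linspace(np.log(lo), np.log(hi), n))
        else:
            param_range = np.linspace(lo, hi, n)
        return n, param_range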
-n_train(): returns the number of combinations of parameters in the training set.
-n_test(): returns the number of combinations of parameters in the testing set.
-createInitialTrainSpace(param_list): sets up a grid of parameter values to train at, using only the min
-and max value of each parameter. Each entry of param_list is a dictionary with "name", "min", and "max"
-keys. Returns a 2d array of shape (2^k, k), where k = len(param_list); the i'th column is the flattened
-i'th mesh grid array obtained from the min and max value of each parameter (exactly what
-createHyperGridSpace returns; see createHyperMeshGrid for details).
-createTestGridSpace(param_list): sets up a grid of parameter values to test at. Each dictionary in
-param_list should use either the "uniform" or the "list" format (see create_uniform_1dspace and
-get_1dspace_from_list, respectively). Returns a three-element tuple: a list whose i'th element is the
-number of distinct values of the i'th parameter; a tuple of k numpy ndarrays (k = len(param_list)), each
-of shape (N0, ..., N{k - 1}) with Ni = param_list[i].size, whose i(0), ..., i(k - 1) element gives the
-value of the corresponding parameter in that combination of parameter values; and a 2d array of parameter
-values of shape (M, k), where M = prod_{i = 0}^{k - 1} param_list[i].size.
+n_train()
+n_test()
+createInitialTrainSpace(param_list)
+createTestGridSpace(param_list)
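A minimal sketch of the initial training grid: a mesh over the min/max of each parameter, flattened into a (2**k, k) array (assumes numpy)::

    import numpy as np

    def create_initial_train_space_sketch(param_list):
        """Return the 2**k corner combinations of k parameters' [min, max] ranges."""
        param_ranges = [np.array([p["min"], p["max"]]) for p in param_list]
        mesh_grids = np.meshgrid(*param_ranges, indexing="ij")
        return np.vstack([m.flatten() for m in mesh_grids]).T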
-createHyperMeshGrid(param_ranges): generates arrays of parameter values. If there are k parameters
-(param_ranges has k elements), we return k k-dimensional arrays; if param_ranges[i].size = Ni, the j'th
-returned array has shape (N0, ..., N{k - 1}) and its i(0), ..., i(k - 1) element holds the i(j)'th value
-of the j'th parameter. Thus the returned tuple has k elements, each an array of shape (N0, ..., N{k - 1}).
-createHyperGridSpace(mesh_grids): flattens the mesh grid arrays returned by createHyperMeshGrid and
-combines them into a single 2d array of shape (grid size, number of parameters); if each element of
-mesh_grids has shape (N(1), ..., N(k)), then grid size = N(1)*N(2)*...*N(k) and the number of parameters
-is k. mesh_grids should ALWAYS be the output of createHyperMeshGrid.
+createHyperMeshGrid(param_ranges)
+param_ranges: list of numpy 1d arrays, each corresponding to a 1d parameter grid space. The list size
+equals the number of parameters.
+Output: paramSpaces, a tuple of numpy nd arrays, one per parameter; the dimension of each array equals
+the number of parameters.
+createHyperGridSpace(mesh_grids)
+mesh_grids: tuple of numpy nd arrays, one per parameter; the dimension of each array equals the number
+of parameters.
+Output: param_grid, a numpy 2d array of size (grid size x number of parameters), where grid size is the
+size of one of the nd arrays.
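A minimal sketch of the two grid helpers and the shapes involved, assuming numpy; for example, two parameters with 3 and 4 test values give a (12, 2) grid::

    import numpy as np

    def create_hyper_mesh_grid_sketch(param_ranges):
        # k arrays, each of shape (N0, ..., N{k-1})
        return np.meshgrid(*param_ranges, indexing="ij")

    def create_hyper_grid_space_sketch(mesh_grids):
        # (N0 * ... * N{k-1}, k) array of parameter combinations
        return np.vstack([m.flatten() for m in mesh_grids]).T

    grids = create_hyper_mesh_grid_sketch([np.linspace(0.0, 1.0, 3), np.linspace(5.0, 8.0, 4)])
    print(create_hyper_grid_space_sketch(grids).shape)  # (12, 2)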
-appendTrainSpace(param): adds a new parameter combination to self's train space attribute. param is a 1d
-numpy ndarray of shape (n_param,) holding the parameter value to add to the training set. Returns nothing.
-export(): packages the testing/training examples into a dictionary, which it returns. The dictionary has
-the following keys: train_space (a 2d array of shape (n_train, n_param) whose i,j element holds the value
-of the j'th parameter in the i'th training case); test_space (a 2d array of shape (n_test, n_param) whose
-i,j element holds the value of the j'th parameter in the i'th testing case); test_grid_sizes (a list whose
-i'th element specifies how many distinct parameter values we use for the i'th parameter); test_meshgrid (a
-tuple of n_param numpy.ndarray objects whose i'th element is an n_param-dimensional array whose
-i(1), ..., i(n_param) element holds the value of the i'th parameter in the i(1), ..., i(n_param)
-combination of parameter values in the testing set); and n_init (the number of combinations of training
-parameters in the training set).
-load(dict_): builds a parameter space object from a dictionary with the keys train_space, test_space,
-test_grid_sizes, test_meshgrid, and n_init. This dictionary should have been returned by the export
-method (or be a loaded copy of one); its values are used to set up self. Returns nothing.
+appendTrainSpace(param)
+export()
+load(dict_)
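A minimal sketch of the export/load round trip implied by the keys above; param_space stands in for a ParameterSpace-like object with those attributes::

    def export_sketch(param_space) -> dict:
        return {"train_space": param_space.train_space,
                "test_space": param_space.test_space,
                "test_grid_sizes": param_space.test_grid_sizes,
                "test_meshgrid": param_space.test_meshgrid,
                "n_init": param_space.n_init}

    def load_sketch(param_space, dict_: dict) -> None:
        for key in ("train_space", "test_space", "test_grid_sizes", "test_meshgrid", "n_init"):
            setattr(param_space, key, dict_[key])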
-export(): exports the list of jobs and their number of calls and total time into a dictionary.
-Note: all jobs must be ended before calling this method.
-Returns: a dict that contains "names", "calls", and "times" as keys.
-load(dict_): loads the list of jobs and their number of calls and total time from a dictionary.
-Args: dict_ (dict): dictionary that contains the list of jobs and their calls and times.
-Note: dict_['names'], dict_['calls'], and dict_['times'] must have the same size.
-Returns: does not return a value.
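A minimal sketch of the exported timing dictionary and the size check that load is documented to require; the internal bookkeeping (per-job calls and times dicts) is an assumption, not taken from the package::

    def timer_export_sketch(calls: dict, times: dict) -> dict:
        names = list(calls.keys())
        return {"names": names,
                "calls": [calls[n] for n in names],
                "times": [times[n] for n in names]}

    def timer_load_sketch(dict_) -> tuple:
        assert len(dict_["names"]) == len(dict_["calls"]) == len(dict_["times"])
        calls = dict(zip(dict_["names"], dict_["calls"]))
        times = dict(zip(dict_["names"], dict_["times"]))
        return calls, times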
-<<<<<<< HEAD
-* Bonneville, Christophe, Xiaolong He, April Tran, Jun Sur Park, William Fries, Daniel A. Messenger, Siu Wun Cheung et al. “A Comprehensive Review of Latent Space Dynamics Identification Algorithms for Intrusive and Non-Intrusive Reduced-Order-Modeling.” arXiv preprint arXiv:2403.10748 (2024).
-=======
->>>>>>> 9269e9cb2d85c993efecb52c726f2f1ff657d487
-* Fries, William D., Xiaolong He, and Youngsoo Choi. “LaSDI: Parametric latent space dynamics identification.” Computer Methods in Applied Mechanics and Engineering 399 (2022): 115436.
-* He, Xiaolong, Youngsoo Choi, William D. Fries, Jonathan L. Belof, and Jiun-Shyan Chen. “gLaSDI: Parametric physics-informed greedy latent space dynamics identification.” Journal of Computational Physics 489 (2023): 112267.
-* Tran, April, Xiaolong He, Daniel A. Messenger, Youngsoo Choi, and David M. Bortz. “Weak-form latent space dynamics identification.” Computer Methods in Applied Mechanics and Engineering 427 (2024): 116998.
-* Park, Jun Sur Richard, Siu Wun Cheung, Youngsoo Choi, and Yeonjong Shin. “tLaSDI: Thermodynamics-informed latent space dynamics identification.” arXiv preprint arXiv:2403.05848 (2024).
-* Bonneville, Christophe, Youngsoo Choi, Debojyoti Ghosh, and Jonathan L. Belof. “Gplasdi: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder.” Computer Methods in Applied Mechanics and Engineering 418 (2024): 116535.
-* He, Xiaolong, April Tran, David M. Bortz, and Youngsoo Choi. “Physics-informed active learning with simultaneous weak-form latent space dynamics identification.” arXiv preprint arXiv:2407.00337 (2024).
+Bonneville, Christophe, Xiaolong He, April Tran, Jun Sur Park, William Fries, Daniel A. Messenger, Siu Wun Cheung et al. “A Comprehensive Review of Latent Space Dynamics Identification Algorithms for Intrusive and Non-Intrusive Reduced-Order-Modeling.” arXiv preprint arXiv:2403.10748 (2024).
+Fries, William D., Xiaolong He, and Youngsoo Choi. “LaSDI: Parametric latent space dynamics identification.” Computer Methods in Applied Mechanics and Engineering 399 (2022): 115436.
+He, Xiaolong, Youngsoo Choi, William D. Fries, Jonathan L. Belof, and Jiun-Shyan Chen. “gLaSDI: Parametric physics-informed greedy latent space dynamics identification.” Journal of Computational Physics 489 (2023): 112267.
+Tran, April, Xiaolong He, Daniel A. Messenger, Youngsoo Choi, and David M. Bortz. “Weak-form latent space dynamics identification.” Computer Methods in Applied Mechanics and Engineering 427 (2024): 116998.
+Park, Jun Sur Richard, Siu Wun Cheung, Youngsoo Choi, and Yeonjong Shin. “tLaSDI: Thermodynamics-informed latent space dynamics identification.” arXiv preprint arXiv:2403.05848 (2024).
+Bonneville, Christophe, Youngsoo Choi, Debojyoti Ghosh, and Jonathan L. Belof. “Gplasdi: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder.” Computer Methods in Applied Mechanics and Engineering 418 (2024): 116535.
+He, Xiaolong, April Tran, David M. Bortz, and Youngsoo Choi. “Physics-informed active learning with simultaneous weak-form latent space dynamics identification.” arXiv preprint arXiv:2407.00337 (2024).