[BUG] Saving and loading LITETimeClassifier #2436
@hadifawaz1999 any idea? I thought we covered saving/loading in testing.
Hello, thanks for raising the issue! It is actually done like that on purpose (for now): `LITETimeClassifier` only saves its individual LITE models as Keras files, so there is no single ensemble model file to load. It is not actually an issue, just the way the infrastructure is made. The solution is to load each individual LITE model on its own:

```python
from aeon.classification.deep_learning import LITETimeClassifier

file_path = './'
best_file_name = "best_model"

cls = LITETimeClassifier(save_best_model=True,
                         best_file_name=best_file_name,
                         file_path=file_path,
                         n_classifiers=5)
cls.fit(X, y)
```

This is for training; for loading and prediction it would be:

```python
import os

import numpy as np

from aeon.classification.deep_learning import IndividualLITEClassifier

preds_probas = []
for i in range(5):
    cls = IndividualLITEClassifier()
    cls.load_model(model_path=os.path.join(file_path, best_file_name + str(i)),
                   classes=np.unique(y))
    preds_probas.append(cls.predict_proba(X))

# Soft vote: average the probabilities of the five models, then argmax.
ensemble_prediction = np.argmax(np.mean(preds_probas, axis=0), axis=1)
```
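As a side note, the soft-voting step used above (averaging the `predict_proba` outputs of the individual models, then taking the argmax per sample) can be illustrated with plain NumPy, independent of aeon. The probability values below are made up purely for illustration:

```python
import numpy as np

# Hypothetical predict_proba outputs from 3 individual models,
# each for 2 samples with 2 classes (values invented for illustration).
preds_probas = [
    np.array([[0.9, 0.1], [0.4, 0.6]]),
    np.array([[0.8, 0.2], [0.3, 0.7]]),
    np.array([[0.7, 0.3], [0.6, 0.4]]),
]

# Average the probabilities across models: shape (n_samples, n_classes).
mean_probas = np.mean(preds_probas, axis=0)

# Pick the class with the highest averaged probability per sample.
ensemble_prediction = np.argmax(mean_probas, axis=1)
print(ensemble_prediction)  # [0 1]
```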
Thanks for your replies. I separated loading from predicting like so:

```python
def _load_classifiers() -> list[IndividualLITEClassifier]:
    clfs = []
    for i in range(5):
        clf = IndividualLITEClassifier()
        clf.load_model(
            model_path=path_to_models / ("best_model" + str(i) + ".keras"),
            classes=[0, 1],
        )
        clfs.append(clf)
    return clfs


def _predict(clfs: list[IndividualLITEClassifier], X: NDArray) -> NDArray:
    preds_probas = [clf.predict_proba(X) for clf in clfs]
    return argmax(mean(preds_probas, axis=0), axis=1)


class Classifier:
    def __init__(self):
        self.lite_classifiers = _load_classifiers()

    def predict(self, X: NDArray) -> NDArray:
        return _predict(self.lite_classifiers, X)
```

In my opinion, this could be more user-friendly. Maybe we can implement a class method for `LITETimeClassifier`?
It's a very interesting point; I had not thought about it before you raised the issue. It would be great to have this functionality in both `LITETimeClassifier` and `InceptionTimeClassifier`: a method that takes a list of model paths and loads them all into the internal list. Are you interested in contributing such functionality?
If I find the time, I can work on this. Maybe on a lonely Friday :-). What should the interface look like? Should we implement `load_model()` for `LITETimeClassifier` (and `InceptionTimeClassifier`)?
Yes, basically `load_model` should be re-implemented in these two classes so that it takes a list of model paths plus the `classes` parameter, instantiates the individual classifiers and calls their `load_model` function, fills them into the `self.classifiers_` list, and sets `is_fitted` to True, just as `load_model` does in `BaseDeepClassifier`. There is no issue with timing! Do it in your free time, and we're happy to help if needed.
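For concreteness, the re-implementation described here can be sketched with stand-in classes. This is not aeon's actual code, and the stub names (`IndividualStub`, `EnsembleStub`) are invented for illustration; the point is only the shape of the proposed interface: an ensemble-level `load_model` that takes a list of paths, delegates to the individual classifiers, fills `classifiers_`, and sets `is_fitted`:

```python
class IndividualStub:
    """Stands in for IndividualLITEClassifier (illustration only)."""

    def load_model(self, model_path, classes):
        # The real class would load a Keras model from model_path.
        self.model_path = model_path
        self.classes_ = classes


class EnsembleStub:
    """Stands in for LITETimeClassifier / InceptionTimeClassifier."""

    def __init__(self):
        self.classifiers_ = []
        self.is_fitted = False

    def load_model(self, model_paths, classes):
        # One individual classifier per saved model file.
        for path in model_paths:
            clf = IndividualStub()
            clf.load_model(model_path=path, classes=classes)
            self.classifiers_.append(clf)
        # Mark the ensemble usable without calling fit().
        self.is_fitted = True


ens = EnsembleStub()
ens.load_model(["best_model0.keras", "best_model1.keras"], classes=[0, 1])
print(ens.is_fitted, len(ens.classifiers_))  # True 2
```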
Describe the issue

Hi! I am trying to save and load a trained `LITETimeClassifier` object. The aeon documentation states that we can do that using the `save_best_model=True` and `file_path=...` parameters. However, for the `LITETimeClassifier`, only the sub-models (the individual LITE models) are stored as Keras objects, and so `BaseDeepClassifier.load_model()` cannot be used.

Then I tried direct instantiation of a `LITETimeClassifier` using […], but upon prediction I get an error that […].

What is the preferred way of saving and loading trained `LITETimeClassifier` objects? I also tried pickling, but then I get […].

Suggest a potential alternative/fix

No response

Additional context

No response