
[BUG] Saving and loading LITETimeClassifier #2436

Open
dschrempf opened this issue Dec 9, 2024 · 6 comments
Assignees
Labels
bug Something isn't working classification Classification package deep learning Deep learning related

Comments

@dschrempf
Contributor

dschrempf commented Dec 9, 2024

Describe the issue

Hi! I am trying to save and load a trained LITETimeClassifier object.

The aeon documentation states that we can do that using the save_best_model=True and file_path=... parameters. However, for the LITETimeClassifier, only the sub-models (individual LITE models) are stored as Keras objects, and so BaseDeepClassifier.load_model() cannot be used.

Then, I tried direct instantiation of a LITETimeClassifier using

LITETimeClassifier(file_path=str(fp))

but upon prediction, I get an error that

"This instance of LITETimeClassifier has not been fitted yet; please call fit first.".

What is the preferred way of saving and loading trained LITETimeClassifier objects?

I also tried pickling, but then I get

AttributeError: Can't pickle local object 'ReduceLROnPlateau._reset.<locals>.<lambda>'                                                                                                         
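For context, this error is not specific to aeon: the standard pickle module cannot serialize functions defined inside another function, which is what the lambda inside Keras's ReduceLROnPlateau is. A minimal reproduction (the helper name is illustrative):

```python
import pickle

def make_callback_state():
    # A function defined locally, analogous to
    # ReduceLROnPlateau._reset.<locals>.<lambda> in Keras.
    return lambda: 0.0

try:
    pickle.dumps(make_callback_state())
except (AttributeError, pickle.PicklingError) as e:
    # e.g. "Can't pickle local object 'make_callback_state.<locals>.<lambda>'"
    print(e)
```

Any object holding a reference to such a local function (like a fitted model carrying its callbacks) fails to pickle the same way.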

Suggest a potential alternative/fix

No response

Additional context

No response

@MatthewMiddlehurst MatthewMiddlehurst changed the title Saving and loading LITETimeClassifier [BUG] Saving and loading LITETimeClassifier Dec 9, 2024
@MatthewMiddlehurst MatthewMiddlehurst added bug Something isn't working classification Classification package deep learning Deep learning related labels Dec 9, 2024
@MatthewMiddlehurst
Member

@hadifawaz1999 any idea? I thought we covered saving/loading in testing

@hadifawaz1999
Member

Hello, thanks for raising the issue! It is actually done this way on purpose (for now). LITETimeClassifier is an ensemble of several IndividualLITEClassifier models, so if you choose to save the models, it will by default save five Keras files, one per individual LITE model. The load_model functionality for deep learning is implemented in BaseDeepClassifier; the one in BaseClassifier cannot be used for deep learning models.

LITETimeClassifier uses the load_model of BaseClassifier because it does not inherit from BaseDeepClassifier, as it is not itself a deep learning model. IndividualLITEClassifier, on the other hand, does have the deep load_model function.

So it is not actually a bug, just the way the infrastructure is designed. The solution is to load each individual LITE model on its own using IndividualLITEClassifier and do the ensembling yourself, along these lines:
(Assuming the data are stored in two variables X and y.)

from aeon.classification.deep_learning import LITETimeClassifier

file_path = "./"
best_file_name = "best_model"

cls = LITETimeClassifier(
    save_best_model=True,
    best_file_name=best_file_name,
    file_path=file_path,
    n_classifiers=5,
)
cls.fit(X, y)

That covers training; loading and prediction would then be:

import os

import numpy as np

from aeon.classification.deep_learning import IndividualLITEClassifier

preds_probas = []
for i in range(5):
    cls = IndividualLITEClassifier()
    cls.load_model(
        model_path=os.path.join(file_path, best_file_name + str(i)),
        classes=np.unique(y),
    )
    preds_probas.append(cls.predict_proba(X))

ensemble_prediction = np.argmax(np.mean(preds_probas, axis=0), axis=1)
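The last line is soft voting: average the per-model class probabilities, then take the argmax over classes. With illustrative numbers (two models, two samples, two classes):

```python
import numpy as np

# Each entry is one model's predict_proba output: rows are samples,
# columns are class probabilities.
preds_probas = [
    np.array([[0.9, 0.1], [0.2, 0.8]]),  # model 0
    np.array([[0.6, 0.4], [0.4, 0.6]]),  # model 1
]
mean_probas = np.mean(preds_probas, axis=0)  # [[0.75, 0.25], [0.30, 0.70]]
ensemble = np.argmax(mean_probas, axis=1)    # winning class per sample
print(ensemble)  # [0 1]
```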

@dschrempf
Contributor Author

dschrempf commented Dec 10, 2024

Thanks for your replies. I separated loading from predicting like so:

from numpy import argmax, mean
from numpy.typing import NDArray

from aeon.classification.deep_learning import IndividualLITEClassifier

# path_to_models is a pathlib.Path to the directory holding the saved
# Keras files (defined elsewhere).
def _load_classifiers() -> list[IndividualLITEClassifier]:
    clfs = []
    for i in range(5):
        clf = IndividualLITEClassifier()
        clf.load_model(
            model_path=path_to_models / ("best_model" + str(i) + ".keras"),
            classes=[0, 1],
        )
        clfs.append(clf)
    return clfs

def _predict(clfs: list[IndividualLITEClassifier], X: NDArray) -> NDArray:
    preds_probas = [clf.predict_proba(X) for clf in clfs]
    return argmax(mean(preds_probas, axis=0), axis=1)

class Classifier:
    def __init__(self):
        self.lite_classifiers = _load_classifiers()

    def predict(self, X: NDArray) -> NDArray:
        return _predict(self.lite_classifiers, X)

In my opinion, this should be made more user-friendly. Maybe we could implement a class method on LITETimeClassifier? At the very least, we should document how this can be done.

@hadifawaz1999
Member


It's a very interesting point; I had not thought about it before you raised the issue. It would be great to have this functionality in both LITETimeClassifier and InceptionTimeClassifier: a function that takes a list of model paths and loads them all into the internal list. Are you interested in contributing such functionality?

@dschrempf
Contributor Author

If I find the time, I can work on this. Maybe on a lonely Friday :-). What should the interface look like? Should we implement load_model() for LITETimeClassifier (and InceptionTimeClassifier)?

@hadifawaz1999
Member


Yes, basically load_model should be re-implemented in these two classes: it takes a list of model paths and the classes parameter, uses the individual classes' load_model function to fill the self.classifiers_ list, and sets is_fitted to True, just as load_model does in BaseDeepClassifier.
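A self-contained sketch of that design, with stub classes standing in for aeon's real ones so the control flow runs on its own (all names and signatures here are illustrative, not aeon's actual API):

```python
import numpy as np

class _StubIndividual:
    """Stand-in for IndividualLITEClassifier."""

    def load_model(self, model_path, classes):
        # The real implementation would load a Keras file from model_path;
        # here we only record the classes and mark the model as fitted.
        self.classes_ = np.asarray(classes)
        self.is_fitted = True

    def predict_proba(self, X):
        # Dummy uniform probabilities, one row per sample.
        n = len(self.classes_)
        return np.full((X.shape[0], n), 1.0 / n)

class _StubEnsemble:
    """Stand-in for LITETimeClassifier with the proposed load_model."""

    def load_model(self, model_paths, classes):
        # Fill self.classifiers_ from the saved individual models and
        # mark the ensemble as fitted, mirroring BaseDeepClassifier.
        self.classifiers_ = []
        for path in model_paths:
            clf = _StubIndividual()
            clf.load_model(model_path=path, classes=classes)
            self.classifiers_.append(clf)
        self.is_fitted = True

    def predict_proba(self, X):
        # Soft voting over the loaded individual models.
        return np.mean([c.predict_proba(X) for c in self.classifiers_], axis=0)

ens = _StubEnsemble()
ens.load_model(["best_model0.keras", "best_model1.keras"], classes=[0, 1])
print(ens.is_fitted, len(ens.classifiers_))  # True 2
```

The real version would live in LITETimeClassifier and InceptionTimeClassifier and delegate to the individual classifiers' existing load_model.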

No rush on timing! Do it in your free time, and we're happy to help if needed.

Projects
None yet
Development

No branches or pull requests

3 participants