Hello, first of all, thank you for your work! I am using KANs to model environmental data and have high hopes.
When training and pruning my KAN, I would like to produce metrics, and for this I forward-pass the test set of my data. While doing this, I noticed that the output of auto_symbolic() changes depending on the last forward pass, even though everything else stays the same. Why is this? Do forward passes change the model? If so, why?
Is there a resource I can consult to better understand the code that derives the symbolic formulas?
The behavior is reproducible with hellokan.ipynb from the pykan repo when the train and test data are replaced with my dataset. Depending on whether a prediction is made with a random tensor (model(torch.rand(...))) before calling model.auto_symbolic(), different formulas are produced. I tried setting the model to evaluation mode and wrapping the call in torch.no_grad() before passing the random tensor, but it had no effect on the behavior.
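My current guess at the mechanism, sketched as a toy class (this is an assumption on my part, not pykan's actual code): the forward pass caches the activations it sees, and auto_symbolic() fits against that cache rather than against a fixed dataset, so whichever input was forwarded last determines the fit. That would also explain why eval mode and torch.no_grad() make no difference, since the caching happens inside forward() regardless of gradient tracking:

```python
import random

class ToyModel:
    """Toy stand-in for a KAN that caches the activations of the
    most recent forward pass (a hypothetical simplification)."""

    def __init__(self):
        self.acts = None  # cache overwritten by every forward pass

    def forward(self, xs):
        # Record the inputs seen in this pass, then compute outputs.
        self.acts = list(xs)
        return [2.0 * x for x in xs]

    def auto_symbolic(self):
        # "Fit" a symbolic summary from the *cached* activations,
        # not from any stored training or test set.
        return sum(self.acts) / len(self.acts)

model = ToyModel()
test_set = [1.0, 2.0, 3.0]

model.forward(test_set)
fit_from_test = model.auto_symbolic()    # fit reflects the test set

model.forward([random.random() for _ in range(3)])
fit_from_random = model.auto_symbolic()  # fit now reflects the random input

print(fit_from_test, fit_from_random)
```

If pykan works along these lines, forwarding the test set immediately before calling model.auto_symbolic() should make the formulas reproducible, but I would appreciate confirmation.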