Why does a direct run of Julia code give a different output than the loss value reported by model.equations_? #377
-
Another example: model.equations_.iloc[5, :] is:
The output of the Julia Helper object jl's code is:
where the Julia Helper object jl's code (which evaluates the Float32 function 1.65964429721289*x0 + 0.38505693) is:
Edit: My Julia Helper object also prints out an accurate loss value of:
when I use
instead of the simplified format
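One plausible source of small discrepancies here: the equation string shown in model.equations_ rounds the Float32 constants for display, so re-evaluating the printed/simplified string can give a slightly different loss than the full-precision tree. A minimal Python sketch of the effect (the data is synthetic and only illustrates the rounding, not the actual values from this thread):

```python
import numpy as np

# Synthetic data standing in for the real X and y passed to model.fit(X, y).
rng = np.random.default_rng(0)
x0 = rng.uniform(0.0, 10.0, 1000).astype(np.float32)
y = 1.65964429721289 * x0 + 0.38505693 + rng.normal(0.0, 0.1, 1000).astype(np.float32)

def mse(pred, target):
    return float(np.mean((pred - target) ** 2))

# Loss from the full-precision constants stored in the expression tree:
loss_full = mse(1.65964429721289 * x0 + 0.38505693, y)

# Loss from the same constants as rounded in a printed/simplified string:
loss_rounded = mse(1.6596 * x0 + 0.3851, y)

print(loss_full, loss_rounded)  # close, but not identical
```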
-
Hi @unary-code, could you summarize the question/issue? I don't quite follow. Thanks,
-
Could you explain more? Do you mean the Julia helper object "jl"?
-
Interesting: I believe that when I ran:
I got this error:
and this was printed out:
However, I changed "global integral = my_tuple[1]" to "integral = my_tuple[1]" in my model's objective function eval_loss.

My thoughts on why the model's loss value was inaccurate: since the PySR package creates multiple threads that call eval_loss at the same time, the value of the variable "integral" that "predictions" was being divided by was actually coming from a different call to eval_loss, with a different "tree" parameter and therefore a different integral. When I wrote "global integral = my_tuple[1]", this actually changed the value of integral for a different, concurrent call to eval_loss with a different "tree".

Question: Is there any way to make the call to model.fit(X, y) create only one thread instead of multiple threads? In other words, is there a way to make the call to model.fit(X, y) run completely sequentially instead of in parallel? (See the sketch after this comment.)

Question: I passed "progress=False" when I constructed the model, but when I ran model.fit(X, y), it still showed the progress bar. Any reason this happens?
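A minimal sketch of a serial configuration, assuming a recent PySR version (in older releases the loss_function parameter was named full_objective; the Julia objective below is a hypothetical stand-in for the real eval_loss, not the code from this thread):

```python
from pysr import PySRRegressor

# Hypothetical custom objective. Note that `integral` is a plain local
# variable, not a `global`, so concurrent calls to eval_loss cannot
# overwrite each other's value.
objective = """
function eval_loss(tree, dataset::Dataset{T,L}, options)::L where {T,L}
    predictions, completed = eval_tree_array(tree, dataset.X, options)
    !completed && return L(Inf)
    integral = sum(predictions) / length(predictions)  # local to this call
    scaled = predictions ./ integral
    return sum(abs2, scaled .- dataset.y) / dataset.n
end
"""

model = PySRRegressor(
    loss_function=objective,  # named `full_objective` in older PySR versions
    procs=0,                  # no worker processes
    multithreading=False,     # together with procs=0, the search runs serially
    progress=False,           # should suppress the progress bar
)
```

Even in serial mode, relying on a global inside eval_loss is fragile, since the function is called many times with different trees; keeping integral local ties each value to the call that computed it.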
-
A direct run of the objective function by the Julia Helper gave this answer:
when given this code:
However, the exact same code (this time, I made sure to copy the entire objective function into the variable named "objective"), when run by model.fit(X, y), results in a model.equations_ that reports a loss value of 0.084261 for the function "((x0 + x0) / (0.19358382 * x0))".
model.equations_ does not report the lowest loss value found across the many calls to eval_loss, right?
Why would this difference happen between the loss value printed by the Julia helper "jl" object and the loss value reported by model.equations_?
Feel free to use https://www.diffchecker.com/ to quickly see the very small differences between the Julia helper code and the objective function used by the "model" object.
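To narrow down where the two numbers diverge, it may help to re-evaluate the selected equation from Python and compare against the reported value; PySRRegressor.predict accepts an index argument that selects a row of model.equations_. A minimal sketch (the row index 5 and the X, y arrays are placeholders from this thread's setup):

```python
import numpy as np

# Assumes `model` has already been fit via model.fit(X, y).
row = model.equations_.iloc[5]
pred = model.predict(X, index=5)  # evaluate that row's expression tree

mse = float(np.mean((pred - y) ** 2))
print("loss reported by model.equations_:", row["loss"])
print("MSE recomputed in Python:", mse)

# With a custom objective, the reported loss is the value of eval_loss,
# not a plain MSE, so these two numbers are only expected to agree when
# the default loss is used.
```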