Implement mathematical constraints #667
-
To implement a custom loss with function constraints, the right way is to pass a Julia function as a string from Python to the "full_objective" argument of PySRRegressor. If you describe your constraints exactly or provide an example in Python, I can help get it working. The examples that you cited are indeed the most relevant.
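A minimal sketch of what that looks like, assuming the "full_objective" keyword mentioned later in this thread (newer PySR releases may call the same keyword "loss_function"); the Julia body here is just a plain mean-squared error placeholder, not a constraint loss:

```python
from pysr import PySRRegressor

# The custom loss is ordinary Julia code, kept as a Python string.
julia_loss = """
function eval_loss(tree, dataset::Dataset{T,L}, options)::L where {T,L}
    prediction, complete = eval_tree_array(tree, dataset.X, options)
    if !complete
        return L(Inf)  # NaN/Inf encountered while evaluating the tree
    end
    return sum(i -> abs2(prediction[i] - dataset.y[i]), eachindex(prediction)) / dataset.n
end
"""

model = PySRRegressor(
    niterations=40,
    binary_operators=["+", "-", "*", "/"],
    full_objective=julia_loss,  # newer versions may name this keyword `loss_function`
)
# model.fit(X, y) is then called as usual; no recompilation of the Julia backend is needed.
```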
-
It is crazy!!! Your algorithm eliminates variables instead of accepting the constraints:

```julia
function eval_loss(tree, dataset::Dataset{T,L}, options)::L where {T,L}
    # Prediction and derivative with respect to variable 1 (should be positive).
    predicted, gradient1, complete1 = eval_diff_tree_array(tree, dataset.X, options, 1)
    if !complete1
        # encountered NaN/Inf, so return early
        return L(Inf)
    end
    # Derivative with respect to variable 2 (should be negative).
    _, gradient2, complete2 = eval_diff_tree_array(tree, dataset.X, options, 2)
    if !complete2
        return L(Inf)
    end
    # Soft penalties for violating the sign constraints on the derivatives.
    positivity = sum(i -> gradient1[i] > 0 ? L(0) : abs2(gradient1[i]), eachindex(gradient1))
    negativity = sum(i -> gradient2[i] < 0 ? L(0) : abs2(gradient2[i]), eachindex(gradient2))
    # Data-fit term.
    sum_square_loss = sum(i -> abs2(predicted[i] - dataset.y[i]), eachindex(predicted, dataset.y))
    beta = L(1e-2)
    return (sum_square_loss + beta * (positivity + negativity)) / dataset.n
end
```

This code eliminates variable 1 and variable 2, so I also tried adding an explicit penalty whenever a required variable is missing from the expression:

```julia
function eval_loss(tree, dataset::Dataset{T,L}, options)::L where {T,L}
    # Prediction and derivative with respect to variable 1 (should be positive).
    predicted, gradient1, complete1 = eval_diff_tree_array(tree, dataset.X, options, 1)
    if !complete1
        # encountered NaN/Inf, so return early
        return L(Inf)
    end
    # Derivative with respect to variable 2 (should be negative).
    _, gradient2, complete2 = eval_diff_tree_array(tree, dataset.X, options, 2)
    if !complete2
        return L(Inf)
    end
    positivity = sum(i -> gradient1[i] > 0 ? L(0) : abs2(gradient1[i]), eachindex(gradient1))
    negativity = sum(i -> gradient2[i] < 0 ? L(0) : abs2(gradient2[i]), eachindex(gradient2))
    sum_square_loss = sum(i -> abs2(predicted[i] - dataset.y[i]), eachindex(predicted, dataset.y))
    # Penalty term if variable 1 does not appear in the expression.
    contains_x1 = any(node -> node.degree == 0 && !node.constant && node.feature == 1, tree)
    if !contains_x1
        sum_square_loss += L(1e10)
    end
    # Penalty term if variable 3 does not appear in the expression.
    contains_x3 = any(node -> node.degree == 0 && !node.constant && node.feature == 3, tree)
    if !contains_x3
        sum_square_loss += L(1e10)
    end
    beta = L(1e-2)
    return (sum_square_loss + beta * (positivity + negativity)) / dataset.n
end
```

It doesn't work. I increased the regularisation parameter, but the algorithm still can't converge.
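For reference, written out as an equation, the objective that the second loss function above encodes is (with $\beta = 10^{-2}$ and the presence penalty written as $P = 10^{10}$):

$$
\mathcal{L}(\hat f) = \frac{1}{n}\left[\sum_{i=1}^{n}\big(\hat f(X_i)-y_i\big)^2
+ \beta\sum_{i=1}^{n}\Big(\min\big(0,\partial_{x_1}\hat f(X_i)\big)^2+\max\big(0,\partial_{x_2}\hat f(X_i)\big)^2\Big)
+ P\,\mathbf{1}\big[x_1\notin\hat f\big] + P\,\mathbf{1}\big[x_3\notin\hat f\big]\right]
$$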
-
I just tried to produce a sample; in the main code, all is correct. The interesting part is that the algorithm keeps two parameters that do not match the unit of y.
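If the problem is that fitted parameters do not respect the units of y, one option (not discussed above, so purely a hypothetical sketch) is the dimensional-analysis support in recent PySR versions, assuming the `X_units`/`y_units` fit keywords and the `dimensional_constraint_penalty` option are available in your release:

```python
import numpy as np
from pysr import PySRRegressor

# Toy data: X[:, 0] in metres, X[:, 1] in seconds, y roughly in metres/second.
X = np.random.randn(100, 2)
y = X[:, 0] / (np.abs(X[:, 1]) + 1.0)

model = PySRRegressor(
    binary_operators=["+", "-", "*", "/"],
    # Soft penalty added to the loss whenever a candidate expression is
    # dimensionally inconsistent with the declared units.
    dimensional_constraint_penalty=1000.0,
)
# Units are declared at fit time and propagated through candidate expressions.
model.fit(X, y, X_units=["m", "s"], y_units="m/s")
```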
-
So you say that this part
-
I want to implement mathematical constraints in a soft way in PySR. I think that if I apply them in a hard way, I lose a lot of candidate functions. The constraints are of this kind: I know whether the function is positive or negative for some values, and I know how the function's derivative behaves with respect to some variables.
It seems that the core of your program is written in Julia and then wrapped to work in Python. I tried to read the "src" part of your code, but it is hard for me because I am not familiar with Julia. So I read the discussions page and saw this question: Constraints? "#304"
In the answer, you said: "Update: added full_objective (PySR) and loss_function (SymbolicRegression.jl) for this purpose."
Actually, I did not find the "full_objective" part in the Python code. I found and read "loss_function" in the Julia part. I also read this question: "#449" and this question: "#256"
It seems to me you are saying that one can define a new loss function. I want to know: can I define this new loss criterion in Python? If I define a new loss function, must I download your code, modify it, and compile it again? I see your loss criterion repeated in several places in the code; do I need to make changes elsewhere?