Replies: 3 comments 3 replies
---
Thanks!
By this, do you mean parameters which have no imaginary component and whose real component is positive? (Since "positive" doesn't apply to complex numbers.) Or do you mean positive imaginary and positive real components?
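For context, here is a quick Python illustration (not from the thread) of why "positive" is ambiguous for complex numbers: the complex numbers have no natural ordering, so a comparison like `z > 0` is undefined, and one has to ask about the real and imaginary parts separately.

```python
# Complex numbers are not ordered: comparing one to 0 raises a TypeError,
# so "positive" can only refer to the real and/or imaginary parts.
z = 1 + 2j

try:
    z > 0
    ordered = True
except TypeError:
    ordered = False

print(ordered)     # False: complex comparison is undefined
print(z.real > 0)  # True: but we can ask about the real part...
print(z.imag > 0)  # True: ...and the imaginary part separately
```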
---
So, right now, there are various assumptions that the expressions have the same numeric type as the dataset, i.e., expressions will have complex-valued constants. However, perhaps you could try a soft constraint that pulls the parameters close to the positive real line? See https://astroautomata.com/PySR/examples/#9-custom-objectives for an example. There are many other examples in the discussions page. Here's how you could do it:

```julia
function my_objective(tree, dataset::Dataset{T,L}, options)::L where {T,L}
    (prediction, completion) = eval_tree_array(tree, dataset.X, options)
    y = dataset.y
    if !completion
        return L(Inf)
    end
    function node_penalty(node)
        is_constant_node = node.degree == 0 && node.constant
        if is_constant_node
            val = node.val
            # How far the constant is from its projection onto the positive real line:
            return L(abs(abs(val) - val))
        else
            return L(0)
        end
    end
    # Aggregate the node penalty over every node in the tree:
    total_node_penalty = sum(node_penalty, tree)
    total_node_penalty *= 1000.0  # Upweight the penalty (you will likely need to tune this)
    # Regular relative MSE loss:
    prediction_loss = sum(i -> abs2(prediction[i] - y[i]) / abs2(y[i]), eachindex(y)) / length(y)
    return L(total_node_penalty + prediction_loss)
end
```

then pass that as a string. Let me know how that works! Note that constants will still have some small complex component, but hopefully this will eliminate most of them...
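If it helps to sanity-check the penalty term outside Julia, here is a small plain-Python sketch of the same quantity, `abs(abs(val) - val)` (the function name here is illustrative, not part of PySR): it is exactly zero for constants on the non-negative real line and grows as a constant moves away from it.

```python
def positive_real_penalty(val: complex) -> float:
    """Distance from `val` to its modulus |val| on the positive real axis.

    Zero iff `val` is a non-negative real number; positive otherwise.
    Mirrors the `abs(abs(val) - val)` term in the Julia objective above.
    """
    return abs(abs(val) - val)

print(positive_real_penalty(3.0 + 0.0j))   # 0.0: already on the positive real line
print(positive_real_penalty(-2.0 + 0.0j))  # 4.0: |2 - (-2)| for a negative real
print(positive_real_penalty(1j))           # ~1.414: |1 - 1j| = sqrt(2)
```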
---
P.S., your initial loss was provided as `elementwise_loss="f(prediction, target) = abs2(prediction - target)/abs2(prediction)"`; however, you don't want to do this, as the genetic algorithm will "game the system" and send predictions to infinity! You probably want to use `elementwise_loss="f(prediction, target) = abs2(prediction - target)/abs2(target)"` instead.
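To see numerically why normalizing by `abs2(prediction)` is gameable, here is a small plain-Python sketch (function names are illustrative): as the prediction grows without bound, the prediction-normalized loss saturates near 1 regardless of the target, while the target-normalized loss correctly blows up.

```python
def loss_by_prediction(prediction: complex, target: complex) -> float:
    # The problematic form: normalizes by the model's own output,
    # so huge predictions cap the per-element loss near 1.
    return abs(prediction - target) ** 2 / abs(prediction) ** 2

def loss_by_target(prediction: complex, target: complex) -> float:
    # The recommended form: normalizes by the (fixed) target,
    # so huge predictions are heavily penalized.
    return abs(prediction - target) ** 2 / abs(target) ** 2

target = 2.0 + 1.0j
for prediction in (1e3, 1e6, 1e9):
    print(loss_by_prediction(prediction, target),  # approaches 1.0
          loss_by_target(prediction, target))      # grows without bound
```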
---
Dear Developers,
Thank you for your excellent work on the PySR repository. It is truly fantastic. I have been working on regressing a ZARC circuit using the code below, but I have encountered some issues:
Could there be a workaround for this? Specifically, I am looking for a solution where the symbolic regression parameters are positive, but the function input and output remain complex.
Thank you for your kind help.
P.S. This question is related to, but somewhat different from, #338.
CODE: