How to track the current iteration number in a custom loss function? #859
-
Hi, here is my current setup:

```julia
w_a, w_b, w_c = Ref(0.1), Ref(0.4), Ref(0.4)

function custom_loss(tree::Node, dataset::Dataset{T,L}, options::Options) where {T,L}
    # ... compute the individual loss terms L_a, L_b, L_c ...
    loss = L_a * w_a[] + L_b * w_b[] + L_c * w_c[]  # dereference the Refs
    return loss
end

model = SRRegressor(
    niterations=1000,
    binary_operators=[+, -, *, /],
    unary_operators=[sin, cos, exp, log],
    population_size=50,
    loss_function=custom_loss,
)
mach = machine(model, x, y)
fit!(mach)
```

I'd like to adjust the weights of my loss terms dynamically based on the current iteration. Any help would be much appreciated! Thanks! 😊
-
Hi @GFODK,

Unfortunately you cannot have a dynamic loss function, because it is cached at various points in the search rather than recomputed each time. So if your loss function changes, it will break some assumptions in the code.

What you can do though is adjust the loss function and call `fit!` again; it will start where it left off, and recompute all the losses on the existing populations of expressions. So, for example:

```julia
model = SRRegressor(
    niterations=1,
    binary_operators=[+, -, *, /],
    unary_operators=[sin, cos, exp, log],
    population_size=50,
    loss_function=loss_functions[1],  # `loss_functions` is a pre-built vector of losses
)
mach = machine(model, x, y)
fit!(mach)
for i in 2:100
    mach.model.loss_function = loss_functions[i]  # swap in the next loss
    mach.model.niterations += 1                   # allow one more iteration
    fit!(mach)                                    # resumes from the previous state
end
```

or something like this. You might want to build some callable struct that has parameters you can declare, like:

```julia
struct LossContext <: Function
    w_a::Float64
    w_b::Float64
    w_c::Float64
end

function (ctx::LossContext)(tree::Node, dataset::Dataset{T,L}, options::Options) where {T,L}
    # ... compute the individual loss terms L_a, L_b, L_c ...
    return L_a * ctx.w_a + L_b * ctx.w_b + L_c * ctx.w_c
end
```

and then you can just define the loss like `SRRegressor(loss_function=LossContext(0.1, 0.4, 0.4))`.
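For concreteness, here is one way the `loss_functions` vector used in the loop above could be pre-built from `LossContext` instances. This is a minimal sketch; the linear weight schedule is purely illustrative, so substitute whatever schedule you actually want:

```julia
nsteps = 100
# Illustrative schedule: hold w_a fixed and gradually shift weight
# from the third term to the second over the course of the search.
loss_functions = [
    LossContext(0.1, 0.4 + 0.005 * (i - 1), 0.5 - 0.005 * (i - 1))
    for i in 1:nsteps
]
```

Since each `LossContext` is an ordinary immutable struct, every outer step gets a fresh, fully specified loss object, rather than mutating state that the search may have cached.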
-
Hi @MilesCranmer,

After the loop finished its second iteration, I checked the model using `println(mach.model)`. From the output I could confirm that `niterations` was updated to 2, and the updated weights were also reflected in `loss_function`. However, when I printed out the weights from inside the custom loss function, along the lines of the sketch below, I still saw only the initial values of 1.0 being printed every time.

Now I'm wondering: could it be that internally `mach.model` is being cloned or copied, so the `loss_function` gets fixed at initialization and doesn't actually update afterward? If that's the case, and I need to recreate the machine object to apply the updated `loss_function`, wouldn't that mean I'm losing all progress from the previous training iterations?
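A minimal sketch of the check described above, assuming a simple `@show` inside the callable; the `objectid` comparison is an extra probe for the cloning hypothesis:

```julia
function (ctx::LossContext)(tree::Node, dataset::Dataset{T,L}, options::Options) where {T,L}
    # Print the weights (and identity) of the loss object the search actually sees:
    @show objectid(ctx) ctx.w_a ctx.w_b ctx.w_c
    # ... compute the individual loss terms L_a, L_b, L_c ...
    return L_a * ctx.w_a + L_b * ctx.w_b + L_c * ctx.w_c
end

# Compare against the object currently held by the outer model:
@show objectid(mach.model.loss_function)
```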