Description
I have designed some operators for model loss monitoring, similar to the following contiguous code fragments. My approach only takes a small proportion of CPU runtime, and I don't want to put an unnecessary burden on the CPU just to detect the optimal model after training:
adanet/adanet/core/iteration.py
Lines 743 to 747 in 0364cc4
I don't think the highlighted function responds quickly, because it selects the optimal model in at least O(n) time over all trained candidate ensembles, using only loss evaluation. It is an implementation built on the tf API, but it doesn't take into account how users could reach the ideal combination, based on the search strategy they are using, in as few trials as possible.
adanet/adanet/core/iteration.py
Lines 1089 to 1109 in 0364cc4
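For reference, here is a minimal sketch of what a loss-only selection over trained candidates amounts to. This is not the actual AdaNet implementation; the function name `best_candidate_index` and the `candidate_losses` input are assumptions for illustration. The point is that every candidate's evaluated loss has to be compared, so the cost grows linearly with the number of candidates, and nothing from the search strategy is used to prune the comparison.

```python
# Illustrative sketch of loss-only candidate selection (not the AdaNet code).
# `candidate_losses` is a hypothetical rank-1 tensor with one evaluated loss
# per trained candidate ensemble.
import tensorflow.compat.v1 as tf

tf.disable_v2_behavior()  # graph mode, matching the estimator-style code above


def best_candidate_index(candidate_losses):
  """Returns the index of the candidate with the smallest evaluated loss."""
  # tf.argmin compares every candidate's loss, so this is O(n) in the number
  # of candidates; no strategy information is used to skip any of them.
  return tf.argmin(candidate_losses, axis=0)


losses = tf.placeholder(tf.float32, shape=[None])  # one loss per candidate
best_index = best_candidate_index(losses)

with tf.Session() as sess:
  print(sess.run(best_index, feed_dict={losses: [0.42, 0.37, 0.51]}))  # -> 1
```

A strategy-aware selection would ideally be able to skip or early-stop some of these comparisons instead of evaluating every candidate's loss.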