Releases: microsoft/FLAML
v0.9.2
New Features:
- New task: text summarization
- Reproducibility of hyperparameter search sequence
- Run FLAML in AzureML + Ray
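As a rough sketch of how two of these features might be used together (not taken from the release notes; the column names, budget, and data are illustrative):

```python
# Sketch only: column names, budget, and data are illustrative.
import pandas as pd
from flaml import AutoML

train = pd.DataFrame({
    "document": ["first long article ...", "second long article ..."],
    "summary": ["first summary", "second summary"],
})

automl = AutoML()
automl.fit(
    X_train=train[["document"]],
    y_train=train["summary"],
    task="summarization",  # the new NLP task in this release
    time_budget=300,       # seconds
    seed=42,               # with #349, makes the random sampling sequence reproducible
)
```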
What's Changed
- url update for doc edit by @sonichi in #345
- Adding the NLP task summarization by @liususan091219, @XinZofStevens, and @GideonWu0105 in #346
- reproducibility for random sampling by @sonichi in #349
- doc update by @sonichi in #352
- azureml + ray by @sonichi in #344
- Fixing the bug in custom metric by @liususan091219 in #356
- Simplify lgbm example by @ruizhuanguw in #358
- fixing custom metric by @liususan091219 in #357
- Example by @sonichi in #359
New Contributors
- @ruizhuanguw made their first contribution in #358
- @XinZofStevens and @GideonWu0105 made their first contributions in #346
Full Changelog: v0.9.1...v0.9.2
v0.9.1
This release contains several feature improvements and bug fixes. For example,
- support for custom data splitters (see the sketch after this list).
- evaluation_function can receive the incumbent result during local search and perform domain-specific early stopping by comparing against it; the evaluation can stop as soon as the comparison outcome (better or worse) is known.
- support and automate Hugging Face metrics.
- use CFO in tune.run if BlendSearch is not installed.
- fixed a bug in modifying n_estimators to satisfy constraints.
- new documentation website.
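A minimal sketch of the custom splitter support, assuming FLAML accepts an sklearn-style splitter through split_type; the dataset and group labels are synthetic:

```python
# Sketch only: the data and group labels are synthetic placeholders.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import GroupKFold
from flaml import AutoML

X, y = make_regression(n_samples=100, n_features=5, random_state=0)
groups = np.repeat(np.arange(10), 10)  # 10 groups of 10 samples each

automl = AutoML()
automl.fit(
    X_train=X,
    y_train=y,
    task="regression",
    eval_method="cv",
    split_type=GroupKFold(n_splits=5),  # the custom data splitter
    groups=groups,                      # group labels consumed by the splitter
    time_budget=60,
)
```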
What's Changed
- Update flaml_pytorch_cifar10.ipynb by @sonichi in #328
- adding HF metrics by @liususan091219 in #335
- train at least one iter when not trained by @sonichi in #336
- use cfo in tune.run if bs is not installed by @sonichi in #334
- Make evaluation_function able to receive the incumbent best result as input in Tune by @Shao-kun-Zhang in #339
- support for customized splitters by @wuchihsu in #333
- Deploy a new doc website by @sonichi, @qingyun-wu and @Shao-kun-Zhang in #338
- version update by @sonichi in #341
New Contributors
- @Shao-kun-Zhang made their first contribution in #339
Full Changelog: v0.9.0...v0.9.1
v0.9.0
- Revise the flaml.tune API (see the sketch after this list):
  - Add a "scheduler" argument (a user can choose "flaml", "asha", or a customized scheduler)
  - Rename "prune_attr" to "resource_attr"
  - Rename "training_function" to "evaluation_function"
  - Remove the "report_intermediate_result" argument (covered by "scheduler" instead)
- Add tests for the supported schedulers
- Re-run the notebooks that use schedulers
- Add save_best_config() to save the best config in a JSON file
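A minimal sketch of the revised API, with a toy search space and loss; the exact tuning behavior depends on the chosen scheduler:

```python
# Sketch only: the search space and the loss are toy placeholders.
from flaml import tune

def evaluation_function(config):
    # With scheduler="flaml", the resource ("epochs") is injected into config.
    loss = (config["x"] - 0.5) ** 2 + 1.0 / config["epochs"]
    return {"score": loss}

analysis = tune.run(
    evaluation_function,        # renamed from "training_function"
    config={"x": tune.uniform(0, 1)},
    metric="score",
    mode="min",
    scheduler="flaml",          # "flaml", "asha", or a customized scheduler
    resource_attr="epochs",     # renamed from "prune_attr"
    min_resource=1,
    max_resource=9,
    num_samples=30,
)
print(analysis.best_config)
```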
What's Changed
- add save_best_config() by @sonichi in #324
- tune api for schedulers by @qingyun-wu in #322
- add init.py in nlp by @sonichi in #325
- rename training_function by @qingyun-wu in #327
Full Changelog: v0.8.2...v0.9.0
v0.8.2
What's Changed
- include default value in rf search space by @sonichi in #317
- adding TODOs to the NLP module, so students can implement other tasks more easily by @liususan091219 in #321
- pred_time_limit clarification and logging by @sonichi in #319
- bug fix in config2params by @sonichi in #323
Full Changelog: v0.8.1...v0.8.2
v0.8.1
What's Changed
- Update test_regression.py by @fengsxy in #306
- Add conda forge minimal test by @MichalChromcak in #309
- fixing config2params for transformersestimator by @liususan091219 in #316
- Code quality improvement based on #275 by @abnsy and @sonichi in #313
- skip cv preparation if eval_method is holdout by @sonichi in #314
New Contributors
- @fengsxy made their first contribution in #306
- @abnsy made their first contribution in #313
Full Changelog: v0.8.0...v0.8.1
v0.8.0
In this release, we add two NLP tasks to flaml.AutoML: sequence classification and sequence regression, using transformer-based neural networks. Previously the NLP module was detached from flaml.AutoML with a separate API. We redesigned the API so that the NLP tasks are accessed the same way as other tasks, and adding more NLP tasks in the future will be easy. Thanks @liususan091219 for the hard work!
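As a sketch of the unified API (the dataframe and labels are illustrative; the task names follow FLAML's "seq-classification" / "seq-regression" convention):

```python
# Sketch only: the dataframe and labels are illustrative.
import pandas as pd
from flaml import AutoML

train = pd.DataFrame({
    "sentence": ["a great movie", "a terrible movie"],
    "label": [1, 0],
})

automl = AutoML()
automl.fit(
    X_train=train[["sentence"]],
    y_train=train["label"],
    task="seq-classification",  # or "seq-regression"
    time_budget=300,
)
```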
We've also continued to make more performance & feature improvements. Examples:
- We added a variation of the XGBoost search space that uses a limited max_depth. It includes the default configuration from the XGBoost library. The new search space leads to significantly better performance on some regression datasets.
- We allow arguments for flaml.AutoML to be passed to the constructor. This enables multioutput regression by combining sklearn's MultiOutputRegressor with flaml's AutoML (see the sketch after this list).
- We made more memory optimizations, while still allowing users to keep the best model per estimator in memory through the "model_history" option.
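The multioutput pattern from the second bullet, as a small sketch on synthetic data; it relies on AutoML now accepting fit settings in its constructor so that sklearn can clone it:

```python
# Sketch only: the dataset is synthetic.
from sklearn.datasets import make_regression
from sklearn.multioutput import MultiOutputRegressor
from flaml import AutoML

X, y = make_regression(n_samples=200, n_features=10, n_targets=3, random_state=0)

# Constructor arguments (new in this release) let AutoML act as a plain
# sklearn estimator, so MultiOutputRegressor can fit one copy per target.
model = MultiOutputRegressor(AutoML(task="regression", time_budget=30))
model.fit(X, y)
print(model.predict(X[:5]))
```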
What's Changed
- Unify regression and classification for XGBoost by @sonichi in #276
- when max_iter=1, skip search only if retrain_final by @sonichi in #280
- example update by @sonichi in #281
- Merge exp into flaml by @liususan091219 in #210
- add best_loss_per_estimator by @qingyun-wu in #286
- model_history -> save_best_model_per_estimator by @sonichi in #283
- datetime feature engineering by @sonichi in #285
- add warmstart test by @qingyun-wu in #298
- empty search space by @sonichi in #295
- multioutput regression by @sonichi in #292
- add max_depth to xgboost search space by @sonichi in #282
- custom metric function clarification by @sonichi in #300
- checkpoint naming in nonray mode, fix ray mode, delete checkpoints in nonray mode by @liususan091219 in #293
Full Changelog: v0.7.1...v0.8.0
v0.7.1
v0.7.0
New feature: multivariate time series forecasting.
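A minimal sketch of the forecasting task on a synthetic series with one exogenous regressor. Caveat: the task and argument names below follow later FLAML documentation ("ts_forecast", label, period); the exact v0.7.0 signature may differ:

```python
# Sketch only: synthetic series; task/argument names follow later FLAML docs.
import numpy as np
import pandas as pd
from flaml import AutoML

df = pd.DataFrame({
    "ds": pd.date_range("2020-01-01", periods=120, freq="D"),
    "temp": np.random.RandomState(0).normal(20, 5, 120),  # exogenous regressor
    "y": np.sin(np.arange(120) / 7.0),
})
train, test = df[:-12], df[-12:]

automl = AutoML()
automl.fit(
    dataframe=train,
    label="y",
    task="ts_forecast",  # time series forecasting
    period=12,           # forecast horizon
    time_budget=60,
)
print(automl.predict(test[["ds", "temp"]]))
```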
What's Changed
- Fix exception in CFO's _create_condition if all candidate start points haven't returned yet by @Yard1 in #263
- Integrate multivariate time series forecasting by @int-chaos in #254
- Update Dockerfile by @wuchihsu in #269
- limit time and memory consumption by @sonichi in #264
New Contributors
- @Yard1 made their first contribution in #263
- @int-chaos made their first contribution in #254
Full Changelog: v0.6.9...v0.7.0
v0.6.9
v0.6.8
What's Changed
- fix the bug in hierarchical search space (#248); make dependency on lgbm and xgboost optional (#252) by @sonichi in #250
- Add conda forge badge by @MichalChromcak in #251
New Contributors
- @MichalChromcak made their first contribution in #251
Full Changelog: v0.6.7...v0.6.8