
Releases: dmlc/xgboost

1.6.2 Patch Release

23 Aug 13:05
b993424

This is a patch release for bug fixes.

  • Remove pyarrow workaround. (#7884)
  • Fix monotone constraint with tuple input. (#7891)
  • Verify shared object version at load. (#7928)
  • Fix LTR with weighted Quantile DMatrix. (#7975)
  • Fix Python package source install. (#8036)
  • Limit max_depth to 30 for GPU. (#8098)
  • Fix compatibility with the latest cupy. (#8129)
  • [dask] Deterministic rank assignment. (#8018)
  • Fix loading DMatrix binary in distributed env. (#8149)

1.6.1 Patch Release

09 May 09:08
5d92a7d

v1.6.1 (2022 May 9)

This is a patch release for bug fixes and Spark barrier mode support. The R package is unchanged.

Experimental support for categorical data

  • Fix segfault when the number of samples is smaller than the number of categories. (#7853)
  • Enable partition-based split for all model types. (#7857)

JVM packages

We replaced the old parallelism tracker with Spark barrier mode to improve the robustness of the JVM package and to fix the GPU training pipeline.

  • Fix GPU training pipeline quantile synchronization. (#7823, #7834)
  • Use barrier mode in the Spark package. (#7836, #7840, #7845, #7846)
  • Fix shared object loading on some platforms. (#7844)

Artifacts

You can verify the downloaded packages by running this on your Unix shell:

echo "<hash> <artifact>" | shasum -a 256 --check
2633f15e7be402bad0660d270e0b9a84ad6fcfd1c690a5d454efd6d55b4e395b  ./xgboost.tar.gz

Release 1.6.0 stable

16 Apr 04:02
f75c007

v1.6.0 (2022 Apr 16)

After a long period of development, XGBoost v1.6.0 is packed with many new features and
improvements. We summarize them in the following sections starting with an introduction to
some major new features, then moving on to language binding specific changes including new
features and notable bug fixes for that binding.

Development of categorical data support

This version of XGBoost features new improvements and full coverage of experimental
categorical data support in the Python and C packages with tree models. The hist, approx,
and gpu_hist tree methods all now support training with categorical data. In addition,
this release introduces the partition-based categorical split. This split type first
appeared in LightGBM in the context of gradient boosting. Previous XGBoost releases
supported only the one-hot split, where the splitting criterion is of the form x \in {c},
i.e. the categorical feature x is tested against a single candidate category. The new
release allows for more expressive conditions: x \in S, where the categorical feature x is
tested against multiple candidates. Moreover, any of the tree methods (hist, approx,
gpu_hist) can now be used when creating categorical splits. For more information, please
see our tutorial on categorical data, along with examples linked on that page. (#7380,
#7708, #7695, #7330, #7307, #7322, #7705, #7652, #7592, #7666, #7576, #7569, #7529, #7575,
#7393, #7465, #7385, #7371, #7745, #7810)

In the future, we will continue to improve categorical data support with new features and
optimizations. We also look forward to bringing the feature beyond the Python binding;
contributions and feedback are welcome! Lastly, because of its experimental status, the
behavior might be subject to change, especially the default values of related
hyper-parameters.

Experimental support for multi-output model

XGBoost 1.6 features initial support for multi-output models, which includes multi-output
regression and multi-label classification. Along with this, the XGBoost classifier now
properly supports base margin without the need for the user to flatten the input. In this
initial support, XGBoost builds one model for each target, similar to the sklearn meta
estimator; for more details, please see our quick introduction.

(#7365, #7736, #7607, #7574, #7521, #7514, #7456, #7453, #7455, #7434, #7429, #7405, #7381)

External memory support

External memory support for both the approx and hist tree methods is considered feature
complete in XGBoost 1.6. Building upon the iterator-based interface introduced in the
previous version, both hist and approx now iterate over each batch of data during training
and prediction. In previous versions, hist concatenated all the batches into an internal
representation; that step is removed in this version. As a result, users can expect higher
scalability in terms of data size but might experience lower performance due to disk IO.
(#7531, #7320, #7638, #7372)

Rewritten approx

The approx tree method has been rewritten on top of the existing hist tree method. The
rewrite closes the feature gap between approx and hist and improves performance. The
behavior of approx should now be more closely aligned with hist and gpu_hist. Here is a
list of user-visible changes:

  • Supports both max_leaves and max_depth.
  • Supports grow_policy.
  • Supports monotonic constraints.
  • Supports feature weights.
  • Uses max_bin in place of sketch_eps.
  • Supports categorical data.
  • Faster performance on many datasets.
  • Improved performance and robustness for distributed training.
  • Supports the prediction cache.
  • Significantly better performance for external memory when the depthwise policy is used.

New serialization format

Building on the existing JSON serialization format, we introduce UBJSON support as a more
efficient alternative. Both formats will remain available in the future, and we plan to
gradually phase out support for the old binary model format. Users can opt into either
format in the serialization functions by providing the file extension json or ubj. Also,
the save_raw function in all supported language bindings gains a new parameter for
exporting the model in different formats; the available options are json, ubj, and
deprecated. See the documentation for the language binding you are using for details.
Lastly, the default internal serialization format is set to UBJSON, which affects Python
pickle and R RDS. (#7572, #7570, #7358, #7571, #7556, #7549, #7416)

General new features and improvements

Aside from the major new features mentioned above, some others are summarized here:

  • Users can now access the build information of XGBoost binary in Python and C
    interface. (#7399, #7553)
  • Auto-configuration of seed_per_iteration is removed; distributed training should now
    generate results closer to single-node training when sampling is used. (#7009)
  • A new parameter huber_slope is introduced for the Pseudo-Huber objective.
  • During source builds, XGBoost can automatically choose cub from the system path. (#7579)
  • XGBoost now honors the CPU counts from CFS, which is usually set in Docker
    environments. (#7654, #7704)
  • The metric aucpr is rewritten for better performance and GPU support. (#7297, #7368)
  • Metric calculation is now performed in double precision. (#7364)
  • XGBoost no longer mutates the global OpenMP thread limit. (#7537, #7519, #7608, #7590,
    #7589, #7588, #7687)
  • The default behavior of max_leaves and max_depth is now unified. (#7302, #7551)
  • CUDA fat binary is now compressed. (#7601)
  • Deterministic result for evaluation metric and linear model. In previous versions of
    XGBoost, evaluation results might differ slightly for each run due to parallel reduction
    for floating-point values, which is now addressed. (#7362, #7303, #7316, #7349)
  • XGBoost now uses double for GPU Hist node sum, which improves the accuracy of
    gpu_hist. (#7507)

Performance improvements

Most of the performance improvements were integrated into other refactors during feature
development. The approx tree method should see significant performance gains on many
datasets, as mentioned in the previous section, while the hist tree method also enjoys
improved performance with the removal of the internal pruner along with some other
refactoring. Lastly, gpu_hist no longer synchronizes the device during training. (#7737)

General bug fixes

This section lists bug fixes that are not specific to any language binding.

  • num_parallel_tree is now a model parameter instead of a training hyper-parameter,
    which fixes model IO with random forests. (#7751)
  • Fixes in CMake script for exporting configuration. (#7730)
  • XGBoost can now handle unsorted sparse input. This includes text file formats like
    libsvm and scipy sparse matrices where column indices might not be sorted. (#7731)
  • Fix the tree param feature type; this affects inputs with a number of columns greater
    than the maximum value of int32. (#7565)
  • Fix external memory with gpu_hist and subsampling. (#7481)
  • Check the number of trees in inplace predict, this avoids a potential segfault when an
    incorrect value for iteration_range is provided. (#7409)
  • Fix non-stable results in Cox regression. (#7756)

Changes in the Python package

Other than the changes in Dask, the XGBoost Python package gained some new features and
improvements along with small bug fixes.

  • Python 3.7 is required as the lowest Python version. (#7682)
  • Pre-built binary wheel for Apple Silicon. (#7621, #7612, #7747) Apple Silicon users will
    now be able to run pip install xgboost to install XGBoost.
  • macOS users no longer need to install libomp from Homebrew, as the XGBoost wheel now
    bundles the libomp.dylib library.
  • There are new parameters for specifying a custom metric with new behavior: XGBoost can
    now output transformed prediction values when a custom objective is not supplied. See
    our explanation in the tutorial for details.
  • For the sklearn interface, following the estimator guideline from scikit-learn, all
    parameters in fit that are not related to input data are moved into the constructor
    and can be set by set_params. (#6751, #7420, #7375, #7369)
  • The Apache Arrow format is now supported, which can bring better performance to users'
    pipelines. (#7512)
  • Pandas nullable types are now supported. (#7760)
  • A new function get_group is introduced for DMatrix to allow users to get the group
    information in the custom objective function. (#7564)
  • More training parameters are exposed in the sklearn interface instead of relying on the
    **kwargs. (#7629)
  • A new attribute feature_names_in_ is defined for all sklearn estimators like
    XGBRegressor to follow the convention of sklearn. (#7526)
  • More work on Python type hint. (#7432, #7348, #7338, #7513, #7707)
  • Support the latest pandas Index type. (#7595)
  • Fix the "Feature shape mismatch" error on the s390x platform. (#7715)
  • Fix using feature names for constraints with multiple groups. (#7711)
  • We clarified the behavior of the callback function when it contains mutable
    states. (#7685)
  • Lastly, there are some code cleanups and maintenance work. (#7585, #7426, #7634, #7665,
    #7667, #7377, #7360, #7498, #7438, #7667, #7752, #7749, #7751)

Changes in the Dask interface

  • The Dask module now supports user-supplied host IP and port addresses for the scheduler
    node. Please see the introduction and
    ...

Release candidate of version 1.6.0

30 Mar 17:28
4bd5a33
Pre-release

Roadmap: #7726
Release note: #7746

1.5.2 Patch Release

17 Jan 15:58
742c19f

This is a patch release for compatibility with the latest dependencies and bug fixes.

  • [dask] Fix asyncio with latest dask and distributed.
  • [R] Fix single sample SHAP prediction.
  • [Python] Update the Python classifier to indicate support for the latest Python versions.
  • [Python] Fix with latest mypy and pylint.
  • Fix indexing type for bitfield, which may affect missing value and categorical data.
  • Fix num_boosted_rounds for linear model.
  • Fix early stopping with linear model.

1.5.1 Patch Release

23 Nov 09:49

This is a patch release for compatibility with the latest dependencies and bug fixes. Also, all GPU-compatible binaries are built with CUDA 11.0.

  • [Python] Handle missing values in dataframe with category dtype. (#7331)

  • [R] Fix R CRAN failures about prediction and some compiler warnings.

  • [JVM packages] Fix compatibility with latest Spark (#7438, #7376)

  • Support building with CTK11.5. (#7379)

  • Check user input for iteration in inplace predict.

  • Handle OMP_THREAD_LIMIT environment variable.

  • [doc] Fix broken links. (#7341)

Artifacts

You can verify the downloaded packages by running this on your Unix shell:

echo "<hash> <artifact>" | shasum -a 256 --check
3a6cc7526c0dff1186f01b53dcbac5c58f12781988400e2d340dda61ef8d14ca  xgboost_r_gpu_linux_afb9dfd4210e8b8db8fe03380f83b404b1721443.tar.gz
6f74deb62776f1e2fd030e1fa08b93ba95b32ac69cc4096b4bcec3821dd0a480  xgboost_r_gpu_win64_afb9dfd4210e8b8db8fe03380f83b404b1721443.tar.gz
565dea0320ed4b6f807dbb92a8a57e86ec16db50eff9a3f405c651d1f53a259d  xgboost.tar.gz

Release 1.5.0 stable

17 Oct 17:08
584b45a

This release comes with many exciting new features and optimizations, along with some bug
fixes. We will describe the experimental categorical data support and the external memory
interface independently. Package-specific new features will be listed in respective
sections.

Development on categorical data support

In version 1.3, XGBoost introduced an experimental feature for handling categorical data
natively, without one-hot encoding. XGBoost can fit categorical splits in decision
trees. (Currently, the generated splits will be of form x \in {v}, where the input is
compared to a single category value. A future version of XGBoost will generate splits that
compare the input against a list of multiple category values.)

Most of the other features, including prediction, SHAP value computation, feature
importance, and model plotting, were revised to natively handle categorical splits. Also,
all Python interfaces, including the native interface with and without quantized DMatrix,
the scikit-learn interface, and the Dask interface, now accept categorical data with a
wide range of supported data structures, including numpy/cupy arrays and
cuDF/pandas/modin dataframes. In practice, the following are required for enabling
categorical data support during training:

  • Use Python package.
  • Use gpu_hist to train the model.
  • Use JSON model file format for saving the model.

Once the model is trained, it can be used with most of the features that are available on
the Python package. For a quick introduction, see
https://xgboost.readthedocs.io/en/latest/tutorials/categorical.html

Related PRs: (#7011, #7001, #7042, #7041, #7047, #7043, #7036, #7054, #7053, #7065, #7213, #7228, #7220, #7221, #7231, #7306)

  • Next steps

    • Revise the CPU training algorithm to handle categorical data natively and generate
      categorical splits
    • Extend the CPU and GPU algorithms to generate categorical splits of form x \in S,
      where the input is compared with multiple category values. (#7081)

External memory

This release features a brand-new interface and implementation for external memory (also
known as out-of-core training). (#6901, #7064, #7088, #7089, #7087, #7092, #7070,
#7216). The new implementation leverages the data iterator interface, which is currently
used to create DeviceQuantileDMatrix. For a quick introduction, see
https://xgboost.readthedocs.io/en/latest/tutorials/external_memory.html#data-iterator
. During the development of this new interface, lz4 compression was removed. (#7076)
Please note that external memory support is still experimental and not ready for
production use yet. All future development will focus on this new interface, and users are
advised to migrate. (You are using the old interface if you are using a URL suffix to
enable external memory.)

New features in Python package

  • Support the numpy array interface and all numeric types from numpy in DMatrix
    construction and inplace_predict (#6998, #7003). XGBoost no longer makes a data copy
    when the input is a numpy array view.
  • The early stopping callback in Python has a new min_delta parameter to control the
    stopping behavior (#7137)
  • The Python package now supports calculating feature scores for the linear model, which
    is also available in the R package. (#7048)
  • The Python interface now supports configuring constraints using feature names instead
    of feature indices.
  • Type hint support for more Python code, including the scikit-learn interface and the
    rabit module. (#6799, #7240)
  • Add a tutorial for XGBoost-Ray. (#6884)

New features in R package

  • In 1.4 we added a new prediction function in the C API, which is used by the Python
    package. This release revises the R package to use the new prediction function as well.
    A new parameter iteration_range for the predict function is available, which can be
    used to specify the range of trees for running prediction. (#6819, #7126)
  • R package now supports the nthread parameter in DMatrix construction. (#7127)

New features in JVM packages

  • Support GPU dataframes and DeviceQuantileDMatrix (#7195). Constructing DMatrix
    with GPU data structures and the interface for quantized DMatrix were first
    introduced in the Python package and are now available in the xgboost4j package.
  • JVM packages now support saving and getting early stopping attributes (#7095). Here is
    a quick example in Java (#7252).

General new features

  • We now have a pre-built binary package for R on Windows with GPU support. (#7185)
  • CUDA compute capability 8.6 is now part of the default CMake build configuration, with
    newly added support for CUDA 11.4. (#7131, #7182, #7254)
  • XGBoost can be compiled using system CUB provided by CUDA 11.x installation. (#7232)

Optimizations

The performance for both hist and gpu_hist has been significantly improved in 1.5
with the following optimizations:

  • GPU multi-class model training now supports prediction cache. (#6860)
  • GPU histogram building is sped up, and overall training time is 2-3 times faster on
    large datasets (#7180, #7198). In addition, we removed the deterministic_histogram
    parameter; the GPU algorithm is now always deterministic.
  • CPU hist has an optimized procedure for data sampling. (#6922)
  • More performance optimization in regression and binary classification objectives on
    CPU (#7206)
  • Tree model dump is now performed in parallel (#7040)

Breaking changes

  • n_gpus was deprecated in the 1.0 release and is now removed.
  • Feature grouping in the CPU hist tree method, which was disabled long ago, is
    removed. (#7018)
  • C API for Quantile DMatrix is changed to be consistent with the new external memory
    implementation. (#7082)

Notable general bug fixes

  • XGBoost no longer changes the global CUDA device ordinal when gpu_id is specified.
    (#6891, #6987)
  • Fix the gamma negative log-likelihood evaluation metric. (#7275)
  • Fix the integer value of verbose_eval for the xgboost.cv function in Python. (#7291)
  • Remove an extra sync in CPU hist for dense data, which could lead to incorrect tree
    node statistics. (#7120, #7128)
  • Fix a bug in GPU hist when data size is larger than UINT32_MAX with missing
    values. (#7026)
  • Fix a thread safety issue in prediction with the softmax objective. (#7104)
  • Fix a thread safety issue in CPU SHAP value computation. (#7050) Please note that all
    prediction functions in Python are thread-safe.
  • Fix model slicing. (#7149, #7078)
  • Workaround a bug in old GCC which can lead to segfault during construction of
    DMatrix. (#7161)
  • Fix histogram truncation in GPU hist, which can lead to slightly off results. (#7181)
  • Fix loading GPU linear model pickle files on CPU-only machine. (#7154)
  • Check whether input values are duplicated when the CPU quantile queue is full. (#7091)
  • Fix parameter loading with training continuation. (#7121)
  • Fix CMake interface for exposing C library by specifying dependencies. (#7099)
  • Callback and early stopping are explicitly disabled for the scikit-learn interface
    random forest estimator. (#7236)
  • Fix compilation error on x86 (32-bit machine) (#6964)
  • Fix CPU memory usage with extremely sparse datasets (#7255)
  • Fix a bug in GPU multi-class AUC implementation with weighted data (#7300)

Python package

Other than the items mentioned in the previous sections, there are some Python-specific
improvements.

  • Change the development release postfix to dev. (#6988)
  • Fix early stopping behavior with the MAPE metric. (#7061)
  • Fix an incorrect feature mismatch error message. (#6949)
  • Add predictor to the sklearn constructor. (#7000, #7159)
  • Re-enable feature validation in predict_proba. (#7177)
  • The scikit-learn interface regression estimator can now pass the scikit-learn estimator
    check and is fully compatible with scikit-learn utilities. __sklearn_is_fitted__ is
    implemented as part of these changes. (#7130, #7230)
  • Conform to the latest pylint. (#7071, #7241)
  • Support the latest pandas RangeIndex in DMatrix construction. (#7074)
  • Fix DMatrix construction from pandas series. (#7243)
  • Fix typos and grammatical mistakes in error messages. (#7134)
  • [dask] Disable work stealing explicitly for training tasks. (#6794)
  • [dask] Set dataframe index in predict. (#6944)
  • [dask] Fix prediction on df with latest dask. (#6969)
  • [dask] Fix dask predict on DaskDMatrix with iteration_range. (#7005)
  • [dask] Disallow importing non-dask estimators from xgboost.dask (#7133)

R package

Improvements other than new features on R package:

  • Optimization for updating R handles in place. (#6903)
  • Removed the magrittr dependency. (#6855, #6906, #6928)
  • The R package now hides all C++ symbols to avoid conflicts. (#7245)
  • Other maintenance, including code cleanups and document updates. (#6863, #6915, #6930, #6966, #6967)

JVM packages

Improvements other than new features on JVM packages:

  • Constructors with implicit missing values are deprecated due to confusing behaviors. (#7225)
  • Reduce scala-compiler and scalatest dependency scopes. (#6730)
  • Make the Java library loader emit helpful error messages on missing dependencies. (#6926)
  • JVM packages now use the Python tracker in XGBoost instead of the one in dmlc-core. The
    tracker in XGBoost is shared between the JVM packages and Python Dask and enjoys better
    maintenance. (#7132)
  • Fix "key not found: train" error (#6842)
  • Fix model loading from stream (#7067)

General document improvements

  • Overhaul the installation documents. (#6877)
  • A few demos are added: AFT with Dask (#6853), callbacks with Dask (#6995), inference
    in C (#7151), and process_type (#7135).
  • Fix the PDF format of the documentation. (#7143)
  • Clarify the behavior of use_rmm. (#6808)
  • Clarify prediction function. (#6813)
  • Improve tutor...

Release candidate of version 1.5.0

26 Sep 05:35
1debabb
Pre-release

Roadmap: #6846
RC: #7260
Release note: #7271

1.4.2 Patch Release

13 May 11:35
522b897

This is a patch release for the Python package with the following fixes:

  • Handle the latest version of cupy.ndarray in inplace_predict. #6933
  • Ensure the output array from predict_leaf has shape (n_samples,) when there is only one
    tree. 1.4.0 outputs (n_samples, 1). #6889
  • Fix empty dataset handling with multi-class AUC. #6947
  • Handle object dtype from pandas in inplace_predict. #6927

You can verify the downloaded source code xgboost.tar.gz by running this in your Unix shell:

echo "3ffd4a90cd03efde596e51cadf7f344c8b6c91aefd06cc92db349cd47056c05a *xgboost.tar.gz" | shasum -a 256 --check

1.4.1 Patch Release

20 Apr 00:37
a347ef7

This is a bug fix release.

  • Fix GPU implementation of AUC on some large datasets. (#6866)

You can verify the downloaded source code xgboost.tar.gz by running this in your Unix shell:

echo "f3a37e5ddac10786e46423db874b29af413eed49fd9baed85035bbfee6fc6635 *xgboost.tar.gz" | shasum -a 256 --check