
Releases: dmlc/xgboost

Release 1.4.0 stable

11 Apr 00:43

Introduction of pre-built binary package for R, with GPU support

Starting with release 1.4.0, users now have the option of installing {xgboost} without
having to build it from source. This is particularly advantageous for users who want
to take advantage of the GPU algorithm (gpu_hist), as previously they had to build
{xgboost} from source using CMake and NVCC. Now installing {xgboost} with GPU
support is as easy as: R CMD INSTALL ./xgboost_r_gpu_linux.tar.gz. (#6827)

See the instructions at https://xgboost.readthedocs.io/en/latest/build.html

Improvements on prediction functions

XGBoost has many prediction types, including SHAP value computation and inplace prediction.
In 1.4 we overhauled the underlying prediction functions for the C API and the Python API
with a unified interface. (#6777, #6693, #6653, #6662, #6648, #6668, #6804)

  • Starting with 1.4, sklearn interface prediction will use inplace predict by default when
    input data is supported.
  • Users can use inplace predict with dart booster and enable GPU acceleration just
    like gbtree.
  • Also, all prediction functions for tree models are now thread-safe. Inplace predict is
    improved with base_margin support.
  • A new set of C prediction functions is exposed in the public interface.
  • A user-visible change is a newly added parameter called strict_shape. See
    https://xgboost.readthedocs.io/en/latest/prediction.html for more details; a brief
    sketch follows this list.
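
For illustration, here is a minimal sketch of the new prediction behaviour. The dataset and
parameter values are illustrative only, and scikit-learn is assumed to be installed:

```python
import xgboost as xgb
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

clf = xgb.XGBClassifier(n_estimators=10, tree_method="hist")
clf.fit(X, y)

# The sklearn wrapper now dispatches to inplace prediction for supported inputs
# such as np.ndarray, so no intermediate DMatrix is created.
proba = clf.predict_proba(X)

# strict_shape requests a fully specified output shape instead of a squeezed array.
booster = clf.get_booster()
strict = booster.predict(xgb.DMatrix(X), strict_shape=True)
print(proba.shape, strict.shape)
```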

Improvements on the Dask interface

  • Starting with 1.4, the Dask interface is considered feature-complete, which means
    all of the models found in the single-node Python interface are now supported in Dask,
    including but not limited to ranking and random forests. Also, the prediction function
    is significantly faster and supports SHAP value computation.

    • Most of the parameters found in the single-node sklearn interface are supported by
      the Dask interface. (#6471, #6591)
    • Learning to rank is implemented. On the Dask interface, we use the newly added
      support for query IDs to enable group structure. (#6576)
    • The Dask interface has Python type hints support. (#6519)
    • All models can be safely pickled. (#6651)
    • Random forest estimators are now supported. (#6602)
    • SHAP value computation is now supported. (#6575, #6645, #6614)
    • Evaluation result is printed on the scheduler process. (#6609)
    • DaskDMatrix (and the device quantile DMatrix) now accept all meta-information. (#6601)
  • Prediction optimization. We enhanced and sped up the prediction function for the
    Dask interface (a minimal usage sketch follows this list). See the latest Dask tutorial
    page in our documentation for an overview of how you can optimize it even
    further. (#6650, #6645, #6648, #6668)

  • Bug fixes

    • If you are using the latest Dask and distributed where distributed.MultiLock is
      present, XGBoost supports training multiple models on the same cluster in
      parallel. (#6743)
    • Fixed a bug where XGBoost might use a different client object internally when
      dask.client is used to launch an asynchronous task. (#6722)
  • Other improvements to documentation, blogs, tutorials, and demos. (#6389, #6366, #6687,
    #6699, #6532, #6501)
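
For reference, a minimal sketch of training and predicting through the Dask interface on a
local cluster. The cluster setup, data sizes and parameters are illustrative only:

```python
import dask.array as da
import xgboost as xgb
from dask.distributed import Client, LocalCluster

if __name__ == "__main__":
    with LocalCluster(n_workers=2, threads_per_worker=1) as cluster, Client(cluster) as client:
        X = da.random.random((10_000, 20), chunks=(1_000, 20))
        y = da.random.randint(0, 2, size=10_000, chunks=1_000)

        dtrain = xgb.dask.DaskDMatrix(client, X, y)
        output = xgb.dask.train(
            client,
            {"objective": "binary:logistic", "tree_method": "hist"},
            dtrain,
            num_boost_round=20,
        )
        # Prediction accepts the Dask collection directly and runs on the workers.
        preds = xgb.dask.predict(client, output, X)
        print(preds.compute()[:5])
```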

Python package

With the changes from Dask and the general improvements to prediction, we have made some
enhancements to the general Python interface and to I/O for booster information. Starting
from 1.4, booster feature names and types can be saved into the JSON model. Also, some
model attributes like best_iteration and best_score are restored upon model load, as
sketched below. On the sklearn interface, some attributes are now implemented as Python
object properties with better documentation.
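
As a rough sketch of the new I/O behaviour (the data, file name and early-stopping settings
below are illustrative only):

```python
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X, y = rng.random((500, 4)), rng.integers(0, 2, 500)

clf = xgb.XGBClassifier(n_estimators=50)
clf.fit(X[:400], y[:400], eval_set=[(X[400:], y[400:])],
        early_stopping_rounds=5, verbose=False)

# Booster information such as feature types (when available) and attributes are
# stored in the JSON file.
clf.save_model("clf.json")

loaded = xgb.XGBClassifier()
loaded.load_model("clf.json")
# best_iteration / best_score are restored without calling Booster.attr() by hand.
print(loaded.best_iteration, loaded.best_score)
```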

  • Breaking change: All data parameters in prediction functions are renamed to X
    for better compliance with the sklearn estimator interface guidelines.

  • Breaking change: XGBoost used to generate some pseudo feature names with DMatrix
    when inputs like np.ndarray don't have column names. The procedure is removed to
    avoid conflict with other inputs. (#6605)

  • Early stopping with training continuation is now supported. (#6506)

  • Optional imports for Dask and cuDF are now lazy. (#6522)

  • As mentioned in the prediction improvement summary, the sklearn interface uses inplace
    prediction whenever possible. (#6718)

  • Booster information like feature names and feature types is now saved into the JSON
    model file. (#6605)

  • All DMatrix interfaces including DeviceQuantileDMatrix and counterparts in Dask
    interface (as mentioned in the Dask changes summary) now accept all the meta-information
    like group and qid in their constructor for better consistency. (#6601)

  • Booster attributes are restored upon model load so users don't have to call attr
    manually. (#6593)

  • On sklearn interface, all models accept base_margin for evaluation datasets. (#6591)

  • Improvements over the setup script including smaller sdist size and faster installation
    if the C++ library is already built (#6611, #6694, #6565).

  • Bug fixes for Python package:

    • Don't validate feature when number of rows is 0. (#6472)
    • Move metric configuration into booster. (#6504)
    • Calling XGBModel.fit() should clear the Booster by default (#6562)
    • Support _estimator_type. (#6582)
    • [dask, sklearn] Fix predict proba. (#6566, #6817)
    • Restore unknown data support. (#6595)
    • Fix learning rate scheduler with cv. (#6720)
    • Fixes small typo in sklearn documentation (#6717)
    • [python-package] Fix class Booster: feature_types = None (#6705)
    • Fix divide by 0 in feature importance when no split is found. (#6676)

JVM package

  • [jvm-packages] fix early stopping doesn't work even without custom_eval setting (#6738)
  • fix potential TaskFailedListener's callback won't be called (#6612)
  • [jvm] Add ability to load booster direct from byte array (#6655)
  • [jvm-packages] JVM library loader extensions (#6630)

R package

  • R documentation: Make construction of DMatrix consistent.
  • Fix R documentation for xgb.train. (#6764)

ROC-AUC

We re-implemented the ROC-AUC metric in XGBoost. The new implementation supports
multi-class classification and has better support for learning to rank tasks that are not
binary. Also, it has a better-defined average in distributed environments, with additional
handling for invalid datasets. (#6749, #6747, #6797)
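
A small sketch of the re-implemented metric on a multi-class task; the synthetic data and
parameters are illustrative only:

```python
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X, y = rng.random((300, 5)), rng.integers(0, 3, 300)
dtrain = xgb.DMatrix(X, label=y)

params = {
    "objective": "multi:softprob",
    "num_class": 3,
    "eval_metric": "auc",  # multi-class AUC is now supported
}
xgb.train(params, dtrain, num_boost_round=10, evals=[(dtrain, "train")])
```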

Global configuration

Starting from 1.4, XGBoost's Python, R and C interfaces support a new global configuration
model where users can specify some global parameters. Currently, the supported parameters
are verbosity and use_rmm. The latter is experimental; see the rmm plugin demo and
related README file for details. (#6414, #6656)
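
A minimal sketch of the Python side of the global configuration; verbosity=2 is just an
example value:

```python
import xgboost as xgb

xgb.set_config(verbosity=2)   # set a global parameter
print(xgb.get_config())       # inspect the current global configuration

# Scoped override that is restored when the block exits.
with xgb.config_context(verbosity=0):
    print(xgb.get_config()["verbosity"])  # work in this block sees the temporary value
```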

Other new features

  • Better handling for input data types that support __array_interface__. For some
    data types, including GPU inputs and scipy.sparse.csr_matrix, XGBoost employs
    __array_interface__ to process the underlying data. Starting from 1.4, XGBoost
    can accept arbitrary array strides (which means column-major layouts are supported)
    without making data copies, which can significantly reduce memory consumption.
    Version 3 of __cuda_array_interface__ is now supported as well; a brief sketch
    follows this list. (#6776, #6765, #6459, #6675)
  • Improved parameter validation: feeding XGBoost parameters that contain whitespace
    will now trigger an error. (#6769)
  • For Python and R packages, file paths containing the home indicator ~ are supported.
  • As mentioned in the Python changes summary, the JSON model can now save feature
    information of the trained booster. The JSON schema is updated accordingly. (#6605)
  • Development of categorical data support continues, with newly added support for
    weighted data and the dart booster. (#6508, #6693)
  • As mentioned in Dask change summary, ranking now supports the qid parameter for
    query groups. (#6576)
  • DMatrix.slice can now consume a numpy array. (#6368)
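
A rough sketch of passing a column-major (Fortran-ordered) array; whether a copy is avoided
depends on the dtype and code path, so treat this as illustrative only:

```python
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X = np.asfortranarray(rng.random((1_000, 10), dtype=np.float32))  # column-major strides
y = rng.integers(0, 2, 1_000)

# Arbitrary strides are handled through __array_interface__.
dtrain = xgb.DMatrix(X, label=y)
booster = xgb.train({"objective": "binary:logistic"}, dtrain, num_boost_round=5)

# Inplace predict also accepts the strided array directly.
preds = booster.inplace_predict(X)
```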

Other breaking changes

  • Aside from the feature name generation, there are two breaking changes:
    • Drop saving binary format for memory snapshot. (#6513, #6640)
    • Change default evaluation metric for binary:logitraw objective to logloss (#6647)

CPU Optimization

  • Aside from the general changes to the predict function, some optimizations are applied
    to the CPU implementation. (#6683, #6550, #6696, #6700)
  • Also, performance of sampling initialization in hist is improved. (#6410)

Notable fixes in the core library

These fixes do not reside in particular language bindings:

  • Fixes for gamma regression. This includes checking for invalid input values, fixes for
    gamma deviance metric, and better floating point guard for gamma negative log-likelihood
    metric. (#6778, #6537, #6761)
  • Random forest with gpu_hist might generate low accuracy in previous versions. (#6755)
  • Fix a bug in GPU sketching when data size exceeds limit of 32-bit integer. (#6826)
  • Memory consumption fix for row-major adapters (#6779)
  • Don't estimate sketch batch size when rmm is used. (#6807) (#6830)
  • Fix in-place predict with missing value. (#6787)
  • Re-introduce double buffer in UpdatePosition, to fix perf regression in gpu_hist (#6757)
  • Pass correct split_type to GPU predictor (#6491)
  • Fix DMatrix feature names/types IO. (#6507)
  • Use view for SparsePage exclusively to avoid some data access races. (#6590)
  • Check for invalid data. (#6742)
  • Fix relocatable include in CMakeList (#6734) (#6737)
  • Fix DMatrix slice with feature types. (#6689)

Other deprecation notices:

  • This release will be the last release to support CUDA 10.0. (#6642)

  • Starting in the next release, the Python package will require Pip 19.3+ due to the use
    of manylinux2014 tag. Also, CentOS 6, RHEL 6 and other old distributions will not be
    supported.

Known issue:

MacOS build of the JVM packages doesn't support multi-threading out of the box. To enable
mul...


1.3.3 Patch Release

20 Jan 13:52
000292c
  • Fix regression on best_ntree_limit. (#6616)

1.3.2 Patch Release

13 Jan 14:21
  • Fix compatibility with newer scikit-learn. (#6555)
  • Fix wrong best_ntree_limit in multi-class. (#6569)
  • Ensure that Rabit can be compiled on Solaris (#6578)
  • Fix best_ntree_limit for linear and dart. (#6579)
  • Remove duplicated DMatrix creation in scikit-learn interface. (#6592)
  • Fix evals_result in XGBRanker. (#6594)

1.3.1 Patch Release

22 Dec 13:38
a78d0d4
  • Enable loading model from <1.0.0 trained with objective='binary:logitraw' (#6517)
  • Fix handling of print period in EvaluationMonitor (#6499)
  • Fix a bug in metric configuration after loading model. (#6504)
  • Fix save_best early stopping option (#6523)
  • Remove cupy.array_equal, since it's not compatible with cuPy 7.8 (#6528)

You can verify the downloaded source code xgboost.tar.gz by running this in your Unix shell:

echo "fd51e844dd0291fd9e7129407be85aaeeda2309381a6e3fc104938b27fb09279 *xgboost.tar.gz" | shasum -a 256 --check

Release 1.3.0 stable

09 Dec 00:29

XGBoost4J-Spark: Exceptions should cancel jobs gracefully instead of killing SparkContext (#6019).

  • By default, exceptions in XGBoost4J-Spark cause the whole SparkContext to shut down, necessitating the restart of the Spark cluster. This behavior is often a major inconvenience.
  • Starting from 1.3.0 release, XGBoost adds a new parameter killSparkContextOnWorkerFailure to optionally prevent killing SparkContext. If this parameter is set, exceptions will gracefully cancel training jobs instead of killing SparkContext.

GPUTreeSHAP: GPU acceleration of the TreeSHAP algorithm (#6038, #6064, #6087, #6099, #6163, #6281, #6332)

  • SHAP (SHapley Additive exPlanations) is a game theoretic approach to explain predictions of machine learning models. It computes feature importance scores for individual examples, establishing how each feature influences a particular prediction. TreeSHAP is an optimized SHAP algorithm specifically designed for decision tree ensembles.
  • Starting with 1.3.0 release, it is now possible to leverage CUDA-capable GPUs to accelerate the TreeSHAP algorithm. Check out the demo notebook.
  • The CUDA implementation of the TreeSHAP algorithm is hosted at rapidsai/GPUTreeSHAP. XGBoost imports it as a Git submodule.

New style Python callback API (#6199, #6270, #6320, #6348, #6376, #6399, #6441)

  • The XGBoost Python package now offers a re-designed callback API. The new callback API lets you design various extensions of training in idiomatic Python. In addition, the new callback API allows you to use early stopping with the native Dask API (xgboost.dask). Check out the tutorial and the demo; a short sketch follows below.
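
A short sketch of the redesigned callback API; the logging callback below is a hypothetical
example written for this note, not part of XGBoost itself:

```python
import numpy as np
import xgboost as xgb

class PrintEvalLog(xgb.callback.TrainingCallback):
    def after_iteration(self, model, epoch, evals_log):
        # evals_log is a nested dict: {data_name: {metric_name: [values, ...]}}
        for data, metrics in evals_log.items():
            for metric, values in metrics.items():
                print(f"[{epoch}] {data}-{metric}: {values[-1]:.4f}")
        return False  # returning True would stop training

rng = np.random.default_rng(0)
X, y = rng.random((200, 5)), rng.integers(0, 2, 200)
dtrain = xgb.DMatrix(X, label=y)

xgb.train(
    {"objective": "binary:logistic"},
    dtrain,
    num_boost_round=5,
    evals=[(dtrain, "train")],
    callbacks=[PrintEvalLog(), xgb.callback.EarlyStopping(rounds=3)],
)
```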

Enable the use of DeviceQuantileDMatrix / DaskDeviceQuantileDMatrix with large data (#6201, #6229, #6234).

  • DeviceQuantileDMatrix can achieve memory saving by avoiding extra copies of the training data, and the saving is bigger for large data. Unfortunately, large data with more than 2^31 elements was triggering integer overflow bugs in CUB and Thrust. Tracking issue: #6228.
  • This release contains a series of work-arounds to allow the use of DeviceQuantileDMatrix with large data:
    • Loop over copy_if (#6201)
    • Loop over thrust::reduce (#6229)
    • Implement the inclusive scan algorithm in-house, to handle large offsets (#6234)

Support slicing of tree models (#6302)

  • Accessing the best iteration of a model after the application of early stopping used to be error-prone, as users needed to manually pass the ntree_limit argument to the predict() function.
  • Now we provide a simple interface to slice tree models by specifying a range of boosting rounds. The tree ensemble can be split into multiple sub-ensembles via the slicing interface. Check out an example; a minimal sketch also follows this list.
  • In addition, the early stopping callback now supports save_best option. When enabled, XGBoost will save (persist) the model at the best boosting round and discard the trees that were fit subsequent to the best round.
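
A minimal sketch of the slicing interface; the data and split points are illustrative only:

```python
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X, y = rng.random((300, 5)), rng.integers(0, 2, 300)
dtrain = xgb.DMatrix(X, label=y)

booster = xgb.train({"objective": "binary:logistic"}, dtrain, num_boost_round=30)

# Keep only boosting rounds [10, 20); the result is itself a Booster.
sub_ensemble = booster[10:20]
preds = sub_ensemble.predict(dtrain)
```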

Weighted subsampling of features (columns) (#5962)

  • It is now possible to sample features (columns) via weighted subsampling, in which features with higher weights are more likely to be selected in the sample. Weighted subsampling allows you to encode domain knowledge by emphasizing a particular set of features in the choice of tree splits. In addition, you can prevent particular features from being used in any splits, by assigning them zero weights.
  • Check out the demo; a rough sketch follows this list.
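
A rough sketch of weighted column subsampling, assuming the feature_weights field described
above; the weights and data are illustrative, and colsample_bynode < 1 is needed for the
weights to take effect:

```python
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X, y = rng.random((500, 8)), rng.integers(0, 2, 500)

fw = np.ones(8)
fw[0] = 5.0  # strongly prefer feature 0 when sampling split candidates
fw[7] = 0.0  # never use feature 7 in any split

dtrain = xgb.DMatrix(X, label=y)
dtrain.set_info(feature_weights=fw)

booster = xgb.train(
    {"objective": "binary:logistic", "colsample_bynode": 0.5},
    dtrain,
    num_boost_round=20,
)
```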

Improved integration with Dask

  • Support reverse-proxy environment such as Google Kubernetes Engine (#6343, #6475)
  • An XGBoost training job will no longer use all available workers. Instead, it will only use the workers that contain input data (#6343).
  • The new callback API works well with the Dask training API.
  • The predict() and fit() function of DaskXGBClassifier and DaskXGBRegressor now accept a base margin (#6155).
  • Support more meta data in the Dask API (#6130, #6132, #6333).
  • Allow passing extra keyword arguments as kwargs in predict() (#6117)
  • Fix typo in dask interface: sample_weights -> sample_weight (#6240)
  • Allow empty data matrix in AFT survival, as Dask may produce empty partitions (#6379)
  • Speed up prediction by overlapping prediction jobs in all workers (#6412)

Experimental support for direct splits with categorical features (#6028, #6128, #6137, #6140, #6164, #6165, #6166, #6179, #6194, #6219)

  • Currently, XGBoost requires users to one-hot-encode categorical variables. This has adverse performance implications, as the creation of many dummy variables results in higher memory consumption and may require fitting deeper trees to achieve equivalent model accuracy.
  • The 1.3.0 release of XGBoost contains experimental support for direct handling of categorical variables in test nodes. Each test node will have a condition of the form feature_value ∈ match_set, where the match_set on the right-hand side contains one or more matching categories. The matching categories in match_set represent the condition for traversing to the right child node. Currently, XGBoost will only generate categorical splits with a single matching category ("one-vs-rest split"). In a future release, we plan to remove this restriction and produce splits with multiple matching categories in match_set. A brief usage sketch follows this list.
  • The categorical split requires the use of JSON model serialization. The legacy binary serialization method cannot be used to save (persist) models with categorical splits.
  • Note. This feature is currently highly experimental. Use it at your own risk. See the detailed list of limitations at #5949.
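
A heavily hedged sketch of how the experimental support can be exercised. It assumes a cuDF
DataFrame with a category dtype, the enable_categorical flag on DMatrix and the gpu_hist
tree method; the exact requirements may differ, see the limitations in #5949:

```python
import cudf
import xgboost as xgb

df = cudf.DataFrame(
    {
        "color": cudf.Series(["red", "blue", "red", "green"], dtype="category"),
        "size": [1.0, 2.0, 3.0, 4.0],
    }
)
y = cudf.Series([0, 1, 0, 1])

# enable_categorical marks categorical columns for direct (non-one-hot) splits.
dtrain = xgb.DMatrix(df, label=y, enable_categorical=True)
booster = xgb.train({"tree_method": "gpu_hist"}, dtrain, num_boost_round=5)

# Models with categorical splits must be saved in JSON, not the legacy binary format.
booster.save_model("categorical_model.json")
```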

Experimental plugin for RAPIDS Memory Manager (#5873, #6131, #6146, #6150, #6182)

  • RAPIDS Memory Manager library (rapidsai/rmm) provides a collection of efficient memory allocators for NVIDIA GPUs. It is now possible to use XGBoost with memory allocators provided by RMM, by enabling the RMM integration plugin. With this plugin, XGBoost is now able to share a common GPU memory pool with other applications using RMM, such as the RAPIDS data science packages.
  • See the demo for a working example, as well as directions for building XGBoost with the RMM plugin.
  • The plugin will soon be considered non-experimental, once #6297 is resolved.

Experimental plugin for oneAPI programming model (#5825)

  • oneAPI is a programming interface developed by Intel aimed at providing one programming model for many types of hardware such as CPUs, GPUs, FPGAs and other hardware accelerators.
  • XGBoost now includes an experimental plugin for using oneAPI for the predictor and objective functions. The plugin is hosted in the directory plugin/updater_oneapi.
  • Roadmap: #5442

Pickling the XGBoost model will now trigger JSON serialization (#6027)

  • The pickle will now contain the JSON string representation of the XGBoost model, as well as related configuration.

Performance improvements

  • Various performance improvement on multi-core CPUs
    • Optimize DMatrix build time by up to 3.7x. (#5877)
    • CPU predict performance improvement, by up to 3.6x. (#6127)
    • Optimize CPU sketch allreduce for sparse data (#6009)
    • Thread local memory allocation for BuildHist, leading to speedup up to 1.7x. (#6358)
    • Disable hyperthreading for DMatrix creation (#6386). This speeds up DMatrix creation by up to 2x.
    • Simple fix for static schedule in predict (#6357)
  • Unify thread configuration, to make it easy to utilize all CPU cores (#6186)
  • [jvm-packages] Clean the way deterministic partitioning is computed (#6033)
  • Speed up JSON serialization by implementing an intrusive pointer class (#6129). It leads to 1.5x-2x performance boost.

API additions

  • [R] Add SHAP summary plot using ggplot2 (#5882)
  • Modin DataFrame can now be used as input (#6055)
  • [jvm-packages] Add getNumFeature method (#6075)
  • Add MAPE metric (#6119)
  • Implement GPU predict leaf. (#6187)
  • Enable cuDF/cuPy inputs in XGBClassifier (#6269)
  • Document tree method for feature weights. (#6312)
  • Add fail_on_invalid_gpu_id parameter, which will cause XGBoost to terminate upon seeing an invalid value of gpu_id (#6342)

Breaking: the default evaluation metric for classification is changed to logloss / mlogloss (#6183)

  • The default metric used to be accuracy, and it is not statistically consistent to perform early stopping with the accuracy metric when we are really optimizing the log loss for the binary:logistic objective.
  • For statistical consistency, the default metric for classification has been changed to logloss. Users may choose to preserve the old behavior by explicitly specifying eval_metric.

Breaking: skmaker is now removed (#5971)

  • The skmaker updater has been neither documented nor tested.

Breaking: the JSON model format no longer stores the leaf child count (#6094).

  • The leaf child count field has been deprecated and is not used anywhere in the XGBoost codebase.

Breaking: XGBoost now requires MacOS 10.14 (Mojave) and later.

  • Homebrew has dropped support for MacOS 10.13 (High Sierra), so we are not able to install the OpenMP runtime (libomp) from Homebrew on MacOS 10.13. Please use MacOS 10.14 (Mojave) or later.

Deprecation notices

  • The use of LabelEncoder in XGBClassifier is now deprecated and will be re...

Release Candidate of version 1.3.0

23 Nov 16:19
Pre-release

#6422

R package: xgboost_1.3.0.1.tar.gz

1.2.1 Patch Release

14 Oct 01:14
bcb15a9

This patch release applies the following patches to 1.2.0 release:

  • Hide C++ symbols from dmlc-core (#6188)

Release 1.2.0 stable

23 Aug 02:51

XGBoost4J-Spark now supports the GPU algorithm (#5171)

  • Now XGBoost4J-Spark is able to leverage NVIDIA GPU hardware to speed up training.
  • There is on-going work for accelerating the rest of the data pipeline with NVIDIA GPUs (#5950, #5972).

XGBoost now supports CUDA 11 (#5808)

  • It is now possible to build XGBoost with CUDA 11. Note that we do not yet distribute pre-built binaries built with CUDA 11; all current distributions use CUDA 10.0.

Better guidance for persisting XGBoost models in an R environment (#5940, #5964)

  • Users are strongly encouraged to use xgb.save() and xgb.save.raw() instead of saveRDS(). This is so that the persisted models can be accessed with future releases of XGBoost.
  • The previous release (1.1.0) had problems loading models that were saved with saveRDS(). This release adds a compatibility layer to restore access to the old RDS files. Note that this is meant to be a temporary measure; users are advised to stop using saveRDS() and migrate to xgb.save() and xgb.save.raw().

New objectives and metrics

  • The pseudo-Huber loss reg:pseudohubererror is added (#5647). The corresponding metric is mphe. Right now, the slope is hard-coded to 1. A short sketch follows this list.
  • The Accelerated Failure Time objective for survival analysis (survival:aft) is now accelerated on GPUs (#5714, #5716). The survival metrics aft-nloglik and interval-regression-accuracy are also accelerated on GPUs.
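
A small sketch of the new objective and metric; the synthetic data and parameters are
illustrative only:

```python
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.random((400, 6))
y = X @ rng.random(6) + rng.normal(scale=0.1, size=400)

dtrain = xgb.DMatrix(X, label=y)
params = {"objective": "reg:pseudohubererror", "eval_metric": "mphe"}
xgb.train(params, dtrain, num_boost_round=20, evals=[(dtrain, "train")])
```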

Improved integration with scikit-learn

  • Added n_features_in_ attribute to the scikit-learn interface to store the number of features used (#5780). This is useful for integrating with some scikit-learn features such as StackingClassifier. See this link for more details.
  • XGBoostError now inherits ValueError, which conforms to scikit-learn's exception requirements (#5696).

Improved integration with Dask

  • The XGBoost Dask API now exposes an asynchronous interface (#5862). See the document for details.
  • Zero-copy ingestion of GPU arrays via DaskDeviceQuantileDMatrix (#5623, #5799, #5800, #5803, #5837, #5874, #5901): Previously, the Dask interface had to make 2 data copies: one for concatenating the Dask partition/block into a single block and another for internal representation. To save memory, we introduce DaskDeviceQuantileDMatrix. As long as Dask partitions are resident in the GPU memory, DaskDeviceQuantileDMatrix is able to ingest them directly without making copies. This matrix type wraps DeviceQuantileDMatrix.
  • The prediction function now returns GPU Series type if the input is from Dask-cuDF (#5710). This is to preserve the input data type.

Robust handling of external data types (#5689, #5893)

  • As we support more and more external data types, the handling logic has proliferated all over the code base and become hard to keep track of. It also became unclear how missing values and threads are handled. We refactored the Python package code to collect all data handling logic in a central location, and now we have an explicit list of all supported data types.

Improvements in GPU-side data matrix (DeviceQuantileDMatrix)

  • The GPU-side data matrix now implements its own quantile sketching logic, so that data don't have to be transported back to the main memory (#5700, #5747, #5760, #5846, #5870, #5898). The GK sketching algorithm is also now better documented.
    • Now we can load extremely sparse datasets like URL, although performance is still sub-optimal.
  • The GPU-side data matrix now exposes an iterative interface (#5783), so that users are able to construct a matrix from a data iterator. See the Python demo.

New language binding: Swift (#5728)

Robust model serialization with JSON (#5772, #5804, #5831, #5857, #5934)

  • We continue efforts from the 1.0.0 release to adopt JSON as the format to save and load models robustly.
  • JSON model IO is significantly faster and produces smaller model files.
  • Round-trip reproducibility is guaranteed, via the introduction of an efficient float-to-string conversion algorithm known as the Ryū algorithm. The conversion is locale-independent, producing consistent numeric representation regardless of the locale setting of the user's machine.
  • We fixed an issue in loading large JSON files to memory.
  • It is now possible to load a JSON file from a remote source such as S3.

Performance improvements

  • CPU hist tree method optimization
    • Skip missing lookup in hist row partitioning if data is dense. (#5644)
    • Specialize training procedures for CPU hist tree method on distributed environment. (#5557)
    • Add single point histogram for CPU hist. Previously the gradient histogram for CPU hist was hard-coded to 64-bit; now users can specify the parameter single_precision_histogram to use a 32-bit histogram instead for faster training. (#5624, #5811)
  • GPU hist tree method optimization
    • Removed some unnecessary synchronizations and better memory allocation pattern. (#5707)
    • Optimize GPU Hist for wide datasets. Previously, for wide datasets the atomic operations were performed in global memory; now they can run in shared memory for faster histogram building. But there is a known small regression on GeForce cards with dense data. (#5795, #5926, #5948, #5631)

API additions

  • Support passing fmap to importance plot (#5719). Now importance plot can show actual names of features instead of default ones.
  • Support 64bit seed. (#5643)
  • A new C API XGBoosterGetNumFeature is added for getting number of features in booster (#5856).
  • Feature names and feature types are now stored in C++ core and saved in binary DMatrix (#5858).

Breaking: The predict() method of DaskXGBClassifier now produces class predictions (#5986). Use predict_proba() to obtain probability predictions.

  • Previously, DaskXGBClassifier.predict() produced probability predictions. This is inconsistent with the behavior of other scikit-learn classifiers, where predict() returns class predictions. We make a breaking change in the 1.2.0 release so that DaskXGBClassifier.predict() now correctly produces class predictions and thus behaves like other scikit-learn classifiers. Furthermore, we introduce the predict_proba() method for obtaining probability predictions, again to be in line with other scikit-learn classifiers.

Breaking: Custom evaluation metric now receives raw prediction (#5954)

  • Previously, the custom evaluation metric received a transformed prediction result when used with a classifier. Now the custom metric will receive a raw (untransformed) prediction and will need to transform the prediction itself. See demo/guide-python/custom_softmax.py for an example; a short sketch also follows this list.
  • This change is to make the custom metric behave consistently with the custom objective, which already receives raw prediction (#5564).
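
A brief sketch of the new contract, shown for binary:logistic: the custom metric receives
raw margins and applies the transformation itself. The metric below is written for this
note and is not part of XGBoost:

```python
import numpy as np
import xgboost as xgb

def logloss_from_margin(raw_preds, dtrain):
    # raw_preds are untransformed margins; apply the sigmoid ourselves.
    y = dtrain.get_label()
    prob = 1.0 / (1.0 + np.exp(-raw_preds))
    eps = 1e-12
    loss = -np.mean(y * np.log(prob + eps) + (1 - y) * np.log(1 - prob + eps))
    return "logloss_from_margin", loss

rng = np.random.default_rng(0)
X, y = rng.random((300, 5)), rng.integers(0, 2, 300)
dtrain = xgb.DMatrix(X, label=y)

xgb.train(
    {"objective": "binary:logistic", "disable_default_eval_metric": 1},
    dtrain,
    num_boost_round=10,
    evals=[(dtrain, "train")],
    feval=logloss_from_margin,
)
```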

Breaking: XGBoost4J-Spark now requires Spark 3.0 and Scala 2.12 (#5836, #5890)

  • Starting with version 3.0, Spark can manage GPU resources and allocate them among executors.
  • Spark 3.0 dropped support for Scala 2.11 and now only supports Scala 2.12. Thus, XGBoost4J-Spark also only supports Scala 2.12.

Breaking: XGBoost Python package now requires Python 3.6 and later (#5715)

  • Python 3.6 has many useful features such as f-strings.

Breaking: XGBoost now adopts the C++14 standard (#5664)

  • Make sure to use a sufficiently modern C++ compiler that supports C++14, such as Visual Studio 2017, GCC 5.0+, and Clang 3.4+.

Bug-fixes

  • Fix a data race in the prediction function (#5853). As a byproduct, the prediction function now uses a thread-local data store and became thread-safe.
  • Restore capability to run prediction when the test input has fewer features than the training data (#5955). This capability is necessary to support predicting with LIBSVM inputs. The previous release (1.1) had broken this capability, so we restore it in this version with better tests.
  • Fix OpenMP build with CMake for R package, to support CMake 3.13 (#5895).
  • Fix Windows 2016 build (#5902, #5918).
  • Fix edge cases in scikit-learn interface with Pandas input by disabling feature validation. (#5953)
  • [R] Enable weighted learning to rank (#5945)
  • [R] Fix early stopping with custom objective (#5923)
  • Fix NDK Build (#5886)
  • Add missing explicit template specializations for greater portability (#5921)
  • Handle empty rows in data iterators correctly (#5929). This bug affects file loader and JVM data frames.
  • Fix IsDense (#5702)
  • [jvm-packages] Fix wrong method name setAllowZeroForMissingValue (#5740)
  • Fix shape inference for Dask predict (#5989)

Usability Improvements, Documentation

  • [Doc] Document that CUDA 10.0 is required (#5872)
  • Refactored the command line interface (CLI). Now the CLI is able to handle user errors and output basic documentation. (#5574)
  • Better error handling in Python: use raise from syntax to preserve full stacktrace (#5787).
  • The JSON model dump now has a formal schema (#5660, #5818). The benefit is to prevent dump_model() function from breaking. See this document to understand the difference between saving and dumping models.
  • Add a reference to the GPU external memory paper (#5684)
  • Document more objective parameters in the R package (#5682)
  • Document the existence of pre-built binary wheels for MacOS (#5711)
  • Remove `m...

Release Candidate 2 of version 1.2.0

12 Aug 21:39
Pre-release

#5970

R package: xgboost_1.2.0.1.tar.gz (Manual: xgboost_1.2.0.1-manual.pdf)

Release Candidate of version 1.2.0

02 Aug 11:18
Pre-release

#5970

R package: xgboost_1.2.0.1.tar.gz (Manual: xgboost_1.2.0.1-manual.pdf)