chore(examples): Use QuantileDMatrix for histogram tree method in XGBoost example #3375
sunalawa wants to merge 2 commits into kubeflow:master
Conversation
…oost example
Replace DMatrix with QuantileDMatrix in the distributed XGBoost training example. This reduces memory usage and aligns with XGBoost best practices for distributed workloads.
Fixes kubeflow#3300
[APPROVALNOTIFIER] This PR is NOT APPROVED.
Pull request overview
Updates the distributed XGBoost training example notebook to use QuantileDMatrix instead of DMatrix, aiming to reduce memory usage and align the example with histogram-based training best practices.
Changes:
- Replace `xgb.DMatrix` with `xgb.QuantileDMatrix` for training and validation data construction.
- Update the notebook note to reflect `QuantileDMatrix` construction requirements within the communicator context.
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Signed-off-by: Suyash Nalawade <81951809+sunalawa@users.noreply.github.com>
What this PR does / why we need it:
Updates the distributed XGBoost example to use QuantileDMatrix instead of DMatrix.
This reduces memory usage and follows XGBoost best practices for distributed training workloads.
Which issue(s) this PR fixes
Fixes #3300
Checklist: