dfs.disk.balancer.block.tolerance.percent is documented and used on the DataNode when executing a plan, but the plan command did not set the tolerance percentage on plan steps, so the value sent to the DataNode was effectively the step default (0) unless the DataNode fell back to its own configuration. This change reads the config in the plan command and sets tolerancePercent on each step, so the plan propagated to the DataNode uses the same value.
Change
PlanCommand.java: In setPlanParams(), read dfs.disk.balancer.block.tolerance.percent from the configuration (default 10) and call step.setTolerancePercent(tolerancePercent) for each step in each plan, so generated plans include the tolerance and the DataNode receives it (it already uses step.getTolerancePercent() when building work items).
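The change above can be sketched in a self-contained way. Note that the classes below (Step, NodePlan, and a Properties-based config) are simplified stand-ins, not Hadoop's actual Configuration, NodePlan, or MoveStep types; only the key name, the default of 10, and the per-step loop mirror the real fix.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;

// Stand-in sketch of the PlanCommand fix: resolve the tolerance key
// (default 10) and stamp it onto every step of every generated plan.
public class TolerancePlanSketch {
  static final String TOLERANCE_KEY = "dfs.disk.balancer.block.tolerance.percent";
  static final int TOLERANCE_DEFAULT = 10;

  // Stand-in for a plan step; the real MoveStep already exposes
  // get/setTolerancePercent.
  static class Step {
    private int tolerancePercent; // starts at 0, mirroring the reported bug
    void setTolerancePercent(int t) { tolerancePercent = t; }
    int getTolerancePercent() { return tolerancePercent; }
  }

  static class NodePlan {
    final List<Step> volumeSetPlans = new ArrayList<>();
  }

  // Mirrors the setPlanParams() change: read the configured value (falling
  // back to the default) and apply it to each step, so the serialized plan
  // carries the tolerance to the DataNode.
  static void setPlanParams(Properties conf, List<NodePlan> plans) {
    int tolerancePercent = Integer.parseInt(
        conf.getProperty(TOLERANCE_KEY, String.valueOf(TOLERANCE_DEFAULT)));
    for (NodePlan plan : plans) {
      for (Step step : plan.volumeSetPlans) {
        step.setTolerancePercent(tolerancePercent);
      }
    }
  }

  public static void main(String[] args) {
    Properties conf = new Properties(); // nothing set -> default of 10 applies
    NodePlan plan = new NodePlan();
    plan.volumeSetPlans.add(new Step());
    setPlanParams(conf, List.of(plan));
    System.out.println(plan.volumeSetPlans.get(0).getTolerancePercent()); // 10
  }
}
```

Without this loop, steps keep their zero default and the DataNode's per-step tolerance is effectively unset, which is the behavior being fixed.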
TestNodePlan.java: Add testPlanStepTolerancePercentInJson(): build a NodePlan with a MoveStep that has setTolerancePercent(15), serialize to JSON, parse back, and assert the step’s getTolerancePercent() is 15 (HDFS-17872).
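The round-trip invariant that the test checks can be sketched without Hadoop's Jackson-based serializer. The toy toJson/parseJson below are stand-ins for the real NodePlan JSON serialization; only the invariant (tolerancePercent of 15 survives serialize-then-parse) reflects the actual test.

```java
// Toy sketch of testPlanStepTolerancePercentInJson: serialize a step's
// tolerancePercent to a JSON string, parse it back, and check it survives.
public class TolerancePercentRoundTrip {
  static class MoveStep {
    private int tolerancePercent;
    void setTolerancePercent(int t) { tolerancePercent = t; }
    int getTolerancePercent() { return tolerancePercent; }
  }

  // Stand-in serializer; the real code serializes the whole NodePlan via
  // Jackson rather than hand-building a string.
  static String toJson(MoveStep step) {
    return "{\"tolerancePercent\":" + step.getTolerancePercent() + "}";
  }

  // Stand-in parser: pull the field back out of the JSON string.
  static MoveStep parseJson(String json) {
    java.util.regex.Matcher m = java.util.regex.Pattern
        .compile("\"tolerancePercent\":(\\d+)").matcher(json);
    MoveStep step = new MoveStep();
    if (m.find()) {
      step.setTolerancePercent(Integer.parseInt(m.group(1)));
    }
    return step;
  }

  public static void main(String[] args) {
    MoveStep step = new MoveStep();
    step.setTolerancePercent(15);
    MoveStep parsed = parseJson(toJson(step));
    System.out.println(parsed.getTolerancePercent()); // 15
  }
}
```

The point of asserting on the parsed-back value, rather than the in-memory step, is to prove the field is actually carried in the JSON that travels to the DataNode.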
The rebased branch is clean, and TestNodePlan passes locally. The remaining Yetus unit-test failure is in TestBlockRecoveryCauseStandbyNameNodeCrash, which looks unrelated to this DiskBalancer change. Ready for CI or another look.
Rebased this onto current apache/trunk, removed the old Trigger CI-only history, and force-pushed it back as a single clean JIRA commit. Local validation passed with JAVA_HOME=/opt/homebrew/opt/openjdk@17/libexec/openjdk.jdk/Contents/Home /opt/homebrew/bin/mvn -Dmaven.repo.local=/tmp/codex-m2 test -pl hadoop-hdfs-project/hadoop-hdfs -am -Dtest=TestNodePlan -DskipTests=false (5 tests, 0 failures).
JIRA
Fixes HDFS-17872