[Bug report] CI: Cron Integration Test execute failed for task ':authorizations:authorization-ranger:test'. #6406

Open
@danhuawang

Description

Version

main branch

Describe what's wrong

The Cron Integration Test failed:
https://github.com/apache/gravitino/actions/runs/13164696252/job/36741893632

The error is as follows:

> Task :authorizations:authorization-ranger:test

RangerIcebergE2EIT > testReadWriteTableWithMetalakeLevelRole() FAILED
    org.apache.spark.sql.AnalysisException: LEGACY store assignment policy is disallowed in Spark data source V2. Please set the configuration spark.sql.storeAssignmentPolicy to other values.
        at app//org.apache.spark.sql.errors.QueryCompilationErrors$.legacyStoreAssignmentPolicyError(QueryCompilationErrors.scala:270)
        at app//org.apache.spark.sql.catalyst.analysis.ResolveRowLevelCommandAssignments$.org$apache$spark$sql$catalyst$analysis$ResolveRowLevelCommandAssignments$$validateStoreAssignmentPolicy(ResolveRowLevelCommandAssignments.scala:66)
        at app//org.apache.spark.sql.catalyst.analysis.ResolveRowLevelCommandAssignments$$anonfun$apply$2.applyOrElse(ResolveRowLevelCommandAssignments.scala:43)
        at app//org.apache.spark.sql.catalyst.analysis.ResolveRowLevelCommandAssignments$$anonfun$apply$2.applyOrElse(ResolveRowLevelCommandAssignments.scala:41)
        at app//org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsDownWithPruning$2(AnalysisHelper.scala:170)
        at app//org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(origin.scala:76)
        at app//org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsDownWithPruning$1(AnalysisHelper.scala:170)
        at app//org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:323)
        at app//org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsDownWithPruning(AnalysisHelper.scala:168)
        at app//org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsDownWithPruning$(AnalysisHelper.scala:164)
        at app//org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsDownWithPruning(LogicalPlan.scala:32)
        at app//org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsWithPruning(AnalysisHelper.scala:99)
        at app//org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsWithPruning$(AnalysisHelper.scala:96)
        at app//org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsWithPruning(LogicalPlan.scala:32)
        at app//org.apache.spark.sql.catalyst.analysis.ResolveRowLevelCommandAssignments$.apply(ResolveRowLevelCommandAssignments.scala:41)
        at app//org.apache.spark.sql.catalyst.analysis.ResolveRowLevelCommandAssignments$.apply(ResolveRowLevelCommandAssignments.scala:38)
        at app//org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$2(RuleExecutor.scala:222)
        at app//scala.collection.LinearSeqOptimized.foldLeft(LinearSeqOptimized.scala:126)
        at app//scala.collection.LinearSeqOptimized.foldLeft$(LinearSeqOptimized.scala:122)
        at app//scala.collection.immutable.List.foldLeft(List.scala:91)
        at app//org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$1(RuleExecutor.scala:219)
        at app//org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$1$adapted(RuleExecutor.scala:211)
        at app//scala.collection.immutable.List.foreach(List.scala:431)
        at app//org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:211)
        at app//org.apache.spark.sql.catalyst.analysis.Analyzer.org$apache$spark$sql$catalyst$analysis$Analyzer$$executeSameContext(Analyzer.scala:226)
        at app//org.apache.spark.sql.catalyst.analysis.Analyzer.$anonfun$execute$1(Analyzer.scala:222)
        at app//org.apache.spark.sql.catalyst.analysis.AnalysisContext$.withNewAnalysisContext(Analyzer.scala:173)
        at app//org.apache.spark.sql.catalyst.analysis.Analyzer.execute(Analyzer.scala:222)
        at app//org.apache.spark.sql.catalyst.analysis.Analyzer.execute(Analyzer.scala:188)
        at app//org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$executeAndTrack$1(RuleExecutor.scala:182)
        at app//org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:89)
        at app//org.apache.spark.sql.catalyst.rules.RuleExecutor.executeAndTrack(RuleExecutor.scala:182)
        at app//org.apache.spark.sql.catalyst.analysis.Analyzer.$anonfun$executeAndCheck$1(Analyzer.scala:209)
        at app//org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.markInAnalyzer(AnalysisHelper.scala:330)
        at app//org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:208)
        at app//org.apache.spark.sql.execution.QueryExecution.$anonfun$analyzed$1(QueryExecution.scala:77)
        at app//org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:138)
        at app//org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$2(QueryExecution.scala:219)
        at app//org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:546)
        at app//org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$1(QueryExecution.scala:219)
        at app//org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:900)
        at app//org.apache.spark.sql.execution.QueryExecution.executePhase(QueryExecution.scala:218)
        at app//org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:77)
        at app//org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:74)
        at app//org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:66)
        at app//org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:99)
        at app//org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:900)
        at app//org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:97)
        at app//org.apache.spark.sql.SparkSession.$anonfun$sql$4(SparkSession.scala:691)
        at app//org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:900)
        at app//org.apache.spark.sql.SparkSession.sql(SparkSession.scala:682)
        at app//org.apache.spark.sql.SparkSession.sql(SparkSession.scala:713)
        at app//org.apache.spark.sql.SparkSession.sql(SparkSession.scala:744)
        at app//org.apache.gravitino.authorization.ranger.integration.test.RangerIcebergE2EIT.checkUpdateSQLWithReadWritePrivileges(RangerIcebergE2EIT.java:115)

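The analyzer rejects the UPDATE statement because the test's Spark session is running with `spark.sql.storeAssignmentPolicy=LEGACY`, which Spark disallows for v2 data sources such as Iceberg. A minimal sketch of a workaround (assuming the test controls its own SparkSession configuration; the exact place this policy leaks in from has not been confirmed) is to pin the policy to `ANSI` explicitly:

```sql
-- Hypothetical fix sketch, not the confirmed root cause:
-- set at session creation (spark-defaults.conf / SparkConf):
--   spark.sql.storeAssignmentPolicy=ANSI
-- or at runtime, before the UPDATE in checkUpdateSQLWithReadWritePrivileges:
SET spark.sql.storeAssignmentPolicy=ANSI;
```

`ANSI` is the default in Spark 3.x; if the CI run ends up with `LEGACY`, something in the test environment (e.g. an inherited configuration) is overriding it, which is worth checking before applying the override above.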
Error message and/or stacktrace

N/A

How to reproduce

Reproduced in CI: https://github.com/apache/gravitino/actions/runs/13164696252/job/36741893632

Additional context

No response

Labels

bug — Something isn't working