[PDI-20383] - Adding runconfig options to be passed through pan command #10075
base: master
Conversation
Analysis Details: 49 Issues | Coverage and Duplications | Project ID: org.pentaho.di:pdi
Note: Frogbot also supports Contextual Analysis, Secret Detection, IaC and SAST Vulnerabilities Scanning. These features are included as part of the JFrog Advanced Security package, which isn't enabled on your system.
jonjarvis left a comment:
Here's a first-pass review of this refactoring. I don't believe this will work for the Spark use case. Also, it might make sense to refactor the different execution models (local, remote, clustered) into their own classes entirely and make them available through a @ServiceProvider instead of via an ExtensionPoint.
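The refactor suggested above could be sketched with Java's built-in service discovery: each execution model becomes its own executor class, looked up at runtime instead of routed through an extension point. All type and method names below are illustrative stand-ins, not actual Kettle APIs.

```java
// Hypothetical sketch of the suggested refactor: each execution model
// (local, remote, clustered) is its own TransExecutor implementation,
// discoverable via java.util.ServiceLoader rather than an ExtensionPoint.
public class ExecutorLookupSketch {

  /** One implementation per execution model, registered in META-INF/services. */
  public interface TransExecutor {
    /** Returns true if this executor handles the given run configuration type. */
    boolean handles(String runConfigType);
    String execute(String transName);
  }

  /** Example provider for the local execution model. */
  public static class LocalTransExecutor implements TransExecutor {
    public boolean handles(String runConfigType) { return "local".equals(runConfigType); }
    public String execute(String transName) { return "executed " + transName + " locally"; }
  }

  /** Picks the first provider that claims the run configuration type. */
  public static TransExecutor find(Iterable<TransExecutor> providers, String type) {
    for (TransExecutor e : providers) {
      if (e.handles(type)) {
        return e;
      }
    }
    throw new IllegalStateException("No executor for run configuration type: " + type);
  }

  public static void main(String[] args) {
    // In a real plugin the providers would come from
    // ServiceLoader.load(TransExecutor.class); here one is listed by hand.
    Iterable<TransExecutor> providers = java.util.List.of(new LocalTransExecutor());
    System.out.println(find(providers, "local").execute("my_transformation"));
  }
}
```

With this shape, adding a Spark execution model would mean adding one more provider class rather than widening an if/else chain inside the delegate.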
/**
 * Helper method to check if a string is empty.
 */
private boolean isEmpty(String str) {
Use org.pentaho.di.core.util.Utils.isEmpty( String ) instead of re-writing the function.
Good catch. I will update the code with your recommendation.
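For context, the check the existing utility provides is the usual null-safe emptiness test, so the private helper can simply be deleted. This is a standalone reimplementation for illustration; in the PR itself the call would go to org.pentaho.di.core.util.Utils.isEmpty.

```java
// Sketch of the null-safe check that a shared isEmpty utility provides,
// making the private helper in the diff above redundant.
public class IsEmptySketch {
  public static boolean isEmpty(String str) {
    return str == null || str.length() == 0;
  }

  public static void main(String[] args) {
    System.out.println(isEmpty(null));   // true
    System.out.println(isEmpty(""));     // true
    System.out.println(isEmpty("pan"));  // false
  }
}
```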
 * Created by bmorrise on 3/16/17.
 */
-@ExtensionPoint( id = "RunConfigurationRunExtensionPoint", extensionPointId = "SpoonTransBeforeStart",
+@ExtensionPoint( id = "RunConfigurationRunExtensionPoint", extensionPointId = "TransBeforeStart",
If you rename this, you're also going to have to edit this:
pentaho-kettle/core/src/main/java/org/pentaho/di/core/extension/KettleExtensionPoint.java
Line 55 in 0515e87
SpoonTransBeforeStart( "SpoonTransBeforeStart", "Right before the transformation is started" ),
I will update the code to reflect the new name for the extension point
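The hazard the reviewer points out is that the `extensionPointId` string in the annotation and the id registered in the `KettleExtensionPoint` enum must agree, or the plugin is never invoked. A minimal self-contained sketch of that consistency check, with simplified stand-ins for the Kettle types:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Illustrative sketch: the rename has to happen in two places, because the
// extensionPointId on the annotation must match the id in the enum.
public class ExtensionPointRenameSketch {

  @Retention(RetentionPolicy.RUNTIME)
  @interface ExtensionPoint {
    String id();
    String extensionPointId();
  }

  // Simplified stand-in for org.pentaho.di.core.extension.KettleExtensionPoint.
  enum KettleExtensionPoint {
    TransBeforeStart("TransBeforeStart", "Right before the transformation is started");

    final String id;
    final String description;
    KettleExtensionPoint(String id, String description) {
      this.id = id;
      this.description = description;
    }
  }

  @ExtensionPoint(id = "RunConfigurationRunExtensionPoint", extensionPointId = "TransBeforeStart")
  static class RunConfigurationRunExtensionPoint { }

  public static void main(String[] args) {
    String annotated = RunConfigurationRunExtensionPoint.class
        .getAnnotation(ExtensionPoint.class).extensionPointId();
    // After the rename, both sides must carry the new name.
    System.out.println(annotated.equals(KettleExtensionPoint.TransBeforeStart.id)); // true
  }
}
```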
}

// Running Remotely
This appears to be a dead comment because of the refactoring
Is this one dead code now? Is it fully replaced by the "EnhancedPanCommandExecutor" ?
try {
  // Trigger the extension point that handles run configurations
  ExtensionPointHandler.callExtensionPoint( getLog(), KettleExtensionPoint.SpoonTransBeforeStart.id,
    new Object[] { executionConfiguration, trans.getTransMeta(), trans.getTransMeta(), repository } );
I believe this is the extension point that will execute under a run configuration if it exists. The behavior below this will have to change so that it doesn't execute if the extension point calls "execute" to run on Spark. Otherwise, the lines of code following this try block will always execute the transformation locally.
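The guard this comment asks for could look like the following sketch: the extension point marks the execution configuration when it takes over, and the delegate skips the local path in that case. The `alreadyExecuted` flag is an assumption for illustration, not an existing Kettle field.

```java
// Hypothetical sketch of guarding the local execution path: if the extension
// point already ran the transformation (e.g. on Spark), the code after the
// try block must not run it again locally.
public class RunConfigGuardSketch {

  static class ExecutionConfiguration {
    boolean alreadyExecuted; // assumed flag, set by the extension point when it takes over
  }

  /** Returns what actually happened, for illustration. */
  static String run(ExecutionConfiguration config) {
    // ExtensionPointHandler.callExtensionPoint(...) would run here and may
    // set config.alreadyExecuted = true for a Spark run configuration.
    if (config.alreadyExecuted) {
      return "handled by run configuration";
    }
    return "executed locally";
  }

  public static void main(String[] args) {
    ExecutionConfiguration spark = new ExecutionConfiguration();
    spark.alreadyExecuted = true;
    System.out.println(run(spark));                        // handled by run configuration
    System.out.println(run(new ExecutionConfiguration())); // executed locally
  }
}
```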
try {
  ExtensionPointHandler.callExtensionPoint(log, KettleExtensionPoint.SpoonTransBeforeStart.id, new Object[] {
    executionConfiguration, transMeta, transMeta, repository
I believe this is the extension point that actually executes under a run configuration (if it exists). So for the spark use case, this is what would actually "execute" the Spark transformation.
  return executeClustered(transMeta, executionConfiguration);

} else {
  throw new KettleException(BaseMessages.getString(PKG, "PanTransformationDelegate.Error.NoExecutionTypeSpecified"));
Wouldn't this exception always get thrown for a Spark run configuration?
for (String key : paramMap.keySet()) {
  transMeta.setParameterValue(key, Const.NVL(paramMap.get(key), ""));
}
transMeta.activateParameters();
Do the parameters have to be activated for the Spark execution model?
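For readers unfamiliar with the loop under discussion: each named parameter is copied onto the transformation with a null-safe default, mirroring `Const.NVL(value, "")`. The `TransMetaStub` below is a self-contained stand-in for illustration only, not the real `TransMeta`.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Self-contained sketch of the parameter loop in the diff above: null map
// values fall back to "" before being set on the transformation.
public class ActivateParamsSketch {

  static class TransMetaStub {
    final Map<String, String> active = new LinkedHashMap<>();
    void setParameterValue(String key, String value) { active.put(key, value); }
  }

  /** Null-safe default, mirroring Const.NVL(value, def). */
  static String nvl(String value, String def) {
    return value != null ? value : def;
  }

  public static void main(String[] args) {
    Map<String, String> paramMap = new LinkedHashMap<>();
    paramMap.put("inputDir", "/tmp/in");
    paramMap.put("limit", null); // null values fall back to ""

    TransMetaStub transMeta = new TransMetaStub();
    for (String key : paramMap.keySet()) {
      transMeta.setParameterValue(key, nvl(paramMap.get(key), ""));
    }
    System.out.println(transMeta.active); // {inputDir=/tmp/in, limit=}
  }
}
```

Whether the subsequent `activateParameters()` call is required under the Spark execution model is exactly the open question in the comment above; this sketch only shows what the local path does.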
❌ Build failed in 1m 14s
Build command: mvn clean verify -B -e -Daudit -Djs.no.sandbox
❗ No tests found!
Errors: filtered log
ℹ️ This is an automatic message






