
feat: Upload input artifacts when submitting workflows from the UI #15237

Open
panicboat wants to merge 44 commits into argoproj:main from panicboat:feat/large-file-upload

Conversation

Contributor

@panicboat panicboat commented Jan 13, 2026

Fixes #12656

Motivation

Currently, when submitting a workflow from the UI, users cannot provide input artifacts directly.
The only way to use input artifacts is to pre-upload files to the artifact repository and manually specify the key in the WorkflowTemplate.
This creates a poor user experience, especially for ad-hoc workflow executions that require different input files each time.
This PR enables users to upload files directly through the UI when submitting a workflow, making input artifacts truly usable from the web interface.

Modifications

Backend:

  • Added SaveStream method to the ArtifactDriver interface for streaming uploads
  • Implemented SaveStream in all artifact drivers (S3, GCS, Azure, OSS, HDFS, HTTP, Git, Raw, Plugin)
  • Added new API endpoint POST /upload-artifacts/{namespace}/{workflowTemplateName}/{artifactName} for file uploads
  • Added Artifacts field to SubmitOpts struct to support artifact key overrides during workflow submission
  • Modified workflow submission logic to copy artifact configuration from WorkflowTemplate and override the key with the uploaded file's location
  • Added artifact merge logic in JoinWorkflowSpec to ensure workflow artifact overrides take precedence over template defaults

Frontend:

  • Created ArtifactsInput component for file upload UI with progress indicator
  • Integrated artifact upload into SubmitWorkflowPanel
  • Added upload-artifacts to webpack proxy configuration for development

Architecture

sequenceDiagram
    participant User
    participant UI as Argo UI
    participant Server as Argo Server
    participant S3 as S3/GCS/Azure
    participant K8s as Kubernetes

    User->>UI: Select file
    UI->>Server: POST /upload-artifacts/{ns}/{tmpl}/{artifact}
    Server->>K8s: Get WorkflowTemplate
    K8s-->>Server: Template config
    Server->>S3: Upload file (new key)
    S3-->>Server: Success
    Server-->>UI: {name, key, location}
    
    User->>UI: Click Submit
    UI->>Server: POST /api/v1/workflows/{ns}/submit
    Note right of UI: artifacts: ["name=newKey"]
    Server->>K8s: Get WorkflowTemplate
    Server->>Server: Override artifact key
    Server->>K8s: Create Workflow
    K8s-->>Server: Workflow
    Server-->>UI: Success
Data flow
flowchart TD
    subgraph Frontend
        A[Select File] --> B[XHR Upload]
        B --> C{Success?}
        C -->|Yes| D[Store in uploadedArtifacts]
        C -->|No| E[Show Error]
        D --> F[Submit Button]
        F --> G[Build artifactOverrides]
        G --> H[POST /submit]
    end

    subgraph Backend Upload
        B --> I[UploadInputArtifact]
        I --> J[Get WorkflowTemplate]
        J --> K[Get artifact config]
        K --> L[Generate new key]
        L --> M[SaveStream]
        M --> N[Return response]
    end

    subgraph Backend Submit
        H --> O[SubmitWorkflow]
        O --> P[Get WorkflowTemplate]
        P --> Q[Copy artifact config]
        Q --> R[Override key]
        R --> S[Create Workflow]
    end

Verification

  1. Create a WorkflowTemplate with an input artifact defined in arguments.artifacts
  2. Open the WorkflowTemplate in the UI and click "Submit"
  3. Upload a file using the new file input field
  4. Submit the workflow
  5. Verify the workflow runs successfully using the uploaded file
apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
  name: my-template
  namespace: argo
spec:
  entrypoint: argosay
  arguments:
    artifacts:
      - name: my-uploaded-file
        s3:
          bucket: panicboat-sandbox-723535945756
          key: input-file/panicboat.zip  # Override artifact key
          accessKeySecret:
            name: debug-s3-creds
            key: accessKey
          secretKeySecret:
            name: debug-s3-creds
            key: secretKey
  templates:
    - name: argosay
      inputs:
        artifacts:
          - name: my-uploaded-file
            path: /tmp/input-file
      container:
        image: alpine:latest
        command: [cat]
        args: ["/tmp/input-file"]
2026-01-15.11.43.54.mov

Scope

ClusterWorkflowTemplate and CLI support are out of scope for this PR.

This pull request focuses on the web UI upload flow for WorkflowTemplates. ClusterWorkflowTemplate and CLI users can continue using the existing workflow: pre-upload files to the artifact repository and specify the key manually.

ClusterWorkflowTemplate and CLI support (e.g., argo submit --from wftmpl/name --input-artifact name=./file.zip) can be added in a future PR, reusing the /upload-artifacts/ endpoint introduced here.

Documentation

Added feature documentation in .features/pending/input-artifact-upload.md.
Users can discover this feature when they:

  1. View a WorkflowTemplate with input artifacts defined
  2. Click "Submit" button
  3. See the file upload fields for each defined artifact

Summary by CodeRabbit

Release Notes

  • New Features

    • Added support for uploading input artifacts directly from the UI when submitting workflows with input artifact requirements. Users can now upload files that will be stored in the configured artifact repository.
    • Enabled artifact location overrides during workflow submission, allowing users to specify custom storage paths for input artifacts.
  • Documentation

    • Added example workflow demonstrating input artifact usage with artifact repository integration.

@panicboat panicboat changed the title feat: Accept CSV and other files as input values for workflow feat: Upload input artifacts when submitting workflows from the UI Jan 14, 2026
@panicboat panicboat marked this pull request as ready for review January 26, 2026 01:19
@panicboat panicboat force-pushed the feat/large-file-upload branch 3 times, most recently from b3e076c to 9fd62ba on March 10, 2026 at 08:51
Member

Joibel commented Mar 10, 2026

@coderabbitai review

Contributor

coderabbitai bot commented Mar 10, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

Contributor

coderabbitai bot commented Mar 10, 2026

📝 Walkthrough

Walkthrough

This PR implements input artifact upload functionality, allowing users to upload files from the UI when submitting workflows that define input artifacts in their WorkflowTemplate specifications. The feature includes backend API server enhancements for artifact uploads, streaming support across all artifact drivers, workflow submission modifications to apply artifact overrides, and a new React component for artifact file input in the submission panel.

Changes

Cohort / File(s) Summary
Feature Documentation
.features/pending/input-artifact-upload.md, .spelling, docs/fields.md, examples/workflow-template/input-artifacts.yaml
Added feature documentation, updated spell-check dictionary with workflow-related identifiers, enhanced docs/fields.md with example references, and introduced example WorkflowTemplate YAML demonstrating input artifact usage from S3.
API & Data Model
api/jsonschema/schema.json, pkg/apis/workflow/v1alpha1/common.go, pkg/apis/workflow/v1alpha1/generated.proto, pkg/apis/api-rules/violation_exceptions.list, ui/src/shared/models/submit-opts.ts
Added new artifacts field to SubmitOpts for overriding input artifact locations (format: name=s3://bucket/key), updated JSON schema and proto definitions, and extended violation exceptions list.
Server Initialization & Routing
server/apiserver/argoserver.go, pkg/apiclient/argo-kube-client.go, ui/webpack.config.js
Updated workflow server initialization to pass artifact repository dependency, added new /upload-artifacts/ HTTP endpoint route, and configured webpack proxy for artifact upload requests.
Artifact Upload Handler
server/artifacts/artifact_server.go, server/artifacts/artifact_server_test.go
Implemented UploadInputArtifact HTTP handler supporting multipart file uploads with UUID generation, artifact resolution, artifact driver storage, and JSON response payloads; includes comprehensive test coverage for success and error paths.
Workflow Submission Logic
server/workflow/workflow_server.go, server/workflow/workflow_server_test.go
Enhanced NewServer constructor to accept artifact repository dependency, added artifact override parsing/application logic during SubmitWorkflow when using WorkflowTemplateRef, and added test cases for artifact override scenarios with and without default repository resolution.
Artifact Driver Interface & Common Implementations
workflow/artifacts/common/common.go, workflow/artifacts/logging/driver.go, workflow/artifacts/logging/driver_test.go
Added SaveStream interface method to ArtifactDriver to support streaming uploads from io.Reader, implemented logging wrapper for SaveStream, and added comprehensive logging driver tests.
S3 Artifact Driver
workflow/artifacts/s3/s3.go, workflow/artifacts/s3/s3_test.go
Added PutStream method to Client interface and s3client implementation for streaming uploads, implemented SaveStream on ArtifactDriver with backoff retry logic, and introduced S3Client test interface with streaming test coverage.
GCS Artifact Driver
workflow/artifacts/gcs/gcs.go, workflow/artifacts/gcs/gcs_test.go
Implemented SaveStream method for streaming GCS uploads with transient error handling, added tests for credential errors, and file listing utility tests.
Azure Artifact Driver
workflow/artifacts/azure/azure.go, workflow/artifacts/azure/azure_test.go
Added SaveStream for streaming uploads to Azure Blob Storage, implemented Windows path normalization for blob names, and added tests for upload task generation and error handling.
HDFS Artifact Driver
workflow/artifacts/hdfs/hdfs.go, workflow/artifacts/hdfs/hdfs_test.go
Implemented SaveStream via temporary file approach, changed path validation to Unix-style semantics, and added comprehensive validation and operation tests (delete, list, directory checks).
HTTP & Artifactory Driver
workflow/artifacts/http/http.go
Added SaveStream for streaming uploads to HTTP/Artifactory endpoints with URL-specific handling and header/auth support.
OSS Artifact Driver
workflow/artifacts/oss/oss.go, workflow/artifacts/oss/oss_test.go
Implemented SaveStream with OSS client initialization and bucket.PutObject, added error code classification and transient error tests.
Git, Raw & Plugin Drivers
workflow/artifacts/git/git.go, workflow/artifacts/raw/raw.go, workflow/artifacts/plugin/plugin.go
Added SaveStream stubs returning unsupported errors (git, raw) or delegating via temp file approach (plugin).
Workflow Merge & Submit Utilities
workflow/util/merge.go, workflow/util/merge_test.go, workflow/util/util.go, workflow/util/util_test.go
Added artifact merging logic matching parameter precedence in JoinWorkflowSpec, implemented artifact override parsing (NAME=KEY format) in ApplySubmitOpts, and added test coverage for artifact override scenarios.
Controller & Integration
workflow/controller/controller_test.go
Updated WorkflowTaskResult creation to include namespace and directly insert into informer indexer for immediate reconciliation.
Frontend Components
ui/src/shared/components/artifacts-input/artifacts-input.tsx, ui/src/shared/components/artifacts-input/artifacts-input.scss, ui/src/shared/components/artifacts-input/index.ts
Introduced new ArtifactsInput React component with file dropzone/selection, progress tracking, and streaming upload via XMLHttpRequest to /upload-artifacts/ endpoint; included BEM-style SCSS for dropzone, upload states, and progress visualization.
Workflow Submission UI
ui/src/workflows/components/submit-workflow-panel.tsx, ui/src/workflow-templates/workflow-template-details.tsx
Extended SubmitWorkflowPanel to render Input Artifacts section, manage uploaded artifacts state, integrate artifact overrides into submission payload, and pass workflowArtifacts prop from template definitions using optional chaining.

Sequence Diagram(s)

sequenceDiagram
    participant UI as User/Browser
    participant Client as ArtifactsInput<br/>(React Component)
    participant Server as Argo Server<br/>(/upload-artifacts)
    participant Handler as ArtifactServer<br/>.UploadInputArtifact
    participant Repo as Artifact<br/>Repository
    participant Driver as Artifact<br/>Driver

    UI->>Client: Select & upload file
    Client->>Server: POST file (multipart)<br/>with namespace, templateName, artifactName
    Server->>Handler: Route to UploadInputArtifact
    Handler->>Handler: Authenticate & parse request
    Handler->>Repo: Load WorkflowTemplate<br/>& artifact config
    Repo-->>Handler: Template & artifact details
    Handler->>Handler: Generate UUID for artifact key
    Handler->>Driver: SaveStream(reader,<br/>outputArtifact)
    Driver->>Driver: Get storage client<br/>(S3, GCS, etc.)
    Driver->>Repo: Upload file stream<br/>to configured storage
    Repo-->>Driver: Upload complete
    Driver-->>Handler: Return success
    Handler->>Client: JSON response<br/>(name, key, location)
    Client->>Client: Update state & display<br/>uploaded artifact key
    Client->>UI: Show success indicator

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs

  • chore: enable revive linter #15459 — Modifies server/workflow/workflow_server.go and updates workflow server constructor/interface similar to this PR's artifactRepositories dependency injection changes.
  • chore: enable errorlint linter #15345 — Touches overlapping artifact upload and artifact driver code paths including server/artifacts/artifact_server.go and multiple workflow/artifacts/* implementations.

Suggested reviewers

  • isubasinghe
🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning — Docstring coverage is 34.85%, which is below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (4 passed)

  • Title check ✅ — The PR title clearly and concisely describes the main feature: enabling users to upload input artifacts when submitting workflows from the UI.
  • Description check ✅ — The PR description is comprehensive, with motivation, modifications, verification steps, and documentation. However, it does not consistently use the standard template format with the required 'Fixes #' syntax.
  • Linked Issues check ✅ — The PR addresses issue #12656 by implementing file upload for input artifacts through the UI, allowing non-developers to submit workflows with file inputs instead of manually copying strings.
  • Out of Scope Changes check ✅ — All changes are within the PR's stated scope. The backend adds SaveStream methods and an upload endpoint, the frontend adds the ArtifactsInput component, and tests cover the new functionality. ClusterWorkflowTemplate and CLI support are explicitly noted as out of scope.


Warning

There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure.

🔧 golangci-lint (2.5.0)

Error: unknown linters: 'modernize', run 'golangci-lint help linters' to see the list of supported linters
The command is terminated due to an error: unknown linters: 'modernize', run 'golangci-lint help linters' to see the list of supported linters




@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 13

🧹 Nitpick comments (9)
workflow/controller/controller_test.go (1)

477-490: Seed the informer with the persisted object, not the pre-create stub.

GetIndexer().Add(taskResult) caches the local pre-create instance, so the store can miss API-populated metadata like resourceVersion and uid. Add the object returned from Create instead to keep the test cache faithful.

♻️ Proposed fix
-		_, err := woc.controller.wfclientset.ArgoprojV1alpha1().WorkflowTaskResults(woc.wf.Namespace).
+		createdTaskResult, err := woc.controller.wfclientset.ArgoprojV1alpha1().WorkflowTaskResults(woc.wf.Namespace).
 			Create(
 				ctx,
 				taskResult,
 				metav1.CreateOptions{},
 			)
 		if err != nil {
 			panic(err)
 		}
@@
-		if err := woc.controller.taskResultInformer.GetIndexer().Add(taskResult); err != nil {
+		if err := woc.controller.taskResultInformer.GetIndexer().Add(createdTaskResult); err != nil {
 			panic(err)
 		}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@workflow/controller/controller_test.go` around lines 477 - 490, The test
seeds the informer with the pre-create stub "taskResult" which lacks API-set
metadata; replace the stub with the persisted object returned by the Create
call. After calling
woc.controller.wfclientset.ArgoprojV1alpha1().WorkflowTaskResults(...).Create(...),
capture the returned object (the created TaskResult) and call
woc.controller.taskResultInformer.GetIndexer().Add(createdTaskResult) instead of
Add(taskResult) so the informer's cache contains the API-populated
resourceVersion/uid.
workflow/artifacts/oss/oss_test.go (2)

72-97: Consider referencing the source list to avoid drift.

TestOssTransientErrorCodes duplicates the list of transient error codes from the implementation. If ossTransientErrorCodes in the source file changes, this test's expected list could become stale. Consider either:

  1. Iterating directly over the exported/package-level ossTransientErrorCodes slice (as TestIsTransientOSSErr does on line 16), or
  2. Adding a test that asserts the expected codes match the actual ossTransientErrorCodes slice.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@workflow/artifacts/oss/oss_test.go` around lines 72 - 97,
TestOssTransientErrorCodes duplicates the transient-code list and may drift from
the source; change it to reference the canonical ossTransientErrorCodes slice
directly (or assert equality against it) instead of hardcoding values: update
TestOssTransientErrorCodes to iterate over ossTransientErrorCodes (the
package-level variable used by isTransientOSSErr) and call isTransientOSSErr for
each entry (or add an assertion that the hardcoded expected list equals
ossTransientErrorCodes), so the test stays in sync with the implementation.

99-103: Consider adding more meaningful assertions.

This test only verifies that maxObjectSize equals 5GB, which is essentially testing a constant. While this documents the expected value, consider adding tests that verify the behavior dependent on this threshold (e.g., mocking file sizes to verify simple vs. multipart upload path selection) if such logic exists.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@workflow/artifacts/oss/oss_test.go` around lines 99 - 103, Test only asserts
the constant maxObjectSize which isn't sufficient; extend
TestPutFileSimpleVsMultipart to exercise the upload-path selection logic by
invoking the function(s) that decide simple vs multipart upload (locate the code
that uses maxObjectSize, e.g., the uploader method or helper that checks file
size) with mocked file sizes just under and just over maxObjectSize and assert
the expected path is chosen (e.g., verify that PutFileSimple is called for size
< maxObjectSize and PutFileMultipart for size >= maxObjectSize), or mock the
storage client to observe which upload routine runs when calling the public
upload entrypoint used in production.
ui/src/shared/models/submit-opts.ts (1)

5-7: Consider expanding the format documentation.

The comment mentions s3:// and gcs:// schemes, but the backend supports additional artifact types (Azure, OSS, HDFS, etc.). Consider updating to be more inclusive, e.g., name=<scheme>://bucket/key or listing additional supported schemes.

📝 Suggested comment improvement
-    // Artifacts to override for the workflow
-    // Format: name=s3://bucket/key or name=gcs://bucket/key
+    // Artifacts to override for the workflow
+    // Format: name=<scheme>://bucket/key (e.g., s3://, gcs://, azure://, oss://) or name=key
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ui/src/shared/models/submit-opts.ts` around lines 5 - 7, Update the
JSDoc/comment for the artifacts field to describe a generic scheme format and
include additional supported schemes; specifically change the comment above the
artifacts?: string[] declaration to say something like "Format:
name=<scheme>://bucket/key (e.g., s3://, gcs://, azure://, oss://, hdfs://,
file://)" or similar so it covers backends beyond S3/GCS and documents that
<scheme> is the storage provider.
workflow/artifacts/azure/azure.go (1)

307-323: Consider wrapping the UploadStream error for consistency.

The error from UploadStream (line 321-322) is returned directly without context. Other methods in this file wrap errors with descriptive messages (e.g., lines 289-290, 295-296, 300-301). For consistency and better debugging, consider wrapping this error.

♻️ Suggested fix
 	blobClient := containerClient.NewBlockBlobClient(outputArtifact.Azure.Blob)
 	_, err = blobClient.UploadStream(ctx, reader, nil)
-	return err
+	if err != nil {
+		return fmt.Errorf("unable to upload stream to blob %s: %w", outputArtifact.Azure.Blob, err)
+	}
+	return nil
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@workflow/artifacts/azure/azure.go` around lines 307 - 323, In SaveStream,
wrap the error returned from blobClient.UploadStream with a descriptive message
instead of returning it raw; update the return to use fmt.Errorf("unable to
upload Azure blob %s to container %s: %w", outputArtifact.Azure.Blob,
outputArtifact.Azure.Container, err) (or similar) so failures from
blobClient.UploadStream are consistent with other wrapped errors in this file.
server/workflow/workflow_server.go (2)

834-847: Consider removing verbose debug logging before merge.

The anonymous function for artifactsCount adds complexity for a debug log. This level of detail may be excessive for production. Consider simplifying or using conditional logging.

💡 Suggested simplification
 	logger.WithFields(logging.Fields{
-		"submitOptions":    req.SubmitOptions,
 		"hasSubmitOptions": req.SubmitOptions != nil,
-		"artifactsCount": func() int {
-			if req.SubmitOptions != nil {
-				return len(req.SubmitOptions.Artifacts)
-			}
-			return -1
-		}(),
+		"artifactsCount":   len(req.GetSubmitOptions().GetArtifacts()),
 		"hasWorkflowTemplateRef": wf.Spec.WorkflowTemplateRef != nil,
 	}).Debug(ctx, "SubmitWorkflow: checking conditions")

Note: This assumes proto-generated getters exist. If not, keep the nil-safe approach but simplify formatting.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/workflow/workflow_server.go` around lines 834 - 847, The debug block
is overly verbose and uses an unnecessary anonymous function for
"artifactsCount"; simplify logging by removing the closure and either omit
"artifactsCount" entirely or compute it in a nil-safe way before the logger call
(e.g. set a local int var artifactsCount = -1; if req.SubmitOptions != nil {
artifactsCount = len(req.SubmitOptions.Artifacts) }) and then call
logging.RequireLoggerFromContext(ctx).WithFields(...).Debug(ctx, ...) including
only the needed fields (e.g. "hasSubmitOptions", "artifactsCount" or drop it)
and keep the wf.Spec.WorkflowTemplateRef check as-is to avoid the inline lambda
complexity.

882-887: Silently skipping malformed artifact overrides may hide user errors.

If an override string doesn't contain =, it's silently ignored. Consider logging a warning or returning an error so users know their input was not applied.

💡 Suggested improvement
 		for _, artifactStr := range req.SubmitOptions.Artifacts {
 			parts := strings.SplitN(artifactStr, "=", 2)
 			if len(parts) == 2 {
 				overrides[parts[0]] = parts[1]
+			} else {
+				logger.WithField("artifact", artifactStr).Warn(ctx, "Ignoring malformed artifact override (expected format: name=key)")
 			}
 		}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/workflow/workflow_server.go` around lines 882 - 887, The loop parsing
req.SubmitOptions.Artifacts silently ignores entries without "=", which can hide
user mistakes; update the parsing in the artifact override handling (the loop
over req.SubmitOptions.Artifacts that fills overrides) to detect malformed
artifactStr values and either log a warning that includes the offending
artifactStr and context (e.g., which request/user) or return a validation error
to the caller; specifically, replace the current unconditional skip with a
branch that calls the server's logger (or returns an error from the submit
handler) when len(parts) != 2 so users are informed their override was not
applied.
workflow/artifacts/hdfs/hdfs_test.go (1)

182-188: Consider asserting error type unconditionally.

The if ok check means the test silently passes if the error is not an ArgoError. This could mask regressions if the error type changes.

💡 Suggested improvement
 	// Verify it's a CodeNotImplemented error
 	var argoErr argoerrors.ArgoError
 	ok := errors.As(err, &argoErr)
-	if ok {
-		assert.Equal(t, argoerrors.CodeNotImplemented, argoErr.Code())
-	}
+	require.True(t, ok, "expected ArgoError type")
+	assert.Equal(t, argoerrors.CodeNotImplemented, argoErr.Code())
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@workflow/artifacts/hdfs/hdfs_test.go` around lines 182 - 188, The test
currently uses errors.As(err, &argoErr) with an if ok guard which allows the
test to silently pass if the error isn't an ArgoError; replace the conditional
with an unconditional assertion that the error is an ArgoError (e.g.,
require.True(t, errors.As(err, &argoErr)) or assert.True(t, ok) immediately
after calling errors.As) and then assert the Code() equals
argoerrors.CodeNotImplemented on argoErr; update the call sites around errors.As
and argoErr to ensure the test fails if the type assertion fails.
ui/src/shared/components/artifacts-input/artifacts-input.tsx (1)

14-18: Consider typing location more specifically.

location: any loses type safety. If the backend response structure is known, define it explicitly or import the type from a shared location.

💡 Suggested improvement
 export interface ArtifactUploadResponse {
     name: string;
     key: string;
-    location: any;
+    location: Record<string, unknown>; // Or a more specific ArtifactLocation type
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ui/src/shared/components/artifacts-input/artifacts-input.tsx` around lines 14
- 18, The ArtifactUploadResponse interface uses location: any which removes type
safety; replace it with a specific type (e.g., define an interface like
ArtifactLocation { url: string; bucket?: string; region?: string; ... } or
import an existing backend/shared type) and update ArtifactUploadResponse to use
location: ArtifactLocation (or the imported type), then adjust any consumers of
ArtifactUploadResponse to match the new shape.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@server/artifacts/artifact_server.go`:
- Around line 193-195: The newKey construction uses user-controlled
header.Filename directly (see artifactCopy.GetKey() usage and newKey variable);
sanitize header.Filename first to prevent path traversal by replacing/backing
out path separators and resolving to a safe basename (e.g., use filepath.Base
and strip any remaining ".." or path separators, reject empty or dot-only names,
and optionally normalize to a safe character set), then build newKey from the
sanitized filename before storing or using it.
- Around line 243-247: The generateUUID function currently ignores the error
from rand.Read which can produce an all-zero UUID on failure; change
generateUUID to return (string, error) and propagate the rand.Read error: check
the error from rand.Read(b), return a descriptive error if non-nil, otherwise
hex-encode and return the UUID; then update the caller that invokes generateUUID
(the site referenced in the diff) to handle the returned error (propagate it,
log and return, or fail the operation as appropriate) so failures are not
silently ignored.

In `@ui/src/shared/components/artifacts-input/artifacts-input.tsx`:
- Line 79: The URL constructed for the POST in artifacts-input.tsx uses raw
template variables and can break with special characters; update the xhr.open
call so namespace, workflowTemplateName, and artifactName are each wrapped with
encodeURIComponent (e.g., use encodeURIComponent(namespace),
encodeURIComponent(workflowTemplateName), encodeURIComponent(artifactName))
before building the `/upload-artifacts/...` path to ensure the request URL is
correctly encoded.
- Around line 107-110: The dropzone text claims drag-and-drop support but no
drag handlers are implemented; either add proper drag handlers (onDrop,
onDragOver/onDragEnter, and corresponding handlers that call the existing
handleFileSelect or a new handler to accept dropped files) on the
div.artifacts-input__dropzone, or simply remove the "or drag and drop" copy and
change the span text to "Click to select a file" (or similar) to reflect the
current behavior; reference the div with className 'artifacts-input__dropzone'
and the existing handleFileSelect function when making the change.

In `@ui/src/workflows/components/submit-workflow-panel.tsx`:
- Around line 72-86: The submit path uses uploadedArtifacts directly and can run
before pending uploads complete; modify the submit flow in
submit-workflow-panel.tsx (the block that builds artifactOverrides and calls
services.workflows.submit) to first wait for any in-flight uploads to finish
(e.g., await Promise.all of upload promise list or wait for onUploadComplete to
resolve) or disallow clicking by blocking until all uploads complete, and only
then build artifactOverrides from the final uploadedArtifacts and set
isSubmitting; ensure you reference the same upload completion mechanism used
elsewhere (onUploadComplete/uploadPromises/uploadState) so the artifacts array
passed to services.workflows.submit includes newly completed uploads.

In `@workflow/artifacts/gcs/gcs_test.go`:
- Around line 109-113: TestListFileRelPathsNonExistent uses a hard-coded POSIX
path; update the test to create an OS-neutral non-existent path by using
t.TempDir() and building the missing path with filepath.Join (e.g.,
filepath.Join(t.TempDir(), "nonexistent")) before calling listFileRelPaths so
the test works on Windows and POSIX systems; reference
TestListFileRelPathsNonExistent and listFileRelPaths and ensure the constructed
path does not get created so require.Error(t, err) still holds.

In `@workflow/artifacts/gcs/gcs.go`:
- Around line 244-267: SaveStream currently retries uploads using
waitutil.Backoff while consuming the provided io.Reader via io.Copy, which
breaks retries because the reader cannot be rewound; fix by buffering the
incoming stream to a temp file before entering the retry loop: create a temp
file (os.CreateTemp), copy the entire io.Reader into it once, close it, and then
in the retry closure open the temp file (os.Open), seek to start (or rely on
fresh open), use that file as the source for io.Copy to the GCS writer, close
the opened file each attempt, and remove the temp file after success/final
failure; keep the SaveStream signature unchanged and reference SaveStream,
io.Reader, io.Copy, waitutil.Backoff, newGCSClient, and outputArtifact.GCS.Key
when implementing.

In `@workflow/artifacts/http/http.go`:
- Around line 143-179: SaveStream currently constructs the request body from a
non-rewindable io.Reader so req.GetBody is not set and 307/308 redirects cannot
be followed; mirror the approach used in Save by fully buffering the reader into
a byte slice (e.g., io.ReadAll) before creating the http.Request, set the
request Body to a new bytes.Reader/io.NopCloser over that buffer, and assign
req.GetBody to a function that returns a fresh io.ReadCloser (bytes.NewReader
wrapped) so the client can replay the body on redirects; update the branch that
handles outputArtifact.HTTP (and the Artifactory branch if appropriate) to use
the buffered payload before calling h.Client.Do(req).

In `@workflow/artifacts/oss/oss.go`:
- Around line 280-302: The closure passed to waitutil.Backoff retries using the
same non-seekable reader, so subsequent attempts to
bucket.PutObject(outputArtifact.OSS.Key, reader) may upload zero/truncated data;
fix by materializing the stream before the retry loop (e.g., read into a byte
slice or use an io.ReadSeeker) and inside the closure create a fresh io.Reader
for each attempt (use bytes.NewReader(data) or seek to 0 on a ReadSeeker) so
PutObject always receives a full reader; update references around
waitutil.Backoff, PutObject, reader, and any callers of
setBucketLogging/ossDriver.newOSSClient/isTransientOSSErr accordingly.
- Around line 288-300: SaveStream currently skips the bucket creation and
lifecycle configuration that Save performs: before calling setBucketLogging,
bucket := osscli.Bucket(...) and bucket.PutObject(...) you must mirror Save’s
setup by invoking CreateBucketIfNotPresent (or equivalent) for
outputArtifact.OSS.Bucket, then apply the same lifecycle rule handling (the
function or logic used by Save to set lifecycle rules), and only after those
succeed call setBucketLogging and bucket.PutObject; update the SaveStream
function (and any helper it uses) to call CreateBucketIfNotPresent, apply the
lifecycle configuration, then proceed to setBucketLogging and PutObject so
stream uploads behave identically to Save.

In `@workflow/artifacts/s3/s3_test.go`:
- Around line 825-833: The tests are invoking the mock client's PutStream
directly (reader := strings.NewReader(tc.content); err :=
tc.s3client.PutStream(...)) which bypasses ArtifactDriver.SaveStream and
therefore doesn't exercise error wrapping, retry logic, or use of the
errMsg/done fields; change the test to call ArtifactDriver.SaveStream (or the
production SaveStream wrapper used by production code) with the same
bucket/key/reader and the done channel so the driver invokes the s3client and
performs its full logic, then assert against the returned error and the expected
errMsg/done outcomes instead of asserting the mock client's direct PutStream
return value; reference the s3client.PutStream mock, ArtifactDriver.SaveStream
method, and the test case fields errMsg and done when making these changes.

In `@workflow/artifacts/s3/s3.go`:
- Line 285: SaveStream currently retries s3cli.PutStream using the same
io.Reader without resetting it, causing partial uploads for non-seekable
readers; modify SaveStream to either (A) detect seekable readers via type
assertion to io.Seeker and call Seek(0,0) before each retry of s3cli.PutStream
(ensure you handle and log Seek errors), or (B) enforce only seekable readers by
returning an error if the provided reader does not implement io.Seeker, or (C)
buffer the entire reader into memory/temp file before calling s3cli.PutStream so
retries start from the buffered beginning; locate the retry loop around the call
to s3cli.PutStream and implement one of these fixes for SaveStream to guarantee
full uploads on retry.

In `@workflow/util/merge.go`:
- Around line 137-144: The current loop replaces the already-merged
targetWf.Spec.Arguments.Artifacts[index] with the raw artifact from
wfSpec/wftSpec, discarding lower-priority fields; instead, keep the existing
target artifact as the base and overlay missing/non-zero fields from the
higher-precedence artifact. In the loop that iterates
targetWf.Spec.Arguments.Artifacts (using wfArtifactsMap/wftArtifactsMap and
artifactsToMapByName), create a copy of the current target artifact, then copy
only the non-empty fields from art.DeepCopy() into that base (or perform a
shallow/field-wise merge) and assign the merged result back to
targetWf.Spec.Arguments.Artifacts[index] so key-only overrides preserve
bucket/endpoint and other inherited fields.
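A minimal sketch of the field-wise overlay the comment proposes, using a cut-down stand-in for the real `v1alpha1.Artifact` type (the PR author argues below that full replacement is the intended semantics, so this only shows what the suggested alternative would look like):

```go
package main

import "fmt"

// artifact is a hypothetical, cut-down stand-in for the real Artifact type,
// keeping only the fields needed to show the overlay idea.
type artifact struct {
	Name     string
	Key      string
	Bucket   string
	Endpoint string
}

// overlay keeps base as-is and copies only the non-empty fields from
// override, so a key-only override preserves inherited bucket/endpoint.
func overlay(base, override artifact) artifact {
	merged := base
	if override.Key != "" {
		merged.Key = override.Key
	}
	if override.Bucket != "" {
		merged.Bucket = override.Bucket
	}
	if override.Endpoint != "" {
		merged.Endpoint = override.Endpoint
	}
	return merged
}

func main() {
	fromTemplate := artifact{Name: "input", Key: "default/key", Bucket: "team-bucket", Endpoint: "s3.internal"}
	fromUpload := artifact{Name: "input", Key: "uploads/run-1/file.csv"} // key-only override
	fmt.Printf("%+v\n", overlay(fromTemplate, fromUpload))
}
```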

---

Nitpick comments:
In `@server/workflow/workflow_server.go`:
- Around line 834-847: The debug block is overly verbose and uses an unnecessary
anonymous function for "artifactsCount"; simplify logging by removing the
closure and either omit "artifactsCount" entirely or compute it in a nil-safe
way before the logger call (e.g. set a local int var artifactsCount = -1; if
req.SubmitOptions != nil { artifactsCount = len(req.SubmitOptions.Artifacts) })
and then call logging.RequireLoggerFromContext(ctx).WithFields(...).Debug(ctx,
...) including only the needed fields (e.g. "hasSubmitOptions", "artifactsCount"
or drop it) and keep the wf.Spec.WorkflowTemplateRef check as-is to avoid the
inline lambda complexity.
- Around line 882-887: The loop parsing req.SubmitOptions.Artifacts silently
ignores entries without "=", which can hide user mistakes; update the parsing in
the artifact override handling (the loop over req.SubmitOptions.Artifacts that
fills overrides) to detect malformed artifactStr values and either log a warning
that includes the offending artifactStr and context (e.g., which request/user)
or return a validation error to the caller; specifically, replace the current
unconditional skip with a branch that calls the server's logger (or returns an
error from the submit handler) when len(parts) != 2 so users are informed their
override was not applied.
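A sketch of the stricter parsing the comment asks for. `parseArtifactOverride` is a hypothetical helper; whether the caller logs the error as a warning or returns it as a validation failure is a policy choice:

```go
package main

import (
	"fmt"
	"strings"
)

// parseArtifactOverride splits "name=value" and reports malformed input
// instead of silently skipping it.
func parseArtifactOverride(s string) (name, value string, err error) {
	parts := strings.SplitN(s, "=", 2)
	if len(parts) != 2 || parts[0] == "" {
		return "", "", fmt.Errorf("malformed artifact override %q: expected name=value", s)
	}
	return parts[0], parts[1], nil
}

func main() {
	if name, value, err := parseArtifactOverride("input=s3://bucket/uploads/file.csv"); err == nil {
		fmt.Printf("override %s -> %s\n", name, value)
	}
	if _, _, err := parseArtifactOverride("no-equals-sign"); err != nil {
		fmt.Println(err)
	}
}
```

`SplitN` with a limit of 2 keeps any `=` inside the value intact, which matters for keys containing that character.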

In `@ui/src/shared/components/artifacts-input/artifacts-input.tsx`:
- Around line 14-18: The ArtifactUploadResponse interface uses location: any
which removes type safety; replace it with a specific type (e.g., define an
interface like ArtifactLocation { url: string; bucket?: string; region?: string;
... } or import an existing backend/shared type) and update
ArtifactUploadResponse to use location: ArtifactLocation (or the imported type),
then adjust any consumers of ArtifactUploadResponse to match the new shape.

In `@ui/src/shared/models/submit-opts.ts`:
- Around line 5-7: Update the JSDoc/comment for the artifacts field to describe
a generic scheme format and include additional supported schemes; specifically
change the comment above the artifacts?: string[] declaration to say something
like "Format: name=<scheme>://bucket/key (e.g., s3://, gcs://, azure://, oss://,
hdfs://, file://)" or similar so it covers backends beyond S3/GCS and documents
that <scheme> is the storage provider.

In `@workflow/artifacts/azure/azure.go`:
- Around line 307-323: In SaveStream, wrap the error returned from
blobClient.UploadStream with a descriptive message instead of returning it raw;
update the return to use fmt.Errorf("unable to upload Azure blob %s to container
%s: %w", outputArtifact.Azure.Blob, outputArtifact.Azure.Container, err) (or
similar) so failures from blobClient.UploadStream are consistent with other
wrapped errors in this file.

In `@workflow/artifacts/hdfs/hdfs_test.go`:
- Around line 182-188: The test currently uses errors.As(err, &argoErr) with an
if ok guard which allows the test to silently pass if the error isn't an
ArgoError; replace the conditional with an unconditional assertion that the
error is an ArgoError (e.g., require.True(t, errors.As(err, &argoErr)) or
assert.True(t, ok) immediately after calling errors.As) and then assert the
Code() equals argoerrors.CodeNotImplemented on argoErr; update the call sites
around errors.As and argoErr to ensure the test fails if the type assertion
fails.

In `@workflow/artifacts/oss/oss_test.go`:
- Around line 72-97: TestOssTransientErrorCodes duplicates the transient-code
list and may drift from the source; change it to reference the canonical
ossTransientErrorCodes slice directly (or assert equality against it) instead of
hardcoding values: update TestOssTransientErrorCodes to iterate over
ossTransientErrorCodes (the package-level variable used by isTransientOSSErr)
and call isTransientOSSErr for each entry (or add an assertion that the
hardcoded expected list equals ossTransientErrorCodes), so the test stays in
sync with the implementation.
- Around line 99-103: Test only asserts the constant maxObjectSize which isn't
sufficient; extend TestPutFileSimpleVsMultipart to exercise the upload-path
selection logic by invoking the function(s) that decide simple vs multipart
upload (locate the code that uses maxObjectSize, e.g., the uploader method or
helper that checks file size) with mocked file sizes just under and just over
maxObjectSize and assert the expected path is chosen (e.g., verify that
PutFileSimple is called for size < maxObjectSize and PutFileMultipart for size
>= maxObjectSize), or mock the storage client to observe which upload routine
runs when calling the public upload entrypoint used in production.

In `@workflow/controller/controller_test.go`:
- Around line 477-490: The test seeds the informer with the pre-create stub
"taskResult" which lacks API-set metadata; replace the stub with the persisted
object returned by the Create call. After calling
woc.controller.wfclientset.ArgoprojV1alpha1().WorkflowTaskResults(...).Create(...),
capture the returned object (the created TaskResult) and call
woc.controller.taskResultInformer.GetIndexer().Add(createdTaskResult) instead of
Add(taskResult) so the informer's cache contains the API-populated
resourceVersion/uid.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 140aa4f4-ba19-4e84-84b3-637f4f777d5b

📥 Commits

Reviewing files that changed from the base of the PR and between 59f1089 and 9fd62ba.

⛔ Files ignored due to path filters (5)
  • api/openapi-spec/swagger.json is excluded by !**/api/openapi-spec/*.json
  • pkg/apis/workflow/v1alpha1/generated.pb.go is excluded by !**/*.pb.go, !**/*.pb.go
  • pkg/apis/workflow/v1alpha1/openapi_generated.go is excluded by !**/*_generated.go
  • pkg/apis/workflow/v1alpha1/zz_generated.deepcopy.go is excluded by !**/zz_generated.*.go
  • sdks/java/client/docs/IoArgoprojWorkflowV1alpha1SubmitOpts.md is excluded by !**/sdks/java/client/**
📒 Files selected for processing (43)
  • .features/pending/input-artifact-upload.md
  • .spelling
  • api/jsonschema/schema.json
  • docs/fields.md
  • examples/workflow-template/input-artifacts.yaml
  • pkg/apiclient/argo-kube-client.go
  • pkg/apis/api-rules/violation_exceptions.list
  • pkg/apis/workflow/v1alpha1/common.go
  • pkg/apis/workflow/v1alpha1/generated.proto
  • server/apiserver/argoserver.go
  • server/artifacts/artifact_server.go
  • server/artifacts/artifact_server_test.go
  • server/workflow/workflow_server.go
  • server/workflow/workflow_server_test.go
  • ui/src/shared/components/artifacts-input/artifacts-input.scss
  • ui/src/shared/components/artifacts-input/artifacts-input.tsx
  • ui/src/shared/components/artifacts-input/index.ts
  • ui/src/shared/models/submit-opts.ts
  • ui/src/workflow-templates/workflow-template-details.tsx
  • ui/src/workflows/components/submit-workflow-panel.tsx
  • ui/webpack.config.js
  • workflow/artifacts/azure/azure.go
  • workflow/artifacts/azure/azure_test.go
  • workflow/artifacts/common/common.go
  • workflow/artifacts/gcs/gcs.go
  • workflow/artifacts/gcs/gcs_test.go
  • workflow/artifacts/git/git.go
  • workflow/artifacts/hdfs/hdfs.go
  • workflow/artifacts/hdfs/hdfs_test.go
  • workflow/artifacts/http/http.go
  • workflow/artifacts/logging/driver.go
  • workflow/artifacts/logging/driver_test.go
  • workflow/artifacts/oss/oss.go
  • workflow/artifacts/oss/oss_test.go
  • workflow/artifacts/plugin/plugin.go
  • workflow/artifacts/raw/raw.go
  • workflow/artifacts/s3/s3.go
  • workflow/artifacts/s3/s3_test.go
  • workflow/controller/controller_test.go
  • workflow/util/merge.go
  • workflow/util/merge_test.go
  • workflow/util/util.go
  • workflow/util/util_test.go

Comment on lines +288 to +300
bucketName := outputArtifact.OSS.Bucket
err = setBucketLogging(osscli, bucketName)
if err != nil {
	return !isTransientOSSErr(ctx, err), err
}
bucket, err := osscli.Bucket(bucketName)
if err != nil {
	return !isTransientOSSErr(ctx, err), err
}
err = bucket.PutObject(outputArtifact.OSS.Key, reader)
if err != nil {
	return !isTransientOSSErr(ctx, err), err
}
Contributor

@coderabbitai coderabbitai bot Mar 10, 2026

⚠️ Potential issue | 🟠 Major

Keep SaveStream's setup behavior aligned with Save.

This path goes straight to Bucket/PutObject and skips CreateBucketIfNotPresent plus LifecycleRule handling that Save performs. Stream uploads can therefore fail against auto-create repos or silently miss lifecycle configuration even though file-based uploads work.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@workflow/artifacts/oss/oss.go` around lines 288 - 300, SaveStream currently
skips the bucket creation and lifecycle configuration that Save performs: before
calling setBucketLogging, bucket := osscli.Bucket(...) and bucket.PutObject(...)
you must mirror Save’s setup by invoking CreateBucketIfNotPresent (or
equivalent) for outputArtifact.OSS.Bucket, then apply the same lifecycle rule
handling (the function or logic used by Save to set lifecycle rules), and only
after those succeed call setBucketLogging and bucket.PutObject; update the
SaveStream function (and any helper it uses) to call CreateBucketIfNotPresent,
apply the lifecycle configuration, then proceed to setBucketLogging and
PutObject so stream uploads behave identically to Save.

Contributor Author

Fixed in 2022b92. OSS SaveStream now buffers the stream to a temp file before entering the retry loop, so the reader is fresh on each retry.

Contributor

@panicboat, thank you for the update! The temp-file buffering is a good improvement — it ensures the io.Reader can be re-read on each retry attempt without exhausting the stream.

However, the original concern is still present in the updated code. Looking at the current SaveStream implementation in the file, it still skips two pieces of setup that Save performs:

  1. CreateBucketIfNotPresent: Save conditionally creates the bucket if outputArtifact.OSS.CreateBucketIfNotPresent is set:

    if outputArtifact.OSS.CreateBucketIfNotPresent {
        exists, existsErr := osscli.IsBucketExist(bucketName)
        // ...
        if !exists {
            err = osscli.CreateBucket(bucketName)
        }
    }

    SaveStream has no equivalent, so stream uploads will fail against auto-create repos where file-based uploads work fine.

  2. LifecycleRule: Save calls setBucketLifecycleRule(osscli, outputArtifact.OSS) when a lifecycle rule is configured. SaveStream omits this entirely, so lifecycle policies won't be applied on stream-based uploads.

Could you add those two blocks inside the waitutil.Backoff closure of SaveStream, mirroring what Save does? For example, after setBucketLogging and before osscli.Bucket(...):

if outputArtifact.OSS.CreateBucketIfNotPresent {
    exists, existsErr := osscli.IsBucketExist(bucketName)
    if existsErr != nil {
        return !isTransientOSSErr(ctx, existsErr), fmt.Errorf("failed to check if bucket %s exists: %w", bucketName, existsErr)
    }
    if !exists {
        if createErr := osscli.CreateBucket(bucketName); createErr != nil {
            return !isTransientOSSErr(ctx, createErr), fmt.Errorf("failed to automatically create bucket %s: %w", bucketName, createErr)
        }
    }
}
// ...obtain bucket...
if outputArtifact.OSS.LifecycleRule != nil {
    if lcErr := setBucketLifecycleRule(osscli, outputArtifact.OSS); lcErr != nil {
        return !isTransientOSSErr(ctx, lcErr), lcErr
    }
}

Contributor Author

Good catch. Fixed in 8033441 — added CreateBucketIfNotPresent and LifecycleRule setup to SaveStream to match Save behavior.

Comment on lines +825 to +833
reader := strings.NewReader(tc.content)
err := tc.s3client.PutStream(tc.bucket, tc.key, reader, int64(len(tc.content)))
if tc.errMsg == "" {
	require.NoError(t, err)
} else {
	require.Error(t, err)
	// For this test, we're testing the mock client directly
	// The actual error wrapping happens in the ArtifactDriver.SaveStream
}
Contributor

@coderabbitai coderabbitai bot Mar 10, 2026

⚠️ Potential issue | 🟠 Major

Exercise the real save-stream path here.

Both tests call PutStream on the mock directly, so they still pass if ArtifactDriver.SaveStream stops wrapping errors, skips retry handling, or never invokes the client at all. The unused errMsg/done fields are a symptom of that gap.

Also applies to: 861-867

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@workflow/artifacts/s3/s3_test.go` around lines 825 - 833, The tests are
invoking the mock client's PutStream directly (reader :=
strings.NewReader(tc.content); err := tc.s3client.PutStream(...)) which bypasses
ArtifactDriver.SaveStream and therefore doesn't exercise error wrapping, retry
logic, or use of the errMsg/done fields; change the test to call
ArtifactDriver.SaveStream (or the production SaveStream wrapper used by
production code) with the same bucket/key/reader and the done channel so the
driver invokes the s3client and performs its full logic, then assert against the
returned error and the expected errMsg/done outcomes instead of asserting the
mock client's direct PutStream return value; reference the s3client.PutStream
mock, ArtifactDriver.SaveStream method, and the test case fields errMsg and done
when making these changes.

Contributor Author

Noted. The existing S3 tests cover the underlying client behavior. Adding SaveStream-specific integration tests is a good follow-up but out of scope for this PR.

Contributor

@panicboat, understood — the existing tests do validate the mock client's PutStream behaviour, and tightening the coverage around ArtifactDriver.SaveStream's retry/error-wrapping logic can reasonably land in a follow-up. Would you like me to open a GitHub issue to track that work?

Contributor Author

Yes, please open an issue for that. Happy to address it in a follow-up PR.

Comment on lines +137 to +144
wfArtifactsMap := artifactsToMapByName(wfSpec)
wftArtifactsMap := artifactsToMapByName(wftSpec)
for index, artifact := range targetWf.Spec.Arguments.Artifacts {
if art, ok := wfArtifactsMap[artifact.Name]; ok {
targetWf.Spec.Arguments.Artifacts[index] = *art.DeepCopy()
} else if art, ok := wftArtifactsMap[artifact.Name]; ok {
targetWf.Spec.Arguments.Artifacts[index] = *art.DeepCopy()
}
Contributor

@coderabbitai coderabbitai bot Mar 10, 2026

⚠️ Potential issue | 🟠 Major

Preserve inherited artifact fields during precedence resolution.

targetWf.Spec.Arguments.Artifacts[index] is already the merged value. Replacing it with the raw wfSpec/wftSpec artifact drops lower-priority fields, so a key-only override can lose the bucket/endpoint copied from the template/defaults.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@workflow/util/merge.go` around lines 137 - 144, The current loop replaces the
already-merged targetWf.Spec.Arguments.Artifacts[index] with the raw artifact
from wfSpec/wftSpec, discarding lower-priority fields; instead, keep the
existing target artifact as the base and overlay missing/non-zero fields from
the higher-precedence artifact. In the loop that iterates
targetWf.Spec.Arguments.Artifacts (using wfArtifactsMap/wftArtifactsMap and
artifactsToMapByName), create a copy of the current target artifact, then copy
only the non-empty fields from art.DeepCopy() into that base (or perform a
shallow/field-wise merge) and assign the merged result back to
targetWf.Spec.Arguments.Artifacts[index] so key-only overrides preserve
bucket/endpoint and other inherited fields.

Contributor Author

The current behavior is intentional. When an artifact override is provided, DeepCopy() replaces the entire artifact, which is the correct semantics for overriding uploaded artifact keys. Partial field overlay would risk mixing stale location fields with the new key.

Contributor

@panicboat, understood — that's a clear and valid rationale. Full replacement ensures the uploaded artifact's location is authoritative and avoids any confusion from partially inherited stale location fields. I'll defer to this design.


✏️ Learnings added
Learnt from: panicboat
Repo: argoproj/argo-workflows PR: 15237
File: workflow/util/merge.go:137-144
Timestamp: 2026-04-06T10:00:46.495Z
Learning: In `workflow/util/merge.go` (`JoinWorkflowSpec`), the artifact precedence loop intentionally replaces `targetWf.Spec.Arguments.Artifacts[index]` with a full `DeepCopy()` of the higher-priority artifact (from `wfSpec` or `wftSpec`). This is by design: when an artifact key is overridden (e.g., after a UI upload), the entire artifact location should be replaced, not merged field-by-field, to avoid mixing stale bucket/endpoint fields from the template with the new key.

Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.

@panicboat
Contributor Author

Addressed the nitpick comments from the review:

  • workflow_server.go:834-847 (verbose logging): Fixed in 1bf2106. Removed anonymous function and unnecessary fields. Logging now only fires when SubmitOptions is non-nil.
  • workflow_server.go:882-887 (malformed override): Fixed in 244cbd1. Malformed artifact override strings now produce a warning log.
  • artifacts-input.tsx:14-18 (location: any): Fixed in fa81eb6. Replaced with Record<string, unknown>.
  • artifacts-input.tsx:107-110 (drag-and-drop): Fixed in fa81eb6. Added onDragOver, onDragLeave, onDrop handlers to the dropzone element.
  • azure.go:307-323 (error wrapping): Fixed in 2022b92. UploadStream error is now wrapped with fmt.Errorf for consistent reporting.

@panicboat panicboat force-pushed the feat/large-file-upload branch 3 times, most recently from 154ed88 to d7bdeb0 Compare April 7, 2026 06:38
panicboat added 13 commits April 9, 2026 10:35
…loads

Signed-off-by: panicboat <panicboat@gmail.com>
panicboat added 28 commits April 9, 2026 10:35
…line character
…truncated uploads
…load

Replace `location: any` with `Record<string, unknown>` for type safety,
extract upload logic into shared `uploadFile` function, and add drag-and-drop
event handlers to the dropzone element.

Signed-off-by: panicboat <panicboat@gmail.com>
@panicboat panicboat force-pushed the feat/large-file-upload branch from 8cc621a to acf237c Compare April 9, 2026 01:35
@panicboat
Contributor Author

@Joibel @terrytangyuan
Thank you for reviewing this PR.
I’ve addressed the points raised by CodeRabbitAI, so I’d appreciate it if you could take another look when you have a moment.
If you notice anything else that needs attention, please feel free to let me know.

Also, on a separate note, the CI seems to be a bit unstable.
In some cases, the status doesn’t turn “green” unless I push a few empty commits.
If you’re okay with it, I’d be happy to work on resolving or mitigating this issue.

Successfully merging this pull request may close these issues.

Accept CSV and other files as input values for workflow
2 participants