
feat: structured logging in digger (backend) #1927

Merged · 3 commits · Mar 29, 2025

Conversation

@aldoborrero (Contributor) commented Mar 27, 2025

Summary of Changes

@motatoes This PR introduces the first batch of changes to the backend, directly replacing log with log/slog. I still have to add log/slog support in tests as well.

For now, my focus has been switching the log engine and adding in some places extra debugging where I saw it was necessary. The next steps in this PR are:

  1. Update ee to use slog
  2. Update lib to use slog
  3. Enable the TextHandler by default. In a separate PR, I would like to improve the CLIs to be able to select different logging options (e.g. JSON) and different levels (see the sketch below).

I would appreciate any feedback and suggestions before proceeding with the next steps.

EDIT: I added slog as it doesn't introduce any external dependency to the codebase (which is good IMHO).
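
As a rough illustration of step 3 (not the project's actual configuration; the DIGGER_LOG_FORMAT and DIGGER_LOG_LEVEL variable names are made up for this sketch), handler and level selection with log/slog could look like:

```go
package logging

import (
	"log/slog"
	"os"
)

// initLogging is a hypothetical sketch: TextHandler by default, with a JSON
// handler and debug level opted into via environment variables (assumed names).
func initLogging() {
	level := slog.LevelInfo
	if os.Getenv("DIGGER_LOG_LEVEL") == "DEBUG" {
		level = slog.LevelDebug
	}

	opts := &slog.HandlerOptions{Level: level}

	var handler slog.Handler = slog.NewTextHandler(os.Stdout, opts)
	if os.Getenv("DIGGER_LOG_FORMAT") == "json" {
		handler = slog.NewJSONHandler(os.Stdout, opts)
	}

	slog.SetDefault(slog.New(handler))
}
```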

Related #1923

Summary by CodeRabbit

  • Chores
    • Upgraded the backend logging and error reporting system to enhance monitoring and troubleshooting.
    • Improved error handling across various components, contributing to a more robust and reliable system.

These internal improvements support faster diagnostics and streamlined maintenance, ensuring a stable and consistent experience for users without altering any visible functionality.

coderabbitai bot commented Mar 27, 2025

Walkthrough

This pull request updates the backend’s logging framework by replacing the standard log package with Go’s structured logging package log/slog. Across multiple modules—including bootstrap, CI backends, controllers, middleware, models, services, tasks, and utils—logging calls have been refactored to use structured methods (e.g., slog.Info, slog.Error, slog.Warn, and slog.Debug) with key–value pairs. Minor changes include updating function signatures and replacing some fatal logs with panics in test files. No core business logic or control flow has been altered.
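
To make the pattern concrete, here is a hedged before/after in the spirit of that description (the message mirrors the "Error getting workflow ID from job" line quoted later in this thread; the jobId key is illustrative, not necessarily the exact key used in the PR):

```go
package example

import (
	"log"
	"log/slog"
)

// Before: unstructured logging with the standard library log package.
func reportBefore(jobID string, err error) {
	log.Printf("error getting workflow ID for job %v: %v", jobID, err)
}

// After: structured logging with log/slog key-value pairs.
func reportAfter(jobID string, err error) {
	slog.Error("Error getting workflow ID from job", "jobId", jobID, "error", err)
}
```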

Changes

File(s) — Change Summary

  • backend/bootstrap/main.go, backend/ci_backends/github_actions.go, backend/ci_backends/provider.go — Replaced standard logging calls with structured slog methods; updated logger initialization and error reporting using key–value pairs.
  • backend/controllers/….go (cache.go, connections.go, dashboard.go, github.go, github_api.go, github_test.go, internal_users.go, jobs.go, orgs.go, policies.go, policies_api.go, projects.go, repos.go, runs.go) — Updated error and informational logs to use slog (structured logging); replaced legacy log.Printf and log.Fatal calls with appropriate log levels and, in tests, switched from fatal logging to panic-based error handling.
  • backend/middleware/….go (basic.go, jwt.go, middleware.go) — Transitioned all logging to slog for authentication and token error logging; enhanced log messages with additional context and replaced fatal exits with panic in critical failures.
  • backend/models/….go (runs.go, scheduler.go, scheduler_test.go, setup.go, storage.go, storage_test.go) — Migrated logging to structured slog in JSON mapping, error handling, and connection processes; updated function signatures where needed and removed obsolete initialization functions in test files.
  • backend/segment/segment.go — Changed logging statements from log.Printf to slog.Debug for improved visibility in client identification and tracking functions.
  • backend/services/….go (auth.go, messages.go, scheduler.go, spec.go) — Replaced standard logging with structured calls in error reporting and process tracking; enhanced logging details for tokens, messages, and job scheduling steps.
  • backend/tasks/….go (runs.go, runs_test.go, tasks.go) — Updated state machine and job scheduling logs to use slog with enhanced context; altered error handling by substituting fatal log calls with panic on errors.
  • backend/utils/….go (ai.go, allowlist.go, batch_utils.go, bitbucket.go, crypt.go, github.go, gitlab.go, graphs.go, pr_comment.go) — Replaced standard logging with structured slog calls; improved error reporting and added helper functions for message handling; removed an unused function (contains) from the allowlist module.

Assessment against linked issues

Objective Addressed Explanation
Implement proper Log Levels and support Structured Logging capabilities (#1923)

Possibly related PRs

  • publish ai summaries #1864: Involves similar structured logging modifications in backend controllers, aligning with the logging enhancements in this PR.

Hi! I'm a little rabbit, hopping through code with glee,
Logging now sings with structure—simple, clear, and free.
No more jumbled messages; each key-value pair shines bright,
From panic leaps in tests to logs that guide our night.
A joyful hop in every commit, making debugging just right!
With every log, I celebrate—coding magic in plain sight!
🐇✨

@aldoborrero changed the title from "feat: structured logging in diigger" to "feat: structured logging in digger" on Mar 28, 2025
@motatoes (Contributor) commented:

Hi @aldoborrero !! Thank you sir 🙇 🙇 🙇

This is a great leap in codebase clarity. Your game plan of controlling log levels and the structure of logs is exactly what we need. The changes look good to me; I checked out your branch and tested it, and it works great. The only thing we also need to work on (as a separate piece) is the logging of the GORM library, which logs all queries by default. For context, even after setting the level to "ERROR", the logging looks like this:


{"time":"2025-03-28T20:53:04.054219+03:00","level":"ERROR","msg":"Error getting workflow ID from job","error":"workflow not found"}

2025/03/28 20:53:05 /Users/mohamedsayed/dev/digger/digger/backend/models/storage.go:908
[7.530ms] [rows:0] INSERT INTO "digger_batches" ("id","vcs","pr_number","comment_id","ai_summary_comment_id","status","branch_name","digger_config","github_installation_id","gitlab_project_id","repo_full_name","repo_owner","repo_name","batch_type","report_terraform_outputs","source_details","vcs_connection_id") VALUES ('ee958b35-80bc-494c-8c7d-0166868b1812','github',88,2762051772,'',1,'motatoes-patch-86','auto_merge: true
auto_merge_strategy: "merge"

projects:
  - name: dev
    dir: .
  - name: staging
    dir: .

workflows:
  default:
    plan:
      steps:
        - init
        - plan
        - run: infracost breakdown --path=. | tee -a $DIGGER_OUT
',61306309,0,'diggerhq/demo-opentofu','diggerhq','demo-opentofu','plan',false,'',NULL) ON CONFLICT DO NOTHING

Definitely a separate piece though: the ability to somehow turn off query printing!
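
One way to tone that down (a sketch only, assuming GORM's standard logger package and a Postgres driver; this is not what the PR currently does) is to lower or silence GORM's own log level when opening the connection:

```go
package models

import (
	"log/slog"

	"gorm.io/driver/postgres"
	"gorm.io/gorm"
	"gorm.io/gorm/logger"
)

// openDB silences GORM's built-in query logging so only slog output remains.
// Using logger.Error instead of logger.Silent would keep GORM's own error lines.
func openDB(dsn string) (*gorm.DB, error) {
	db, err := gorm.Open(postgres.Open(dsn), &gorm.Config{
		Logger: logger.Default.LogMode(logger.Silent),
	})
	if err != nil {
		slog.Error("failed to connect to database", "error", err)
		return nil, err
	}
	return db, nil
}
```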

@motatoes (Contributor) commented:

@aldoborrero I'm happy to merge this PR as it stands now, and we can work on the other parts as separate PRs.

The reason is that it's quite big, and if we end up with conflicts it will be a pain to address, so we might as well merge quickly and handle the rest of the plan as separate pieces. WDYT?

@greptile-apps bot left a comment

PR Summary

This PR transitions the backend logging to structured logging with slog, ensuring all error and debug messages include key/value context. The changes aim to improve log clarity and filtering for more effective troubleshooting.

• backend/ci_backends/provider.go: Replaced log.Printf with slog.Error in GetCiBackend using key/value pairs.
• backend/bootstrap/main.go: Updated most log calls to slog but setupProfiler still uses log.Fatalf.
• backend/controllers/github.go: Logging remains unconverted in parts, and an error logging bug in handlePullRequestEvent was noted.
• backend/middleware/jwt.go: Introduced structured logging but a variable misassignment (token vs. dbToken) must be addressed.
• Multiple files uniformly integrate slog across error and info paths for clearer debugging context.


37 file(s) reviewed, 6 comment(s)

@aldoborrero force-pushed the feat/logging-improvements branch from 3552627 to bfaef29 on March 29, 2025 09:42
@aldoborrero (Contributor, Author) commented:

> Hi @aldoborrero !! Thank you sir 🙇 🙇 🙇
>
> This is a great leap in codebase clarity. […] The only thing we also need to work on (as a separate piece) is the logging of the GORM library, which logs all queries by default. […]

I think the parts that still need to be adapted are:

  • Update gorm to tone down its default logging at the INFO level; switch to slog-gorm.
  • Update gin to follow the same strategy as gorm (rough sketch below).
  • Then update libs.
  • Then update ee.
  • Review whether we're missing anything.
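
For the gin item above, a rough sketch of what that strategy could look like — swapping gin's default plain-text logger for a small slog-based middleware (illustrative only, not part of this PR):

```go
package middleware

import (
	"log/slog"
	"time"

	"github.com/gin-gonic/gin"
)

// SlogLogger emits one structured log entry per request, standing in for
// gin's default logger middleware in this sketch.
func SlogLogger() gin.HandlerFunc {
	return func(c *gin.Context) {
		start := time.Now()
		c.Next()

		slog.Info("request completed",
			"method", c.Request.Method,
			"path", c.Request.URL.Path,
			"status", c.Writer.Status(),
			"duration", time.Since(start),
		)
	}
}

// Usage: r := gin.New(); r.Use(gin.Recovery(), SlogLogger())
```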

> @aldoborrero I'm happy to merge this PR as it stands now and we can work on the other parts as separate PRs. […] WDYT?

Yes. The change is quite big as the project is big. So I agree with merging it now to avoid any issues and will prepare the remaining PRs ASAP over the weekend.

@aldoborrero marked this pull request as ready for review on March 29, 2025 09:55
@aldoborrero changed the title from "feat: structured logging in digger" to "feat: structured logging in digger (backend)" on Mar 29, 2025
@greptile-apps bot left a comment

PR Summary

This PR comprehensively migrates backend logging to structured logging via slog. Overall, it improves observability and context in logs.

• In backend/bootstrap/main.go, log.Fatalf remains and needs conversion for full consistency.
• In backend/middleware/jwt.go the token handling variable misassignment (dbToken vs. tokenObj) must be resolved.
• Verify that sensitive fields (e.g., raw token values) aren’t logged inadvertently.
• Ensure asynchronous goroutine errors (e.g., in cache.go) are logged appropriately for later review.
• Confirm that tests maintain clarity of failure messages under the new logging setup.

43 file(s) reviewed, 11 comment(s)

c.String(http.StatusInternalServerError, "Error occurred while fetching database")
c.Abort()
return nil, fmt.Errorf("could not fetch cli token")
}

log.Printf("Token: %v access level: %v", jobToken.Value, jobToken.Type)
slog.Debug("Token verified", "tokenValue", jobToken.Value, "accessLevel", jobToken.Type, "expiry", jobToken.Expiry)

logic: Avoid logging raw token values; mask or obfuscate token details to prevent exposing sensitive data.

Suggested change
slog.Debug("Token verified", "tokenValue", jobToken.Value, "accessLevel", jobToken.Type, "expiry", jobToken.Expiry)
slog.Debug("Token verified", "tokenValue", "****", "accessLevel", jobToken.Type, "expiry", jobToken.Expiry)

Comment on lines 221 to 229
var dbToken models.Token

token, err := models.DB.GetToken(token)
if token == nil {
tokenObj, err := models.DB.GetToken(token)
if tokenObj == nil {
slog.Warn("Invalid bearer token", "token", token)
c.String(http.StatusForbidden, "Invalid bearer token")
c.Abort()
return
}

logic: Use tokenObj (the fetched token) instead of the uninitialized dbToken in the API token branch.

Suggested change
var dbToken models.Token
token, err := models.DB.GetToken(token)
if token == nil {
tokenObj, err := models.DB.GetToken(token)
if tokenObj == nil {
slog.Warn("Invalid bearer token", "token", token)
c.String(http.StatusForbidden, "Invalid bearer token")
c.Abort()
return
}
tokenObj, err := models.DB.GetToken(token)
if tokenObj == nil {
slog.Warn("Invalid bearer token", "token", token)
c.String(http.StatusForbidden, "Invalid bearer token")
c.Abort()
return
}

Comment on lines 1548 to 1553
if err := DB.GormDB.Where("job_token_id = ?", jobTokenId).First(&artefact).Error; err != nil {
slog.Error("failed to get job artefact",
"jobTokenId", jobTokenId,
"error", err)
return nil, err
}

logic: Found usage of 'DB.GormDB' instead of 'db.GormDB' in GetJobArtefact; confirm if this reference is intentional to use a global variable.

Suggested change
if err := DB.GormDB.Where("job_token_id = ?", jobTokenId).First(&artefact).Error; err != nil {
slog.Error("failed to get job artefact",
"jobTokenId", jobTokenId,
"error", err)
return nil, err
}
if err := db.GormDB.Where("job_token_id = ?", jobTokenId).First(&artefact).Error; err != nil {
slog.Error("failed to get job artefact",
"jobTokenId", jobTokenId,
"error", err)
return nil, err
}

Comment on lines 148 to 151
if err != nil {
slog.Error("Could not get jobs for batch", "batchId", b.ID.String(), "error", err)
return res, fmt.Errorf("could not unmarshall digger batch: %v", err)
}

style: Typo in error string: use 'unmarshal' instead of 'unmarshall'.

Suggested change
if err != nil {
slog.Error("Could not get jobs for batch", "batchId", b.ID.String(), "error", err)
return res, fmt.Errorf("could not unmarshall digger batch: %v", err)
}
if err != nil {
slog.Error("Could not get jobs for batch", "batchId", b.ID.String(), "error", err)
return res, fmt.Errorf("could not unmarshal digger batch: %v", err)
}

Comment on lines 19 to 24
e := os.Remove(dbName)
if e != nil {
if !strings.Contains(e.Error(), "no such file or directory") {
log.Fatal(e)
panic(e)
}
}

style: panic used for error handling in test setup; consider using t.Fatalf for clearer test error reporting

Comment on lines 107 to 109
job.Status = orchestrator_scheduler.DiggerJobQueuedForRun
models.DB.UpdateDiggerJob(job)
return nil

logic: DB.UpdateDiggerJob error in queuing branch is not checked.

Suggested change
job.Status = orchestrator_scheduler.DiggerJobQueuedForRun
models.DB.UpdateDiggerJob(job)
return nil
job.Status = orchestrator_scheduler.DiggerJobQueuedForRun
err := models.DB.UpdateDiggerJob(job)
if err != nil {
slog.Error("Failed to update job status", "jobId", job.DiggerJobID, "error", err)
return err
}
return nil

Comment on lines 42 to 46
repoName, err := ExtractCleanRepoName(repoUrl)
if err != nil {
log.Printf("warning could not parse url: %v", repoUrl)
slog.Warn("Could not parse repository URL", "url", repoUrl, "error", err)
return false
}

logic: Returning false immediately on URL parsing error may change behavior; verify this decision.

Comment on lines 110 to 113
if err != nil {
slog.Error("Failed to decode GITHUB_APP_PRIVATE_KEY_BASE64", "error", err)
return nil, nil, fmt.Errorf("error initialising github app installation: please set GITHUB_APP_PRIVATE_KEY_BASE64 env variable\n")
}

style: Consider removing the trailing newline in error messages (e.g., in fmt.Errorf at line 112) for consistency with other messages.

Comment on lines 87 to 103
func InitCommentReporter(prService ci.PullRequestService, prNumber int, commentMessage string) (*CommentReporter, error) {
slog.Debug("Initializing comment reporter",
"prNumber", prNumber,
"messageLength", len(commentMessage),
)

comment, err := prService.PublishComment(prNumber, commentMessage)
if err != nil {
slog.Error("Failed to publish comment", "prNumber", prNumber, "error", err)
return nil, fmt.Errorf("count not initialize comment reporter: %v", err)
}

//commentId, err := strconv.ParseInt(fmt.Sprintf("%v", comment.Id), 10, 64)
if err != nil {
log.Printf("could not convert to int64, %v", err)
slog.Error("Could not convert comment ID to int64", "commentId", comment.Id, "error", err)
return nil, fmt.Errorf("could not convert to int64, %v", err)
}

logic: Redundant error check: err is checked twice after PublishComment. Remove or refactor the block starting at line 100 if conversion is no longer performed.

Suggested change
func InitCommentReporter(prService ci.PullRequestService, prNumber int, commentMessage string) (*CommentReporter, error) {
slog.Debug("Initializing comment reporter",
"prNumber", prNumber,
"messageLength", len(commentMessage),
)
comment, err := prService.PublishComment(prNumber, commentMessage)
if err != nil {
slog.Error("Failed to publish comment", "prNumber", prNumber, "error", err)
return nil, fmt.Errorf("count not initialize comment reporter: %v", err)
}
//commentId, err := strconv.ParseInt(fmt.Sprintf("%v", comment.Id), 10, 64)
if err != nil {
log.Printf("could not convert to int64, %v", err)
slog.Error("Could not convert comment ID to int64", "commentId", comment.Id, "error", err)
return nil, fmt.Errorf("could not convert to int64, %v", err)
}
func InitCommentReporter(prService ci.PullRequestService, prNumber int, commentMessage string) (*CommentReporter, error) {
slog.Debug("Initializing comment reporter",
"prNumber", prNumber,
"messageLength", len(commentMessage),
)
comment, err := prService.PublishComment(prNumber, commentMessage)
if err != nil {
slog.Error("Failed to publish comment", "prNumber", prNumber, "error", err)
return nil, fmt.Errorf("count not initialize comment reporter: %v", err)
}

Comment on lines 279 to 284
if _, ok := impactedProjects[currentNode]; ok {
currentProject, ok := impactedProjects[currentNode]
if !ok {
slog.Error("Project not found", "projectName", currentNode)
return fmt.Errorf("project %s not found", currentNode)
}

logic: Redundant key check: first verifies existence, then re-retrieves the value. Consider removing the extra retrieval check.

Suggested change
if _, ok := impactedProjects[currentNode]; ok {
currentProject, ok := impactedProjects[currentNode]
if !ok {
slog.Error("Project not found", "projectName", currentNode)
return fmt.Errorf("project %s not found", currentNode)
}
if currentProject, ok := impactedProjects[currentNode]; ok {
// The ok check is redundant here since we already know it exists
// if !ok {
// slog.Error("Project not found", "projectName", currentNode)
// return fmt.Errorf("project %s not found", currentNode)
// }

@coderabbitai bot left a comment

Actionable comments posted: 7

♻️ Duplicate comments (1)
backend/middleware/basic.go (1)

17-24: ⚠️ Potential issue

Error handling in HTTP Basic Auth needs improvement.

While the error is properly logged, the function continues execution after encountering a configuration error, which could lead to unexpected behavior.

	if username == "" || password == "" {
		err := fmt.Errorf("configuration error: HTTP Basic Auth configured but username or password not set")
		slog.Error("Basic auth configuration error", "error", err)
		c.Error(err)
+		c.String(http.StatusInternalServerError, err.Error())
+		c.Abort()
+		return
	}
🧰 Tools
🪛 golangci-lint (1.64.8)

23-23: Error return value of c.Error is not checked

(errcheck)

🧹 Nitpick comments (12)
backend/services/auth.go (1)

64-64: Well-structured error logging

Good transition to structured logging with key-value pairs for better context.

Consider adding similar structured logging to other error paths in this file (e.g., lines 58, 71, 90-92, 98) for better observability and debugging.

backend/tasks/runs_test.go (1)

88-88: Consider using t.Fatal instead of panic in tests

Changing from log.Fatal to panic in test teardown changes the error handling behavior. While panic will be caught by the test framework, using t.Fatal would be more idiomatic for test failures and provides better error reporting.

-			panic(err)
+			t.Fatal(err)
backend/controllers/github.go (4)

1127-1144: Unused function detected.

The getBatchType function is defined but never used in the codebase, according to static analysis.

Consider either removing this unused function or adding a comment explaining why it's being kept for future use.

🧰 Tools
🪛 golangci-lint (1.64.8)

1127-1127: func getBatchType is unused

(unused)


1736-1737: Unused constant.

The MaxPatchSize constant is defined but never used.

-		const MaxPatchSize = 1024 * 1024 // 1MB limit

If this is intended for future use, consider adding a comment explaining its purpose.

🧰 Tools
🪛 golangci-lint (1.64.8)

1736-1736: const MaxPatchSize is unused

(unused)


2052-2057: Simplify string checking with standard library.

The current code can be simplified using strings.Contains.

-	if strings.Index(githubRepo, "/") == -1 {
+	if !strings.Contains(githubRepo, "/") {

This is more idiomatic Go and directly communicates the intent of checking if the string contains a specific character.

🧰 Tools
🪛 golangci-lint (1.64.8)

2052-2052: S1003: should use !strings.Contains(githubRepo, "/") instead

(gosimple)


1634-1641: Consistent error logging.

The error message in the log doesn't match the returned error message. Consider making them consistent.

-		slog.Error("Error triggering Digger jobs",
+		slog.Error("Error triggering Digger Jobs",

Or vice versa, adjust the returned error message to match the logging:

-		return fmt.Errorf("error triggering Digger Jobs")
+		return fmt.Errorf("error triggering Digger jobs")
backend/utils/gitlab.go (3)

24-24: Consider setting log level as a variable or constant

You've added structured logging using slog throughout this file. To maintain consistency in logging and make it easier to adjust log levels in the future, consider defining logging levels as variables or constants at the package level.

+// Package-level constants for log levels
+const (
+    LogLevelDebug = slog.LevelDebug
+    LogLevelInfo  = slog.LevelInfo
+    LogLevelWarn  = slog.LevelWarn
+    LogLevelError = slog.LevelError
+)

110-114: Consider adding log correlation IDs

While you've added structured logging with clear contexts, consider adding unique correlation IDs to trace a single operation across multiple log entries. This would be particularly useful for tracking operations across multiple services or functions.

 slog.Debug("Retrieved changed files",
+    "correlationId", uuid.New().String(),
     "projectId", projectId,
     "prNumber", prNumber,
     "changedFilesCount", len(changedFiles),
 )

120-123: Include file existence check before reading

The current implementation attempts to read digger.yml directly without first checking if it exists. Consider adding a file existence check before reading to provide more specific error messages for missing configuration files versus other read errors.

 diggerYmlPath := path.Join(dir, "digger.yml")
+// Check if file exists before attempting to read
+if _, err := os.Stat(diggerYmlPath); os.IsNotExist(err) {
+    slog.Error("digger.yml file not found",
+        "path", diggerYmlPath,
+    )
+    return fmt.Errorf("digger.yml file not found at %s", diggerYmlPath)
+}
 diggerYmlBytes, err := os.ReadFile(diggerYmlPath)
backend/controllers/jobs.go (1)

58-58: Consider adding a retry mechanism for status conversion

Rather than just logging and continuing when status conversion fails, you might want to implement a retry mechanism or fallback to a default status. This would make the system more robust when handling corrupted or unexpected data.

-slog.Error("Could not convert status to string", "jobId", j.ID, "status", j.Status, "error", err)
-continue
+slog.Error("Could not convert status to string, using default", "jobId", j.ID, "status", j.Status, "error", err)
+// Fall back to "unknown" status
+jobsRes[i].Status = orchestrator_scheduler.DiggerJobStatusUnknown.ToString()
backend/controllers/projects.go (2)

1187-1187: Boolean comparison can be simplified

The boolean comparison with a constant can be simplified.

-if batch.Status == orchestrator_scheduler.BatchJobSucceeded && batch.BatchType == orchestrator_scheduler.DiggerCommandApply && automerge == true {
+if batch.Status == orchestrator_scheduler.BatchJobSucceeded && batch.BatchType == orchestrator_scheduler.DiggerCommandApply && automerge {
🧰 Tools
🪛 golangci-lint (1.64.8)

1187-1187: S1002: should omit comparison to bool constant, can be simplified to config.Reporting.AiSummary

(gosimple)


1068-1069: Use keyed fields in struct literal

The SourceGroupingReporter struct literal uses unkeyed fields, which makes the code less maintainable if the struct definition changes.

-reporter := reporting.SourceGroupingReporter{serializedJobs, batch.PrNumber, ghService}
+reporter := reporting.SourceGroupingReporter{
+    Jobs:      serializedJobs,
+    PrNumber:  batch.PrNumber, 
+    PrService: ghService,
+}
🧰 Tools
🪛 golangci-lint (1.64.8)

1068-1068: composites: github.com/diggerhq/digger/libs/comment_utils/reporting.SourceGroupingReporter struct literal uses unkeyed fields

(govet)

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 28a8ecd and e8f476e.

📒 Files selected for processing (43)
  • backend/bootstrap/main.go (6 hunks)
  • backend/ci_backends/github_actions.go (2 hunks)
  • backend/ci_backends/provider.go (2 hunks)
  • backend/controllers/cache.go (5 hunks)
  • backend/controllers/connections.go (9 hunks)
  • backend/controllers/dashboard.go (2 hunks)
  • backend/controllers/github.go (26 hunks)
  • backend/controllers/github_api.go (3 hunks)
  • backend/controllers/github_test.go (5 hunks)
  • backend/controllers/internal_users.go (5 hunks)
  • backend/controllers/jobs.go (4 hunks)
  • backend/controllers/orgs.go (4 hunks)
  • backend/controllers/policies.go (16 hunks)
  • backend/controllers/policies_api.go (6 hunks)
  • backend/controllers/projects.go (21 hunks)
  • backend/controllers/repos.go (4 hunks)
  • backend/controllers/runs.go (7 hunks)
  • backend/middleware/basic.go (3 hunks)
  • backend/middleware/jwt.go (12 hunks)
  • backend/middleware/middleware.go (3 hunks)
  • backend/models/runs.go (4 hunks)
  • backend/models/scheduler.go (4 hunks)
  • backend/models/scheduler_test.go (2 hunks)
  • backend/models/setup.go (3 hunks)
  • backend/models/storage.go (64 hunks)
  • backend/models/storage_test.go (3 hunks)
  • backend/segment/segment.go (3 hunks)
  • backend/services/auth.go (2 hunks)
  • backend/services/messages.go (5 hunks)
  • backend/services/scheduler.go (3 hunks)
  • backend/services/spec.go (6 hunks)
  • backend/tasks/runs.go (1 hunks)
  • backend/tasks/runs_test.go (2 hunks)
  • backend/tasks/tasks.go (1 hunks)
  • backend/utils/ai.go (7 hunks)
  • backend/utils/allowlist.go (2 hunks)
  • backend/utils/batch_utils.go (1 hunks)
  • backend/utils/bitbucket.go (3 hunks)
  • backend/utils/crypt.go (1 hunks)
  • backend/utils/github.go (8 hunks)
  • backend/utils/gitlab.go (2 hunks)
  • backend/utils/graphs.go (3 hunks)
  • backend/utils/pr_comment.go (6 hunks)
🧰 Additional context used
🧬 Code Definitions (26)
backend/ci_backends/github_actions.go (3)
next/ci_backends/github_actions.go (1)
  • GithubActionCi (12-14)
libs/spec/models.go (1)
  • Spec (54-69)
backend/utils/github.go (1)
  • GetWorkflowIdAndUrlFromDiggerJobId (308-369)
backend/controllers/cache.go (1)
backend/models/setup.go (1)
  • DB (20-20)
backend/models/scheduler_test.go (4)
backend/models/setup.go (1)
  • Database (13-15)
backend/models/orgs.go (5)
  • Organisation (9-14)
  • Repo (16-27)
  • Project (76-87)
  • Token (154-160)
  • ProjectRun (29-39)
backend/models/github.go (4)
  • GithubAppInstallation (37-45)
  • VCSConnection (7-22)
  • VCSConnection (26-28)
  • GithubAppInstallationLink (55-61)
backend/models/scheduler.go (3)
  • GithubDiggerJobLink (91-98)
  • DiggerJob (48-64)
  • DiggerJobParentLink (14-18)
backend/utils/batch_utils.go (2)
backend/models/scheduler.go (2)
  • DiggerBatch (26-46)
  • DiggerVCSGithub (22-22)
backend/utils/github.go (2)
  • GithubClientProvider (89-93)
  • GetGithubService (203-226)
backend/controllers/github_api.go (1)
backend/models/setup.go (1)
  • DB (20-20)
backend/controllers/policies.go (2)
backend/middleware/jwt.go (1)
  • ORGANISATION_ID_KEY (326-326)
backend/models/setup.go (1)
  • DB (20-20)
backend/controllers/github_test.go (3)
backend/models/orgs.go (5)
  • Organisation (9-14)
  • Repo (16-27)
  • Project (76-87)
  • Token (154-160)
  • ProjectRun (29-39)
backend/models/github.go (1)
  • GithubAppInstallation (37-45)
backend/models/scheduler.go (2)
  • GithubDiggerJobLink (91-98)
  • DiggerJob (48-64)
backend/models/storage_test.go (3)
backend/models/setup.go (2)
  • Database (13-15)
  • DB (20-20)
backend/models/github.go (4)
  • GithubAppInstallation (37-45)
  • VCSConnection (7-22)
  • VCSConnection (26-28)
  • GithubAppInstallationLink (55-61)
backend/models/scheduler.go (3)
  • GithubDiggerJobLink (91-98)
  • DiggerJob (48-64)
  • DiggerJobParentLink (14-18)
backend/controllers/orgs.go (1)
backend/models/setup.go (1)
  • DB (20-20)
backend/utils/graphs.go (2)
next/services/scheduler.go (1)
  • ConvertJobsToDiggerJobs (206-251)
backend/models/scheduler.go (2)
  • DiggerVCSType (20-20)
  • DiggerJob (48-64)
backend/models/scheduler.go (1)
backend/models/setup.go (1)
  • DB (20-20)
backend/controllers/internal_users.go (2)
backend/models/orgs.go (1)
  • Organisation (9-14)
backend/models/setup.go (1)
  • DB (20-20)
backend/middleware/middleware.go (2)
backend/middleware/basic.go (2)
  • HttpBasicWebAuth (14-32)
  • HttpBasicApiAuth (46-88)
backend/models/scheduler.go (1)
  • JobToken (74-81)
backend/controllers/runs.go (3)
backend/middleware/jwt.go (1)
  • ORGANISATION_ID_KEY (326-326)
backend/models/setup.go (1)
  • DB (20-20)
backend/models/runs.go (1)
  • RunPendingApproval (17-17)
backend/controllers/policies_api.go (1)
backend/models/setup.go (1)
  • DB (20-20)
backend/middleware/basic.go (3)
backend/models/setup.go (1)
  • DEFAULT_ORG_NAME (17-17)
backend/middleware/jwt.go (2)
  • ORGANISATION_ID_KEY (326-326)
  • ACCESS_LEVEL_KEY (329-329)
backend/middleware/middleware.go (1)
  • CheckJobToken (59-84)
backend/models/storage.go (8)
backend/models/setup.go (2)
  • Database (13-15)
  • DB (20-20)
backend/models/orgs.go (3)
  • Repo (16-27)
  • Project (76-87)
  • Organisation (9-14)
libs/digger_config/config.go (2)
  • Project (43-60)
  • DiggerConfig (12-27)
backend/models/runs.go (2)
  • RunType (25-25)
  • DiggerRun (41-60)
backend/models/artefact.go (1)
  • JobArtefact (7-15)
libs/digger_config/yaml.go (1)
  • DiggerConfigYaml (9-25)
backend/models/locking.go (1)
  • DiggerLock (5-11)
backend/models/cache.go (1)
  • RepoCache (8-14)
backend/controllers/jobs.go (1)
backend/models/setup.go (1)
  • DB (20-20)
backend/utils/gitlab.go (4)
libs/ci/gitlab/gitlab.go (2)
  • GitLabService (134-137)
  • GitLabContext (19-38)
backend/utils/github.go (1)
  • CloneGitRepoAndDoAction (32-79)
ee/cli/pkg/utils/github.go (1)
  • CloneGitRepoAndDoAction (19-35)
libs/digger_config/digger_config.go (1)
  • LoadDiggerConfig (174-191)
backend/utils/pr_comment.go (1)
libs/comment_utils/summary/updater.go (1)
  • GithubCommentMaxLength (38-38)
backend/utils/github.go (2)
backend/utils/gitshell.go (1)
  • NewGitShellWithTokenAuth (45-53)
next/utils/github.go (2)
  • SetPRStatusForJobs (135-178)
  • GetGithubHostname (28-34)
backend/services/spec.go (4)
backend/models/scheduler.go (1)
  • DiggerJob (48-64)
backend/utils/github.go (1)
  • GithubClientProvider (89-93)
backend/utils/crypt.go (1)
  • DecryptConnection (95-159)
libs/spec/models.go (1)
  • Spec (54-69)
backend/services/scheduler.go (4)
backend/ci_backends/ci_backends.go (1)
  • CiBackend (8-11)
backend/models/scheduler.go (2)
  • DiggerJob (48-64)
  • DiggerVCSGithub (22-22)
backend/utils/github.go (1)
  • GithubClientProvider (89-93)
libs/spec/models.go (1)
  • Spec (54-69)
backend/tasks/runs.go (4)
backend/models/runs.go (2)
  • DiggerRunQueueItem (32-39)
  • DiggerRun (41-60)
backend/utils/github.go (1)
  • GithubClientProvider (89-93)
backend/ci_backends/github_actions.go (1)
  • GithubActionCi (15-17)
backend/services/spec.go (3)
  • GetRunNameFromJob (92-120)
  • GetSpecFromJob (145-226)
  • GetVCSTokenFromJob (19-90)
backend/controllers/github.go (10)
backend/utils/github.go (1)
  • GithubClientProvider (89-93)
backend/ci_backends/provider.go (1)
  • CiBackendProvider (10-12)
backend/models/setup.go (1)
  • DB (20-20)
libs/ci/github/github.go (2)
  • ProcessGitHubPullRequestEvent (638-657)
  • ConvertGithubPullRequestEventToJobs (441-593)
backend/models/scheduler.go (1)
  • DiggerVCSGithub (22-22)
backend/segment/segment.go (1)
  • Track (48-58)
backend/config/envgetters.go (1)
  • LimitByNumOfFilesChanged (7-11)
backend/ci_backends/ci_backends.go (1)
  • CiBackend (8-11)
ee/drift/controllers/github.go (2)
  • ExistingRepos (152-152)
  • OAuthAccessResponse (186-188)
backend/middleware/jwt.go (1)
  • ORGANISATION_ID_KEY (326-326)
backend/controllers/projects.go (13)
backend/models/orgs.go (3)
  • Organisation (9-14)
  • Repo (16-27)
  • Project (76-87)
backend/models/setup.go (1)
  • DB (20-20)
backend/middleware/jwt.go (1)
  • ORGANISATION_ID_KEY (326-326)
ee/drift/controllers/ci_jobs.go (1)
  • SetJobStatusRequest (14-21)
backend/utils/github.go (1)
  • GithubClientProvider (89-93)
backend/utils/batch_utils.go (1)
  • PostCommentForBatch (10-58)
backend/models/scheduler.go (1)
  • DiggerBatch (26-46)
libs/digger_config/digger_config.go (1)
  • LoadDiggerConfigYamlFromString (222-229)
libs/comment_utils/reporting/utils.go (1)
  • SourceDetails (11-15)
libs/comment_utils/reporting/source_grouping.go (1)
  • SourceGroupingReporter (23-27)
libs/ci/ci.go (1)
  • PullRequestService (5-27)
backend/utils/gitlab.go (1)
  • GetGitlabService (34-68)
libs/digger_config/converters.go (1)
  • ConvertDiggerYamlToConfig (185-325)
🪛 golangci-lint (1.64.8)
backend/controllers/policies.go

331-331: func loadDiggerConfig is unused

(unused)

backend/middleware/basic.go

23-23: Error return value of c.Error is not checked

(errcheck)


39-39: Error return value of c.Error is not checked

(errcheck)

backend/models/storage.go

1589-1589: func validateDiggerConfigYaml is unused

(unused)


1776-1776: SA4006: this value of result is never used

(staticcheck)

backend/services/scheduler.go

66-66: Error return value is not checked

(errcheck)


108-108: Error return value of models.DB.UpdateDiggerJob is not checked

(errcheck)

backend/tasks/runs.go

70-70: Error return value of ciBackend.TriggerWorkflow is not checked

(errcheck)


177-177: Error return value of ciBackend.TriggerWorkflow is not checked

(errcheck)

backend/controllers/github.go

60-60: ineffectual assignment to err

(ineffassign)


1127-1127: func getBatchType is unused

(unused)


1736-1736: const MaxPatchSize is unused

(unused)


2052-2052: S1003: should use !strings.Contains(githubRepo, "/") instead

(gosimple)

backend/controllers/projects.go

1187-1187: S1002: should omit comparison to bool constant, can be simplified to config.Reporting.AiSummary

(gosimple)


1388-1388: S1002: should omit comparison to bool constant, can be simplified to automerge

(gosimple)


1068-1068: composites: github.com/diggerhq/digger/libs/comment_utils/reporting.SourceGroupingReporter struct literal uses unkeyed fields

(govet)

🔇 Additional comments (203)
backend/utils/crypt.go (1)

10-11: LGTM - Import reordering

This change simply reorders the import statement, moving the models package import to a new location after the io import. This doesn't affect functionality and aligns with Go's convention of grouping imports.

backend/ci_backends/github_actions.go (4)

3-8: Import updated for structured logging

The standard log package has been replaced with the structured logging package log/slog as part of the PR's objective to implement structured logging.


20-20: Enhanced logging with structured format

The logging statement has been updated to use structured logging with slog.Info, providing better context with key-value pairs for the workflow trigger operation.


39-40: Improved error logging with structured format

The error logging has been enhanced with slog.Error, providing clearer error information when the JobId is empty.


45-46: Improved error logging for workflow retrieval failures

The error logging has been enhanced with slog.Error and structured key-value pairs, making it easier to identify and troubleshoot issues when retrieving workflow information.

backend/models/setup.go (3)

3-6: Import updated for structured logging

The standard log package has been replaced with the structured logging package log/slog, aligning with the PR's goal of implementing structured logging throughout the codebase.


27-28: Enhanced error logging with structured format

The database connection error logging has been improved using structured logging with slog.Error, providing clearer context with the "error" key-value pair.


36-41: Improved error handling and logging for organization creation

This change significantly improves the code by:

  1. Using structured logging with slog.Info to log the creation of a default organization
  2. Adding error handling for the CreateOrganisation call that was previously missing
  3. Properly logging any failures with slog.Error

This will help with debugging organization creation issues that were previously silently ignored.

backend/ci_backends/provider.go (2)

3-6: Import updated for structured logging

The standard log package has been replaced with the structured logging package log/slog, aligning with the PR's goal of implementing structured logging throughout the codebase.


19-20: Enhanced error logging with structured format

The error logging has been improved using structured logging with slog.Error, providing better context with the "error" key-value pair when failures occur while getting a GitHub client.

backend/controllers/connections.go (3)

5-5: Excellent adoption of structured logging

Properly importing the modern log/slog package as part of transitioning to structured logging.


26-26: Well-structured error logging

Good use of key-value pairs to provide context with the error message, making logs more searchable and debuggable.


124-124: Good use of informational logging

Appropriate use of slog.Info for successful operations with meaningful context parameters.

backend/controllers/dashboard.go (4)

5-5: Proper import for structured logging

Correctly importing the log/slog package to support structured logging.


22-22: Good use of slog.Info for expected conditions

Appropriately using the info level for an expected condition (organization not found) with relevant context parameters.


25-25: Consistent error logging pattern

Good structured error logging with context parameters. Note that there's a minor spelling inconsistency between "organisationId" (variable) and "organisation" (error message), but this is likely pre-existing and doesn't affect functionality.


33-33: Helpful debug logging added

Adding a debug log provides valuable tracing information for troubleshooting while not cluttering logs in production.

backend/services/auth.go (1)

7-7: Proper import for structured logging

Correct replacement of standard log with structured log/slog package.

backend/middleware/middleware.go (6)

5-5: Proper import for structured logging

Correctly importing the log/slog package to support structured logging.


17-17: Good use of informational logging

Appropriate use of slog.Info for configuration and setup information.

Also applies to: 26-26, 29-29, 39-39, 48-48, 51-51


32-33: Proper error handling for missing authentication

Good replacement of log.Fatal with slog.Error followed by panic to maintain the same program termination behavior while providing structured logs.

Also applies to: 54-55


62-62: Well-structured warning logs

Good use of slog.Warn for authentication issues with appropriate context.

Also applies to: 69-69


76-80: Consider implementing the suggested error-checking improvement

There's a past review comment suggesting to reorder error-checking logic to improve the flow.

Consider implementing the suggestion from the previous review to check errors immediately after GetJobToken to avoid using an invalid jobToken:

-	if jobToken == nil {
-		slog.Warn("Invalid bearer token")
-		c.String(http.StatusForbidden, "Invalid bearer token")
-		c.Abort()
-		return nil, fmt.Errorf("invalid bearer token")
-	}
-
-	if err != nil {
-		slog.Error("Error while fetching token from database", "error", err)
-		c.String(http.StatusInternalServerError, "Error occurred while fetching database")
-		c.Abort()
-		return nil, fmt.Errorf("could not fetch cli token")
-	}
+	if err != nil {
+		slog.Error("Error while fetching token from database", "error", err)
+		c.String(http.StatusInternalServerError, "Error occurred while fetching database")
+		c.Abort()
+		return nil, fmt.Errorf("could not fetch cli token")
+	}
+
+	if jobToken == nil {
+		slog.Warn("Invalid bearer token")
+		c.String(http.StatusForbidden, "Invalid bearer token")
+		c.Abort()
+		return nil, fmt.Errorf("invalid bearer token")
+	}

82-82: Helpful debug logging added

Adding debug logging for token verification with relevant context is excellent for troubleshooting.

backend/utils/bitbucket.go (4)

28-34: Good use of structured logging with grouped fields

The structured logging implementation with grouped repository fields improves readability and context for debugging purposes.


57-66: Well-organized log entry with relevant context

The log entry captures all necessary repository details in a structured format, which will help with debugging and monitoring.


98-106: Good practice: storing path in variable before use

Storing the path in a variable before using it is a good practice that improves code readability and reduces duplication.


130-138: Enhanced logging with computed metrics

Adding the project count to the success log provides valuable context about the operation result.

backend/segment/segment.go (3)

15-15: Successfully migrated to structured logging

The debug log has been properly converted to use slog's structured format.


35-35: Improved logging with structured fields

Converting the userId from a string interpolation to a structured field improves searchability and parsing in log aggregation systems.


52-52: Well-structured log with multiple context fields

Including both userId and action as separate fields in the structured log enhances the ability to filter and analyze logs.

backend/controllers/repos.go (3)

22-28: Good error handling with appropriate log levels

The code correctly differentiates between Info-level logging for expected conditions (record not found) and Error-level logging for unexpected errors, with proper context fields.


41-41: Comprehensive error logging with database context

Including both the external organisation ID and internal database ID provides excellent context for troubleshooting database-related issues.


53-54: Valuable success logging with metrics

Adding a success log with the repository count provides useful operational metrics that can help monitor system performance and usage patterns.

backend/controllers/github_api.go (4)

3-13: Improved logging with structured approach.

The replacement of standard logging with log/slog will provide better context and machine-readable logs, making debugging and monitoring easier.


22-26: Good error logging with context.

Adding the error context to logs will help with troubleshooting JSON binding issues.


34-40: Enhanced error handling with structured context.

Using different log levels (Info vs Error) appropriately based on the error type, along with adding relevant context fields.


61-64: Consider using HTTP status constants.

For consistency with the rest of the codebase, consider using the HTTP status constant instead of the literal value.

-			c.JSON(200, gin.H{"status": "already linked to this org"})
+			c.JSON(http.StatusOK, gin.H{"status": "already linked to this org"})
backend/controllers/policies_api.go (5)

3-12: Proper structured logging imports.

The switch to structured logging with appropriate import organization.


17-21: Appropriate warn level for invalid inputs.

Using slog.Warn is the right choice for invalid user inputs that don't represent system errors.


28-34: Well-structured error logging with context.

Good use of different log levels based on error type with proper contextual information.


54-55: Added helpful debug logging.

Adding debug-level logs for successful operations helps with traceability without cluttering production logs.


107-117: Enhanced success logging for policy operations.

Good addition of informational logs for both creating and updating policies, with context about the organization and policy type.

backend/controllers/cache.go (3)

37-39: Fixed status code and improved error message.

Good use of HTTP status constants and correction of the typo in the error message.


50-52: Fixed typo in repository error message.

Correcting the error message text for consistency and clarity.


93-94: Consistent HTTP status code usage.

Using http.StatusOK instead of the literal 200 value improves code readability and maintainability.

backend/controllers/github.go (2)

77-80: Well-structured event logging.

Good use of structured logging with relevant context fields for monitoring GitHub events.


104-110: Improved event context in logs.

Adding detailed context about the event (repository, issue number) will help with debugging and tracing through the event lifecycle.

backend/utils/allowlist.go (3)

3-10: Appropriate import replacement for structured logging.

The introduction of log/slog for structured logging and replacement of the removed contains function with lo package is well-implemented.


12-28: Good enhancement with structured logging.

The refactoring to use structured logging in the ExtractCleanRepoName function is well-implemented. Adding both error and debug logs with relevant context fields (url, error, originalUrl, cleanName) improves observability.


31-56: Comprehensive structured logging implementation.

The refactoring of IsInRepoAllowList to use structured logging is thorough and well-executed. The additional logs provide better visibility into:

  1. When no allow list is defined
  2. URL parsing errors
  3. Repository allow list status (both allowed and disallowed cases)

Using contextual fields improves debuggability.

backend/controllers/runs.go (4)

3-15: Appropriate import changes for structured logging.

The addition of log/slog and organization of imports is well-implemented.


17-96: Well-structured logging in RunsForProject function.

The implementation of structured logging in the RunsForProject function is comprehensive. Each error condition now includes relevant context (projectId, organisationId, repoId) which will greatly improve troubleshooting. The addition of a success log at the end (lines 87-91) provides valuable operational visibility.


98-159: Thorough structured logging in RunDetails function.

The refactoring of logging in the RunDetails function follows best practices by:

  1. Including contextual fields with each log entry
  2. Using appropriate log levels (warn, error, info)
  3. Providing clear, actionable messages
  4. Adding a success log for completed operations

This will significantly improve observability.


161-242: Comprehensive structured logging in ApproveRun function.

The implementation of structured logging in ApproveRun follows a consistent pattern. The additional context provided in the log messages, especially for forbidden access cases (lines 205-208), enhances security observability. The logs added for both successful approval and already-approved cases provide useful operational insights.

🧰 Tools
🪛 golangci-lint (1.64.8)

219-219: S1002: should omit comparison to bool constant, can be simplified to !run.IsApproved

(gosimple)

backend/utils/batch_utils.go (2)

3-8: Appropriate import for structured logging.

The addition of log/slog import is correctly implemented.


10-58: Well-structured logging in PostCommentForBatch function.

The implementation of structured logging in the PostCommentForBatch function is thorough and follows best practices:

  1. Adding an initial debug log (lines 11-17) captures all relevant batch details
  2. Structured error logs (lines 23-28, 34-39) include all context needed for troubleshooting
  3. The success log (lines 43-47) confirms completed operations
  4. The warning log for unknown VCS types (lines 51-56) is properly structured

This approach significantly improves observability without changing the function's behavior.

backend/controllers/orgs.go (3)

3-13: Well-organized imports for structured logging.

The addition of log/slog and organization of imports is correctly implemented.


20-39: Enhanced logging in CreateFronteggOrgFromWebhook function.

The refactoring to use structured logging in this function adds valuable context at key points:

  1. JSON binding errors are now properly logged
  2. Organization creation failures include all relevant fields (tenantId, name, source)
  3. The success log captures the newly created organization ID

These changes improve observability without altering the function's behavior.


41-155: Comprehensive structured logging in AssociateTenantIdToDiggerOrg function.

The implementation of structured logging in this security-critical function significantly improves observability:

  1. Authentication failures are properly logged with appropriate context
  2. JWT validation errors include detailed information for troubleshooting
  3. Organization operations (retrieval/creation) are well-documented with logs
  4. The debug log for JWT claims (line 112) helps with authentication flow debugging

This thorough approach will make security auditing and troubleshooting much easier.

backend/models/scheduler.go (7)

6-8: Good structured logging implementation.

The replacement of standard logging with log/slog is a good choice for structured logging and aligns with the PR objectives.


104-105: Improved error logging with structured context.

The structured error logging provides better context with the job ID and error, making debugging easier.


107-126: Enhanced job processing with structured debug logs.

Good addition of debug logging after job mapping, providing valuable context about the job status and metadata.


141-144: Good batch processing traceability.

Adding debug logging at the beginning of batch mapping enhances observability and makes it easier to trace the flow of batch processing.


149-149: Improved error context for batch job failures.

The structured error logging for batch job retrieval failures now includes the batch ID, making troubleshooting easier.


156-160: Comprehensive context for job mapping errors.

The structured error logging now provides detailed context with job ID, digger job ID, batch ID, and the error itself, which significantly improves debugging capabilities.


167-169: Added completion logging for batch processing.

Good addition of success logging for batch mapping that includes the job count, which helps monitor performance and batch sizes.

backend/bootstrap/main.go (4)

9-9: Good package import for structured logging.

The addition of log/slog import supports the PR objective of implementing structured logging.


63-64: Enhanced error reporting in the profiling functions.

The structured error logging in profiling functions provides better context for troubleshooting profiling issues.

Also applies to: 67-69, 81-83, 91-93


112-113: Improved Sentry error reporting.

The structured error logging for Sentry initialization failures provides better visibility into integration issues.


244-249: Well-structured logging initialization.

The initLogging function now properly initializes the structured logger with appropriate level settings. This implementation is clean and follows best practices.
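
As a rough sketch of what such an initialization can look like with the TextHandler mentioned in the PR description (the DIGGER_LOG_LEVEL variable name is an assumption for illustration, not necessarily what the code uses):

package main

import (
	"log/slog"
	"os"
)

// initLogging configures the process-wide default logger. The environment
// variable name here is hypothetical; the real code may derive the level differently.
func initLogging() {
	level := slog.LevelInfo
	if os.Getenv("DIGGER_LOG_LEVEL") == "DEBUG" {
		level = slog.LevelDebug
	}
	handler := slog.NewTextHandler(os.Stdout, &slog.HandlerOptions{Level: level})
	slog.SetDefault(slog.New(handler))
}

func main() {
	initLogging()
	slog.Info("Digger backend starting")                         // emitted via the configured TextHandler
	slog.Debug("this line only appears when the level is DEBUG") // filtered out at the default Info level
}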

backend/utils/ai.go (7)

8-8: Good package import for structured logging.

The addition of log/slog import supports the PR objective of implementing structured logging.


13-16: Enhanced function entry logging.

Good addition of debug logging at the function entry point that provides context about the endpoint and input size, which helps with tracing and performance monitoring.


25-27: Improved error handling with structured logging.

The structured error logging throughout the function provides better context for troubleshooting API integration issues.

Also applies to: 32-34, 44-46, 52-54


30-31: Use of constants for HTTP methods.

Good practice using http.MethodPost instead of string literals, which improves code maintainability and readability.

Also applies to: 110-111


57-71: Differentiated logging for API responses.

Good separation of logging for bad requests (warnings) versus unexpected errors, which helps distinguish between client issues and server/integration problems.
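
A sketch of that warn-versus-error split when inspecting an HTTP response; the status checks and messages are illustrative assumptions, not the actual ai.go logic:

package main

import (
	"log/slog"
	"net/http"
)

// logResponse applies the level split described above: client-side problems
// get a warning, anything else unexpected gets an error, success gets info.
func logResponse(resp *http.Response, bodySize int) {
	switch {
	case resp.StatusCode == http.StatusBadRequest:
		slog.Warn("AI endpoint rejected the request", "status", resp.StatusCode)
	case resp.StatusCode != http.StatusOK:
		slog.Error("unexpected response from AI endpoint", "status", resp.StatusCode)
	default:
		slog.Info("AI endpoint call succeeded", "status", resp.StatusCode, "responseSize", bodySize)
	}
}

func main() {
	logResponse(&http.Response{StatusCode: http.StatusBadRequest}, 0)
	logResponse(&http.Response{StatusCode: http.StatusOK}, 2048)
}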


85-88: Added success logging for monitoring.

The addition of info logging for successful operations provides valuable metrics on response status and size.


93-96: Consistent logging pattern across functions.

The same structured logging pattern is consistently applied across both GenerateTerraformCode and GetAiSummaryFromTerraformPlans functions, which is a good practice for code maintainability.

Also applies to: 105-105, 137-141, 145-149, 165-168

backend/models/storage.go (6)

7-9: Good package import for structured logging.

The addition of log/slog import and the reorganization of imports supports the PR objective of implementing structured logging.


931-931: Fixed parameter name typo.

The parameter name was corrected from diggeRrunId to diggerRunId, improving code consistency and readability.


705-707: Excellent logging with type information.

Including the type of orgId in the log message is very helpful for debugging issues related to type mismatches or unexpected input types.


1083-1087: Good use of field grouping.

Using slog.Group to organize related fields in the log message enhances readability and makes it easier to parse the logs.


1728-1734: Improved error handling differentiation.

Good improvement in error handling by logging different messages for "not found" errors versus other errors, which helps distinguish between expected and unexpected cases.
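
A hedged sketch of that differentiation, using a local sentinel error in place of GORM's gorm.ErrRecordNotFound so the snippet stays self-contained:

package main

import (
	"errors"
	"fmt"
	"log/slog"
)

// errNotFound stands in for gorm.ErrRecordNotFound in this sketch.
var errNotFound = errors.New("record not found")

// getRepoCache is a placeholder lookup that always misses, to drive the example.
func getRepoCache(repoFullName string) (string, error) {
	return "", fmt.Errorf("lookup failed: %w", errNotFound)
}

func main() {
	cfg, err := getRepoCache("diggerhq/digger")
	if err != nil {
		if errors.Is(err, errNotFound) {
			// expected case: a cache miss is informational, not an error
			slog.Info("repo cache not found", "repoFullName", "diggerhq/digger")
		} else {
			// unexpected case: surface it loudly with context
			slog.Error("failed to load repo cache", "repoFullName", "diggerhq/digger", "error", err)
		}
		return
	}
	slog.Debug("repo cache hit", "configSize", len(cfg))
}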


1808-1836: Comprehensive logging for cache operations.

The logging for repo cache operations provides good visibility into the cache hit/miss patterns and the size of the cached configs, which is valuable for performance monitoring.

backend/services/spec.go (7)

6-10: Good introduction of structured logging.

The addition of the log/slog package for structured logging improves log readability and searchability, which will be valuable for debugging and monitoring.


24-30: Well-structured debug logging with proper grouping.

Using slog.Group effectively organizes related fields, making logs easier to parse and understand.
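
For context, a minimal sketch of the slog.Group pattern; the group key and field names here are illustrative, not the actual spec.go fields:

package main

import "log/slog"

func main() {
	// Grouping related fields nests them under one key: text output reads
	// repo.fullName=... repo.owner=..., and a JSON handler nests an object.
	slog.Info("loading digger config",
		slog.Group("repo",
			slog.String("fullName", "diggerhq/digger"),
			slog.String("owner", "diggerhq"),
			slog.String("name", "digger"),
		),
		slog.Int("installationId", 12345),
	)
}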


42-45: Improved error logging with context.

Adding the job ID and installation ID as structured fields provides valuable context for troubleshooting GitHub token retrieval failures.


96-97: Error text mismatch: Log message vs. returned error.

The log message mentions "Could not unmarshal job spec" but the error message returns "could not marshal json string".

-		slog.Error("Could not unmarshal job spec", "jobId", job.DiggerJobID, "error", err)
-		return nil, fmt.Errorf("could not marshal json string: %v", err)
+		slog.Error("Could not unmarshal job spec", "jobId", job.DiggerJobID, "error", err)
+		return nil, fmt.Errorf("could not unmarshal job spec: %v", err)

108-118: Great use of structured logging for run name generation.

The debug log clearly shows all components used to generate the run name, which will be helpful for debugging name-related issues.


173-179: Good defensive handling of nil CommentId.

The code properly handles the case where batch.CommentId is nil by defaulting to an empty string and logging a warning, which helps identify potential issues without breaking execution.


216-223: Well-structured logging for completed spec creation.

The debug log with grouped information about the created spec provides useful context for troubleshooting without cluttering the log.

backend/middleware/basic.go (5)

5-12: Good addition of structured logging.

The introduction of log/slog for structured logging improves error visibility and debuggability.


49-60: Improved error handling for missing authorization.

Adding warning logs for missing or malformed authorization headers provides better visibility into authentication failures.


64-74: Good token verification with debug logging.

The logging improvements provide better visibility into token processing and validation.


80-85: Clear warning for invalid tokens.

The warning log helps track unauthorized access attempts.


37-41:

✅ Verification successful

Improved error handling for organization fetch failures.

The error is now properly logged with context, but the return value from c.Error() is not checked, which was flagged by static analysis.

While ignoring the return value is common and likely harmless in this context, verify if any error handling policy exists for Gin's c.Error() calls:


🏁 Script executed:

#!/bin/bash
# Search for patterns of checking c.Error() return values in the codebase
rg "if.*c\.Error\(" || echo "No instances of checking c.Error() return value found"

Length of output: 138


Action Required: Confirm error handling consistency for Gin's c.Error() usage

The updated code now logs the error with sufficient context. Our investigation shows that the return value of c.Error() is not checked anywhere in the codebase, so ignoring it here follows the established pattern. Static analysis flags this behavior, but since it aligns with existing practice, no changes are required.

🧰 Tools
🪛 golangci-lint (1.64.8)

39-39: Error return value of c.Error is not checked

(errcheck)

backend/controllers/policies.go (11)

6-9: Appropriate import restructuring for slog.

The imports have been properly organized to include the structured logging package.


41-44: Enhanced error visibility with structured logging.

Adding warning logs when organization ID is missing from context helps track down authentication issues.


54-60: Informative error logging for policy queries.

The structured logs now include relevant context like repo name, project name, and policy type, which will aid in debugging.


69-70: Useful debug log for successful policy retrieval.

Adding debug logs for successful operations helps trace request flows during troubleshooting.


107-115: Clear logging for authorization issues.

The warning log includes both the policy organization ID and the logged-in organization ID, making it easy to identify permission mismatches.


179-194: Good operation result tracking with structured logs.

Adding info logs for both creation and update operations provides clear visibility into policy changes.


276-290: Comprehensive logging for policy operations.

The structured logs effectively capture the full context of policy operations for repositories and projects.


327-328: Informative success log for token creation.

Including the organization ID in the success log helps track token issuance.


352-353: Good debug log for successful config loading.

Adding a successful operation log helps confirm when configurations are properly loaded.


364-365: Useful debug log with metrics.

Including the count of independent projects provides quantitative information that can help identify potential issues if the numbers don't match expectations.


331-354:

❓ Verification inconclusive

Unused function detected.

The loadDiggerConfig function is not being used according to the static analysis.

Verify if this function is actually used anywhere in the codebase:


🏁 Script executed:

#!/bin/bash
# Search for usages of the loadDiggerConfig function
rg "loadDiggerConfig" --type go | grep -v "func loadDiggerConfig"

Length of output: 65


Attention: Verify Usage of loadDiggerConfig Function

The static search using rg did not reveal any direct usage of loadDiggerConfig outside its definition. However, since the command produced very little output to go on, please manually re-verify that there are no indirect calls (e.g. via reflection or function pointers) by searching for patterns like "loadDiggerConfig(" in the repository. If that check confirms the function is indeed unused, remove it or add a clear comment indicating its intended future use.

🧰 Tools
🪛 golangci-lint (1.64.8)

331-331: func loadDiggerConfig is unused

(unused)

backend/controllers/internal_users.go (7)

4-6: Good adoption of structured logging.

The change to use log/slog instead of the standard logger aligns with the project's move toward structured logging.


30-34: Informative operation logging.

Adding structured logging at the beginning of the operation clearly documents the intention and input parameters.


44-45: Helpful log for organization creation fallback.

This log clearly indicates when a new organization is being created because an existing one wasn't found.


54-55: Good success confirmation logging.

The success log with the organization ID and external ID provides clear traceability for the operation.


79-84: Comprehensive user creation logging.

The structured log effectively captures all relevant parameters for user creation operations.


96-101: Detailed error logging for user creation failures.

The error log includes all relevant context for debugging user creation issues.


105-106: Good success confirmation for user creation.

The success log includes the user ID, email, and organization ID, providing complete context for the operation.

backend/models/runs.go (6)

4-5: Good implementation of structured logging with slog.

The import change from standard log to log/slog aligns with the PR's goal of implementing structured logging.


83-83: Improved error logging with structured context.

The structured logging approach with key-value pairs ("runId", r.ID, "error", err) provides better context than the previous approach, making logs more searchable and filterable.


89-90: Good consistent error logging pattern.

You're maintaining consistency in the error logging pattern, using the same approach as the previous error log.


115-115: Helpful debug log for successful serialization.

Adding a debug log for successful serialization enhances observability by providing visibility into normal operation, not just errors.


122-123: Good structured error logging.

The structured logging with context about the runStageId makes errors more traceable.


126-138: Good refactoring for better readability and logging.

The code has been refactored to use a variable before returning, and a debug log has been added. This improves both readability and debugging capabilities.

backend/services/messages.go (9)

4-6: Good implementation of structured logging.

The import changes to include fmt and log/slog align with the PR's goal of implementing structured logging.


18-18: Helpful debug log for session interaction.

Adding a debug log when adding a message to a session provides better visibility into session operations.


27-28: Good structured error logging with dynamic type information.

Using the new typeof helper function to include the type of the unexpected session value in the error log is a great improvement for troubleshooting.


31-32: Improved error logging with context.

Including the key and error details in the log message provides better context for debugging session save failures.


50-56: Good helper function for improved logging.

The typeof helper function is a nice addition that makes the logs more informative when dealing with unexpected types.
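
The helper's body is not shown in the hunks quoted here; one plausible shape, assuming it simply reports the dynamic type, would be:

package main

import (
	"fmt"
	"log/slog"
)

// typeof returns a readable name for a value's dynamic type; a sketch of
// what such a helper might look like, not the project's actual implementation.
func typeof(v interface{}) string {
	if v == nil {
		return "nil"
	}
	return fmt.Sprintf("%T", v)
}

func main() {
	var session interface{} = 42
	slog.Error("unexpected session value type", "type", typeof(session))
}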


59-61: Consistent logging across message methods.

The consistent use of Info logging in the message addition methods provides useful operational visibility.

Also applies to: 64-66, 69-71


79-88: Good debug logging with message counts.

Logging the counts of different message types provides valuable information for debugging session behavior.


94-95: Improved error logging.

The structured error logging provides better context than the previous implementation.


105-115: Good helper function for message counting.

The countMessages helper function improves code readability and maintainability, while safely handling edge cases (nil values and type mismatches).
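
A sketch of how such a helper might look; the message type ([]string) and the session values are assumptions made purely for illustration:

package main

import "log/slog"

// countMessages defensively counts entries when the session value may be nil
// or of an unexpected type; a sketch, not the project's actual helper.
func countMessages(sessionValue interface{}) int {
	if sessionValue == nil {
		return 0
	}
	msgs, ok := sessionValue.([]string)
	if !ok {
		return 0
	}
	return len(msgs)
}

func main() {
	var warnings interface{} = []string{"drift detected", "plan pending"}
	slog.Debug("session message counts",
		"warnings", countMessages(warnings),
		"errors", countMessages(nil),
	)
}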

backend/middleware/jwt.go (7)

5-9: Good implementation of structured logging.

The import changes to include log/slog align with the PR's goal of implementing structured logging.


21-22: Well-structured logging in SetContextParameters.

The structured logging provides comprehensive context about the token validation process, enhancing diagnostics when issues occur.

Also applies to: 27-28, 31-32, 35-36, 38-39, 47-48


55-56: Good permission-related logging.

The debug logs for permission checks provide valuable insights into authorization decisions.

Also applies to: 62-63, 71-72, 78-79


95-96: Improved logging in JWTWebAuth.

Converting all logging calls to structured logging provides better context and consistency across the authentication process.

Also applies to: 101-102, 108-109, 117-118, 131-132, 139-140, 149-154


167-168: Good structured logging in SecretCodeAuth.

The structured logs provide better visibility into webhook secret validation.

Also applies to: 180-181, 185-186


194-195: Comprehensive logging in JWTBearerTokenAuth.

The structured logging across different token processing paths (CLI, API, JWT) provides excellent visibility into the token validation workflow.

Also applies to: 201-202, 208-217, 220-239, 241-274, 282-287


299-300: Good access level logging.

Logging the access level decisions helps with auditing and troubleshooting permission issues.

Also applies to: 304-305

backend/utils/graphs.go (8)

7-9: Good implementation of structured logging.

The import changes to include log/slog align with the PR's goal of implementing structured logging.


19-31: Excellent use of structured logging with grouped fields.

Using slog.Group to organize related repository information is a great practice that makes logs more readable while maintaining structure.


36-39: Comprehensive logging in job conversion process.

The structured logging throughout the job conversion process provides excellent visibility into the workflow, with appropriate log levels for different events.

Also applies to: 46-46, 51-55, 61-64, 69-72, 77-77, 81-81


97-123: Detailed graph traversal logging.

The debug logs throughout the graph traversal capture important relationships and processing steps, which will be invaluable for troubleshooting complex graph operations.

Also applies to: 127-174


185-189: Good summary logging at function completion.

Adding an info log at the end of the function with job count metrics provides a useful summary of the operation's results.


194-202: Consistent logging in graph traversal functions.

The logging in the TraverseGraphVisitAllParentsFirst function maintains the same structured approach, providing good visibility into the traversal process.

Also applies to: 214-233


238-273: Good logging for graph creation and manipulation.

The logs in ImpactedProjectsOnlyGraph provide clear visibility into the graph creation process, including important metrics like root node count.


282-323: Comprehensive logging in graph building.

The structured logging throughout the CollapsedGraph function captures the process of adding vertices and edges, which is crucial for debugging complex graph relationships.

backend/utils/gitlab.go (3)

35-43: Well-structured logging with grouped repository information

Good use of slog.Group to organize repository information logically. This approach makes logs more readable and easier to parse, especially when viewing in structured log analysis tools.


71-82: Well-structured logging implementation

The structured logging for the GetDiggerConfigForBranchGitlab function effectively captures all relevant context. The use of slog.Group for repository information creates a clear hierarchical structure in the logs, enhancing readability and making it easier to filter logs based on repository properties.


159-162: Consistent logging level for operation results

You're using slog.Info for the final success message, which is appropriate for significant operations. Make sure this level is consistent with how you log successful operations in other parts of the codebase to maintain a predictable logging pattern.

backend/utils/pr_comment.go (4)

19-20: Structured logging improves debuggability

Good addition of structured logging to the function. The log message clearly identifies what's happening and includes the PR number as context, which will help with debugging and monitoring.


55-61: Comprehensive error logging for comment updates

The error logging for comment updates now includes more context with PR number, comment ID, and the specific error. This structured approach will make troubleshooting failed comment updates much easier.


164-176: Good handling of comment message length limits

The logging added for handling long comment messages provides clear warnings when trimming is needed and includes detailed information about the original length, maximum length, and how much was trimmed. This will be valuable for diagnosing issues with truncated messages.
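
A sketch of a length guard that logs the details mentioned above; the 65536 limit and the field names are assumptions for illustration:

package main

import "log/slog"

// trimCommentBody enforces a maximum length and logs what was trimmed;
// the limit value mirrors GitHub's comment cap but is assumed here.
func trimCommentBody(body string, maxLen int) string {
	if len(body) <= maxLen {
		return body
	}
	slog.Warn("comment message exceeds maximum length, trimming",
		"originalLength", len(body),
		"maxLength", maxLen,
		"trimmedChars", len(body)-maxLen,
	)
	return body[:maxLen]
}

func main() {
	long := make([]byte, 70000)
	_ = trimCommentBody(string(long), 65536)
}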


180-187: Error handling includes appropriate context

The error logging for failed comment updates now includes the PR number, comment ID, and the error, providing enough context to identify and troubleshoot issues without having to search through logs for related entries.

backend/tasks/runs.go (2)

18-22: Excellent use of context grouping for logs

Creating a reusable runContext group is an excellent approach. This ensures consistent context across all log entries related to this run and makes it easier to correlate and filter logs for specific runs.
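
As an illustration of the reusable-group idea (field names and values are placeholders, not the actual tasks/runs.go context):

package main

import "log/slog"

func main() {
	runId, projectName := uint(7), "staging-vpc"

	// Build the shared context once...
	runContext := slog.Group("runContext",
		slog.Uint64("runId", uint64(runId)),
		slog.String("project", projectName),
	)

	// ...and attach it to every log entry about this run.
	slog.Info("processing digger run", runContext)
	slog.Warn("unknown run status, skipping", "status", "??", runContext)
}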


245-247: Enhanced error logging with structured context

Using slog.Warn with structured fields for unknown status values is a good practice. It clearly captures the unexpected condition along with all relevant context needed for diagnosis.

backend/controllers/jobs.go (3)

22-22: Appropriate warning level for missing parameter

Good use of slog.Warn for the missing parameter scenario. This is the right log level since it's an issue with the request rather than an internal error, and the structured logging clearly identifies which parameter is missing.


31-31: Correct log level for not found condition

Using slog.Info for the "not found" condition is appropriate since this is an expected possibility rather than an error condition. The inclusion of organisationId in the log message provides helpful context.


65-69: Valuable success logging

Adding success logging provides visibility into normal operations, not just errors. Including the counts and identifiers gives useful metrics for monitoring system activity and performance. This is a good practice for observability.

backend/services/scheduler.go (9)

5-7: Import changes look good

The standard log package has been properly replaced with log/slog for structured logging. The addition of runtime/debug is appropriate for capturing stack traces during panic recovery.


20-20: Good structured logging implementation

The use of structured logging with key-value pairs provides much better context than the previous implementation. Including the parent job ID makes this log entry more useful for debugging.


56-63: Well-structured logging with grouped fields

The use of slog.Group() for organizing repository information is a good practice that makes logs more readable while maintaining their structured nature.


170-171: Good addition of success logging

Adding a success log message before starting the asynchronous workflow URL update is a good practice. It provides a clear indication that the job was successfully triggered before moving on to the next step.


176-186: Improved panic recovery with stack trace

The addition of stack trace logging in the panic recovery handler is excellent. This will make debugging issues much easier by providing the complete stack information.
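
A minimal sketch of the recover-and-log-stack pattern described above, with a placeholder function standing in for the real asynchronous work:

package main

import (
	"log/slog"
	"runtime/debug"
)

// updateWorkflowURL is a placeholder for the asynchronous work being guarded.
func updateWorkflowURL() {
	panic("simulated failure")
}

func main() {
	done := make(chan struct{})
	go func() {
		defer close(done)
		defer func() {
			if r := recover(); r != nil {
				// capture both the panic value and the full stack trace
				slog.Error("recovered from panic in workflow URL update",
					"panic", r,
					"stack", string(debug.Stack()),
				)
			}
		}()
		updateWorkflowURL()
	}()
	<-done
}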


196-197: Helpful log message for workflow process start

Adding a log message to indicate the start of the workflow URL update process improves observability and makes debugging easier.


202-205: Good debugging with attempt counter

Including the attempt number in the debug log provides valuable context for understanding retry behavior when fetching the workflow URL fails.


219-221: Well-structured success logging

The success log with the workflow URL included provides good context for tracking the job's execution path.


228-228: Appropriate log level for failed URL fetch

Using slog.Warn instead of slog.Error is appropriate here since this is a recoverable condition that doesn't necessarily indicate a system failure.

backend/tasks/tasks.go (11)

3-14: Clean import statement organization

The imports are well-organized with standard library imports separated from application imports. The addition of log/slog and reorganization of other imports is appropriate for the structured logging changes.


15-21: Well-implemented structured logging initialization

The initLogging function now properly sets up the structured logging with a text handler and appropriate log level. This provides a consistent logging format throughout the application.


25-28: Good startup logging sequence

Starting with an informative startup message and then confirming database connection is a good practice for application initialization logs.


34-35: Clear task execution logging

Adding a clear log message at the start of each scheduled task provides valuable context for understanding when tasks are executed.


42-57: Excellent structured logging for item processing

The debug log for processing count followed by detailed logs for each item being processed provides great visibility into the task's execution. The use of slog.Group for repository information is a good practice.


62-74: Well-structured function call with improved error logging

The refactoring of the GetGithubService call with proper parameter alignment and detailed error logging improves code readability and debugging capability.


81-81: Helpful task completion summary

Adding a completion log with the count of processed items provides a clear indication of task success and volume.


94-110: Detailed job processing logs

The addition of debug logs for job processing with comprehensive context (job ID, batch ID, repository details) significantly improves observability.


133-156: Well-structured job scheduling with improved error handling

The refactoring of the ScheduleJob call with proper parameter alignment and detailed success/error logging provides better visibility into the job scheduling process.


158-161: Informative task completion summary

The task completion log with total jobs and successfully processed counts provides valuable metrics for monitoring.


168-172: Clearly marked TODO with explanation

The TODO comment about improving the application's running approach is well-documented. It clearly indicates that the current approach consumes CPU cycles and suggests a better alternative using proper channels.

🧰 Tools
🪛 golangci-lint (1.64.8)

172-172: SA5002: this loop will spin, using 100% CPU

(staticcheck)

backend/utils/github.go (10)

24-26: Improved error handling pattern

Replacing log.Fatal with slog.Error followed by panic is a better pattern as it provides structured error information before terminating.
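
For illustration, a minimal sketch of the slog.Error-then-panic pattern; the loadPrivateKey helper and its message are hypothetical stand-ins, not the actual github.go code:

package main

import (
	"errors"
	"log/slog"
)

// loadPrivateKey is a stand-in for whatever actually reads the key.
func loadPrivateKey() ([]byte, error) {
	return nil, errors.New("key file missing")
}

func mustLoadPrivateKey() []byte {
	key, err := loadPrivateKey()
	if err != nil {
		// Unlike log.Fatal, this emits a structured record first and still
		// runs deferred functions before the process terminates.
		slog.Error("could not load GitHub app private key", "error", err)
		panic(err)
	}
	return key
}

func main() {
	defer func() { _ = recover() }() // swallow the panic so the example exits cleanly
	mustLoadPrivateKey()
}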


35-40: Good contextual logging for repository operations

Adding structured debug logs with repository details before clone operations improves traceability.


45-49: Detailed error logging for git operations

The error log for failed cloning now includes all relevant information (repository URL, branch, error) which makes troubleshooting much easier.


56-61: Consistent error handling pattern

The checkout error handling follows the same structured pattern as the clone error handling, which promotes consistency throughout the codebase.


101-104: Helpful debug logging for client initialization

Adding debug logs with GitHub App ID and installation ID provides useful context for tracking client initialization.


152-155: Good success confirmation logging

Adding a success log after obtaining the GitHub client and token confirms successful completion of a critical operation.


229-232: Clear function entry logging

Adding an informative log at the start of the function with relevant parameters provides good context for function execution.


309-316: Comprehensive workflow search logging

The debug log with details about the workflow search (count of runs, time filter) provides valuable context for understanding the search scope.


350-356: Well-structured success logging for workflow discovery

The success log with workflow details (job ID, run ID, URL) provides clear confirmation that the workflow was found.


362-367: Appropriate warning for missing workflow

Using slog.Warn with complete context information is appropriate for cases where a workflow is not found, as this might be expected in some scenarios.

backend/controllers/projects.go (12)

57-61: Good informative logging for project listing

The addition of structured logging with repository, organization ID, and project count provides valuable context for understanding the response to a project listing request.


88-90: Helpful debug logging for organization context

Adding debug logs when inferring the organization context from the logged-in user improves traceability of the request flow.


153-156: Consistent informational logging

The log for found projects with organization ID and project count provides consistent information that helps with monitoring and debugging.


251-255: Comprehensive success logging

The success log with project ID, name, and repository name provides a complete picture of the retrieved project.


283-287: Good request context logging

Adding debug logs with request details (repository name, organization ID, project name) at the start of request processing provides valuable context for tracing.


474-478: Informative success logging for run history

The success log with project details and run count provides clear confirmation of the operation's success and scope.


508-508: Simple yet effective job status logging

The initial log message with job ID and organization ID provides a clear starting point for tracing job status updates.


526-531: Detailed job status update logging

The log with current status, new status, and batch ID provides comprehensive context for understanding the status transition.


590-596: Excellent panic recovery with structured stack trace

The panic recovery handler with structured logging of the error and stack trace is well-implemented. Using slog.Group for the stack trace keeps the logs clean and organized.


662-666: Clear job completion handling log

The log message for job completion handling with job ID, repository name, and batch ID provides complete context for tracing the completion process.


1056-1060: Informative process start logging

The log for updating source-based comments with batch ID, source details count, and PR number provides clear context for monitoring this operation.


1079-1082: Comprehensive success logging

The success log with batch ID and PR number provides clear confirmation that the comment update process completed successfully.

backend/models/scheduler_test.go (2)

22-22: Consistent approach to error handling in test files.

The change from error logging to panic reflects a cleaner approach to handling test failures. Panics provide better stack traces for debugging test failures compared to log statements.


29-29: Consistent error handling across the file.

All error handling now uses panic instead of log.Fatal, which is appropriate for tests as it provides better stack traces and is consistent with the slog integration approach across the codebase.

Also applies to: 37-37, 47-47, 53-53, 59-59, 67-67

backend/models/storage_test.go (1)

23-23: Aligned error handling approach in test files.

The modification from logging errors to using panic is consistent with the changes in other test files, aligning with the PR's objective to transition from standard logging to structured logging with slog.

Also applies to: 32-32, 40-40, 52-52, 60-60

backend/controllers/github_test.go (2)

9-10: Import organization improved.

The reorganization of imports with whitespace separating different import groups improves readability and follows Go's convention for import grouping.


578-578: Consistent error handling approach in test files.

The consistent replacement of log.Fatal with panic across all test files provides better debugging information during test failures through stack traces. This change aligns with the PR's structured logging objectives.

Also applies to: 585-585, 593-593, 605-605, 612-612, 619-619, 626-626, 632-632, 641-641, 657-657, 665-665

Comment on lines 60 to 62
_, _, webhookSecret, _, err := d.GithubClientProvider.FetchCredentials(appID)

payload, err := github.ValidatePayload(c.Request, []byte(webhookSecret))

⚠️ Potential issue

Ineffectual assignment to error variable.

The err returned by FetchCredentials is assigned but never checked before it is overwritten by the ValidatePayload call on the next line, so a failed credential fetch would go undetected.

-	_, _, webhookSecret, _, err := d.GithubClientProvider.FetchCredentials(appID)
+	_, _, webhookSecret, _, _ := d.GithubClientProvider.FetchCredentials(appID)

Alternatively, if error handling is needed:

-	_, _, webhookSecret, _, err := d.GithubClientProvider.FetchCredentials(appID)
+	_, _, webhookSecret, _, fetchErr := d.GithubClientProvider.FetchCredentials(appID)
+	if fetchErr != nil {
+		slog.Error("Error fetching credentials", "appID", appID, "error", fetchErr)
+		c.String(http.StatusInternalServerError, "Error fetching credentials")
+		return
+	}
🧰 Tools
🪛 golangci-lint (1.64.8)

60-60: ineffectual assignment to err

(ineffassign)

Comment on lines 1589 to 1597
func validateDiggerConfigYaml(configYaml string) (*configuration.DiggerConfig, error) {
diggerConfig, _, _, err := configuration.LoadDiggerConfigFromString(configYaml, "./")
if err != nil {
slog.Error("failed to validate digger config YAML", "error", err)
return nil, fmt.Errorf("validation error, %w", err)
}
slog.Debug("digger config YAML validated successfully")
return diggerConfig, nil
}

⚠️ Potential issue

Unused function detected.

The validateDiggerConfigYaml function is reported as unused by the static analyzer. Consider removing it if it's not needed or adding references to it if it's intended to be used.

-func validateDiggerConfigYaml(configYaml string) (*configuration.DiggerConfig, error) {
-	diggerConfig, _, _, err := configuration.LoadDiggerConfigFromString(configYaml, "./")
-	if err != nil {
-		slog.Error("failed to validate digger config YAML", "error", err)
-		return nil, fmt.Errorf("validation error, %w", err)
-	}
-	slog.Debug("digger config YAML validated successfully")
-	return diggerConfig, nil
-}

Or if the function is needed elsewhere, ensure it's properly referenced.

🧰 Tools
🪛 golangci-lint (1.64.8)

1589-1589: func validateDiggerConfigYaml is unused

(unused)

"repoFullName", repoFullName,
"orgId", orgId,
"repoCacheId", repoCache.ID)

repoCache.DiggerConfig = configMarshalled
repoCache.DiggerYmlStr = diggerYmlStr
result = db.GormDB.Save(&repoCache)

⚠️ Potential issue

Unused result value.

The value of result on line 1776 is never used, which may indicate an error check that was overlooked.

repoCache.DiggerConfig = configMarshalled
repoCache.DiggerYmlStr = diggerYmlStr
result = db.GormDB.Save(&repoCache)
+if result.Error != nil {
+    slog.Error("failed to update repo cache", 
+        "repoFullName", repoFullName,
+        "orgId", orgId,
+        "error", result.Error)
+    return nil, result.Error
+}
🧰 Tools
🪛 golangci-lint (1.64.8)

1776-1776: SA4006: this value of result is never used

(staticcheck)

slog.Error("could not get vcs token",
"error", err,
"jobId", job.ID,
runContext)
return
}

ciBackend.TriggerWorkflow(*spec, *runName, *vcsToken)

⚠️ Potential issue

Unchecked error from TriggerWorkflow call

The error return value from ciBackend.TriggerWorkflow is not being checked. If the workflow trigger fails, your code will continue execution assuming success, which could lead to inconsistent state.

Apply this fix to properly handle potential errors:

-ciBackend.TriggerWorkflow(*spec, *runName, *vcsToken)
+err = ciBackend.TriggerWorkflow(*spec, *runName, *vcsToken)
+if err != nil {
+    slog.Error("failed to trigger workflow",
+        "error", err,
+        "jobId", job.ID,
+        runContext)
+    return
+}
🧰 Tools
🪛 golangci-lint (1.64.8)

70-70: Error return value of ciBackend.TriggerWorkflow is not checked

(errcheck)

slog.Error("could not get vcs token",
"error", err,
"jobId", job.ID,
runContext)
return
}

ciBackend.TriggerWorkflow(*spec, *runName, *vcsToken)

⚠️ Potential issue

Unchecked error from TriggerWorkflow call

Similar to the previous instance, the error return value from ciBackend.TriggerWorkflow is not being checked here either. This could lead to the same issues with inconsistent state if the workflow trigger fails.

Apply this fix to properly handle potential errors:

-ciBackend.TriggerWorkflow(*spec, *runName, *vcsToken)
+err = ciBackend.TriggerWorkflow(*spec, *runName, *vcsToken)
+if err != nil {
+    slog.Error("failed to trigger apply workflow",
+        "error", err,
+        "jobId", job.ID,
+        runContext)
+    return
+}
🧰 Tools
🪛 golangci-lint (1.64.8)

177-177: Error return value of ciBackend.TriggerWorkflow is not checked

(errcheck)

Comment on lines 65 to 67
ciBackend := ci_backends.GithubActionCi{Client: client}
ScheduleJob(ciBackend, repoFullName, repoOwner, repoName, batchId, job, gh)
}

⚠️ Potential issue

Error return value not checked

The return value from ScheduleJob() is not being checked, which could lead to silent failures.

-ciBackend := ci_backends.GithubActionCi{Client: client}
-ScheduleJob(ciBackend, repoFullName, repoOwner, repoName, batchId, job, gh)
+ciBackend := ci_backends.GithubActionCi{Client: client}
+if err := ScheduleJob(ciBackend, repoFullName, repoOwner, repoName, batchId, job, gh); err != nil {
+    slog.Error("Failed to schedule job", "jobId", job.DiggerJobID, "error", err)
+    return err
+}
🧰 Tools
🪛 golangci-lint (1.64.8)

66-66: Error return value is not checked

(errcheck)

Comment on lines 107 to 109
job.Status = orchestrator_scheduler.DiggerJobQueuedForRun
models.DB.UpdateDiggerJob(job)
return nil

⚠️ Potential issue

Error return value not checked

The return value from models.DB.UpdateDiggerJob() is not being checked, which could lead to silent failures.

-job.Status = orchestrator_scheduler.DiggerJobQueuedForRun
-models.DB.UpdateDiggerJob(job)
-return nil
+job.Status = orchestrator_scheduler.DiggerJobQueuedForRun
+if err := models.DB.UpdateDiggerJob(job); err != nil {
+    slog.Error("Failed to update job status", "jobId", job.DiggerJobID, "error", err)
+    return err
+}
+return nil
🧰 Tools
🪛 golangci-lint (1.64.8)

108-108: Error return value of models.DB.UpdateDiggerJob is not checked

(errcheck)

@motatoes motatoes merged commit cfb4dc9 into diggerhq:develop Mar 29, 2025
8 checks passed