panic caused by fatal error: concurrent map writes in sink.go #1865

@csullivanupgrade

Description

Expected Behavior

During normal operation the EventListener does not panic.

Actual Behavior

The EventListener panics with "fatal error: concurrent map writes" when multiple goroutines write to the extensions map at the same time.

Steps to Reproduce the Problem

Generally, the problem occurs when a TriggerGroup targets multiple Triggers that use extensions; the failing pattern is sketched below.
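
For illustration (hypothetical names, not the actual sink.go code), the failing pattern boils down to several trigger goroutines mutating one shared per-event extensions map:

    package main

    import "sync"

    func main() {
        // Hypothetical stand-in for the Triggers resolved from one TriggerGroup.
        triggers := []string{"trigger-a", "trigger-b"}

        // One extensions map per incoming event, shared by every trigger goroutine.
        extensions := map[string]interface{}{}

        var wg sync.WaitGroup
        for _, name := range triggers {
            wg.Add(1)
            go func(name string) {
                defer wg.Done()
                // Unsynchronized write: once two goroutines collide here, the Go
                // runtime aborts with "fatal error: concurrent map writes".
                extensions[name] = "value-from-interceptor"
            }(name)
        }
        wg.Wait()
    }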

I've added a unit test with high concurrency (100 goroutines) that reliably reproduces the problem.

go test -race detects the problem even with low concurrency (only 2 goroutines); a standalone sketch of such a test follows.
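
A minimal sketch of such a test, assuming a plain map[string]interface{} as the extensions map (hypothetical code, not the actual test added for this issue):

    package sink_test

    import (
        "fmt"
        "sync"
        "testing"
    )

    // Many goroutines write to a shared extensions map without
    // synchronization; `go test -race` flags the race reliably,
    // even if only two goroutines ever overlap.
    func TestExtensionsConcurrentWrite(t *testing.T) {
        extensions := map[string]interface{}{}

        var wg sync.WaitGroup
        for i := 0; i < 100; i++ {
            wg.Add(1)
            go func(i int) {
                defer wg.Done()
                extensions[fmt.Sprintf("key-%d", i)] = i // racy write
            }(i)
        }
        wg.Wait()
    }

Without -race the failure is probabilistic: the runtime only aborts when two map writes actually collide, which is why the panic shows up intermittently in a running EventListener.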

Additional Info

  • Kubernetes version:

    Output of kubectl version:

    ➜ kubectl version
    Client Version: v1.31.2
    Kustomize Version: v5.4.2
    Server Version: v1.32.5-eks-5d4a308
    
  • Tekton Pipeline version:

    Output of tkn version or kubectl get pods -n tekton-pipelines -l app=tekton-pipelines-controller -o=jsonpath='{.items[0].metadata.labels.version}':

    ➜ tkn version
    Client version: 0.38.1
    Pipeline version: v0.62.3
    Triggers version: v0.29.1
    Dashboard version: v0.49.0
    

These gists are dumps of the stack traces from panics due to this issue:

Metadata

Labels: kind/bug