Run tests in CI and fix failing tests. #1672

Merged
15 changes: 13 additions & 2 deletions .github/workflows/ci.yml
@@ -10,6 +10,17 @@ on:
workflow_call:

jobs:
+  test_go:
+    name: Go tests
+    runs-on: ubuntu-latest
+    container:
+      image: quay.io/prometheus/golang-builder:1.23-base
+    steps:
+      - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
+      - uses: prometheus/promci@c3c93a50d581b928af720f0134b2b2dad32a6c41 # v0.4.6
+      - uses: ./.github/promci/actions/setup_environment
+      - run: make test
+
build:
name: Build for common architectures
runs-on: ubuntu-latest
@@ -61,7 +72,7 @@ jobs:
publish_master:
name: Publish master branch artifacts
runs-on: ubuntu-latest
-    needs: [build_all, verify-example-configs]
+    needs: [test_go, build_all, verify-example-configs]
if: |
(github.repository == 'prometheus-community/yet-another-cloudwatch-exporter')
&&
@@ -81,7 +92,7 @@
publish_release:
name: Publish release artifacts
runs-on: ubuntu-latest
-    needs: [build_all, verify-example-configs]
+    needs: [test_go, build_all, verify-example-configs]
if: |
(github.repository == 'prometheus-community/yet-another-cloudwatch-exporter')
&&
12 changes: 6 additions & 6 deletions cmd/yace/scraper.go
@@ -27,7 +27,7 @@ import (
"github.com/prometheus-community/yet-another-cloudwatch-exporter/pkg/model"
)

-type scraper struct {
+type Scraper struct {
registry atomic.Pointer[prometheus.Registry]
featureFlags []string
}
@@ -38,16 +38,16 @@ type cachingFactory interface {
Clear()
}

-func NewScraper(featureFlags []string) *scraper { //nolint:revive

Contributor: Is it worth making scraper public as it's used only in main.go?

Contributor Author: The linter was complaining about the lint being present.

Contributor: I thought I saw a nolintlint false-positive bug recently.

Contributor Author: That could be it.

Contributor: IIUC we could revert this specific change if it was just a false positive?

-	s := &scraper{
+func NewScraper(featureFlags []string) *Scraper {
+	s := &Scraper{
registry: atomic.Pointer[prometheus.Registry]{},
featureFlags: featureFlags,
}
s.registry.Store(prometheus.NewRegistry())
return s
}

-func (s *scraper) makeHandler() func(http.ResponseWriter, *http.Request) {
+func (s *Scraper) makeHandler() func(http.ResponseWriter, *http.Request) {
return func(w http.ResponseWriter, r *http.Request) {
handler := promhttp.HandlerFor(s.registry.Load(), promhttp.HandlerOpts{
DisableCompression: false,
@@ -56,7 +56,7 @@ func (s *scraper) makeHandler() func(http.ResponseWriter, *http.Request) {
}
}

-func (s *scraper) decoupled(ctx context.Context, logger *slog.Logger, jobsCfg model.JobsConfig, cache cachingFactory) {
+func (s *Scraper) decoupled(ctx context.Context, logger *slog.Logger, jobsCfg model.JobsConfig, cache cachingFactory) {
logger.Debug("Starting scraping async")
s.scrape(ctx, logger, jobsCfg, cache)

@@ -75,7 +75,7 @@ func (s *scraper) decoupled(ctx context.Context, logger *slog.Logger, jobsCfg mo
}
}

-func (s *scraper) scrape(ctx context.Context, logger *slog.Logger, jobsCfg model.JobsConfig, cache cachingFactory) {
+func (s *Scraper) scrape(ctx context.Context, logger *slog.Logger, jobsCfg model.JobsConfig, cache cachingFactory) {
if !sem.TryAcquire(1) {
// This shouldn't happen under normal use, users should adjust their configuration when this occurs.
// Let them know by logging a warning.
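On the review thread above: the `//nolint:revive` on the old `NewScraper` signature presumably silenced revive's unexported-return rule (an exported constructor returning an unexported type), and nolintlint is the linter that then complains when it thinks such a directive is unneeded. A minimal, self-contained sketch (not code from this PR) of the alternative the reviewers discuss, keeping the type unexported and suppressing only that one rule:

```go
package main

import "fmt"

// scraper stays unexported: per the review comment, it is only used from main.go.
type scraper struct {
	featureFlags []string
}

// NewScraper returns an unexported type, which trips revive's
// unexported-return check; the trailing directive silences only that rule.
func NewScraper(featureFlags []string) *scraper { //nolint:revive
	return &scraper{featureFlags: featureFlags}
}

func main() {
	s := NewScraper([]string{"example-feature-flag"})
	fmt.Println(len(s.featureFlags))
}
```

If the nolintlint false positive mentioned above is fixed upstream, this variant keeps the package surface unchanged at the cost of carrying the directive.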
4 changes: 4 additions & 0 deletions pkg/exporter.go
@@ -18,6 +18,7 @@ import (
"log/slog"

"github.com/prometheus/client_golang/prometheus"
prom "github.com/prometheus/common/model"

"github.com/prometheus-community/yet-another-cloudwatch-exporter/pkg/clients"
"github.com/prometheus-community/yet-another-cloudwatch-exporter/pkg/clients/cloudwatch"
@@ -182,6 +183,9 @@ func UpdateMetrics(
factory clients.Factory,
optFuncs ...OptionsFunc,
) error {
+	// Use legacy validation as that's the behaviour of former releases.
+	prom.NameValidationScheme = prom.LegacyValidation
+
options := defaultOptions()
for _, f := range optFuncs {
if err := f(&options); err != nil {
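For context on the `prom.NameValidationScheme = prom.LegacyValidation` line added above: in the prometheus/common versions this PR targets, name validation is governed by that package-level variable, and the newer default on master accepts any valid UTF-8 name. A small sketch of the difference; the label name is invented, and `LabelName.IsValid` consulting `NameValidationScheme` is assumed to match the module version in use:

```go
package main

import (
	"fmt"

	"github.com/prometheus/common/model"
)

func main() {
	// Invented label name with a non-ASCII character, standing in for the kind
	// of tag-derived names CloudWatch resources can carry.
	name := model.LabelName("custom_tag_propriétaire")

	model.NameValidationScheme = model.UTF8Validation
	fmt.Println("utf8:  ", name.IsValid()) // true: any valid UTF-8 string passes

	model.NameValidationScheme = model.LegacyValidation
	fmt.Println("legacy:", name.IsValid()) // false: legacy names must match [a-zA-Z_][a-zA-Z0-9_]*
}
```

Pinning the legacy scheme inside UpdateMetrics keeps released behaviour stable for remote-write destinations that reject UTF-8 names.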
13 changes: 13 additions & 0 deletions pkg/promutil/prometheus_test.go
@@ -18,6 +18,7 @@ import (

"github.com/prometheus/client_golang/prometheus"
dto "github.com/prometheus/client_model/go"
"github.com/prometheus/common/model"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
@@ -71,6 +72,12 @@ func TestSanitize(t *testing.T) {
}

func TestPromStringTag(t *testing.T) {
+	originalValidationScheme := model.NameValidationScheme
+	model.NameValidationScheme = model.LegacyValidation
+	defer func() {
+		model.NameValidationScheme = originalValidationScheme
+	}()
+
Comment on lines +75 to +80:

Contributor: I think this should raise attention to the breaking changes that added utf8 support:

These two dependencies are now merged on master (unreleased), meaning that yace can now potentially export labels with utf8 characters. If the remote write destination does not support utf8, this would result in breaking behaviour, right? I'm wondering if we should stick to the old validation scheme for some more time and add a feature flag to enable the new scheme.

Contributor Author: I agree. Could we do this in a follow-up?

Contributor: Can we do the part where we set the legacy validation in this PR? Once Pandora's box is opened, I guess it's worth merging a fully usable solution.
testCases := []struct {
name string
label string
@@ -134,6 +141,12 @@ func TestPromStringTag(t *testing.T) {
}

func TestNewPrometheusCollector_CanReportMetricsAndErrors(t *testing.T) {
+	originalValidationScheme := model.NameValidationScheme
+	model.NameValidationScheme = model.LegacyValidation
+	defer func() {
+		model.NameValidationScheme = originalValidationScheme
+	}()
+
metrics := []*PrometheusMetric{
{
Name: "this*is*not*valid",