Golden file testing in xprin currently requires either manual hooks or explicit diff or dyff assertions with fixed file paths. This makes the common workflow (render, compare, update) more verbose than necessary and hard to scale across many tests. Updating golden files also requires manually copying Outputs.Render, which is error prone and breaks the flow of running tests.
A typical workaround is to use a post-test hook that copies rendered outputs into golden files. For example:
hooks:
  post-test:
    - name: update-golden
      run: |
        if [[ -n "$UPDATE_GOLDEN" ]]; then
          echo "Updating golden..."
          {{- range $id, $test := .Tests }}
          cp "{{ $test.Outputs.Render }}" "{{ $id }}.golden.yaml"
          {{- end }}
        fi
However, this approach does not work reliably: it requires every test to have an id, and the Tests map is not consistently accessible at this point in execution, so golden files cannot be generated for all tests. This makes the solution fragile and difficult to use in practice.
The goal is to make golden testing simple and consistent. A test should define its expected output once and compare against it on every run. When changes are intentional, there should be a clear way to update golden files without modifying test definitions. This reduces boilerplate, improves readability, and aligns with common testing patterns.
I propose adding a first-class golden section to test cases. This section defines the golden file path, comparison mode, and optional resource scoping. It reuses the existing diff and dyff engines internally. A CLI flag such as --update-goldens allows regenerating golden files on demand. This keeps test definitions clean while providing a straightforward workflow for validating and updating expected outputs.
Example spec:
common:
  golden:
    dir: ./testdata
    naming: "{{ .Test.ID }}.golden.yaml"
    mode: diff # or dyff

tests:
  - name: "basic render"
    id: "basic"
    inputs:
      xr: xr.yaml
      composition: composition.yaml
      functions: functions.yaml

  - name: "resource specific"
    id: "db-cluster"
    inputs:
      xr: xr-db.yaml
      composition: composition.yaml
      functions: functions.yaml
    golden:
      resource: "Cluster/my-db" # only compare this resource
      mode: dyff
common.golden can be:
golden:
  dir: ./testdata # folder in current test folder to store golden files
  naming: "{{ .Test.ID }}.golden.yaml" # naming convention, evaluated on each test
  mode: diff # diff | dyff (default: diff)
While tests.[*].golden can be:
golden:
  file: ./testdata/basic.golden.yaml # required, explicit path
  mode: diff # diff | dyff (default: diff)
  skipped: true # opt out of golden, if common.golden is defined
  # optional: compare only a specific rendered resource
  resource: "Kind/name" # e.g. "Cluster/my-db"