Conversation

@neoaggelos
Contributor

Summary

References #180

Changes

  • Add simplestreams sync command that implements the sync mechanism
  • Add hack/infra/images.yaml file, currently needs to be statically updated to add new images

Notes

To sync images from GitHub actions into a simplestreams index directory at $DIR:

go run ./cmd/exp/simplestreams sync --root-dir "$DIR" --manifest ./hack/infra/images.yaml

Signed-off-by: Angelos Kolaitis <[email protected]>
@neoaggelos neoaggelos requested a review from a team as a code owner December 25, 2025 01:41
@stgraber
Member

There's something pretty wrong with this tool. It runs the system out of memory...

See multi-GB RSS:

lxc       191631 85.1 41.0 6379624 3442848 pts/1 Sl+  05:55   0:52              \_ /home/lxc/.cache/go-build/27/27b4c97c7fd5f6e55151ec58b12f5d75f016a35a850a4cdf9ee650029dde7ef1-d/simplestreams sync --root-dir /data/images.linuxcontainers.org/capn/ --manifest ./hack/infra/images.yaml

Comment on lines +64 to +66
if _, err := io.Copy(f, resp.Body); err != nil {
return fmt.Errorf("failed to download image: %w", err)
}
Member


This should be a for loop doing the copy in chunks of 1-4MB to avoid elevated memory usage.

Something like this: https://github.com/lxc/incus-os/blob/main/incus-osd/internal/providers/utils.go#L64

@stgraber
Member

importVirtualMachineUnifiedTarball also triggers the OOM condition

@stgraber
Member

Our image publisher is an Incus container with 4GB of RAM, so Go I/O really needs to be chunked correctly to avoid the process getting killed.

