llvm/20.1.2 package update #48808
Conversation
octo-sts bot commented Apr 2, 2025
Signed-off-by: wolfi-bot <[email protected]>
Hi everybody! Would anyone mind, or would anything break, if we renamed this to llvm-20 and didn't have an unversioned llvm package file? If we did a provides with the package version, I think that ensures llvm always installs the newest version. |
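For illustration, a minimal melange sketch of the provides idea mentioned above, assuming the convention used elsewhere in Wolfi of providing the unversioned name at the package's full version; the package name, version, and epoch here are placeholders, not taken from this PR:

    # Hypothetical llvm-20.yaml fragment; names and versions are illustrative.
    package:
      name: llvm-20
      version: 20.1.2
      epoch: 0
      dependencies:
        provides:
          # An unversioned "apk add llvm" would resolve to whichever
          # versioned package carries the highest provided version.
          - llvm=${{package.full-version}}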
Please use 👍 or 👎 on this comment to indicate if you agree or disagree with the recommendation. To provide more detailed feedback, please comment on the recommendation prefixed with /ai-verify, e.g. "/ai-verify partially helpful but I also added bash to the build environment".
Gen AI suggestions to solve the build error: Based on the build error, I'll provide a specific analysis and fix:
• Detected Error:
• Error Category: Build/Configuration
• Failure Point:
• Root Cause Analysis:
• Suggested Fix:
    - name: compiler-rt
      description: Clang ${{vars.major-version}} compiler runtime
      pipeline:
        - runs: |
            mkdir -p ${{targets.contextdir}}/${{vars.clang-prefix}}/bin
            # Make hwasan_symbolize move optional
            if [ -f "${{targets.destdir}}/${{vars.clang-prefix}}/bin/hwasan_symbolize" ]; then
              mv ${{targets.destdir}}/${{vars.clang-prefix}}/bin/hwasan_symbolize ${{targets.contextdir}}/${{vars.clang-prefix}}/bin/
            fi
            mkdir -p ${{targets.contextdir}}/${{vars.clang-prefix}}/lib/${{build.arch}}-unknown-linux-gnu
            mv ${{targets.destdir}}/${{vars.clang-prefix}}/lib/${{build.arch}}-unknown-linux-gnu/*clang_rt* \
              ${{targets.contextdir}}/${{vars.clang-prefix}}/lib/${{build.arch}}-unknown-linux-gnu/
            mv ${{targets.destdir}}/${{vars.clang-prefix}}/lib/${{build.arch}}-unknown-linux-gnu/*orc_rt* \
              ${{targets.contextdir}}/${{vars.clang-prefix}}/lib/${{build.arch}}-unknown-linux-gnu/
            # Rest of the pipeline remains unchanged...
• Explanation:
• Additional Notes:
• References: |
Are we version-streaming 19 too? Or skipping it and sticking to 20? |
Yeah it should be version streamed into |
Blocking this to stop llvm-20 from slipping in inappropriately.
I like how Debian/Ubuntu make clang/gcc metapackages that add symlinks like |
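As a hedged sketch of that Debian/Ubuntu-style metapackage idea (the package name, tool list, and symlink targets are hypothetical, not something proposed in this PR), an unversioned metapackage could depend on the versioned compiler and ship only symlinks:

    # Hypothetical unversioned clang metapackage; illustrative only.
    package:
      name: clang
      version: 20
      dependencies:
        runtime:
          - llvm-20
    pipeline:
      - runs: |
          # Ship unversioned symlinks pointing at the versioned tools.
          mkdir -p "${{targets.destdir}}"/usr/bin
          for tool in clang clang++ clang-cpp; do
            ln -sf "${tool}-20" "${{targets.destdir}}/usr/bin/${tool}"
          done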
+1. This is something Dann and I talked about the other day, and I'd like to pursue the same approach for GCC. I'm currently working on prepping GCC 15, and I'm considering proposing a new |
Sounds good to me! I'd just like the filenames to be consistent in indicating the version. |
superseded by #50547 |
Unlike Debian, we package all versions of toolchains and offer any one of them as the default. Hence one can have python-3.12 by default, or 3.11, or 3.10, and the same with gcc. Originally our pythons were not co-installable, but nowadays they are, and any one of them can be the default as needed. And we do need that to build images matching a given tag. So I am not sure what you mean by gcc-defaults, and what it would contain.

Right now our gcc-NN.yaml files are very version-stream friendly: they build versioned compilers, have a -default package to make that version the default, and cover C/C++. It is always a user choice as to which stack to make the default. This is used to build container images, but also certain downstream projects (e.g. CUDA N needs gcc Y and clang Z as default).

The gcc.yaml, however, also builds compilers which we do not version stream (fortran, go, java) and runtime libraries that are shared by all version streams (libgcc and similar). It would help if we split gcc.yaml into multiple packages, as in gcc-fortran.yaml, gcc-go.yaml, etc., and also made a gcc-libs.yaml which only ships things like libgcc. Then we could have gcc-14.yaml (today) which only has the C/C++ compilers, packaged the same way as gcc-13.yaml and gcc-12.yaml are. |
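A rough sketch of the split described above; the file names and groupings are illustrative only, not an agreed layout:

    gcc-libs.yaml      # only the shared runtime libraries (libgcc and similar), used by every stream
    gcc-fortran.yaml   # compilers we do not version stream
    gcc-go.yaml
    gcc-14.yaml        # C/C++ compilers only, one file per version stream
    gcc-15.yaml        # packaged the same way as gcc-14.yaml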
+1. That leaves the question of which gcc should be the default. We could do this with provides: have every gcc-X |
I vote we just don't provide a default gcc (or any other toolchain, tbh) and make everyone specify the version. We know things will break when we update gcc; it does not seem like a big ask to write a yq script or a bot that bumps |
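As a sketch of what such a bump script could look like: the gcc-14/gcc-15 names and the .environment.contents.packages path are assumptions about the package YAML layout, and this is an illustrative one-off, not an existing bot.

    #!/bin/sh
    # Illustrative only: rewrite an explicit gcc-14 build dependency to gcc-15
    # in every package definition that currently mentions it.
    for f in $(grep -lw 'gcc-14' ./*.yaml); do
      yq -i '(.environment.contents.packages[] | select(. == "gcc-14")) = "gcc-15"' "$f"
    done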
Yeah, we could figure out how to do it. But I'd like to try and understand the philosophy, and why we think it is worth it. The value in being able to provide any needed combination in images is clear; I'm just referring to our builds. With python and ruby, we want to support multiple runtimes and library stacks equally. Explicitly pinning applications to specific interpreter versions fell out of that, and was due to shipping a runtime; I didn't understand pinning to be a goal in and of itself. With GCC/llvm, we really just want to use the latest if possible (more hardened, hopefully faster). What is the value of touching every YAML file to achieve that? Granted, there is less ambiguity in an explicit |
I do have an entirely too long note about why I want this philosophically; I'll clean it up a bit and send it your way.
This is very true! And I would like to think of better ways to do that. But we have not explicitly chosen to version stream
This is all correct. Though part of what was in my head for multi-versioning python3 was that I don't think having a default/unversioned python declared by a package is a good idea, similarly to why I don't think it is for gcc. And in fact we still have problems with multi-versioned python, because at package build time (python3 runtime) we have people declaring unversioned dependencies and encountering problems.
We're not doing that right now, though; we are for the production of images and for whoever is out there using apk to install gcc, but not for Wolfi itself. As far as I'm aware we do not have a total rebuild and version bump policy for compiler releases. Any package that will fail to build once a new version of gcc is released is FTBS as soon as that version of gcc is in the repo, and we just don't know it yet. Testing in a side build is not adequate and I think is a duplication of effort. Why would we run an entire world build against a new gcc version and wait for the results just to learn that in a few weeks we'll be able to bump and rebuild those packages, or to learn the patches they need in order to be rebuilt, when as far as I'm aware we have no particular plans to bump and take advantage of the new compiler anytime soon?

Even if we did have that policy (which I think we should), it does not make sense to go about it by releasing an updated gcc package and then rebuilding everything and checking for errors. We will cause incidents doing that: something with strict compiler requirements will break and have a CVE. We could instead push
I think the value is in the ability to have a controlled, staged and monitored rollout of a set of build dependencies that we have created a calendar for and put CODEOWNER locks on, because when they get released they break things. The penalty we'd pay for that is ~20-something characters in a yaml file that a bot will take care of for us (even the initial addition could be automated), plus some additional storage space for packages that are probably largely going to compile to the same code anyway. In exchange we get a guarantee, easily discoverable, that we have rebuilt a particular package to take advantage of the new features, and a way to control and track the rollout that doesn't require modifying the gcc package or withdrawing it if problems are found during deployment.

What I think this really comes down to for gcc is: do we accept silent failures existing as soon as we release a new version of gcc, because our index is controlling policy for the entire distro for no real reason, and as a result create extravagant procedures to pre-validate every update, taking note and planning time because we know it will break things when published? Or do we set up infrastructure so that when gcc-16 comes out, our automation puts up a gcc-16 PR, notices when it's available, bumps packages that have a lower version of gcc, submits the PR to go through our automation and testing, and then flags it for review or as a failure? We know this strategy works because our bots do this constantly, every day.

Touching the yaml files seems like a minor inconvenience compared to the potential to completely eliminate compiler-upgrade-related FTBS incidents in the future. And I don't see any reason why packages that are version streamed specifically because of their disruptiveness should have defaults enforced by the index; they should really just be a set of compatible widgets you have the option of using. |
The same way we did it for python? Move away from having gcc.yaml and ensure we only ever have gcc-13.yaml, gcc-14.yaml, gcc-15.yaml, and have them provide gcc with different priorities. When introducing gcc-15.yaml, we can do so at priority 0, and eventually make it the highest priority. That approach worked well for package building, but possibly not for image building. This is why I also said we likely want to split out the runtime libs and the additional compilers first: currently our gcc-N.yaml packaging is not symmetrical with gcc.yaml, because the latter provides the public shared libraries shared by all compilers and linked into userspace binaries. |
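A minimal sketch of that approach, assuming melange's provides/provider-priority fields behave here the way they do for the Python version streams; the package name, version, and priority values are placeholders:

    # Hypothetical gcc-15.yaml fragment; illustrative only.
    package:
      name: gcc-15
      version: 15.1.0
      epoch: 0
      dependencies:
        # Introduce the new stream at priority 0 so an unversioned "gcc"
        # keeps resolving to the current default; raise this above the
        # older stream's priority when gcc-15 should win.
        provider-priority: 0
        provides:
          - gcc=${{package.full-version}}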
Somehow I see things like this:
That should be relatively straightforward and manageable. |
That just sounds like a lot of yaml modification to preserve the ability to install an unversioned gcc. I don't see a reason why we should not just have an entirely virtual gcc and make people choose the version they need. If you're compiling software, just pick a compiler. Everyone already makes this choice when they pick the version of a distro they'll install, and it's fine because that's made to work as a complete set. Even rolling distros are fine because they are expected to be complete updates, and people should expect some amount of risk on a rolling distro. We are a factory making parts that act like lego pieces that we assemble into images, and I don't see any benefit to encoding the policy in the packages themselves. In current buzzword terminology, I believe that the further right we can shift policy choices around things like this, the better choices we can make and the more flexible and effective we can be with testing to the left. |