
Conversation

@Ikar-Rahl

Short Version

Adds install() + generated config files so stlab can be consumed with find_package() instead of only add_subdirectory(). Keeps CPM with the expectation that consumers will use either CPM_USE_LOCAL_PACKAGES or CPM_LOCAL_PACKAGES_ONLY when they provide the dependencies themselves.
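To illustrate the consumer side this enables (a minimal sketch; the package name and the `stlab::stlab` target follow common CMake conventions and this PR's description, not necessarily its exact export names):

```cmake
# Hypothetical consumer CMakeLists.txt using the installed package.
cmake_minimum_required(VERSION 3.23)
project(consumer CXX)

# Resolved from an install tree via stlab-config.cmake instead of
# vendoring the sources with add_subdirectory()/CPM.
find_package(stlab CONFIG REQUIRED)

add_executable(app main.cpp)
target_link_libraries(app PRIVATE stlab::stlab)
```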

What

  • Install rules + stlab-config.cmake + version file
  • New option STLAB_INSTALL, defaults to OFF (to be decided)
  • Moved the forced CPM_SOURCE_CACHE setting behind a PROJECT_IS_TOP_LEVEL check; otherwise it corrupts the consumer's CPM cache when they use stlab via CPM
  • Install rules for copy-on-write and enum-ops are handled in stlab temporarily, to keep the PR small
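The packaging mechanics listed above can be sketched as follows (illustrative only; file names such as `stlab-config.cmake.in` and the destinations are assumptions, not the exact contents of this PR):

```cmake
# Standard CMake packaging: export targets, generate a config file
# and a version file, and install all three.
include(GNUInstallDirs)
include(CMakePackageConfigHelpers)

install(TARGETS stlab EXPORT stlab-targets)
install(DIRECTORY include/ DESTINATION ${CMAKE_INSTALL_INCLUDEDIR})

install(EXPORT stlab-targets
    NAMESPACE stlab::
    DESTINATION ${CMAKE_INSTALL_LIBDIR}/cmake/stlab)

configure_package_config_file(cmake/stlab-config.cmake.in
    ${CMAKE_CURRENT_BINARY_DIR}/stlab-config.cmake
    INSTALL_DESTINATION ${CMAKE_INSTALL_LIBDIR}/cmake/stlab)

write_basic_package_version_file(
    ${CMAKE_CURRENT_BINARY_DIR}/stlab-config-version.cmake
    COMPATIBILITY SameMajorVersion)

install(FILES
    ${CMAKE_CURRENT_BINARY_DIR}/stlab-config.cmake
    ${CMAKE_CURRENT_BINARY_DIR}/stlab-config-version.cmake
    DESTINATION ${CMAKE_INSTALL_LIBDIR}/cmake/stlab)
```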

Why

  • Lets downstream treat stlab like a normal CMake package (relocatable, cacheable)
  • Prevents duplicate copies of dependencies (ODR/ABI risk) - dependencies can be consumed by multiple libraries with conflicting versions
  • No change to developer workflow; CPM still fetches when nothing is installed
  • Aligns with a future breakup into smaller libs; these changes can be implemented in cpp_library.

Not Removing CPM

CPM stays. CPM_USE_LOCAL_PACKAGES was tested and working as expected.

Scope

No vcpkg-specific code; just standard CMake packaging mechanics.

Optional Next Step

I can add the copy-on-write and enum-ops logic to cpp_library so that it automatically applies to any of its users.

Maintenance Cost

Each new dependency requires a matching find_dependency call to be added to the config template.
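For context, this is what the manually maintained part looks like (an illustrative template sketch; the actual template in this PR may differ):

```cmake
# stlab-config.cmake.in (sketch)
@PACKAGE_INIT@

include(CMakeFindDependencyMacro)
# Every public dependency must be mirrored here by hand; if it is
# forgotten, consumers of the installed package fail at configure
# or link time.
find_dependency(Threads)

include("${CMAKE_CURRENT_LIST_DIR}/stlab-targets.cmake")
```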

Feedback Welcome

Happy to tweak naming or change install layout if you prefer.

@sean-parent
Member

Lets downstream treat stlab like a normal CMake package (relocatable, cacheable)

How is fetch_content or CPM not "normal" CMake? And CPM supports caching.

Prevents duplicate copies of dependencies (ODR/ABI risk) - dependencies can be consumed by multiple libraries with conflicting versions
No change to developer workflow; CPM still fetches when nothing is installed

Explain - a key reason for using CPM is to avoid duplicate copies of the library since CPM allows for lock files and full version control.

Aligns with a future breakup into smaller libs; these changes can be implemented in cpp_library.

If I were convinced to accept this, the changes to cpp-library would be a prerequisite (I've been actively removing support for install from cpp-library to simplify it). Along with CI support and refactoring this PR so the cmake for install is in a separate file.

My current read is that this is too much, and there is not yet any CI. I'm a firm believer in "build everything from source with the same compiler and same settings at the same time."

Previous versions of the library supported (at various times) CMake install, vcpkg, and conan. All were a PITA to keep running in CI across all platforms.

@Ikar-Rahl
Author

Thanks for the detailed feedback. Here are my thoughts on your points.

How is fetch_content or CPM not "normal" CMake?

You're right, they are normal. My point is about the interface vs. the implementation.

The "normal" CMake mechanism for a consumer to declare a dependency is find_package(). What that command does is the implementation detail. It could find exported targets from an install(), or it could (as in my first patch) be a FindModule.cmake that falls back to using CPM.

The goal is that the caller doesn't have to care how a dependency is retrieved. The second patch achieves the same result by delegating the find_package call to CPM. As long as the consumer is aware of CPM_USE_LOCAL_PACKAGES, the behavior is identical: the consumer calls find_package.
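The two CPM modes referenced here can be sketched as follows (the option names are real CPM.cmake variables; the dependency spec and version are illustrative):

```cmake
# Consumer-side configuration (sketch).
set(CPM_USE_LOCAL_PACKAGES ON)    # try find_package() first, fetch sources on a miss
# set(CPM_LOCAL_PACKAGES_ONLY ON) # stricter: error out if find_package() fails

include(cmake/CPM.cmake)

# With the option above, this effectively becomes a find_package(stlab)
# call when an installed stlab is visible on CMAKE_PREFIX_PATH.
CPMAddPackage("gh:stlab/libraries@2.0.0")
```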

And CPM supports caching.

CPM supports source caching. An install() target enables binary caching.

To be clear, binary caching is still built from source, compiling everything with the same compiler and flags, just as you want. With vcpkg, this binary cache simply becomes shareable between machines. This is what solves the "enormous configuration time" problem you get from parsing hundreds of CMakeLists.txt files at configure time.

Explain - a key reason for using CPM is to avoid duplicate copies... CPM allows for lock files and full version control.

Gladly. You're right, that works perfectly, but only in a fully hermetic build where your project and all its dependencies use CPM in the same configuration run.

As you know, that isn't the reality for most large-scale projects. The standard CMake abstraction is find_package precisely to avoid the problems of a monolithic configuration. Once a project relies on find_package for even one dependency, the hermetic model breaks and creates conflicts.

A concrete example:

  1. FrameworkB is built. It uses CPM to fetch and build [email protected]. It is then compiled and installed.

  2. ProjectA is built. It calls CPMAddPackage([email protected]) for its own use and also calls find_package(FrameworkB) (getting the installed binary).

  3. The final link for ProjectA now contains two versions of stlab: the symbols for v1.5 inside FrameworkB.lib and the v1.6 target from its own CPM call.
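The scenario above, sketched in CMake (project names and version numbers are taken from the example, not real releases):

```cmake
# FrameworkB/CMakeLists.txt -- hermetic CPM build, then installed.
CPMAddPackage("[email protected]")      # v1.5 symbols end up inside FrameworkB.lib

# ProjectA/CMakeLists.txt
CPMAddPackage("[email protected]")      # v1.6 target for ProjectA's own use
find_package(FrameworkB REQUIRED)  # imports the prebuilt FrameworkB.lib

add_executable(app main.cpp)
# app now links v1.6 directly AND v1.5 via FrameworkB: an ODR violation
# that neither CPM nor find_package can detect on its own.
target_link_libraries(app PRIVATE stlab::stlab FrameworkB::FrameworkB)
```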

This ODR violation is a symptom of the two larger reasons why the single-stage, add_subdirectory model doesn't scale:

  1. Configuration Time: When a project has hundreds of dependencies, a single CMake configuration becomes unfeasible. It can take an hour just to parse all the CMakeLists.txt files and execute all their logic, before the (also huge) build even starts.
  2. Target and Variable Pollution: This is even more problematic. The "pollution" from subprojects becomes a nightmare. The CPM_SOURCE_CACHE variable fixed in this PR is one example. A more insidious case is when dep1 sets a global variable or cache entry that dep2 reads, causing unpredictable interference that is incredibly difficult to debug. With many dependencies, this becomes a daily, never-ending maintenance cycle.

For these reasons we rely on installed dependencies, and as described above, CPM alone does not handle that.

"build everything from source with the same compiler and same settings at the same time."

I agree with this 100%. We do the same: we don't rebuild Qt, for example, but most dependencies are built from source in the same environment.

Installable does not contradict this. It just means we build in stages rather than one single, monolithic configuration.

I've been actively removing support for install from cpp-library to simplify it.

Yes, I saw that, which is why I tried to move quickly on this.

Previous versions... All were a PITA to keep running in CI across all platforms.

This seems to be the real issue. As long as a project install()s correctly, writing a vcpkg port is trivial. And since we will be maintaining a vcpkg port for stlab anyway, it won't cost us much, as long as it installs correctly.

The goal is a system with minimal maintenance. Splitting the library will add complexity, but that install logic can likely be centralized in cpp_library.

To help address your actual pain points: what were the specific CI issues you encountered, and on what platforms?

@sean-parent
Member

To help address your actual pain points: what were the specific CI issues you encountered, and on what platforms?

The issue was that the dependencies had to be updated in multiple places. Adding a new dependency or even changing a version caused a ripple. Then vcpkg itself was another dependency, and we went through a couple rounds of vcpkg changing how it was installed.

My goal is for dependencies to be listed only in one place, with their version number, and to keep as much as possible out of the CI install phase (I think I'm down to just installing doxygen for the documentation build).

One possible approach would be:

  • add a minimal install to cpp-library in a separate included cmake file with asl and stlab sharing the same file until they also move to cpp-library. Since I don't see a way to test the install without duplicating dependencies and complicating the install, the install would be untested and community-supported.

@Ikar-Rahl
Author

Ikar-Rahl commented Nov 4, 2025

Let's first settle on an installation approach you are happy with before thinking about official vcpkg support; integrating it too soon may be confusing, especially if you want multiple libraries.

My goal is for dependencies to be listed only in one place

This is the maintenance cost I wrote about earlier: the current approach requires you to update the find_dependency calls every time you change either a find_package or a CPMAddPackage.
This is a typical cost that CMake unfortunately does not solve. CPM has a CPM_PACKAGES variable that could be used to generate the corresponding find_dependency calls, but it is limited to CPM calls only. Also, not all dependencies are public; cpp_library is a good example: even if you pull it in via CPMAddPackage, it is not needed by consumers of the installed stlab.

I think the best solution would be (and I hate to say this) to make a wrapper for adding dependencies, defined in cpp_library, named stlab_add_dependency or something similar.
This wrapper would take options for calling CPM or find_package, the version, and public or private visibility, and, most importantly, it would record the calls so that it can generate the find_dependency entries in config.tmpl.cmake.
It is the only reliable way to get a fully fledged install while still declaring dependencies in one place.
This would imply that the wrapper is used throughout the project, for both the current CPMAddPackage and find_package calls. Is this acceptable?
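A rough sketch of such a wrapper (the name `stlab_add_dependency`, its argument set, and the recording mechanism are all hypothetical; a real version would forward more CPM arguments):

```cmake
# Hypothetical dependency wrapper: one declaration drives both the
# fetch/find step and the generated find_dependency() entries.
function(stlab_add_dependency name)
    cmake_parse_arguments(ARG "PRIVATE" "VERSION;METHOD" "" ${ARGN})

    if(ARG_METHOD STREQUAL "FIND_PACKAGE")
        find_package(${name} ${ARG_VERSION} REQUIRED)
    else()
        CPMAddPackage(NAME ${name} VERSION ${ARG_VERSION})
    endif()

    # Record public dependencies so the config template can emit a
    # matching find_dependency() line at install time.
    if(NOT ARG_PRIVATE)
        set_property(GLOBAL APPEND PROPERTY STLAB_PUBLIC_DEPENDENCIES
            "find_dependency(${name} ${ARG_VERSION})")
    endif()
endfunction()
```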

or even changing a version caused a ripple

You mean changing the version in one place and not the other? That would be solved by the previous solution, though maybe you are describing something else?

add a minimal install to cpp-library in a separate included cmake file with asl and stlab sharing the same file until they also move to cpp-library

That is possible, though for now they could use just the wrapper and the install logic from cpp_library; this would minimize duplicated code and boilerplate.
I see cpp-library then having two functions: one to declare a dependency, and one to set up the install for a target.
The cmake.tmpl.in is typically stored in the project itself, though. While it is possible to keep a generic one in cpp-library, projects often use it to share CMake variables or run checks to ensure the environment is valid. It would be similar to the one in the current PR, except that the find_dependency calls would be automatically deduced. It would be handy to have it in the project, even if it stays mostly empty. But I'll let you decide on that.

keep as much as possible out of the CI install phase

That would be up to you, though I don't see a reason why it would fail. We could use components to define what gets installed.

Since I don't see a way to test the install without duplicating dependencies and complicating the install

In my head, to test the install you just run the install on stlab (with a proper environment so that it finds Qt, libdispatch, Threads, etc.), and then find_package it in a test project. There is no need to duplicate dependencies: the ones found via find_package won't be installed since they are imported targets, and the ones added via CPM's add_subdirectory will carry their own install rules.
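The smoke test described above could look like this (a sketch; the prefix path and option name `STLAB_INSTALL` follow this PR, everything else is illustrative):

```cmake
# install-test/CMakeLists.txt
#
# Stage stlab first, e.g.:
#   cmake -B build -S . -DSTLAB_INSTALL=ON
#   cmake --build build
#   cmake --install build --prefix /tmp/stlab-prefix
# Then configure this project with -DCMAKE_PREFIX_PATH=/tmp/stlab-prefix.
cmake_minimum_required(VERSION 3.23)
project(install-test CXX)

# Passing configure + link here is the test: it proves the exported
# targets and the find_dependency chain in the config file are sound.
find_package(stlab CONFIG REQUIRED)

add_executable(smoke smoke.cpp)
target_link_libraries(smoke PRIVATE stlab::stlab)
```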

the install would be untested and community-supported.

Community-supported still sounds better than a fork to me. I will discuss this with our tech leads to make sure I can allocate time to implement it. But from my POV, we will have to do it one way or another, and I prefer a way that is as upstreamable as possible.
