
docs: document release candidate testnet policy #573


Open

wants to merge 2 commits into master

Conversation

dan-da
Collaborator

@dan-da dan-da commented Apr 29, 2025

updates the mdbook with:

  1. a (new) description of the versioning scheme.

  2. a modified git branching visualization and (new) description to reflect the process of iterating release candidates.

  3. a policy for testing each release candidate in the public testnet.


note: all changes are proposals. If anything is unclear or there is no consensus, then we should consider a meeting to discuss/resolve, as that should hopefully be faster than a lot of comments back and forth.

Also, it's quite possible that the mdbook document I started from was already out of date with respect to current processes, so if that's the case, please consider merging in current practice and/or documenting it in a comment, so I can incorporate it.

The release flow for a major version change is essentially the same except that the major version number is bumped and the minor version returns to 0.


## Release Candidate Protocol (-rc1, -rc2, etc)
Member

I think some of these steps should be replaced by “follow the release protocol” and a link to said release protocol.

Collaborator Author

by that, do you mean a link to docs/src/contributing/releasing.md?

Perhaps we need to ensure that document matches up with this one before merging this PR. I haven't reviewed it yet, other than a 5 second scan just now.

Collaborator Author

addressed in 66b1717.

Ok, I made a first effort at syncing up these documents. So now they link to each other for some things rather than duplicating. I think there is still some overlap and room to re-arrange some more, but I'd rather leave that to the author(s) of releasing.md and/or person(s) actually performing the releases. That may be you @jan-ferdinand, I'm not certain actually.

Anyway there are some changes in releasing.md now that I'd appreciate if you can have a look at.

addresses review comments in:
Neptune-Crypto#573

In addition to reviewer-suggested changes in git-workflow.md this merges
a few things from it into releases.md.  Hopefully they are a bit more
cohesive now.
@dan-da
Collaborator Author

dan-da commented Apr 30, 2025

@jan-ferdinand thx for the review. I've addressed all of your suggestions. I made some changes to releasing.md that I'd appreciate if you can take a look at. My thought is that this PR could be merged now, and any further changes made in follow-up commits.

Comment on lines +5 to +11
## Automation

This is presently a largely manual process. It is expected/encouraged that one
or more custom deployment tools will be developed so this becomes a push-button
process instead, or even fully automated when certain event(s) happen. Such a
tool will remove tedium as well as the human error factor, which is important
for performing the task regularly and consistently.
Member

@jan-ferdinand jan-ferdinand Apr 30, 2025

I don't really agree with this paragraph. At present, releasing is a semi-automated task, and uses a bunch of valuable tools. For example, `dist` does a bunch of heavy lifting: it produces the binaries for various target platforms, provides installers for them, creates a github release, among other things. `cargo release` publishes all inter-dependent workspace crates in the right order[1] and can be configured to do the git tagging in the way we want it to.[2]

There are also steps that will stay manual for the foreseeable future. For example, creating a changelog is curation work,[3] which is difficult to automate.[4] I'm happy to have additional steps automated, but I don't believe there's a 1-click solution waiting around the corner.

By and large, I'm happy to have a summary of the document at the top, but I think at present, it undersells the amount of tedium the tools in use remove already. How about moving the summary at the beginning of the “## Release Process Checklist” section here, then writing a new & short sentence for that section?

Footnotes

  1. This is not really relevant right now, but once we have multiple workspace members, it's a huge help.

  2. Granted, this configuration is not present at the moment.

  3. The alternative of simply dumping the output of git cliff into the changelog makes the changelog unreadable and thus superfluous. See also here, section “Commit log diffs”.

  4. Unless we're willing to incorporate an LLM. Which… maybe?

Collaborator Author

@dan-da dan-da May 1, 2025

We don't have to include the paragraph. It seems you want it removed, so I will do so.

However, I maintain that the process is inherently a manual process right now, or else releasing.md could simply be: "step 1. run build_and_deploy.sh. step 2. go do some other work, because you are done". Or even better: "nightly builds are automatically created every day. release candidates and releases are automatically produced and deployed when version tags are detected in git".

Instead, it is a pretty lengthy document with a lot of manual steps.

Yes, those manual steps may involve running some complex tools. What I am suggesting is that custom glue scripts can be created that bind everything together into a single automation.
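
To sketch what that could look like: a small `xtask`-style Rust helper that chains the existing tools into one command. This is just an illustration under assumptions; the exact subcommands and flags used below (`cargo release <version> --execute`, `dist build`) would need to be checked against the project's actual tooling.

```rust
// Hypothetical xtask-style glue script (sketch only, not an existing tool).
// Runs the existing release tools in sequence and aborts on the first failure.
use std::process::{exit, Command};

fn run(program: &str, args: &[&str]) {
    println!("==> {program} {}", args.join(" "));
    let status = Command::new(program)
        .args(args)
        .status()
        .unwrap_or_else(|e| panic!("failed to spawn {program}: {e}"));
    if !status.success() {
        eprintln!("{program} failed ({status}); aborting release.");
        exit(1);
    }
}

fn main() {
    // e.g. `cargo run -p xtask -- 0.4.1-rc1`
    let version = std::env::args().nth(1).expect("usage: xtask <version>");

    run("cargo", &["test", "--release"]);                       // basic sanity check
    run("cargo", &["release", version.as_str(), "--execute"]);  // bump version, tag, publish
    run("dist", &["build"]);                                    // build and package release artifacts
}
```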

Now: I am not involved with this release process and am happy to keep it that way. Those who do the work should set things up however best suits them. I can say though that I have been involved with release processes in the past, for larger projects than this, and I can tell you that we set up full automation to have nightly releases, release candidates, and so on. So if I were doing it (which I'm not volunteering for) that's the direction I would pursue. More work up front, so that there is less work on an ongoing basis.

If this project succeeds and endures, I have no doubt that will eventually happen. And the paragraph was intended as a gentle nudge in that direction. In the meantime, it's fine to go with whatever is most expedient.

Member

Those are fair points. I think everything except generating a meaningful changelog can be fully automated.[1] I also don't want to imply that the release process is final. If someone wants to write and maintain more tools, I won't stand in their way.
Overall, I don't think the sentiment of the paragraph is wrong, but since it is talking about potential features and potential future development plans, it feels like it belongs in the issue tracker more than in the document.

Footnotes

  1. For nightly builds, the changelog is irrelevant; do we want or need nightly builds?


## Release Candidate vs Release.

Review the distinction [here](git-workflow.md), and make sure you know which is being generating and what the correct version should be.
Member

Suggested change
Review the distinction [here](git-workflow.md), and make sure you know which is being generating and what the correct version should be.
Review the distinction [here](git-workflow.md), and make sure you know which is being generated and what the correct version should be.

Comment on lines +195 to +197
Announce the release-candidate in a post at talk.neptune.cash.

Ensure that a composer is running using the release-candidate binary.
Member

@jan-ferdinand jan-ferdinand Apr 30, 2025

Maybe flip order, but since this section needs an overhaul anyway, it's a bit 🤷.


### If an actual release:

#### Set tag `latest-release`
Member

With this PR, what used to be the release branch is becoming the latest-release tag. I think a tag makes more sense than a branch. However, the corresponding references to the branch in the README.md should also be updated.

Collaborator Author

yes thx. and we should probably get buy-off from @aszepieniec regarding the change before merging. Though to me it is really the only sensible thing... that latest-release always points to the commit for the latest release. so hopefully there's consensus on that.

Contributor

@aszepieniec aszepieniec left a comment

Thanks for the writeup.

I left some comments inline.

In any case, this is something we want @Sword-Smith to be on board with, so there is no rush, as it will take a while until he is back in the office.

Comment on lines +104 to +117
### Testnet policy (pre-release)

It is project policy that each major and minor release be tested as a
release-candidate in the public community testnet prior to making an official
release.

A period of public testnet testing provides the community with an opportunity to
try out candidate builds in their own unique environments and report any issues,
and to prepare for new features in advance.

It is also an important period of integration testing "in the wild" for the
release candidate binary. Operating on a public testnet, it will necessarily be
exposed to peers running older versions of the software and may shake out issues
that do not occur with automated testing.
Contributor

With a policy like this in place we should also have instructions for people who are so inclined to run and connect to testnet, along with (an) IP address(es) for bootstrapping.

Comment on lines +121 to +122
1. The release candidate should be published using the same release process as
the eventual release; see [releasing](releasing.md).
Contributor

I don't get it. If we run the release-protocol for the release-candidate, and then again two weeks later for the actual release, it will show no changes.

Also, it is worth mentioning that one difference is the version, which for release candidates should end in `rc*`, where `*` is some digit.

Member

Perhaps the step “update the changelog” should be excluded when publishing a release candidate?

Comment on lines +125 to +126
3. The release candidate should target 2 weeks of testing after the announcement
on average with a minimum of 1 week.
Contributor

I find this sentence confusing because release candidates are not capable of shoulding and targeting. Is what you are trying to say, "The release candidate should be on public testnet for at least two weeks following the announcement before it is made into an official release."? If so, how about using that phrase instead?

Collaborator Author

yes that's better, thx.

Comment on lines +127 to +130
4. The company behind neptune-cash will provide at least 1 dedicated machine for
the public testnet for the purpose of running the release-candidate binary
and composing blocks. Of course the community is also encouraged to run
nodes and mine (compose or guess) if possible.
Contributor

The company does not possess hardware capable of composing blocks. The release/testing policy should not lay claim to the private property of employees or other contributors. While they may volunteer their private hardware (and electricity) for any period or periods of time, they neither have nor incur a responsibility to do so or continue doing so.

Let's make the release/testing policy robust against the potential void of volunteered hardware. In particular, that means configuring testnet to allow mockable proofs.

As to whether the company should allocate resources to satisfy this rule as proposed, that discussion does not belong on GitHub.

Collaborator Author

@dan-da dan-da May 1, 2025

In that case I would suggest that the testnet use mock-proofs for now to ensure the blockchain progresses, because otherwise it seems most likely the testnet will simply be dead because no one is adequately incentivized to donate hardware resources to 24/7 composing.

One thing I haven't even mentioned yet is that bitcoin's testnet uses a perpetually low difficulty so it is easy for most anyone to mine blocks. I think even CPU mining still works on the public testnetv3 which has been around since 2014 or so. So we could do the same, in combination with mock proofs.

edit: above is wrong on some testnetv3 details, but reflects the spirit and my experience from approx 10 years ago.

Now, mock proofs do not test the system to the same extent that real proofs do, so at such time as an entity steps forward to provide dedicated hardware we might consider switching to a "real proofs" testnet.

In fact, we could facilitate this future switch now in code by creating a new testnet variant, eg testnet-mock, which would set use_mock_proof = true. So for now we use the testnet-mock network for release-candidate testing, and later we switch to testnet for testing with real proofs when that is viable.
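
Something along these lines could work, purely as an illustration (the type and method names below are made up for this sketch and do not necessarily match neptune-core's real `Network` type):

```rust
// Illustrative sketch only; not the actual neptune-core Network definition.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub enum Network {
    Main,
    Testnet,
    /// Like Testnet, but nodes accept cheap mock proofs so the chain can
    /// progress without dedicated proving hardware.
    TestnetMock,
    Regtest,
}

impl Network {
    /// Whether nodes on this network accept mock proofs instead of real ones.
    pub fn use_mock_proofs(self) -> bool {
        matches!(self, Network::TestnetMock)
    }
}
```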

Collaborator Author

today I learned. here's a brief AI summary of bitcoin's 20 minute rule for testnet difficulty:

The "20-Minute Exception" Rule: This is the primary reason Testnet difficulty often stays low. If a new block is not found within 20 minutes of the previous block, the difficulty for the very next block is automatically reset to the minimum possible difficulty (1), regardless of the previous difficulty level. This allows even very low-hashrate miners (like CPUs) to eventually find a block and keep the chain moving.

and apparently there is testnetv4 now as of Dec 2024.

Articles:
https://www.bitgo.com/resources/blog/transition-to-bitcoin-testnet4/
https://blog.lopp.net/griefing-bitcoin-testnet/ (problems with ancient testnetv3)

Comment on lines +131 to +134
5. A set of automated tests will be created that utilize the RPC API to perform
automated transactions and test basic functionality of the live running
nodes. These tests should/must be run at least once against each
release-candidate binary while connected to the public testnet network.
Contributor

A different set of tests for every release candidate, or are all release candidates subject to the same set of tests (that can be updated over time to reflect API evolution)?

Let's add this rule to the test policy after the API test suite exists. If we add it before, we risk over-committing and risk having to delay urgent updates, even if they are orthogonal to the things being tested by that test suite.

Collaborator Author

A different set of tests for every release candidate, or are all release candidates subject to the same set of tests (that can be updated over time to reflect API evolution)?

the latter. It's basically an integration test suite, but running against node(s) on a live (test) network. This is the sort of thing that perhaps we could get a google-summer-of-code volunteer/intern to build, with proper guidance.

Let's add this rule to the test policy after the API test suite exists. If we add it before, we risk over-committing and risk having to delay urgent updates, even if they are orthogonal to the things being tested by that test suite.

I'd suggest keeping it, but marking it as a goal/aspiration rather than a policy, rule, or promise.


**patch**: bumped any time changes from a release branch are included in a release.

resets candidate to "-rc1".
Contributor

Based on the picture in the github workflow, a patch release resets the suffix to "" (empty string). Specifically, the version evolves from 0.4.0 to 0.4.1 without suffix. The suffix is only added when a new release candidate is made.

Collaborator Author

thx, I'll revisit this.

Comment on lines +219 to +223
## latest-release tag

The `master` branch can contain changes that are not compatible with whatever network is currently live. Beta-testers looking for the branch that will synchronize with the network that is currently live need branch `release`. This branch may cherry-pick commits that are meant for `master` so long as they are backwards-compatible. However, when this task is too cumbersome, branch `release` will become effectively abandoned -- until the next network version is released.
The `master` branch may contain changes that are not compatible with previous
release. Individuals looking for the latest release can simply checkout the
`latest-release` tag, which is updated as part of the release process.
Contributor

What's wrong with the current practice of using branch "release" for this purpose?

Collaborator Author

@dan-da dan-da May 2, 2025

  1. A branch that is force-pushed (needed to relocate it) requires anyone that has it checked out to rebase. Essentially it breaks git pull.
  2. It is nebulous/confusing. What does a "release" branch mean? releases are performed at specific commits. tags are the mechanism for labeling specific commits.
  3. if there are no problems with a particular release that require further commits, then a tag is all that's needed, and a branch need not be created, as per the diagram.
  4. I believe it's a fairly non-standard practice.
  5. It feels like a sub-optimal and unnecessary usage to me, based on decades of experience with version control systems and release management. I have a fairly strong aversion to it in fact and have to kind of "look away".

In case you don't find my reasons convincing, I also asked gemini AI, who appears to concur.

question:

Is it a common or recommended git practice to create a branch named "release" that is force pushed to a new location with each official release?

response:

While it's possible to create a "release" branch and force push to it with each official release, it is generally NOT a common or recommended Git practice, especially in collaborative environments. Here's why:

Problems with Force-Pushing to a Shared "release" Branch:

  • Rewriting History: Force-pushing rewrites the commit history of the remote branch. This can cause significant problems for other collaborators who have already based their work on the previous state of the "release" branch. They would need to rebase or reset their local branches, which can be confusing and error-prone, potentially leading to lost work.
  • Disrupting Collaboration: A constantly changing history on a shared branch makes it difficult for team members to track changes, understand the evolution of the release, and collaborate effectively.
  • Complicating Auditing and Traceability: Rewriting history makes it harder to audit past releases, track down the origin of specific changes, and understand the lineage of the codebase.
  • Going Against Git Principles: Git is designed to track history. Force-pushing undermines this fundamental principle.

More Common and Recommended Practices for Handling Releases in Git:

  1. Release Branches with Merge:

    • Create a dedicated release branch (e.g., release/1.0, release/v2.3.0) from a stable development branch (like develop in Gitflow or main).
    • Perform final bug fixes and preparations on this release branch.
    • Merge the release branch into the main branch (main or master) to mark the official release.
    • Tag the release commit on the main branch with the release version (e.g., v1.0, 2.3.0).
    • Optionally, merge the release branch back into the development branch to ensure any last-minute fixes are included in ongoing development.
  2. Tagging Releases on the Main Branch:

    • For simpler workflows (like GitHub Flow), you might directly tag the specific commit on the main branch that represents the official release.
  3. Immutable Release Branches (Less Common with Force-Push):

    • Some teams might create a release branch and treat it as immutable once created. They would never force-push to it after its initial creation. Any necessary changes would involve creating new commits and potentially merging.

Why a Constantly Force-Pushed "release" Branch Might Seem Appealing (and why it's problematic):

  • "Clean" History: The idea might be to have a very linear and "clean" history on the "release" branch, showing only the final state of each release. However, this comes at the cost of losing the history of how that release was prepared.
  • Simplified View: It might seem simpler to just point the "release" branch to the latest release commit. However, branches are meant to be pointers that move forward, not backward or sideways by force.

In summary, while technically feasible, force-pushing to a shared "release" branch is strongly discouraged due to the negative impact on collaboration, history tracking, and the fundamental principles of Git. The standard practice involves creating release branches, merging them into the main branch, and tagging the release commits. This preserves history and facilitates collaboration.

Contributor

I guess the kernel of the argument is that branches are for tracking history, which is good for collaboration, whereas tags are just pointers to commits without context. For releases, you generally don't want to do much collaboration anyway, and the branch machinery that works well for history management is actually working against you when you just want to get to the current release snapshot.

I agree. Let's use a tag instead.

@Sword-Smith
Member

Why don't we focus on testing the release candidate on main net instead? That's where the action is. We'd have to create our own action on testnet. On main net, you'd quickly discover if your new version is incompatible with older versions. On testnet, that's less likely to happen. Also: There are sometimes reorganizations on main net, but they never happen on test net because it's not a competitive environment. Reorganizations test important gotchas of blockchain node logic.

@dan-da
Collaborator Author

dan-da commented May 15, 2025

Why don't we focus on testing the release candidate on main net instead?

The keyword here may be "focus".

Ideally, we would be testing each release-candidate in every way possible, right?

Mainnet has these challenges for testing:

  1. requires using real funds. (for devs and community members)
  2. composing and upgrading functionality requires serious hardware, and it was previously stated that such is not presently available to dedicate to release-candidate testing.
  3. do we really want to be spamming mainnet with a bunch of test transactions?
  4. incoming blocks and Tx are unpredictable/random. (also true of testnet, testnet-mock).
  5. not a "safe" environment for community members to test out changes before release.

I think a sort of ideal scenario for testing release-candidates would be:

  1. run a regtest-specific test harness against it. This is a fully (or mostly) deterministic environment where the only blocks and tx created are the ones we produce. This test harness could eg test re-org scenarios that are not possible to reliably test on a public network. It can also run quickly since blocks can be generated near instantly. So this provides a good initial smoke-test.

  2. run a testnet-mock specific test harness against it. This can perform composing and upgrading cheaply, eg on a VPS. devs and community members can also perform test transactions at no real cost, run their own test scripts, etc. In contrast to (1) the node has to deal with unpredictable peer messages, transactions, blocks so issues may be exposed that do not occur in a deterministic environment.

  3. run a mainnet specific test harness against it. This might not perform any composing or upgrading. It might or might not initiate transactions since that requires real funds and spams the network. again, devs and community can run their own tests as well.

If asked to choose an order of implementation (focus) with regards to building test-harnesses, I would order them as 2, 1, 3.

In case it is not already clear: the test harnesses I have in mind would be calling public RPC methods. The test-harnesses could/should live in their own crate inside the repo/workspace.
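
To make the shape of such a harness concrete, here is a minimal sketch. The `NodeRpc` trait is only a stand-in for whatever client the real RPC crate exposes; none of the method names below are taken from the actual neptune-core API.

```rust
// Hypothetical smoke-test harness (sketch only).
// `NodeRpc` stands in for the real RPC client; the method names are invented.
use std::error::Error;
use std::thread::sleep;
use std::time::Duration;

pub trait NodeRpc {
    fn block_height(&self) -> Result<u64, Box<dyn Error>>;
    fn peer_count(&self) -> Result<usize, Box<dyn Error>>;
    fn send_test_transaction(&self) -> Result<String, Box<dyn Error>>;
}

/// Basic checks against a live node on testnet-mock (or regtest).
pub fn smoke_test(node: &dyn NodeRpc) -> Result<(), Box<dyn Error>> {
    // The node should be connected to at least one peer.
    assert!(node.peer_count()? > 0, "node has no peers");

    // The chain should advance while we watch.
    let start = node.block_height()?;
    sleep(Duration::from_secs(10 * 60));
    assert!(node.block_height()? > start, "chain did not advance");

    // A test transaction should be accepted.
    let tx_id = node.send_test_transaction()?;
    println!("test transaction accepted: {tx_id}");
    Ok(())
}
```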
