feat: improve issue submission and triage workflow #1279
Conversation
Signed-off-by: Rob Walworth <robert.walworth@swirldslabs.com>
Hey @rwalworth 👋 thanks for the PR! This comment updates automatically as you push changes -- think of it as your PR's live scoreboard!

PR Checks

- ✅ DCO Sign-off -- All commits have valid sign-offs. Nice work!
- ✅ GPG Signature -- All commits have verified GPG signatures. Locked and loaded!
- ✅ Merge Conflicts -- No merge conflicts detected. Smooth sailing!
- ✅ Issue Link -- Linked to #1278 (assigned to you).

🎉 All checks passed! Your PR is ready for review. Great job!
| - type: dropdown
| id: os
| - type: textarea
If the user did not fill in these optional sections, can we not post them to the issue? Maybe one day you can adjust your edit bot to clear the optional content artefacts.
This is already handled by the /finalize bot. The `isMeaningfulContent()` function in `finalize-comments.js` returns `false` for content that is blank, `_No response_`, or the literal text `Optional.` When `reconstructBody()` runs, it skips any section that doesn't pass that check.

So after a maintainer runs `/finalize`, the reconstructed issue body will only contain sections that have meaningful content - no orphaned "Optional." placeholders or empty `_No response_` sections.

The issue before triage (while it still has `status: awaiting triage`) will show GitHub's default rendering including empty sections, which is expected. The cleanup happens at finalization time.
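As a rough sketch of the filtering described above (hypothetical code; the real `isMeaningfulContent()` and `reconstructBody()` live in `finalize-comments.js` and may differ in details such as the exact placeholder strings):

```javascript
// Hypothetical sketch -- not the actual finalize-comments.js source.
// Placeholder strings are assumptions based on the discussion above.
function isMeaningfulContent(content) {
  if (content == null) return false;
  const trimmed = content.trim();
  if (trimmed === '') return false;              // blank section
  if (trimmed === '_No response_') return false; // GitHub's skipped-field marker
  if (trimmed === 'Optional.') return false;     // template's pre-filled placeholder
  return true;
}

// Rebuild the issue body, keeping only sections that pass the check above.
function reconstructBody(sections) {
  return sections
    .filter((section) => isMeaningfulContent(section.content))
    .map((section) => `### ${section.heading}\n\n${section.content.trim()}`)
    .join('\n\n');
}
```

With this shape, a body whose optional sections all contain `_No response_` reconstructs without them, matching the behavior described above.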
| description: What version of the software are you using?
| label: Version
| placeholder: v0.1.0
| label: ✔️ Acceptance Criteria
This section is confusing to me as a bug reporter; maybe I experienced a bug but do not understand its mechanism. You could perhaps add e.g. "what were you expecting?", but then this duplicates r86.
The intent of the Acceptance Criteria section is: the pre-populated checkboxes are generic enough to apply to nearly every bug fix, so reporters can leave them as-is without understanding the fix mechanism. They don't need to fill this in - it's pre-filled for them:
- The described behavior is no longer reproducible
- A regression test is added
- No unrelated changes introduced
- All existing tests pass
That said, I can make this clearer by updating the description field to say something like:
These are the standard acceptance criteria for a bug fix. You can leave them as-is — a maintainer will refine them during triage if needed.
This distinguishes it from the "Expected Behavior" field (what the reporter observes) vs this field (what a developer needs to verify is fixed).
| name: Bug report
| description: Create a report to help us improve
| labels: [ bug ]
| name: Bug Report
What if the user submits a blank issue? Many people do, as for some it is easier. Will you want to support blank issues, and append content to them after they are labelled?
Blank issues are fully supported in this workflow. When a maintainer runs `/finalize` on a blank issue, `parseSections()` in `finalize-comments.js` finds no meaningful content, and `reconstructBody()` builds the full template structure from scratch. The `isMeaningfulContent()` function treats empty sections, `_No response_`, and `Optional.` as absent - so the finalized body will have the complete template scaffolding with placeholder content ready for the assignee to fill in.

The one thing that changes from GitHub's perspective: GitHub still requires `required: true` fields to be non-empty at submission time. For the blank-issue case, contributors who click past the form will get a GitHub validation error on required fields. This is intentional - it is important to at least add a description of the issue. But if someone pastes a single sentence in the description and leaves everything else blank, that's a valid blank-ish issue and `/finalize` will handle it cleanly.
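For illustration, the section parsing could look like this (a sketch assuming GitHub's `### `-heading rendering of submitted issue forms; the real `parseSections()` may differ):

```javascript
// Hypothetical sketch of parseSections() -- splits an issue body into
// { heading, content } pairs on "### " headings. Assumes the body is
// entirely heading-delimited sections, as GitHub renders issue forms.
// A blank body yields no sections at all: the blank-issue case above.
function parseSections(body) {
  if (!body || !body.trim()) return [];
  return body
    .split(/^### /m)
    .filter(Boolean)
    .map((part) => {
      const newline = part.indexOf('\n');
      return {
        heading: newline === -1 ? part.trim() : part.slice(0, newline).trim(),
        content: newline === -1 ? '' : part.slice(newline + 1).trim(),
      };
    });
}
```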
.github/ISSUE_TEMPLATE/bug.yml (Outdated)
| placeholder: |
| label: ✅ Expected Behavior
| description: What should have happened?
Beginner bug experiencers may not know what should have happened; sometimes they just know they got blocked.
Fair point - the label "✅ Expected Behavior" with the terse description "What should have happened?" might feel daunting if you don't know the intended behavior. I'll soften the description to something like:
What should have happened? Even a simple "I expected no error / no crash / the function to return a value" is helpful.
The embedded example block (hidden in the final submission) already shows a concrete before/after, which helps. But making the field description more encouraging for beginners is a good improvement - I'll fold that in.
.github/ISSUE_TEMPLATE/feature.yml (Outdated)
| - Search [existing issues](https://github.com/hiero-ledger/hiero-sdk-cpp/issues) to see if this has been proposed before
| - For large or protocol-level changes, consider opening a [Hiero Improvement Proposal](https://github.com/hiero-ledger/hiero-improvement-proposals) or starting a [GitHub Discussion](https://github.com/hiero-ledger/hiero-sdk-cpp/discussions) first
| A well-described feature request makes it much easier for maintainers to evaluate, plan, and implement.
I'll expand "maintainers" to "maintainers and the project team" here to better reflect that the broader community is also involved in evaluating and implementing features.
| label: Problem
| description: What is the problem you are trying to solve?
| placeholder: I'm always frustrated when ...
| label: 👾 Problem Description
I know the original template says problem first, and this is just my 2 cents, but in my mind when I propose a feature, I have an idea I want to share first, then the reasons why.
That's a valid perspective. The "problem-first" ordering here is intentional - it's a technique borrowed from product and open source best practices to avoid the XY problem (proposing a specific solution before the underlying need is understood). When we understand the problem clearly, we can sometimes find a better solution than the one originally proposed.
That said, I take your point that it can feel unnatural when you come to the form with a specific idea in mind. I'll add a note to the description field. This acknowledges the idea-first flow while keeping the problem grounding.
| label: Alternatives
| description: What alternative solutions have you considered?
| label: 🔄 Alternatives Considered
Should a feature request include architectural weighing-up or implementation considerations? In my mind, I'd just have an idea, with reasons why (not a solution either).

For a problem-solution pair I would see it as an 'enhancement.yml' or 'improvement.yml'.
The "Alternatives Considered" and "Implementation Notes" sections are both required: false, so feature proposers with a simple idea can leave them completely blank - the form will still submit, and /finalize will strip those empty sections from the reconstructed body.
The sections are there to accommodate proposers who have already thought through alternatives or have strong implementation intuitions (which is common from SDK users who have deep knowledge of the codebase). For everyone else, blank is fine.
On the "enhancement vs feature" distinction - that's an interesting framing. For improvements or enhancements to existing behavior (no new public API), the Task template is the right fit and a kind: enhancement label should be applied during triage. The Feature template is reserved for genuinely new capabilities. So the distinction you're describing maps to Task + kind: enhancement vs Feature, rather than needing a separate "enhancement.yml".
| attributes:
| label: 👩💻 Implementation Notes
| description: |
| Any hints about where or how this might be implemented in the SDK?
Saying hints is a good approach; perhaps the implementation and other-approaches sections can be merged into optional hints?
Interesting thought, though I'm not entirely convinced a merged "Optional Hints" section would be simpler in practice - a single freeform hints field might actually be harder to fill in than two focused prompts, since contributors wouldn't have the specific questions to anchor on. The current separation also lets the bot handle each section independently when reconstructing the body.
That said, I'd rather let real usage guide the decision. If the two-section structure is causing confusion in actual submissions, consolidating them becomes a much easier call. I'll keep an eye on it.
| description: What does "done" look like for this feature?
| value: |
| - [ ] The feature works as described in the proposal
| - [ ] Unit and/or integration tests are added
Should the feature proposer understand the requirements to create the feature, like the exact testing methods, etc.? Or is this more for the maintainer, and can it be appended in the /finalize?
The Acceptance Criteria section is pre-populated with generic checkboxes that apply to virtually all feature implementations. Feature proposers don't need to understand the testing methodology - they can leave these as-is. Maintainers review and refine the criteria during triage, and /finalize preserves any edits made to the body before running.
The rationale for including it in the submission form (rather than only adding it during /finalize) is that it signals to the proposer upfront what "done" looks like, which often leads to more complete proposals. But I can make the field description clearer about this being a pre-filled starting point, similar to what I mentioned for the bug template.
More broadly - the more detail a proposer provides across all sections, the more context a maintainer has during triage to push the skill level of the issue down. A well-described feature with clear acceptance criteria and implementation hints can sometimes go from an Intermediate to a Beginner issue, which opens it up to a much wider pool of contributors. So filling in optional sections, even partially, is genuinely encouraged.
| ---
| ## Thanks for contributing! 🔧
| Tasks cover maintenance, quality, tooling, and improvement work that keeps the
I can imagine some confusion distinguishing between tasks and features, because they sound similar re "what needs to be done and why". Maybe you can add examples of features in the above template?
Yeah, honestly, coming up with the distinction got a bit tough for me too. The task intro mentions "improvement work" which can sound like a feature. I'll add two things:

- A brief note in the intro pointing to the new `docs/contributing/issue-types.md` guide (also in this PR): "Not sure if this is a Task or a Feature? See the Issue Types Guide."
- A short "Not a task?" note in the intro, similar to how other templates distinguish types. Something like:
  - Adding new public API or behavior → use the Feature Request template
  - Reporting incorrect behavior → use the Bug Report template
  - Everything else (maintenance, tooling, quality) → this template

The `docs/contributing/issue-types.md` guide added in this PR goes into detail on the distinctions with examples, but surface-level guidance in the template itself helps contributors self-select without having to read the full guide first.
I really like so many things about this idea, and can see it improves many things. I will need more time to consider this for a review, because there are some situations that have me thinking, and I want to understand what sort of costs can be involved. I assume these are not very important right now though.
Thanks for the detailed review @exploreriii - really helpful!
On the "costs" question, I think we're in good shape on all fronts:
- API rate limits: Each `/finalize` invocation makes ~4–5 GitHub REST API calls. `GITHUB_TOKEN` in Actions allows 1,000 requests/hr per repository, which means we'd need 200+ finalizations in a single hour before coming close to the limit. That's not a realistic concern for typical triage activity.
- Bot maintenance: This shouldn't be a significant ongoing burden. A lot of the complexity is now extracted into shared helpers (`api.js`, `logger.js`), and the snapshot tests mean we'll catch regressions early without needing to manually revisit the logic. Once the implementation stabilizes, there's not much reason to touch it.
- Workflow complexity: For the most part, this is how we already handle issue submissions, but now it's streamlined and more correct. The old approach of having contributors self-select a skill level was backwards; they don't know what skill level the issue is, and those labels carry real meaning for the backlog. This gives the project team proper control over how issues are categorized, prioritized, and surfaced to contributors, which directly affects the roadmap and contributor experience.
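The headroom math above is easy to sanity-check (the ~5 calls per invocation is this PR's estimate; 1,000 requests/hr is GitHub's documented `GITHUB_TOKEN` budget per repository):

```javascript
// Back-of-envelope rate-limit check for the /finalize command.
const callsPerFinalize = 5;    // upper end of the ~4-5 estimate above
const hourlyBudget = 1000;     // GITHUB_TOKEN requests/hr per repository
const maxFinalizationsPerHour = Math.floor(hourlyBudget / callsPerFinalize);
console.log(maxFinalizationsPerHour); // 200
```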
Happy to dig into any specifics if something is still giving you pause!
gsstoykov left a comment
Overall looks good.
Btw, is the plan to enforce that people use these commands, etc.? We should keep in mind that a random person hopping into the SDK would just want to see an issue and contribute, not have to know all this stuff.
| Level | Scope | Expected time | Who it's for |
|---|---|---|---|
| `skill: good first issue` | Single file, step-by-step | 1–4 hours | First-time contributors |
| `skill: beginner` | 1–3 related files | 4–8 hours | Contributors with 2 completed GFIs |
| `skill: intermediate` | Multiple modules | 1–3 days | Contributors familiar with the codebase |
| `skill: advanced` | Repository-wide | 3+ days | Experienced contributors |
I don't really like the scope and expected time sections. The number of files doesn't really define complexity. A GFI could change a typo in 5 files, and that wouldn't make it beginner/intermediate, for example. The expected time would vary as well: it could be a real simple one that requires more time than 1-4h, or an advanced one that requires a lot of know-how but could be resolved in hours, not days. I know these sections provide rough estimates, but I'd still disagree on how we measure these. Maybe we should think of some other metric.
Fair point on both counts, and I've had the same thoughts - I kept them since these rough parameters are used in multiple repos, but I agree they could warrant further discussion.
I went ahead and dropped the "Scope" column since file count is a poor proxy for complexity (a doc update can touch 10 files and still be trivial; a focused algorithmic change in one file can be genuinely hard). Added a note under the table clarifying that the time estimates are rough orientation guides rather than targets, and that the linked guidelines give the full criteria for choosing a level.
cc @exploreriii
| Label | When to use |
|---|---|
| `kind: enhancement` | Improving existing functionality without adding new public API |
| `kind: documentation` | README, guides, API doc comments |
| `kind: refactor` | Code restructuring without behavior change |
| `kind: security` | Security-related improvements or hardening |
| `kind: testing` | Adding or improving tests |
| `kind: maintenance` | Dependencies, build system, CI/CD |
This seems odd. Maybe change the Label column to Kind.
Renamed the column header to "Kind" to match the label group it documents.
> From a contributor's perspective, the experience will be exactly the same as before?

There won't be much of a visible change for contributors outside of using the new templates to submit issues, which now no longer allow the issue submitter to decide on the skill level of the issue.
Summary
This PR overhauls the issue submission and triage workflow for the Hiero C++ SDK. It replaces the five skill-level-specific issue submission templates with three problem-type templates (Bug, Feature, Task), adds a `/finalize` bot command that transitions triaged issues to ready-for-dev state, and adds maintainer and contributor documentation to support the new process.

Key Changes:

- New `/finalize` bot command that validates labels, updates issue title/body, and swaps status labels
- Extracted shared helpers (`api.js`, `logger.js`) and refactored `assign.js`/`unassign.js` to use them
Previously, issue submitters chose a skill-level template themselves. Contributors cannot reliably assess implementation complexity, so difficulty labels were inconsistently applied and the triage step was unclear. The new process separates concerns: reporters describe the problem, maintainers triage and run `/finalize`, and contributors pick up ready issues with `/assign`. This makes the contribution experience more predictable for everyone.
Changes
Issue Templates
Removed the five skill-level submission templates that asked contributors to self-assess complexity, which led to mislabeled issues:

- `01_good_first_issue_candidate.yml`
- `02_good_first_issue.yml`
- `03_beginner_issue.yml`
- `04_intermediate_issue.yml`
- `05_advanced_issue.yml`

Added three problem-type templates that focus on describing the issue clearly:
- `bug.yml`
- `feature.yml`
- `task.yml`

All three templates now apply `status: awaiting triage` (instead of a skill-level label).

`/finalize` Bot Command

New `finalize.js` command triggered by commenting `/finalize` on an issue. The command:

- Validates that `status: awaiting triage` is present
- Validates the `skill:` label
- Validates the `priority:` label
- Validates `kind:` labels (Feature: 0 `kind:` labels)
- Normalizes the title (e.g. strips prefixes like `[Beginner]: Fix thing`)
- Swaps `status: awaiting triage` → `status: ready for dev`
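The label checks above could be sketched as follows (hypothetical; the label prefixes follow this PR, but the exact counts and error wording are assumptions, not the real `finalize.js`):

```javascript
// Hypothetical sketch of finalize.js label validation. Returns a list of
// problems; an empty list means the issue is ready to be finalized.
function validateForFinalize(labels) {
  const errors = [];
  const count = (prefix) => labels.filter((l) => l.startsWith(prefix)).length;

  if (!labels.includes('status: awaiting triage')) {
    errors.push('missing "status: awaiting triage"');
  }
  if (count('skill:') !== 1) errors.push('expected exactly one "skill:" label');
  if (count('priority:') !== 1) errors.push('expected exactly one "priority:" label');
  return errors;
}
```

On success, the command would then update the title/body and swap `status: awaiting triage` for `status: ready for dev`.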
Files Added:

- `.github/scripts/commands/finalize.js`
- `.github/scripts/commands/finalize-comments.js`

Shared Helpers Extraction
Extracted reusable logic from `assign.js` and `unassign.js` into new shared helper modules:

- `helpers/api.js` — `swapLabels`, `postComment`, `acknowledgeComment`, `hasLabel`, `getLabelsByPrefix`
- `helpers/logger.js` — `createDelegatingLogger` for consistent log delegation across commands
- `helpers/constants.js` — Added `READY_FOR_DEV` label constant

Refactored `assign.js` and `unassign.js` to import from `helpers` instead of duplicating logic.

Bot Dispatcher Update

Updated `bot-on-comment.js` to route `/finalize` to the new handler alongside the existing `/assign` and `/unassign` commands.
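As an illustration of the `swapLabels` helper's likely shape (a sketch against the Octokit REST client that `actions/github-script` passes in; the real signature in `helpers/api.js` may differ):

```javascript
// Hypothetical sketch of a swapLabels helper. Removes one label and adds
// another on the same issue; tolerates the old label being absent.
async function swapLabels(github, context, issueNumber, removeLabel, addLabel) {
  const { owner, repo } = context.repo;
  try {
    await github.rest.issues.removeLabel({
      owner, repo, issue_number: issueNumber, name: removeLabel,
    });
  } catch (err) {
    if (err.status !== 404) throw err; // 404 = label wasn't on the issue
  }
  await github.rest.issues.addLabels({
    owner, repo, issue_number: issueNumber, labels: [addLabel],
  });
}
```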
Documentation

- `docs/contributing/issue-types.md`
- `docs/maintainers/guidelines-triage.md` — `/finalize` usage, edge cases
- `CONTRIBUTING.md` — `/finalize` workflow

Testing
Comprehensive snapshot-based test suite added in `test-finalize-bot.js` (778 lines).

Updated `test-assign-bot.js` to import helpers from the refactored location.

Test Plan:

- Verified that triaged issues move through `/finalize` correctly

Files Changed Summary
Added (7 files)

- `.github/scripts/commands/finalize.js`
- `.github/scripts/commands/finalize-comments.js`
- `.github/scripts/helpers/api.js`
- `.github/scripts/helpers/logger.js`
- `.github/ISSUE_TEMPLATE/task.yml`
- `docs/contributing/issue-types.md`
- `docs/maintainers/guidelines-triage.md`

Modified (8 files)

- `.github/ISSUE_TEMPLATE/bug.yml`
- `.github/ISSUE_TEMPLATE/feature.yml`
- `.github/scripts/bot-on-comment.js`
- `.github/scripts/commands/assign.js`
- `.github/scripts/commands/unassign.js`
- `.github/scripts/helpers/constants.js`
- `.github/scripts/tests/test-assign-bot.js`
- `CONTRIBUTING.md`

Removed (5 files)

- `.github/ISSUE_TEMPLATE/01_good_first_issue_candidate.yml`
- `.github/ISSUE_TEMPLATE/02_good_first_issue.yml`
- `.github/ISSUE_TEMPLATE/03_beginner_issue.yml`
- `.github/ISSUE_TEMPLATE/04_intermediate_issue.yml`
- `.github/ISSUE_TEMPLATE/05_advanced_issue.yml`

Breaking Changes
None for SDK consumers. This PR touches only GitHub workflow tooling (issue templates, bot scripts, and documentation). There are no changes to the C++ SDK source, public API, or build system.
Existing issues already labeled `status: ready for dev` are unaffected. New issues will now use the three-template system and require maintainer triage before becoming available for contributors.