
feat: improve issue submission and triage workflow#1279

Open
rwalworth wants to merge 5 commits into `main` from `01278-advanced-improve-issue-submission-and-triage-workflow`

Conversation

@rwalworth
Contributor

Summary

This PR overhauls the issue submission and triage workflow for the Hiero C++ SDK. It replaces the five skill-level-specific issue submission templates with three problem-type templates (Bug, Feature, Task), adds a /finalize bot command that transitions triaged issues to ready-for-dev state, and adds maintainer and contributor documentation to support the new process.

Key Changes:

  • Remove 5 skill-level issue templates; replace with 3 problem-type templates (Bug, Feature, Task)
  • Add /finalize bot command that validates labels, updates issue title/body, and swaps status labels
  • Extract shared helpers (api.js, logger.js) and refactor assign.js / unassign.js to use them
  • Add maintainer triage guide and contributor issue-type guide
  • Comprehensive snapshot-based tests for the finalize command (778+ lines)

Motivation

Previously, issue submitters chose a skill-level template themselves. Contributors cannot reliably assess implementation complexity, so difficulty labels were inconsistently applied and the triage step was unclear. The new process separates concerns:

  • Submitters describe the problem (Bug, Feature, or Task)
  • Maintainers assess complexity, apply skill/priority/kind labels, then run /finalize
  • Contributors pick up well-structured, clearly labeled issues via /assign

This makes the contribution experience more predictable for everyone.


Changes

Issue Templates

Removed the five skill-level submission templates that asked contributors to self-assess complexity, which led to mislabeled issues:

| Removed | Reason |
|---|---|
| `01_good_first_issue_candidate.yml` | Skill assessment is now a maintainer responsibility |
| `02_good_first_issue.yml` | Same as above |
| `03_beginner_issue.yml` | Same as above |
| `04_intermediate_issue.yml` | Same as above |
| `05_advanced_issue.yml` | Same as above |

Added three problem-type templates that focus on describing the issue clearly:

| Added/Updated | Type | Purpose |
|---|---|---|
| `bug.yml` | Bug | Report broken or unexpected behavior |
| `feature.yml` | Feature | Request a new user-facing SDK capability |
| `task.yml` | Task (new) | Propose maintenance, refactoring, or improvement work |

All three templates now:

  • Auto-apply status: awaiting triage (instead of a skill-level label)
  • Include inline examples embedded as hidden HTML comments to guide submitters without cluttering the form
  • Require structured fields: description, reproduction steps / proposed approach / implementation steps, expected/actual behavior, acceptance criteria

/finalize Bot Command

New finalize.js command triggered by commenting /finalize on an issue. The command:

  1. Acknowledges the comment with a 👍 reaction
  2. Verifies the commenter has triage-or-above repository permissions
  3. Validates all required labels are correctly applied (all violations collected before responding):
    • status: awaiting triage must be present
    • Exactly 1 skill: label
    • Exactly 1 priority: label
    • Bug/Task: exactly 1 kind: label; Feature: 0 kind: labels
    • Issue type must be Bug, Feature, or Task
  4. Builds an updated title with the skill-level prefix (e.g. [Beginner]: Fix thing)
  5. Rebuilds the issue body in the standard skill-level template format (preserving submitter content)
  6. Swaps `status: awaiting triage` → `status: ready for dev`
  7. Posts a success comment

All failure paths post informative comments and tag maintainers when manual intervention is needed.
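As a rough illustration of step 3, the validator can accumulate every violation before posting a single comment rather than failing on the first problem. This is a hedged sketch, not the PR's actual code; `collectViolations` and its shape are assumptions (the PR does name a `getLabelsByPrefix` helper, but its signature here is illustrative):

```javascript
// Sketch of "collect all violations before responding".
// Label names follow the PR description; function names are illustrative.
function getLabelsByPrefix(labels, prefix) {
  return labels.filter((name) => name.startsWith(prefix));
}

function collectViolations(issueType, labels) {
  const violations = [];
  if (!labels.includes('status: awaiting triage')) {
    violations.push('missing "status: awaiting triage"');
  }
  if (getLabelsByPrefix(labels, 'skill: ').length !== 1) {
    violations.push('expected exactly one "skill:" label');
  }
  if (getLabelsByPrefix(labels, 'priority: ').length !== 1) {
    violations.push('expected exactly one "priority:" label');
  }
  const kinds = getLabelsByPrefix(labels, 'kind: ').length;
  if (issueType === 'Feature' ? kinds !== 0 : kinds !== 1) {
    violations.push(issueType === 'Feature'
      ? '"kind:" labels are not allowed on Features'
      : 'expected exactly one "kind:" label');
  }
  if (!['Bug', 'Feature', 'Task'].includes(issueType)) {
    violations.push(`unknown issue type "${issueType}"`);
  }
  return violations; // all violations reported in one comment
}
```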

Files Added:

  • .github/scripts/commands/finalize.js
  • .github/scripts/commands/finalize-comments.js
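The title-prefixing in step 4 can be sketched as a small pure function. `buildTitle` is a hypothetical name, assuming the behavior described above (add the skill-level prefix, replacing any existing `[...]:` prefix):

```javascript
// Illustrative sketch of step 4; not the PR's actual implementation.
function buildTitle(skillLabel, currentTitle) {
  // "skill: good first issue" -> "Good First Issue"
  const level = skillLabel
    .replace(/^skill:\s*/, '')
    .replace(/\b\w/g, (c) => c.toUpperCase());
  // Drop any existing "[...]: " prefix before adding the new one.
  const bare = currentTitle.replace(/^\[[^\]]*\]:\s*/, '');
  return `[${level}]: ${bare}`;
}
```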

Shared Helpers Extraction

Extracted reusable logic from assign.js and unassign.js into new shared helper modules:

  • helpers/api.js — swapLabels, postComment, acknowledgeComment, hasLabel, getLabelsByPrefix
  • helpers/logger.js — createDelegatingLogger for consistent log delegation across commands
  • helpers/constants.js — Added READY_FOR_DEV label constant

Refactored assign.js and unassign.js to import from helpers instead of duplicating logic.

Bot Dispatcher Update

Updated bot-on-comment.js to route /finalize to the new handler alongside existing /assign and /unassign commands.
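A dispatcher like this is commonly a lookup from the comment's first token to a handler. A minimal sketch under that assumption (handler bodies here are placeholders, not the real command implementations):

```javascript
// Hypothetical sketch of slash-command routing in a comment dispatcher.
const handlers = {
  '/assign': (ctx) => `assign:${ctx.user}`,
  '/unassign': (ctx) => `unassign:${ctx.user}`,
  '/finalize': (ctx) => `finalize:${ctx.user}`,
};

function dispatch(commentBody, ctx) {
  const command = commentBody.trim().split(/\s+/)[0];
  const handler = handlers[command];
  return handler ? handler(ctx) : null; // unknown commands are ignored
}
```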

Documentation

| File | Description |
|---|---|
| `docs/contributing/issue-types.md` | Explains the three issue types and when to use each |
| `docs/maintainers/guidelines-triage.md` | Step-by-step triage guide: labeling checklist, `/finalize` usage, edge cases |
| `CONTRIBUTING.md` | Updated to reference the new templates and `/finalize` workflow |

Testing

Comprehensive snapshot-based test suite added in test-finalize-bot.js (778 lines).

Test coverage includes:

  • Permission check — unauthorized (read-only, non-collaborator)
  • Permission check — API error
  • Label validation — all violation types (missing skill, extra skill, missing priority, missing kind, extra kind, Feature with kind, wrong status, unknown type)
  • Multiple simultaneous violations
  • Title prefix logic — no existing prefix, replace existing prefix, all skill levels
  • Body reconstruction — all skill levels, original content preservation
  • Update failure (API error during issue update)
  • Label swap failure
  • Successful finalization (all skill levels)

Updated test-assign-bot.js to import helpers from the refactored location.

Test Plan:

  • All existing assign/unassign bot tests pass
  • All finalize bot tests pass (snapshot-verified)
  • Bot dispatcher routes /finalize correctly

Files Changed Summary

Added (7 files)

  • .github/scripts/commands/finalize.js
  • .github/scripts/commands/finalize-comments.js
  • .github/scripts/helpers/api.js
  • .github/scripts/helpers/logger.js
  • .github/ISSUE_TEMPLATE/task.yml
  • docs/contributing/issue-types.md
  • docs/maintainers/guidelines-triage.md

Modified (8 files)

  • .github/ISSUE_TEMPLATE/bug.yml
  • .github/ISSUE_TEMPLATE/feature.yml
  • .github/scripts/bot-on-comment.js
  • .github/scripts/commands/assign.js
  • .github/scripts/commands/unassign.js
  • .github/scripts/helpers/constants.js
  • .github/scripts/tests/test-assign-bot.js
  • CONTRIBUTING.md

Removed (5 files)

  • .github/ISSUE_TEMPLATE/01_good_first_issue_candidate.yml
  • .github/ISSUE_TEMPLATE/02_good_first_issue.yml
  • .github/ISSUE_TEMPLATE/03_beginner_issue.yml
  • .github/ISSUE_TEMPLATE/04_intermediate_issue.yml
  • .github/ISSUE_TEMPLATE/05_advanced_issue.yml

Breaking Changes

None for SDK consumers. This PR touches only GitHub workflow tooling (issue templates, bot scripts, and documentation). There are no changes to the C++ SDK source, public API, or build system.

Existing issues already labeled status: ready for dev are unaffected. New issues will now use the three-template system and require maintainer triage before becoming available for contributors.

Signed-off-by: Rob Walworth <robert.walworth@swirldslabs.com>
@rwalworth rwalworth self-assigned this Mar 25, 2026
@rwalworth rwalworth added the status: needs review The pull request is ready for maintainer review label Mar 25, 2026
@rwalworth rwalworth requested review from a team as code owners March 25, 2026 21:22
@rwalworth rwalworth linked an issue Mar 25, 2026 that may be closed by this pull request
@github-actions

Hey @rwalworth 👋 thanks for the PR!
I'm your friendly PR Helper Bot 🤖 and I'll be riding shotgun on this one, keeping track of your PR's status to help you get it approved and merged.

This comment updates automatically as you push changes -- think of it as your PR's live scoreboard!
Here's the latest:


PR Checks

  • DCO Sign-off -- All commits have valid sign-offs. Nice work!
  • GPG Signature -- All commits have verified GPG signatures. Locked and loaded!
  • Merge Conflicts -- No merge conflicts detected. Smooth sailing!
  • Issue Link -- Linked to #1278 (assigned to you).

🎉 All checks passed! Your PR is ready for review. Great job!

- type: dropdown
id: os

- type: textarea
Contributor


For these optional sections, if the user did not fill them in, can we not post them to the issue? Maybe one day you can adjust your edit bot to clear the optional content artefacts

Contributor Author


This is already handled by the /finalize bot. The isMeaningfulContent() function in finalize-comments.js returns false for content that is blank, _No response_, or the literal text Optional. When reconstructBody() runs, it skips any section that doesn't pass that check.

So after a maintainer runs /finalize, the reconstructed issue body will only contain sections that have meaningful content - no orphaned "Optional." placeholders or empty _No response_ sections.

The issue before triage (while it still has status: awaiting triage) will show GitHub's default rendering including empty sections, which is expected. The cleanup happens at finalization time.
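The emptiness check described above can be sketched roughly as follows. This is an assumption about the shape of `isMeaningfulContent()`, based only on the behavior stated in this thread (blank, `_No response_`, or the literal text "Optional" count as absent); the real function in finalize-comments.js may differ in detail:

```javascript
// Sketch of the section-emptiness check described in the thread (assumed shape).
function isMeaningfulContent(section) {
  const text = (section || '').trim();
  if (text === '') return false;               // blank section
  if (text === '_No response_') return false;  // GitHub's empty-field marker
  if (/^optional\.?$/i.test(text)) return false; // untouched "Optional." placeholder
  return true;
}
```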

description: What version of the software are you using?
label: Version
placeholder: v0.1.0
label: ✔️ Acceptance Criteria
Contributor


this section is confusing to me as a bug reporter, maybe i experienced a bug but do not understand its mechanism

you could perhaps add e.g. "what were you expecting?" but then this duplicates r86

Contributor Author


The intent of the Acceptance Criteria section is: the pre-populated checkboxes are generic enough to apply to nearly every bug fix, so reporters can leave them as-is without understanding the fix mechanism. They don't need to fill this in - it's pre-filled for them:

  • The described behavior is no longer reproducible
  • A regression test is added
  • No unrelated changes introduced
  • All existing tests pass

That said, I can make this clearer by updating the description field to say something like:

These are the standard acceptance criteria for a bug fix. You can leave them as-is — a maintainer will refine them during triage if needed.

This distinguishes it from the "Expected Behavior" field (what the reporter observes) vs this field (what a developer needs to verify is fixed).

name: Bug report
description: Create a report to help us improve
labels: [ bug ]
name: Bug Report
Contributor


what if the user submits a blank issue? many people do, as for some it is easier. will you want to support blank issues, and append content to them after they're labelled?

Contributor Author


Blank issues are fully supported in this workflow. When a maintainer runs /finalize on a blank issue, parseSections() in finalize-comments.js finds no meaningful content, and reconstructBody() builds the full template structure from scratch. The isMeaningfulContent() function treats empty sections, _No response_, and Optional. as absent - so the finalized body will have the complete template scaffolding with placeholder content ready for the assignee to fill in.

The one thing that changes from GitHub's perspective: GitHub still requires required: true fields to be non-empty at submission time. For the blank-issue case, contributors who click past the form will get a GitHub validation error on required fields. This is intentional - it is important to at least add a description of the issue. But if someone pastes a single sentence in the description and leaves everything else blank, that's a valid blank-ish issue and /finalize will handle it cleanly.

placeholder: |
![Screenshot](bug.png)
label: ✅ Expected Behavior
description: What should have happened?
Contributor


beginner bug experiencers may not know what should have happened, sometimes they just know they got blocked

Contributor Author


Fair point - the label "✅ Expected Behavior" with the terse description "What should have happened?" might feel daunting if you don't know the intended behavior. I'll soften the description to something like:

What should have happened? Even a simple "I expected no error / no crash / the function to return a value" is helpful.

The embedded example block (hidden in the final submission) already shows a concrete before/after, which helps. But making the field description more encouraging for beginners is a good improvement - I'll fold that in.

- Search [existing issues](https://github.com/hiero-ledger/hiero-sdk-cpp/issues) to see if this has been proposed before
- For large or protocol-level changes, consider opening a [Hiero Improvement Proposal](https://github.com/hiero-ledger/hiero-improvement-proposals) or starting a [GitHub Discussion](https://github.com/hiero-ledger/hiero-sdk-cpp/discussions) first

A well-described feature request makes it much easier for maintainers to evaluate, plan, and implement.
Contributor


community?

Contributor Author


I'll expand "maintainers" to "maintainers and the project team" here to better reflect that the broader community is also involved in evaluating and implementing features.

label: Problem
description: What is the problem you are trying to solve?
placeholder: I'm always frustrated when ...
label: 👾 Problem Description
Contributor


i know the original template says problem first and this is just my 2 cents, but in my mind when i propose a feature, i have an idea i want to share first, then the reasons why

Contributor Author


That's a valid perspective. The "problem-first" ordering here is intentional - it's a technique borrowed from product and open source best practices to avoid the XY problem (proposing a specific solution before the underlying need is understood). When we understand the problem clearly, we can sometimes find a better solution than the one originally proposed.

That said, I take your point that it can feel unnatural when you come to the form with a specific idea in mind. I'll add a note to the description field. This acknowledges the idea-first flow while keeping the problem grounding.

label: Alternatives
description: What alternative solutions have you considered?

No newline at end of file
label: 🔄 Alternatives Considered
Contributor


should a feature request include architectural weighing-up or implementation considerations? in my mind, i'd just have an idea, with reasons why (not a solution either)

for a problem-solution pairing i would see it as an 'enhancement.yml' or 'improvement.yml'

Contributor Author


The "Alternatives Considered" and "Implementation Notes" sections are both required: false, so feature proposers with a simple idea can leave them completely blank - the form will still submit, and /finalize will strip those empty sections from the reconstructed body.

The sections are there to accommodate proposers who have already thought through alternatives or have strong implementation intuitions (which is common from SDK users who have deep knowledge of the codebase). For everyone else, blank is fine.

On the "enhancement vs feature" distinction - that's an interesting framing. For improvements or enhancements to existing behavior (no new public API), the Task template is the right fit and a kind: enhancement label should be applied during triage. The Feature template is reserved for genuinely new capabilities. So the distinction you're describing maps to Task + kind: enhancement vs Feature, rather than needing a separate "enhancement.yml".

attributes:
label: 👩‍💻 Implementation Notes
description: |
Any hints about where or how this might be implemented in the SDK?
Contributor


saying hints is a good approach, perhaps the implementation and other-approaches sections can be merged into optional hints?

Contributor Author


Interesting thought, though I'm not entirely convinced a merged "Optional Hints" section would be simpler in practice - a single freeform hints field might actually be harder to fill in than two focused prompts, since contributors wouldn't have the specific questions to anchor on. The current separation also lets the bot handle each section independently when reconstructing the body.

That said, I'd rather let real usage guide the decision. If the two-section structure is causing confusion in actual submissions, consolidating them becomes a much easier call. I'll keep an eye on it.

description: What does "done" look like for this feature?
value: |
- [ ] The feature works as described in the proposal
- [ ] Unit and/or integration tests are added
Contributor


the feature proposer, should they understand the requirements to create the feature, like the exact testing methods/etc? or is this more for the maintainer, and can it be appended in the /finalize?

Contributor Author


The Acceptance Criteria section is pre-populated with generic checkboxes that apply to virtually all feature implementations. Feature proposers don't need to understand the testing methodology - they can leave these as-is. Maintainers review and refine the criteria during triage, and /finalize preserves any edits made to the body before running.

The rationale for including it in the submission form (rather than only adding it during /finalize) is that it signals to the proposer upfront what "done" looks like, which often leads to more complete proposals. But I can make the field description clearer about this being a pre-filled starting point, similar to what I mentioned for the bug template.

More broadly - the more detail a proposer provides across all sections, the more context a maintainer has during triage to push the skill level of the issue down. A well-described feature with clear acceptance criteria and implementation hints can sometimes go from an Intermediate to a Beginner issue, which opens it up to a much wider pool of contributors. So filling in optional sections, even partially, is genuinely encouraged.

---
## Thanks for contributing! 🔧

Tasks cover maintenance, quality, tooling, and improvement work that keeps the
Contributor


i can imagine some confusion distinguishing between tasks and features, because they sound similar re

What needs to be done and why

maybe you can add examples of features in the above template?

Contributor Author


Yeah honestly coming up with the distinction got a bit tough for me too. The task intro mentions "improvement work" which can sound like a feature. I'll add two things:

  1. A brief note in the intro pointing to the new docs/contributing/issue-types.md guide (also in this PR): "Not sure if this is a Task or a Feature? See the Issue Types Guide."

  2. A short "Not a task?" note in the intro, similar to how other templates distinguish types. Something like:

    • Adding new public API or behavior → use the Feature Request template
    • Reporting incorrect behavior → use the Bug Report template
    • Everything else (maintenance, tooling, quality) → this template

The docs/contributing/issue-types.md guide added in this PR goes into detail on the distinctions with examples, but surface-level guidance in the template itself helps contributors self-select without having to read the full guide first.

@exploreriii
Contributor

I really like so many things about this idea, and can see it improves many things. I will need more time to consider this for a review, because there are some situations that have me thinking, and I want to understand what sort of costs can be involved. I assume these are not very important right now, though

Contributor Author

@rwalworth rwalworth left a comment


I really like so many things about this idea, and can see it improves many things. I will need more time to consider this for a review, because there are some situations that have me thinking, and I want to understand what sort of costs can be involved. I assume these are not very important right now, though

Thanks for the detailed review @exploreriii - really helpful!

On the "costs" question, I think we're in good shape on all fronts:

  • API rate limits: Each /finalize invocation makes ~4–5 GitHub REST API calls. GITHUB_TOKEN in Actions allows 1,000 requests/hr per repository, which means we'd need 200+ finalizations in a single hour before coming close to the limit. That's not a realistic concern for typical triage activity.
  • Bot maintenance: This shouldn't be a significant ongoing burden. A lot of the complexity is now extracted into shared helpers (api.js, logger.js), and the snapshot tests mean we'll catch regressions early without needing to manually revisit the logic. Once the implementation stabilizes, there's not much reason to touch it.
  • Workflow complexity: For the most part, this is how we already handle issue submissions but now it's streamlined and more correct. The old approach of having contributors self-select a skill level was backwards; they don't know what skill level the issue is, and those labels carry real meaning for the backlog. This gives the project team proper control over how issues are categorized, prioritized, and surfaced to contributors - which directly affects the roadmap and contributor experience.

Happy to dig into any specifics if something is still giving you pause!
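The rate-limit arithmetic above checks out as a back-of-envelope calculation (the ~4-5 calls and 1,000 requests/hour figures are from the comment itself):

```javascript
// Back-of-envelope check: ~5 API calls per /finalize against
// GITHUB_TOKEN's 1,000 requests/hour per-repository budget.
const callsPerFinalize = 5;
const hourlyBudget = 1000;
const maxFinalizationsPerHour = Math.floor(hourlyBudget / callsPerFinalize);
console.log(maxFinalizationsPerHour); // 200
```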

placeholder: |
![Screenshot](bug.png)
label: ✅ Expected Behavior
description: What should have happened?
Copy link
Copy Markdown
Contributor Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Fair point - the label "✅ Expected Behavior" with the terse description "What should have happened?" might feel daunting if you don't know the intended behavior. I'll soften the description to something like:

What should have happened? Even a simple "I expected no error / no crash / the function to return a value" is helpful.

The embedded example block (hidden in the final submission) already shows a concrete before/after, which helps. But making the field description more encouraging for beginners is a good improvement - I'll fold that in.

description: What version of the software are you using?
label: Version
placeholder: v0.1.0
label: ✔️ Acceptance Criteria
Copy link
Copy Markdown
Contributor Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

The intent of the Acceptance Criteria section is: the pre-populated checkboxes are generic enough to apply to nearly every bug fix, so reporters can leave them as-is without understanding the fix mechanism. They don't need to fill this in - it's pre-filled for them:

  • The described behavior is no longer reproducible
  • A regression test is added
  • No unrelated changes introduced
  • All existing tests pass

That said, I can make this clearer by updating the description field to say something like:

These are the standard acceptance criteria for a bug fix. You can leave them as-is — a maintainer will refine them during triage if needed.

This distinguishes it from the "Expected Behavior" field (what the reporter observes) vs this field (what a developer needs to verify is fixed).

- type: dropdown
id: os

- type: textarea
Copy link
Copy Markdown
Contributor Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

This is already handled by the /finalize bot. The isMeaningfulContent() function in finalize-comments.js returns false for content that is blank, _No response_, or the literal text Optional. When reconstructBody() runs, it skips any section that doesn't pass that check.

So after a maintainer runs /finalize, the reconstructed issue body will only contain sections that have meaningful content - no orphaned "Optional." placeholders or empty _No response_ sections.

The issue before triage (while it still has status: awaiting triage) will show GitHub's default rendering including empty sections, which is expected. The cleanup happens at finalization time.

- Search [existing issues](https://github.com/hiero-ledger/hiero-sdk-cpp/issues) to see if this has been proposed before
- For large or protocol-level changes, consider opening a [Hiero Improvement Proposal](https://github.com/hiero-ledger/hiero-improvement-proposals) or starting a [GitHub Discussion](https://github.com/hiero-ledger/hiero-sdk-cpp/discussions) first

A well-described feature request makes it much easier for maintainers to evaluate, plan, and implement.
Copy link
Copy Markdown
Contributor Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

I'll expand "maintainers" to "maintainers and the project team" here to better reflect that the broader community is also involved in evaluating and implementing features.

label: Problem
description: What is the problem you are trying to solve?
placeholder: I'm always frustrated when ...
label: 👾 Problem Description
Copy link
Copy Markdown
Contributor Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

That's a valid perspective. The "problem-first" ordering here is intentional - it's a technique borrowed from product and open source best practices to avoid the XY problem (proposing a specific solution before the underlying need is understood). When we understand the problem clearly, we can sometimes find a better solution than the one originally proposed.

That said, I take your point that it can feel unnatural when you come to the form with a specific idea in mind. I'll add a note to the description field. This acknowledges the idea-first flow while keeping the problem grounding.

label: Alternatives
description: What alternative solutions have you considered?

No newline at end of file
label: 🔄 Alternatives Considered
Copy link
Copy Markdown
Contributor Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

The "Alternatives Considered" and "Implementation Notes" sections are both required: false, so feature proposers with a simple idea can leave them completely blank - the form will still submit, and /finalize will strip those empty sections from the reconstructed body.

The sections are there to accommodate proposers who have already thought through alternatives or have strong implementation intuitions (which is common from SDK users who have deep knowledge of the codebase). For everyone else, blank is fine.

On the "enhancement vs feature" distinction - that's an interesting framing. For improvements or enhancements to existing behavior (no new public API), the Task template is the right fit and a kind: enhancement label should be applied during triage. The Feature template is reserved for genuinely new capabilities. So the distinction you're describing maps to Task + kind: enhancement vs Feature, rather than needing a separate "enhancement.yml".

attributes:
label: 👩‍💻 Implementation Notes
description: |
Any hints about where or how this might be implemented in the SDK?
Copy link
Copy Markdown
Contributor Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Interesting thought, though I'm not entirely convinced a merged "Optional Hints" section would be simpler in practice - a single freeform hints field might actually be harder to fill in than two focused prompts, since contributors wouldn't have the specific questions to anchor on. The current separation also lets the bot handle each section independently when reconstructing the body.

That said, I'd rather let real usage guide the decision. If the two-section structure is causing confusion in actual submissions, consolidating them becomes a much easier call. I'll keep an eye on it.

description: What does "done" look like for this feature?
value: |
- [ ] The feature works as described in the proposal
- [ ] Unit and/or integration tests are added
Copy link
Copy Markdown
Contributor Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

The Acceptance Criteria section is pre-populated with generic checkboxes that apply to virtually all feature implementations. Feature proposers don't need to understand the testing methodology - they can leave these as-is. Maintainers review and refine the criteria during triage, and /finalize preserves any edits made to the body before running.

The rationale for including it in the submission form (rather than only adding it during /finalize) is that it signals to the proposer upfront what "done" looks like, which often leads to more complete proposals. But I can make the field description clearer about this being a pre-filled starting point, similar to what I mentioned for the bug template.

More broadly - the more detail a proposer provides across all sections, the more context a maintainer has during triage to push the skill level of the issue down. A well-described feature with clear acceptance criteria and implementation hints can sometimes go from an Intermediate to a Beginner issue, which opens it up to a much wider pool of contributors. So filling in optional sections, even partially, is genuinely encouraged.

---
## Thanks for contributing! 🔧

Tasks cover maintenance, quality, tooling, and improvement work that keeps the
Contributor Author

Yeah, honestly, coming up with the distinction got a bit tough for me too. The task intro mentions "improvement work", which can sound like a feature. I'll add two things:

  1. A brief note in the intro pointing to the new docs/contributing/issue-types.md guide (also in this PR): "Not sure if this is a Task or a Feature? See the Issue Types Guide."

  2. A short "Not a task?" note in the intro, similar to how other templates distinguish types. Something like:

    • Adding new public API or behavior → use the Feature Request template
    • Reporting incorrect behavior → use the Bug Report template
    • Everything else (maintenance, tooling, quality) → this template

The docs/contributing/issue-types.md guide added in this PR goes into detail on the distinctions with examples, but surface-level guidance in the template itself helps contributors self-select without having to read the full guide first.

Signed-off-by: Rob Walworth <robert.walworth@swirldslabs.com>
Contributor

@gsstoykov gsstoykov left a comment

Overall looks good.
Btw, is the plan to require people to use these commands? We should keep in mind that a random person hopping into the SDK would just want to see an issue and contribute, not have to learn all of this first.

Comment on lines +54 to +59
| Level | Scope | Expected time | Who it's for |
|---|---|---|---|
| `skill: good first issue` | Single file, step-by-step | 1–4 hours | First-time contributors |
| `skill: beginner` | 1–3 related files | 4–8 hours | Contributors with 2 completed GFIs |
| `skill: intermediate` | Multiple modules | 1–3 days | Contributors familiar with the codebase |
| `skill: advanced` | Repository-wide | 3+ days | Experienced contributors |
Contributor

I don't really like the Scope and Expected time columns. The number of files doesn't really define complexity: a GFI could be a typo fix across 5 files, and that wouldn't make it beginner/intermediate. The expected time would vary as well; a really simple issue could take more than 1–4 hours, while an advanced one that requires a lot of know-how could be resolved in hours, not days. I know these columns provide rough estimates, but I'd still disagree with how we measure these. Maybe we should think of some other metric.

Contributor Author

Fair point on both counts, and I've had the same thoughts. I kept them since these rough parameters are used in multiple repos, but I agree they may warrant further discussion.

I went ahead and dropped the "Scope" column since file count is a poor proxy for complexity (a doc update can touch 10 files and still be trivial; a focused algorithmic change in one file can be genuinely hard). Added a note under the table clarifying that the time estimates are rough orientation guides rather than targets, and that the linked guidelines give the full criteria for choosing a level.

cc @exploreriii

Comment on lines +63 to +70
| Label | When to use |
|---|---|
| `kind: enhancement` | Improving existing functionality without adding new public API |
| `kind: documentation` | README, guides, API doc comments |
| `kind: refactor` | Code restructuring without behavior change |
| `kind: security` | Security-related improvements or hardening |
| `kind: testing` | Adding or improving tests |
| `kind: maintenance` | Dependencies, build system, CI/CD |
Contributor

This seems odd. Maybe change the Label column header to "Kind".

Contributor Author

Renamed the column header to "Kind" to match the label group it documents.

Signed-off-by: Rob Walworth <robert.walworth@swirldslabs.com>
@rwalworth
Contributor Author

rwalworth commented Mar 27, 2026

Overall looks good. Btw, is the plan to require people to use these commands? We should keep in mind that a random person hopping into the SDK would just want to see an issue and contribute, not have to learn all of this first.

The /finalize command is meant to be a tool that the project team can use during triage before an issue becomes ready for contributors. By the time a contributor can take an issue in the backlog, it's already been finalized and structured.

From a contributor's perspective, the experience will be exactly the same as before:

  1. Find an issue labeled status: ready for dev
  2. Comment /assign (which is directly stated in every issue template)
  3. Submit a PR

There won't be much visible change for contributors beyond the new submission templates, which no longer let the issue submitter decide the skill level of the issue.
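As a rough illustration only (not the actual implementation in this PR), the triage-side validate-then-swap step that /finalize performs could be sketched like this; the label group names and status labels below are assumptions based on this PR's description, not code from finalize.js:

```javascript
// Hypothetical sketch of /finalize's triage check. The label names here are
// assumptions taken from this PR's description, not the real bot code.
const REQUIRED_GROUPS = ["skill", "priority", "kind"];

// Given an issue's current labels, decide whether /finalize may proceed,
// and if so, which status labels to remove and add.
function planFinalize(labels) {
  // Triage requires one label from each required group (e.g. "skill: beginner").
  const missing = REQUIRED_GROUPS.filter(
    (group) => !labels.some((label) => label.startsWith(`${group}:`))
  );
  if (missing.length > 0) {
    // Triage is incomplete: report which label groups are still missing.
    return { ok: false, missing };
  }
  // All groups present: swap the triage status for the ready-for-dev status.
  return {
    ok: true,
    remove: ["status: needs triage"],
    add: ["status: ready for dev"],
  };
}
```

A real command would then call the GitHub API to apply the `remove`/`add` lists and rewrite the issue body; keeping the decision itself in a pure function like this is one way to make the snapshot-based tests mentioned above straightforward.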


Labels

status: needs review The pull request is ready for maintainer review


Development

Successfully merging this pull request may close these issues.

[Advanced]: Improve Issue Submission and Triage Workflow
