| name | repo-research |
|---|---|
| description | Researches an open-source repository to find potential improvements, enhancements, bug fixes, and feature opportunities. Use when asked to "find things to contribute", "research this repo", "look for improvements", "find bugs to fix", "what can I work on", or "identify contribution opportunities". |
Systematically analyse a repository to identify actionable improvements that could become issues or pull requests.
Prefer the GitHub MCP server tools when available. Fall back to the gh CLI when MCP is not configured.
| Action | MCP tool | CLI fallback |
|---|---|---|
| Search issues | `search_issues` | `gh issue list --search` |
| List open issues | `list_issues` | `gh issue list` |
| List PRs | `list_pull_requests` | `gh pr list` |
| Search code | `search_code` | `gh search code` |
| Read files | `get_file_contents` | `gh api` or clone + read |
Before looking for new opportunities, understand what the community already knows:
- Use `search_issues` (MCP) or `gh issue list` to read open issues — note recurring themes, stale issues, and `good first issue`/`help wanted` labels.
- Read recent closed issues and merged PRs — understand what changes are being accepted and what gets rejected.
- Check Discussions or mailing lists if available — maintainers often share roadmap hints there.
Produce a short summary of:
- Active areas of development
- Known pain points the community has raised
- Topics maintainers have explicitly said they want help with
Scan the codebase for common improvement signals:
| Signal | Where to look | Opportunity type |
|---|---|---|
| `TODO`, `FIXME`, `HACK`, `XXX` comments | Grep across source files | Bug fix / enhancement |
| Deprecated API usage | Import statements, compiler warnings | Enhancement |
| Missing or outdated dependencies | Dependency files, security advisories | Maintenance |
| Inconsistent error handling | Source files | Bug fix / enhancement |
| Dead code or unused exports | Static analysis, IDE warnings | Cleanup |
| Hardcoded values that should be configurable | Source files, config | Enhancement |
For each finding, note the file, line, and a one-sentence description of the issue.
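The comment-marker row of the signals table can be checked with a single recursive grep. The sketch below runs against a throwaway sample file so it is self-contained; the `/tmp/repo-scan` path and the Go file are illustrative stand-ins for a real checkout.

```shell
# Create a sample tree standing in for a cloned repo (illustrative only).
mkdir -p /tmp/repo-scan/src
cat > /tmp/repo-scan/src/list.go <<'EOF'
// TODO: handle pagination past the first page
// FIXME: off-by-one when limit == 0
EOF

# -r recurse, -n print line numbers, -E enable alternation over the markers.
# Each match prints as file:line:comment — exactly the file/line reference
# the findings table asks for.
grep -rnE 'TODO|FIXME|HACK|XXX' /tmp/repo-scan/src
```

In a real repository, point the grep at the source root and exclude vendored directories (e.g. with `--exclude-dir`).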
Identify testing gaps:
- Missing test files — source files with no corresponding test file.
- Untested edge cases — functions that handle errors, boundary conditions, or configuration variants without test coverage.
- Flaky or skipped tests — tests marked as `skip`, `pending`, or `xfail`.
- Integration test gaps — features that only have unit tests but interact with external systems.
If a coverage report is available (e.g., `coverage.html`, a Codecov badge), reference it.
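The "missing test files" check above can be sketched as a shell loop that pairs each source file with an expected test file. The `src/`/`tests/` layout and the `test_` prefix are assumed conventions — adjust them to the project's actual naming scheme.

```shell
# Build a sample layout (illustrative): two source files, one test file.
mkdir -p /tmp/cov-gap/src /tmp/cov-gap/tests
touch /tmp/cov-gap/src/parser.py /tmp/cov-gap/src/render.py
touch /tmp/cov-gap/tests/test_parser.py

# Report any source file with no test_<name> counterpart.
for f in /tmp/cov-gap/src/*.py; do
  base=$(basename "$f")
  [ -f "/tmp/cov-gap/tests/test_$base" ] || echo "no test for $base"
done
# → no test for render.py
```

Each line of output is a candidate "Tests" row for the findings table.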
Look for documentation issues:
- Outdated README — setup instructions that no longer work, broken links, screenshots of old UI.
- Missing API docs — public functions/types without doc comments.
- Incomplete examples — example code that doesn't compile or is out of date.
- Translation gaps — if the project supports i18n, check for missing or stale translations.
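The broken-link check for a README can be approximated locally for relative links (external URLs need a network check and are skipped here). This is a rough sketch: the regex only catches inline-style `[text](target)` links, and the sample README is illustrative.

```shell
# Sample README with one valid and one broken relative link (illustrative).
mkdir -p /tmp/doc-check
cat > /tmp/doc-check/README.md <<'EOF'
See [the install guide](docs/install.md) and [the license](LICENSE).
EOF
touch /tmp/doc-check/LICENSE

cd /tmp/doc-check
# Extract ](target) spans, strip the brackets, and test each relative
# target for existence; external links are skipped.
grep -oE '\]\([^)#]+\)' README.md | tr -d '()]' | while read -r link; do
  case "$link" in http*|mailto:*) continue ;; esac
  [ -e "$link" ] || echo "broken link: $link"
done
# → broken link: docs/install.md
```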
Flag potential concerns (without making false claims):
- Dependency vulnerabilities — check if `dependabot`, `renovate`, or similar tools are enabled; note any open security PRs.
- Obvious performance issues — N+1 queries, unbounded allocations, missing pagination on list endpoints.
- Missing input validation — user-facing endpoints or CLI commands that accept untrusted input without sanitisation.
Only report findings you can substantiate with specific code references.
Present all findings in a structured table:
| # | Category | Summary | File(s) | Impact | Effort |
|---|-------------|----------------------------------|-------------------|--------|--------|
| 1 | Bug fix | Off-by-one in pagination logic | api/list.go:42 | High | Low |
| 2 | Enhancement | Add retry logic to HTTP client | pkg/client.go | Medium | Medium |
| 3 | Docs | Broken install link in README | README.md:15 | Low | Low |
Category values: Bug fix, Enhancement, Feature, Docs, Tests, Maintenance, Performance, Security
Impact: How much this affects users or contributors (High / Medium / Low)
Effort: Estimated size of the change (Low / Medium / High)
Sort by impact descending, then effort ascending (high-impact, low-effort items first).
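The sort rule above can be applied mechanically by mapping High/Medium/Low to numbers. This sketch uses a pipe-delimited scratch file holding the three sample findings from the table; the file path and delimiter are arbitrary choices.

```shell
# Findings as id|category|summary|impact|effort (sample rows from above).
cat > /tmp/findings.psv <<'EOF'
2|Enhancement|Add retry logic to HTTP client|Medium|Medium
3|Docs|Broken install link in README|Low|Low
1|Bug fix|Off-by-one in pagination logic|High|Low
EOF

# Prepend numeric sort keys (impact, effort), sort impact descending then
# effort ascending, then strip the keys again.
awk -F'|' 'BEGIN { r["High"]=3; r["Medium"]=2; r["Low"]=1 }
           { print r[$4] "|" r[$5] "|" $0 }' /tmp/findings.psv |
  sort -t'|' -k1,1nr -k2,2n | cut -d'|' -f3-
```

The high-impact, low-effort pagination bug surfaces first, matching the intended triage order.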
Based on the findings, recommend 3-5 concrete next steps the user should take. Prioritise:
- Items labelled `good first issue` or `help wanted` by maintainers
- High-impact, low-effort findings from the analysis
- Items aligned with the project's stated roadmap or recent activity
For each recommendation, state whether it should become an issue, a PR, or a discussion post.