Thank you for your interest in contributing to NeuroLink! This document provides guidelines and instructions for contributing to this project.
By participating in this project, you agree to abide by our Code of Conduct. See the CODE_OF_CONDUCT.md file for details (to be added).
- Fork the repository on GitHub
- Clone your fork locally
- Add the upstream repository as a remote to keep your fork in sync:
git remote add upstream https://github.com/juspay/neurolink.git
- Create a new branch for your changes:
git checkout -b feature/your-feature-name
- Node.js (version 18 or higher)
- pnpm (preferred package manager)
- Install dependencies:
pnpm install
- Set up environment variables (for local testing): copy .env.example to .env and add your API keys:
cp .env.example .env
- Run the development server:
pnpm dev
- Make your changes
- Run tests:
pnpm test
- Build the package:
pnpm build
NeuroLink enforces enterprise-grade code quality with automated validation that runs on every commit. All contributions must pass these quality gates:
When you make a commit, the following checks run automatically:
# These run automatically via Husky pre-commit hooks:
- ESLint validation (must pass with 0 errors)
- Prettier formatting (auto-fixes)
- Build validation checks
- Security scanning
- Environment validation
- Semantic commit format validation
If any check fails, your commit will be blocked with clear error messages explaining how to fix the issues.
All commits must follow semantic commit conventions with required scope:
# ✅ CORRECT FORMAT:
feat(providers): add LiteLLM integration support
fix(cli): resolve configuration loading issue
docs(readme): update installation instructions
test(providers): add OpenAI provider validation tests
# ❌ INCORRECT (will be blocked):
add new feature # Missing type and scope
feat: add feature # Missing required scope
update docs # Missing type and scope
Required format: type(scope): description
Valid types: feat, fix, docs, style, refactor, test, chore, build, ci, perf, revert
Required scope: Must specify the area of change (providers, cli, docs, etc.)
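For intuition, the required format can be expressed as a regular expression. The sketch below is illustrative only; the actual validation lives in the repo's Husky/validation scripts, whose exact rules may differ:

```typescript
// Illustrative check for the type(scope): description format.
// NOT the project's actual hook; the real rules may differ in detail.
const SEMANTIC_COMMIT =
  /^(feat|fix|docs|style|refactor|test|chore|build|ci|perf|revert)\([a-z0-9-]+\): .+/;

function isValidCommit(message: string): boolean {
  return SEMANTIC_COMMIT.test(message);
}

console.log(isValidCommit("feat(providers): add LiteLLM integration support")); // true
console.log(isValidCommit("feat: add feature")); // false (missing required scope)
console.log(isValidCommit("add new feature")); // false (missing type and scope)
```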
Your code must pass these validation checks:
- ✅ No hardcoded API keys or secrets
- ✅ No dependency vulnerabilities (high/critical)
- ✅ Proper .gitignore patterns
- ✅ Environment variables documented in .env.example
- ✅ No console.log statements in production code (use logger instead)
- ✅ TypeScript strict mode compliance
- ✅ ESLint rules compliance (0 errors tolerance)
- ✅ Proper error handling patterns
- ✅ TODO/FIXME comments must reference issues
- ✅ All environment variables documented
- ✅ .env.example completeness
- ✅ Configuration consistency checks
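As a sketch of the "proper error handling" and "use logger instead of console.log" expectations, code in this spirit should pass the checks above. The logger here is a stand-in for the project's actual logger utility, and all names (including the issue number) are illustrative:

```typescript
// Illustrative only: a stand-in logger mimicking '../utils/logger.js'.
const logger = {
  info: (msg: string) => process.stdout.write(`[info] ${msg}\n`),
  error: (msg: string) => process.stderr.write(`[error] ${msg}\n`),
};

// Wrap a risky operation: log the failure through the logger (never
// console.log) and return a safe fallback instead of crashing.
function withHandling<T>(op: () => T, fallback: T): T {
  try {
    return op();
  } catch (err) {
    // TODO(#123): narrow the error type (illustrative issue number; TODO
    // comments must reference an issue, per the checklist above)
    logger.error(err instanceof Error ? err.message : String(err));
    return fallback;
  }
}

const ok = withHandling(() => JSON.parse('{"a":1}').a as number, -1);
const bad = withHandling(() => JSON.parse("not json").a as number, -1);
```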
Before committing, you can run these commands manually to check your code:
# Full validation pipeline
pnpm run validate:all
# Individual checks
pnpm run validate # Build validation
pnpm run validate:env # Environment checks
pnpm run validate:security # Security scanning
pnpm run validate:commit # Test commit message format
# Quality metrics
pnpm run quality:metrics # Get quality score
pnpm run quality:report # Generate detailed report
# Pre-commit simulation
pnpm run check:all # Run all pre-commit checks manually
Commit blocked with semantic format error:
# Fix your commit message format
git commit --amend -m "feat(providers): add new provider support"
ESLint errors:
# Auto-fix linting issues
pnpm run lint --fix
Security scan failures:
# Check for issues and get detailed report
pnpm run validate:security
Console.log detected:
# Replace console.log with logger
import { logger } from '../utils/logger.js';
logger.info('Your message here');
Environment variables not documented:
# Add missing variables to .env.example with descriptions
MISSING_VAR=example_value # Description of what this does
All pull requests must pass the CI/CD pipeline which includes:
- ✅ Validation Job: All custom validation scripts
- ✅ Security Audit: Dependency vulnerability scanning
- ✅ Build Verification: TypeScript compilation + CLI testing
- ✅ Test Coverage: Comprehensive test suite
- ✅ AI Code Review: Automated GitHub Copilot analysis
Pull requests that fail CI/CD will be blocked from merging.
Your contributions should maintain or improve the codebase quality score. Check your impact with:
pnpm run quality:metrics
Target: Maintain score above baseline and ideally improve it.
- Commit your changes following the semantic commit format:
git commit -m "feat(providers): add support for new provider"
Remember: All commits must follow the type(scope): description format with required scope as documented above. The pre-commit hooks will validate this automatically.
- Push to your fork:
git push origin feature/your-feature-name
- Submit a Pull Request to the main repository
- Address review comments if any are provided
This project enforces strict coding standards automatically:
- TypeScript strict mode - Type safety is mandatory
- ESLint v9 - Advanced linting with zero-error tolerance
- Prettier - Consistent code formatting (auto-applied)
- Professional security scanning - Gitleaks integration for secret detection
- Build validation - Custom checks for console statements, API leaks, etc.
Style enforcement is automatic via pre-commit hooks. Manual checks:
pnpm run check:all # Run all validation checks
pnpm lint # ESLint validation
pnpm format # Prettier formatting
Note: The build rule enforcement system will automatically prevent commits that don't meet quality standards. See the "Build Rule Enforcement & Quality Standards" section above for complete details.
NeuroLink has a comprehensive testing suite to ensure reliability across all AI providers and features. Please add tests for any new features or bug fixes.
All test suites run via tsx (not vitest, despite a vitest.config.ts existing in the repo). Suites are real integration runs against test/continuous-test-suite-*.ts orchestrators.
# Main suite — orchestrates the full integration run
pnpm test
# CI pipeline (test + test:client)
pnpm test:ci
# Domain-specific suites
pnpm test:client # SDK client suite
pnpm test:context # Context compaction + file handling
pnpm test:mcp # MCP HTTP transport
pnpm test:rag # RAG (chunking, search, reranking)
pnpm test:providers # Provider validation
pnpm test:new-providers # DeepSeek, NVIDIA NIM, LM Studio, llama.cpp
pnpm test:media # Media generation (image, video)
pnpm test:memory # Memory persistence
pnpm test:tts # Text-to-speech providers
pnpm test:voice # Multi-provider voice (TTS + STT)
pnpm test:voice-server # Real-time voice agent server
pnpm test:observability # Tracing + telemetry
pnpm test:hitl # Human-in-the-loop workflows
pnpm test:credentials # Per-request credentials
pnpm test:evaluation # Evaluation scorers
pnpm test:middleware # Middleware chain
pnpm test:workflow # Workflow engine
pnpm test:ppt # PowerPoint generation
pnpm test:servers # HTTP server adapters
pnpm test:tracing # OpenTelemetry traces
pnpm test:proxy # Claude proxy
pnpm test:bugfixes # Regression fixtures
# Run a single suite directly
npx tsx test/continuous-test-suite-<name>.ts
test/
├── continuous-test-suite.ts # Main orchestrator (pnpm test)
├── continuous-test-suite-<domain>.ts # Per-domain suites (mcp, rag, voice, etc.)
├── continuous-test-suite-issue-NN-*.ts # Regression fixtures for tracked issues
└── fixtures/ # CSVs, PDFs, PNG, JSON used by suites
Each domain suite is a self-contained tsx script that exits non-zero on failure. There is no vitest runner; do not write *.test.ts files using vi.mock or describe/it blocks — they will not be picked up.
- Add to the closest existing suite (e.g. provider work → continuous-test-suite-providers.ts)
- Or create a new suite test/continuous-test-suite-<name>.ts and add a matching test:<name> script in package.json
- Test error scenarios and edge cases — assertions inside the suite throw on failure
- Real integration runs preferred — these suites hit live provider APIs when credentials are present
Before running tests, ensure your environment is properly configured:
# 1. Install dependencies
pnpm install
# 2. Build the project
pnpm build
# 3. Set up environment variables (optional for mocked tests)
cp .env.example .env
# Add your API keys for integration testing
# 4. Verify setup
pnpm cli --version
# Run the main suite
pnpm test
# Focus on a specific domain suite
npx tsx test/continuous-test-suite-providers.ts
npx tsx test/continuous-test-suite-rag.ts
# Run an individual issue regression suite
npx tsx test/continuous-test-suite-issue-01-model-access.ts
# Complete CI pipeline (test + test:client + test:hitl)
pnpm test:ci
# Individual domain validations
pnpm test:providers # 21+ provider validation
pnpm test:performance # Performance benchmarks (tools/testing/performanceMonitor.ts)
pnpm test:voice # Voice (TTS/STT) validation
pnpm test:rag # RAG pipeline validation
| Test Category | Expected Duration | Use Case |
|---|---|---|
| Basic Tests | 30-60 seconds | Quick validation |
| Provider Tests | 1-2 minutes | Provider compatibility |
| MCP Tests | 1-3 minutes | Tool integration |
| Performance Tests | 2-5 minutes | Benchmarking |
| Full Test Suite | 5-10 minutes | Complete validation |
Tests timeout or fail:
# Run an individual suite directly with tsx
npx tsx test/continuous-test-suite-<name>.ts
# Check environment setup
pnpm run env:validate
Provider-specific failures:
# Test a specific provider via CLI
pnpm cli generate "test" --provider google-ai
# Run full provider validation suite
pnpm test:providers
Build-related test failures:
# Clean and rebuild
pnpm clean
pnpm build
pnpm test
We maintain integration coverage across:
- ✅ Core functionality — All primary features tested
- ✅ Provider integration — All 21+ AI providers validated
- ✅ Voice pipeline — TTS, STT, and realtime voice servers
- ✅ Error handling — Graceful failure scenarios
- ✅ MCP integration — Tool orchestration and configuration
- ✅ CLI functionality — Command-line interface validation
- ✅ SDK features — Software development kit testing
- ✅ Regressions — Pinned issue suites (continuous-test-suite-issue-*.ts)
Sometimes you need a quick manual sanity-check outside the automated test-suite. Use the following examples as copy-paste snippets:
# Basic generation with default provider
pnpm cli generate "Hello world" --provider google-ai
# Streaming
pnpm cli stream "Count to 5" --provider google-ai
# Analytics / evaluation
pnpm cli generate "Test analytics" --provider google-ai --enable-analytics --format json
# Loop through all built-in providers (bash)
for p in openai google-ai anthropic bedrock vertex; do
pnpm cli generate "quick test" --provider "$p" || break
done
// Save the snippet below as an ES module (e.g. sdk-check.mjs) and run: node sdk-check.mjs
// (`node -e "<snippet>"` won't work here: import statements and top-level await need module context)
import { NeuroLink } from "./dist/lib/neurolink.js";
const sdk = new NeuroLink();
const res = await sdk.generate({
input: { text: "Hello SDK" },
provider: "google-ai",
enableAnalytics: true,
});
console.log("✅ Content:", res.content.slice(0, 50));
console.log("✅ Analytics:", !!res.analytics);test/utils/streamingDebug.ts– analyse stream behaviour, timing and chunking.test/utils/visualRunner.ts– colour-coded progress & markdown reports.
These helpers are optional but invaluable when diagnosing flaky streaming or long-running suites.
When contributing tests, follow these guidelines:
- Test real scenarios - Use realistic inputs and expected outputs
- Mock external dependencies - Don't rely on external API calls in unit tests
- Test error conditions - Verify graceful handling of failures
- Use descriptive names - Test names should clearly describe what's being tested
- Keep tests focused - Each test should verify one specific behavior
- Add performance assertions - Include timing expectations where relevant
For examples of well-structured tests, refer to existing test files in the test/ directory.
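One way to follow the "mock external dependencies" guideline without a mocking framework (remember: vi.mock is unavailable since there is no vitest runner) is plain dependency injection. A hypothetical sketch, with all names illustrative:

```typescript
// Hypothetical sketch: inject the provider call so a unit test never
// hits a live API.
type Generate = (prompt: string) => string;

function summarize(text: string, generate: Generate): string {
  return generate(`Summarize: ${text}`);
}

// In a test, substitute a deterministic stub for the real provider:
const stubGenerate: Generate = (p) => `stub(${p})`;
const out = summarize("hello", stubGenerate);
if (out !== "stub(Summarize: hello)") {
  throw new Error("summarize should forward its prompt to the injected provider");
}
```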
For any new features or changes, please update the relevant documentation:
- README.md for general usage
- JSDoc comments for public APIs
- Code examples where appropriate
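For the JSDoc item, a public helper documented in the expected style might look like this (the function itself is hypothetical, not an actual NeuroLink export):

```typescript
/**
 * Truncates generated content for display.
 *
 * @param content - Raw text returned by a provider.
 * @param maxLength - Maximum number of characters to keep (default 80).
 * @returns The content unchanged if it fits, otherwise truncated with an ellipsis.
 */
function truncateContent(content: string, maxLength = 80): string {
  return content.length <= maxLength
    ? content
    : `${content.slice(0, maxLength)}…`;
}
```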
We maintain high-quality documentation with automated formatting checks:
All documentation is validated with markdownlint during CI/CD. To ensure your documentation meets standards:
# Check markdown formatting
npx markdownlint-cli2 "docs/**/*.md"
# Auto-fix formatting issues
npx markdownlint-cli2 --fix "docs/**/*.md"
# Check specific files
npx markdownlint-cli2 "README.md" "CONTRIBUTING.md"
Recommended markdownlint configuration (.markdownlint.json):
{
"default": true,
"MD003": { "style": "atx" },
"MD007": { "indent": 2 },
"MD013": { "line_length": 120 },
"MD024": { "allow_different_nesting": true },
"MD033": { "allowed_elements": ["details", "summary", "br"] },
"MD041": false
}
This configuration ensures:
- ✅ Consistent heading styles (ATX format: # Heading)
- ✅ Proper list indentation (2 spaces)
- ✅ Reasonable line length limits (120 characters)
- ✅ Allows nested headings with same content
- ✅ Permits essential HTML elements for documentation
- ✅ Flexible first-line requirements for complex docs
The docs workflow automatically runs markdownlint on all documentation files. If formatting issues are found:
- Local fixing: Run npx markdownlint-cli2 --fix "docs/**/*.md" locally
- Manual review: Check the CI output for specific formatting violations
- Commit fixes: Include markdown formatting fixes in your contribution
This ensures consistent, professional documentation across the entire project.
The maintainers follow this process for releases:
- Update version in package.json
- Update CHANGELOG.md
- Create a GitHub release
- Publish to npm
If you have any questions, feel free to open an issue or start a discussion on GitHub.
Thank you for contributing to NeuroLink!