Build Guardrails Hub validators that follow the validator-template conventions and match production patterns from guardrails-ai/toxic_language.
- Keep the template repo structure and file names unless there is a strong reason to add files.
- Always register validators with `@register_validator(name="<namespace>/<validator_name>", data_type="<type>")`.
- Always return `PassResult(...)` or `FailResult(...)` from validator logic.
- If validator behavior uses an LLM, always call the model via `litellm` (never direct provider SDKs).
- For LLM-based validators, expose `model: Optional[str]` in `__init__`, and default it to the latest Claude Haiku model identifier available in LiteLLM at authoring time.
- Include tests that prove both pass and fail paths.
- Keep README in validator-card format with install and usage examples.
- Run lint, typecheck, and tests before finalizing.
- Every imported runtime package must be declared in `[project.dependencies]` (no undeclared transitive dependency reliance).
- Every required credential/config key must be listed in `.env` (keys only, no values) and documented in the README.
- If extra install-time assets are required (tokenizers, model files, hub login), implement them in `validator/post-install.py` and document them in the README.
Keep this minimum structure:
```
.github/workflows/pr_qc.yml
Makefile
pyproject.toml
README.md
validator/__init__.py
validator/main.py
tests/test_validator.py
```
Optional:
```
validator/post-install.py
.env
```
Only add runtime-serving files (app.py, inference specs, docker/deploy files) when remote inference hosting is explicitly required.
Implement one validator class:
- Inherit from `Validator`.
- Decorate with `register_validator`.
- `__init__` must forward config to `super().__init__(...)`.
- `validate(self, value, metadata)` must return a `ValidationResult`.
- On fail, use `FailResult(error_message=...)`, optionally with `fix_value=...`, `error_spans=...`, and `metadata=metadata`.
- On pass, use `PassResult()`, optionally with `metadata=metadata`.
- If no metadata is required, accept `metadata` and ignore it safely.
- Add dependency: `litellm` in `pyproject.toml`.
- Accept an optional `model` init argument.
- Resolve the default to the latest Claude Haiku LiteLLM id at implementation time.
- Store the chosen model in `self._model`.
- Route all completions through `litellm` (for example, `litellm.completion(...)` or `litellm.acompletion(...)`).
- Handle timeouts/errors and return deterministic `FailResult` messages when model output is malformed.
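The "deterministic messages on malformed output" rule can be sketched as a small parsing helper. This is an illustrative sketch, not part of the template: the `{"verdict": ..., "reason": ...}` reply shape and the helper name are assumptions, and the validator would map the returned tuple onto `PassResult`/`FailResult`.

```python
import json
from typing import Tuple

# One fixed message for every malformed reply keeps test runs reproducible.
MALFORMED_MSG = "LLM returned malformed output; expected JSON with a 'verdict' key."


def parse_verdict(raw: str) -> Tuple[bool, str]:
    """Parse a strict-JSON model reply like {"verdict": "pass", "reason": "..."}.

    Returns (passed, message). Any malformed output, whatever the cause,
    maps to the single deterministic MALFORMED_MSG.
    """
    try:
        payload = json.loads(raw)
        verdict = payload["verdict"].lower()
    except (json.JSONDecodeError, KeyError, AttributeError, TypeError):
        return False, MALFORMED_MSG
    if verdict == "pass":
        return True, ""
    return False, str(payload.get("reason", "Value failed LLM validation."))
```

Catching the specific parse/shape exceptions (rather than a bare `except`) keeps genuinely unexpected bugs visible while still giving callers one stable failure string.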
Reference skeleton:
```python
from typing import Any, Callable, Dict, Optional

import litellm
from guardrails.validator_base import (
    FailResult,
    PassResult,
    ValidationResult,
    Validator,
    register_validator,
)

DEFAULT_MODEL = "REPLACE_WITH_LATEST_CLAUDE_HAIKU_LITELLM_ID"


@register_validator(name="guardrails/example_validator", data_type="string")
class ExampleValidator(Validator):
    def __init__(
        self,
        model: Optional[str] = None,
        on_fail: Optional[Callable[..., Any]] = None,
        **kwargs: Any,
    ) -> None:
        chosen_model = model or DEFAULT_MODEL
        super().__init__(on_fail=on_fail, model=chosen_model, **kwargs)
        self._model = chosen_model

    def validate(self, value: Any, metadata: Dict[str, Any]) -> ValidationResult:
        try:
            resp = litellm.completion(
                model=self._model,
                messages=[
                    {"role": "system", "content": "Return strict JSON."},
                    {"role": "user", "content": str(value)},
                ],
                temperature=0,
            )
            # Parse resp and decide pass/fail here.
        except Exception as exc:
            return FailResult(error_message=f"LLM validation call failed: {exc}")
        return PassResult()
```

Export only the validator class from `validator/__init__.py`:

```python
from .main import MyValidator

__all__ = ["MyValidator"]
```

Use `Guard` integration tests, not only unit tests of helper methods.
Minimum:
- Pass case asserts `validation_passed is True`.
- Fail case asserts the expected exception/message when `on_fail="exception"`, or asserts the failure summary for a non-exception policy.
- Any custom fix behavior (`fix_value`) has at least one assertion.
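The pass/fail minimum above might be sketched as a pytest module. This assumes `guardrails-ai` is installed and reuses the `ExampleValidator` name from the reference skeleton; which inputs actually pass or fail depends on your validator's logic, so the literal strings here are placeholders.

```python
import pytest

from guardrails import Guard
from validator import ExampleValidator  # local package import during development


def test_pass_case():
    # Integration path: Guard drives the validator end to end.
    guard = Guard().use(ExampleValidator, on_fail="exception")
    outcome = guard.validate("input expected to pass")
    assert outcome.validation_passed is True


def test_fail_case_raises():
    # With on_fail="exception", a failing value must raise.
    guard = Guard().use(ExampleValidator, on_fail="exception")
    with pytest.raises(Exception):
        guard.validate("input expected to fail")
```

For LLM-based validators, consider monkeypatching `litellm.completion` in these tests so pass/fail paths are deterministic and CI does not depend on a live model.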
Include:
- Overview table.
- Intended use.
- Dependencies and required keys/env.
- Install command: `guardrails hub install hub://<namespace>/<validator_name>`
- Python usage example with `Guard`.
- API reference for `__init__` and `validate`, including metadata keys.
Update:
- `name`, `version`, `description`, `authors`.
- Runtime dependencies: `guardrails-ai` plus validator-specific packages, including `litellm` for LLM validators.
- Dev deps: `pyright`, `pytest`, `ruff`.
- `requires-python` consistent with the template baseline unless the validator requires higher.
- Keep the dependency list explicit and minimal; remove unused packages.

Verify that:
- `guardrails-ai` is present in runtime dependencies.
- `litellm` is present for any LLM-based validator.
- Any package imported from `validator/*.py` is declared in runtime dependencies.
- Any package used only by tests/lint/typecheck is declared in dev dependencies.
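A minimal `[project]` block satisfying these rules might look like the following; the package name, version bounds, and Python floor are placeholders to adapt, not prescriptions:

```toml
[project]
name = "validator-example"
version = "0.1.0"
description = "Example Guardrails Hub validator."
authors = [{ name = "Your Name" }]
requires-python = ">=3.9"
dependencies = [
    "guardrails-ai",
    "litellm",  # only needed for LLM-based validators
]

[project.optional-dependencies]
dev = ["pyright", "pytest", "ruff"]
```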
- Required API keys/env vars are listed in `.env` (keys only) and in the README Requirements section.
- If `post-install.py` downloads assets, the README explains what it downloads and why.
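For example, an LLM validator calling Claude through LiteLLM would ship a keys-only `.env` like the one below; `ANTHROPIC_API_KEY` is the variable LiteLLM reads for Anthropic models, so substitute your provider's key name as needed:

```
ANTHROPIC_API_KEY=
```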
Keep targets: dev, lint, type, test, qa.
qa must run:
- lint
- type
- tests
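A minimal sketch of that target layout; the exact tool invocations are assumptions based on the dev dependencies listed above (`ruff`, `pyright`, `pytest`), not mandated commands:

```make
dev:
	pip install -e ".[dev]"

lint:
	ruff check .

type:
	pyright validator

test:
	pytest tests/

qa: lint type test
```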
CI `pr_qc.yml` should install dev deps and run `make qa`.
If hosting validator inference remotely:
- Set `has_guardrails_endpoint=True` in `register_validator`.
- Implement local and remote inference paths (for example, `_inference_local`, `_inference_remote`).
- Ensure remote response parsing is validated and errors are explicit.
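One way to keep remote parsing explicit is a small helper that rejects unexpected payload shapes with a named error instead of failing deep inside the validator. The `{"outputs": [...]}` payload shape and both names here are hypothetical, for illustration only:

```python
from typing import Any, Dict, List


class RemoteInferenceError(ValueError):
    """Raised when the remote endpoint returns an unexpected payload."""


def parse_remote_outputs(payload: Dict[str, Any]) -> List[str]:
    """Extract the hypothetical 'outputs' list, failing loudly on bad shapes."""
    outputs = payload.get("outputs")
    if not isinstance(outputs, list) or not all(isinstance(o, str) for o in outputs):
        raise RemoteInferenceError(
            f"Expected 'outputs' to be a list of strings, got: {type(outputs).__name__}"
        )
    return outputs
```

Raising a dedicated exception type lets `_inference_remote` callers distinguish transport errors from malformed responses and surface an explicit message either way.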
- Add serving files only when requested.
Do not add remote-hosting complexity for local-only validators.
Before submitting:
- Validator imports from the `guardrails.hub` path work (`validator/__init__.py` is correct).
- `make qa` passes locally.
- README install command and class name match the actual package.
- Register name matches repo/package identity.
- LLM validators use LiteLLM only and accept optional model override.
- Default LLM model is set to latest Claude Haiku LiteLLM id at authoring time.
- Use snake_case repo/package names.
- Avoid names starting with `is_` and avoid `bug` in names.
- Register format must be `<namespace>/<validator_name>`.
- Keep validator class names clear and singular (for example, `ToxicLanguage`, `ValidAddress`).