This directory contains example configurations for deploying Neo4j Enterprise clusters using the Neo4j Kubernetes Operator.
Before deploying any examples, ensure you have:
- Neo4j Kubernetes Operator installed in your cluster:
```shell
# Standard deployment (uses local images)
make deploy-dev   # or: make deploy-prod

# Registry-based deployment (requires ghcr.io access)
make deploy-prod-registry
```
- cert-manager v1.18+ with ClusterIssuer (automatically installed in dev/test clusters)
- Appropriate storage classes available in your cluster
- Neo4j Enterprise Edition (evaluation license acceptable for testing)
Note: Development and test clusters created with make dev-cluster or make test-cluster automatically include cert-manager v1.18.5 and a self-signed ClusterIssuer (ca-cluster-issuer) for TLS testing. The operator works with Neo4j Enterprise 5.26+ and 2025.x versions.
All examples require an admin secret. Create it first:
```shell
kubectl create secret generic neo4j-admin-secret \
  --from-literal=username=neo4j \
  --from-literal=password=your-secure-password
```

Choose an example and deploy:
```shell
# Minimal cluster (2 servers for high availability)
kubectl apply -f examples/clusters/minimal-cluster.yaml

# Three-server cluster for production (with TLS)
kubectl apply -f examples/clusters/three-node-cluster.yaml

# Three-server cluster for testing (TLS disabled)
kubectl apply -f examples/clusters/three-node-simple.yaml

# Multi-server cluster for production (with TLS and advanced features)
kubectl apply -f examples/clusters/multi-server-cluster.yaml

# Multi-zone deployment with topology placement
kubectl apply -f examples/clusters/topology-placement-cluster.yaml

# Six-server cluster for large deployments
kubectl apply -f examples/clusters/six-server-cluster.yaml

# Two-server cluster (minimum for high availability)
kubectl apply -f examples/clusters/two-server-cluster.yaml

# Expose via OpenShift Route
kubectl apply -f examples/clusters/route-cluster.yaml
```

Once deployed, access Neo4j through port forwarding:
```shell
# Port forward to the cluster
kubectl port-forward svc/your-cluster-name-client 7474:7474 7687:7687

# Open Neo4j Browser
open http://localhost:7474

# On OpenShift with Routes enabled
oc get route -n <namespace>
```

All clusters automatically use Kubernetes Discovery for cluster member discovery. The operator handles all configuration automatically:
- RBAC Resources:
  - ServiceAccount: `{cluster-name}-discovery`
  - Role: `{cluster-name}-discovery` (with service list permissions)
  - RoleBinding: `{cluster-name}-discovery`
- Discovery Services:
  - Discovery service: `{cluster-name}-discovery` (ClusterIP with `neo4j.com/clustering=true` label)
  - Headless service: `{cluster-name}-headless` (for pod-to-pod communication)
- Neo4j Configuration:

  ```properties
  dbms.cluster.discovery.resolver_type=K8S
  dbms.kubernetes.label_selector=neo4j.com/clustering=true
  dbms.kubernetes.discovery.v2.service_port_name=tcp-discovery
  dbms.cluster.discovery.version=V2_ONLY
  ```
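For context, the RBAC objects the operator creates are conventional Kubernetes Role/RoleBinding resources. A hedged sketch of what they might look like for a cluster named `my-cluster` (illustrative only; the exact manifests, rules, and verbs are generated by the operator):

```yaml
# Sketch of the discovery RBAC (assumed shape, cluster name "my-cluster" is a placeholder)
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: my-cluster-discovery
rules:
  - apiGroups: [""]
    resources: ["services"]   # discovery only needs to list/watch services
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: my-cluster-discovery
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: my-cluster-discovery
subjects:
  - kind: ServiceAccount
    name: my-cluster-discovery
```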
- ✅ Dynamic discovery - automatic adaptation to scaling
- ✅ Cloud-native integration - uses Kubernetes API
- ✅ Zero configuration - no manual setup required
- ✅ Automatic RBAC - proper security permissions
No manual discovery configuration needed or supported! Simply deploy a cluster and the operator handles everything. Any manual discovery settings in spec.config are automatically overridden to ensure consistent Kubernetes discovery.
- Use case: Development, testing, minimum high availability
- Topology: 2 servers (minimum for clustering)
- Mode: Server-based clustering (servers self-organize)
- TLS: Disabled for simplicity
- Resources: 2Gi RAM, 500m CPU
- Use case: Production, high availability
- Topology: 3 servers (optimal fault tolerance)
- Mode: Server-based clustering with TLS
- TLS: cert-manager enabled
- Resources: 4Gi RAM, 1 CPU
- Features: Production configuration, monitoring enabled
- Use case: Testing, development, environments without cert-manager
- Topology: 3 servers (optimal fault tolerance)
- Mode: Server-based clustering, TLS disabled
- TLS: Disabled for simplicity
- Resources: 2Gi RAM, 500m CPU
- Features: Testing configuration, quick deployment
- Use case: Read-heavy workloads, horizontal scaling
- Topology: 5 servers (can host databases with read replicas)
- Mode: Server-based clustering for flexible database topologies
- TLS: Disabled for simplicity
- Resources: 3Gi RAM, 750m CPU
- Features: Optimized for read performance
- Use case: Production workload with advanced features
- Topology: 5 servers (automatic role organization)
- Mode: Server-based clustering with automatic discovery
- TLS: cert-manager enabled
- Resources: 4Gi RAM, 2 CPU
- Features: LoadBalancer service, automatic RBAC, production config
- Use case: Multi-zone production deployment with placement constraints
- Topology: 3 servers with topology spread constraints
- Mode: Server-based clustering with anti-affinity rules
- TLS: cert-manager enabled
- Resources: 4Gi RAM, 2 CPU
- Features: Zone distribution, topology constraints, fault tolerance
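As an illustration, zone distribution of this kind is normally expressed with standard Kubernetes topology spread constraints. A sketch of such a constraint at the pod-spec level (the field names are from the core Kubernetes API; the label and cluster name are assumptions, not necessarily what the operator's CRD uses):

```yaml
# Illustrative pod-spec topology spread constraint (not operator CRD syntax)
topologySpreadConstraints:
  - maxSkew: 1                                   # at most 1 pod imbalance between zones
    topologyKey: topology.kubernetes.io/zone     # spread across availability zones
    whenUnsatisfiable: DoNotSchedule             # hard constraint
    labelSelector:
      matchLabels:
        neo4j.com/cluster: my-cluster            # assumed label; "my-cluster" is a placeholder
```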
The operator now allows even numbers of primary nodes but issues warnings about reduced fault tolerance. Understanding these implications is crucial for production deployments.
| Configuration | Fault Tolerance | Use Case | Recommendation |
|---|---|---|---|
| 2 Servers | None | Development/Testing | ✅ Minimum for clustering |
| 3 Servers | ✅ 1 node failure | Production | ✅ Recommended minimum |
| 4 Servers | ⚠️ 1 node failure (same as 3) | Not recommended | ⚠️ Prefer 3 or 5 servers |
| 5 Servers | ✅ 2 node failures | High availability | ✅ Mission-critical |
| 6 Servers | ⚠️ 2 node failures (same as 5) | Not recommended | ⚠️ Prefer 5 or 7 servers |
| 7+ Servers | ✅ 3+ node failures | Maximum availability | ✅ Extreme requirements |
When deploying with even numbers of servers, the operator will emit warnings:
```
Warning: Even number of servers (4) may reduce fault tolerance.
For optimal cluster quorum, consider using an odd number (3, 5, or 7) of servers.
```
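The arithmetic behind these warnings is plain majority quorum: a cluster of n members needs ⌊n/2⌋ + 1 of them to agree, so an even server count survives no more failures than the odd count just below it. A quick sketch:

```python
def quorum(n: int) -> int:
    """Majority quorum size for a cluster of n members."""
    return n // 2 + 1

def fault_tolerance(n: int) -> int:
    """Number of member failures survivable while quorum is still reachable."""
    return n - quorum(n)

# 4 servers need 3 for quorum, so only 1 failure is survivable -- same as 3 servers.
for n in range(2, 8):
    print(f"{n} servers: quorum={quorum(n)}, tolerates {fault_tolerance(n)} failure(s)")
```

This is why the table recommends odd sizes: going from 3 to 4 servers adds cost without adding fault tolerance.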
- Use odd numbers of servers for production
- 2 servers minimum for property sharding deployments (3+ recommended for HA)
- Scale with databases, not excessive servers
- Monitor cluster health continuously
- Test failover scenarios regularly
For detailed fault tolerance analysis, see: Fault Tolerance Guide
Update the storage configuration for your environment:
```yaml
storage:
  className: your-storage-class  # e.g., gp2, standard, fast-ssd
  size: "50Gi"                   # Adjust based on data requirements
```

Adjust resource allocation based on your workload:
```yaml
resources:
  requests:
    memory: "4Gi"  # Initial allocation
    cpu: "1"
  limits:
    memory: "8Gi"  # Maximum allocation
    cpu: "4"
```

For development/testing, use the automatically configured self-signed issuer:
```yaml
tls:
  mode: cert-manager
  issuerRef:
    name: ca-cluster-issuer  # Self-signed issuer for development
    kind: ClusterIssuer
```

For production, replace with your own ClusterIssuer:
```yaml
tls:
  mode: cert-manager
  issuerRef:
    name: letsencrypt-prod  # Your production issuer
    kind: ClusterIssuer
```

Add Neo4j-specific settings:
```yaml
config:
  dbms.logs.query.enabled: "INFO"
  dbms.transaction.timeout: "60s"
  metrics.enabled: "true"
```

| Use Case | Servers | Database Topologies | Notes |
|---|---|---|---|
| Development | 2 | Simple databases | Minimum for clustering |
| Testing | 2-3 | Various topologies | Test different configurations |
| Small Production | 3 | 1-2 primaries, 0-1 secondaries | Minimal HA cluster |
| Large Production | 5-7 | Multiple databases with different topologies | Flexible infrastructure |
| Read-Heavy | 5+ | Databases with read replicas | Horizontal read scaling |
Neo4j clusters use parallel pod startup with coordinated formation:
- Parallel Startup: All server pods start simultaneously for faster deployment
- Discovery Phase: Servers discover each other via Kubernetes service discovery
- Self-Organization: Servers automatically form cluster and assign roles as needed
- Total Time: Typical cluster formation completes in 2-3 minutes
| Phase | Activity | Timing |
|---|---|---|
| Resource Creation | StatefulSets, Services, ConfigMaps | 0-30 seconds |
| Pod Startup | All pods start in parallel | 30-60 seconds |
| Cluster Formation | Coordination and membership | 1-3 minutes |
Note: The operator uses parallel pod management for efficient cluster formation while maintaining data consistency.
- Pod stuck in Pending: Check storage class and PVC binding
- License errors: Verify `NEO4J_ACCEPT_LICENSE_AGREEMENT=yes`
- TLS issues: Ensure cert-manager and the issuer are configured
- Memory issues: Increase resource limits if pods are OOMKilled
- Cluster formation slow: All server pods start in parallel - expect 2-3 minutes total formation time
- Server pods not ready: Check resource availability and network connectivity between pods
```shell
# Check cluster status
kubectl get neo4jenterprisecluster

# View cluster details
kubectl describe neo4jenterprisecluster your-cluster-name

# Check pod logs
kubectl logs -l neo4j.com/cluster=your-cluster-name

# Check operator logs
kubectl logs -n neo4j-operator-system deployment/neo4j-operator-controller-manager
```

- `clusters/` - Production-ready cluster configurations with various topologies
- `standalone/` - Single-node Neo4j deployments for development
- `backup-restore/` - Backup and restore operation examples
- `database/` - Database creation and management examples
- `fleet-management/` - Aura Fleet Management integration examples
- `end-to-end/` - Complete deployment scenarios for production use
- `testing/` - Test configurations used for operator development and validation
For more information, see: