This guide helps you migrate from previous versions of the Neo4j Kubernetes Operator to the latest version with the new CRD structure.
The Neo4j Kubernetes Operator now separates single-node and clustered deployments into two distinct CRDs:
- Neo4jEnterpriseCluster: For clustered deployments requiring high availability
- Neo4jEnterpriseStandalone: For single-node deployments running in standalone mode
Previous behavior (no longer supported):

```yaml
apiVersion: neo4j.neo4j.com/v1alpha1
kind: Neo4jEnterpriseCluster
metadata:
  name: single-node-cluster
spec:
  topology:
    servers: 1
```

New behavior - Choose one of these options:
Option A: Migrate to Neo4jEnterpriseStandalone (recommended for development/testing):

```yaml
apiVersion: neo4j.neo4j.com/v1alpha1
kind: Neo4jEnterpriseStandalone
metadata:
  name: single-node-standalone
spec:
  # Same configuration as before, but without topology
  image:
    repo: neo4j
    tag: "5.26-enterprise"
  storage:
    className: standard
    size: "10Gi"
  # ... other configuration
```

Option B: Migrate to a minimal cluster (recommended for production):
```yaml
apiVersion: neo4j.neo4j.com/v1alpha1
kind: Neo4jEnterpriseCluster
metadata:
  name: minimal-cluster
spec:
  topology:
    servers: 2 # Minimum required
  # ... other configuration
```

Neo4jEnterpriseCluster now enforces minimum topology requirements:
- Minimum: 2 servers (they self-organize into primary and secondary roles)
- Recommended: 3+ servers for production fault tolerance
Invalid configuration (will fail validation):

```yaml
# ❌ This will fail validation
topology:
  servers: 1
```

Valid configurations:

```yaml
# ✅ Minimum cluster topology
topology:
  servers: 2

# ✅ Larger cluster
topology:
  servers: 5
```

All Neo4j 5.26+ deployments now use V2_ONLY discovery mode automatically. You no longer need to configure this manually.
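If your existing manifests pin the discovery mode explicitly, that override can simply be dropped. A hypothetical sketch of what to remove (the `spec.config` field name is an assumption about your operator version; the setting key is the Neo4j 5 discovery setting):

```yaml
spec:
  config:
    # No longer needed on Neo4j 5.26+; the operator configures V2_ONLY
    # discovery automatically. Remove this entry during migration.
    dbms.cluster.discovery.version: "V2_ONLY"
```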
Before:

```yaml
apiVersion: neo4j.neo4j.com/v1alpha1
kind: Neo4jEnterpriseCluster
metadata:
  name: dev-neo4j
spec:
  topology:
    servers: 1
  image:
    repo: neo4j
    tag: "5.26-enterprise"
  storage:
    className: standard
    size: "10Gi"
  resources:
    requests:
      memory: "2Gi"
      cpu: "500m"
  tls:
    mode: disabled
```

After:
```yaml
apiVersion: neo4j.neo4j.com/v1alpha1
kind: Neo4jEnterpriseStandalone
metadata:
  name: dev-neo4j
spec:
  # topology field removed
  image:
    repo: neo4j
    tag: "5.26-enterprise"
  storage:
    className: standard
    size: "10Gi"
  resources:
    requests:
      memory: "2Gi"
      cpu: "500m"
  tls:
    mode: disabled
```

Before:
```yaml
apiVersion: neo4j.neo4j.com/v1alpha1
kind: Neo4jEnterpriseCluster
metadata:
  name: prod-neo4j
spec:
  topology:
    servers: 1
  image:
    repo: neo4j
    tag: "5.26-enterprise"
  storage:
    className: fast-ssd
    size: "50Gi"
  resources:
    requests:
      memory: "4Gi"
      cpu: "2"
  tls:
    mode: cert-manager
    issuerRef:
      name: prod-issuer
      kind: ClusterIssuer
```

After:
```yaml
apiVersion: neo4j.neo4j.com/v1alpha1
kind: Neo4jEnterpriseCluster
metadata:
  name: prod-neo4j
spec:
  topology:
    servers: 2 # Minimum cluster topology
  image:
    repo: neo4j
    tag: "5.26-enterprise"
  storage:
    className: fast-ssd
    size: "50Gi"
  resources:
    requests:
      memory: "4Gi"
      cpu: "2"
  tls:
    mode: cert-manager
    issuerRef:
      name: prod-issuer
      kind: ClusterIssuer
```

No changes required - existing multi-node clusters will continue to work as before:
```yaml
apiVersion: neo4j.neo4j.com/v1alpha1
kind: Neo4jEnterpriseCluster
metadata:
  name: prod-cluster
spec:
  topology:
    servers: 5
  # ... rest of configuration unchanged
```

First, identify what you currently have:
```shell
# List all existing clusters
kubectl get neo4jenterprisecluster -A

# Check topology of each cluster
kubectl get neo4jenterprisecluster -A -o custom-columns=NAME:.metadata.name,NAMESPACE:.metadata.namespace,SERVERS:.spec.topology.servers
```

Before making any changes, create backups:
```shell
# Create backup for each cluster
kubectl apply -f - <<EOF
apiVersion: neo4j.neo4j.com/v1alpha1
kind: Neo4jBackup
metadata:
  name: migration-backup-$(date +%Y%m%d)
  namespace: <your-namespace>
spec:
  target:
    kind: Cluster
    name: <your-cluster-name>
  storage:
    type: pvc
    pvc:
      name: migration-backup-pvc
      storageClassName: standard
      size: 50Gi
EOF
```

For clusters that need to stay clustered:
1. Update the cluster topology (custom resources require `--type merge`):

   ```shell
   kubectl patch neo4jenterprisecluster <cluster-name> --type merge -p '{"spec":{"topology":{"servers":2}}}'
   ```

2. Wait for the additional server to be ready:

   ```shell
   kubectl wait neo4jenterprisecluster <cluster-name> --for=condition=Ready --timeout=600s
   ```

3. Verify the cluster is healthy:

   ```shell
   kubectl get neo4jenterprisecluster <cluster-name> -o yaml
   ```
For production environments requiring zero downtime:
1. Create a new deployment with the correct topology:

   ```shell
   # Create new cluster with minimum topology
   kubectl apply -f new-cluster.yaml
   ```

2. Wait for the new cluster to be ready:

   ```shell
   kubectl wait neo4jenterprisecluster <new-cluster-name> --for=condition=Ready --timeout=600s
   ```

3. Restore data to the new cluster:

   ```shell
   kubectl apply -f restore-to-new-cluster.yaml
   ```

4. Update application connections:

   ```shell
   # Update your application's connection string
   # From: bolt://old-cluster-client:7687
   # To:   bolt://new-cluster-client:7687
   ```

5. Remove the old cluster:

   ```shell
   kubectl delete neo4jenterprisecluster <old-cluster-name>
   ```
For development/testing environments:
1. Create the standalone deployment:

   ```shell
   kubectl apply -f standalone-deployment.yaml
   ```

2. Wait for the standalone instance to be ready:

   ```shell
   kubectl wait neo4jenterprisestandalone <standalone-name> --for=condition=Ready --timeout=300s
   ```

3. Migrate data (if needed). Neo4j 5 uses the `neo4j-admin database dump/load` syntax:

   ```shell
   # Export data from the old cluster
   kubectl exec -it <old-cluster-pod> -- neo4j-admin database dump neo4j --to-path=/tmp
   # Import data into the standalone instance
   kubectl exec -it <standalone-pod> -- neo4j-admin database load neo4j --from-path=/tmp
   ```

4. Update application connections:

   ```shell
   # Update the connection string to point at the standalone service
   ```

5. Remove the old cluster:

   ```shell
   kubectl delete neo4jenterprisecluster <old-cluster-name>
   ```
No changes required - environment variables work the same way in both CRDs.
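For example, a custom environment variable block carries over unchanged between the two kinds. This sketch assumes the CRDs expose a `spec.env` list mirroring the Kubernetes container `env` schema (the field name and the `NEO4J_PLUGINS` variable are illustrative assumptions, not taken from the operator reference):

```yaml
# Works identically under kind: Neo4jEnterpriseCluster
# and kind: Neo4jEnterpriseStandalone
spec:
  env:
    - name: NEO4J_PLUGINS   # hypothetical example variable
      value: '["apoc"]'
```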
Neo4jEnterpriseCluster:
- Clustering configurations are automatically set
- V2_ONLY discovery is automatically configured
- User configurations are merged with cluster defaults
Neo4jEnterpriseStandalone:
- Uses unified clustering infrastructure with single member (Neo4j 5.26+)
- Fixed at 1 replica; scaling out into a cluster is not supported
- User configurations are merged with standalone defaults
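As a sketch of the merge behavior described above (the setting shown is a standard Neo4j 5 memory setting, but the exact `spec.config` field name is an assumption): a user-supplied setting overrides the matching operator default, while settings you omit keep the cluster or standalone defaults.

```yaml
spec:
  config:
    # Overrides the operator's default heap size
    server.memory.heap.max_size: "2g"
    # Settings you omit (e.g. discovery configuration) keep the
    # defaults the operator sets for this deployment type
```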
No changes required - TLS configuration works the same way in both CRDs.
No changes required - authentication configuration works the same way in both CRDs.
```shell
# Check deployment status
kubectl get neo4jenterprisecluster
kubectl get neo4jenterprisestandalone

# Check pod status
# Clusters
kubectl get pods -l neo4j.com/cluster=<cluster-name>
# Standalone
kubectl get pods -l app=<standalone-name>

# Check service endpoints
kubectl get svc -l app.kubernetes.io/name=neo4j
```

```shell
# Test cluster connectivity
kubectl port-forward svc/<cluster-name>-client 7474:7474 7687:7687

# Test standalone connectivity
kubectl port-forward svc/<standalone-name>-service 7474:7474 7687:7687

# Test with Neo4j Browser
open http://localhost:7474

# Connect to Neo4j and run basic queries
cypher-shell -u neo4j -p <password> -a bolt://localhost:7687
```

```cypher
// Run test queries
MATCH (n) RETURN count(n) AS nodeCount;
MATCH ()-[r]->() RETURN count(r) AS relationshipCount;
```

Error: Neo4jEnterpriseCluster requires minimum cluster topology
Solution: Either add another server or migrate to Neo4jEnterpriseStandalone:

```shell
# Option 1: Add a server (custom resources require --type merge)
kubectl patch neo4jenterprisecluster <name> --type merge -p '{"spec":{"topology":{"servers":2}}}'

# Option 2: Migrate to standalone
kubectl apply -f standalone-replacement.yaml
```

Error: Pods restarting due to discovery configuration
Solution: Ensure you're using Neo4j 5.26+ and let the operator handle discovery configuration automatically.
Error: Applications can't connect to new endpoints
Solution: Update application connection strings:
```shell
# For clusters
# Old: bolt://single-node-cluster-client:7687
# New: bolt://minimal-cluster-client:7687

# For standalone
# Old: bolt://single-node-cluster-client:7687
# New: bolt://standalone-neo4j-service:7687
```

Error: Storage not accessible after migration
Solution: Ensure PVCs are properly preserved:
```shell
# Check PVC status
# Clusters
kubectl get pvc -l neo4j.com/cluster=<cluster-name>
# Standalone
kubectl get pvc neo4j-data-<standalone-name>-0

# If needed, manually migrate data (Neo4j 5 syntax; the dump file is
# named after the database, e.g. neo4j.dump)
kubectl exec -it <old-pod> -- neo4j-admin database dump neo4j --to-path=/tmp
kubectl cp <old-pod>:/tmp/neo4j.dump ./neo4j.dump
kubectl cp ./neo4j.dump <new-pod>:/tmp/neo4j.dump
kubectl exec -it <new-pod> -- neo4j-admin database load neo4j --from-path=/tmp
```

If you need to roll back a cluster migration:
- Create backup of current cluster
- Deploy old single-node configuration (if supported in your operator version)
- Restore data from backup
- Update application connections
If you need to roll back a standalone migration:
- Create backup of standalone deployment
- Deploy new cluster with minimum topology
- Restore data to cluster
- Update application connections
- Always backup before migration
- Test in staging environment first
- Use blue-green deployment for zero downtime
- Monitor during migration for issues
- Update monitoring and alerting for new endpoints
- Document changes for your team
- Plan rollback procedures in advance
If you encounter issues during migration:
1. Check logs:

   ```shell
   kubectl logs -l app.kubernetes.io/name=neo4j-operator
   kubectl logs -l neo4j.com/cluster=<cluster-name>
   kubectl logs -l app=<standalone-name>
   ```

2. Check status:

   ```shell
   kubectl describe neo4jenterprisecluster <name>
   kubectl describe neo4jenterprisestandalone <name>
   ```

3. Community support:
After completing your migration:
- Update monitoring dashboards and alerts
- Update documentation and runbooks
- Train your team on the new CRD structure
- Consider proper resource configuration for cluster deployments
- Implement proper backup strategies for your deployment type
For more information, see: