A .NET 10 application with Elasticsearch and Kafka integration for permission management.
This application is a Permission Management System that allows organizations to manage employee permissions efficiently. The system follows a modern microservices architecture with clear separation of concerns.
Backend (Web API - .NET 10):
- Primary Database (SQL Server): Stores all permission data persistently
  - `Permissions` table: Employee permission records (ID, Employee Name, Employee Last Name, Permission Type, Permission Date)
  - `PermissionTypes` table: Available permission types (ID, Description)
- Elasticsearch Integration:
- Automatically synchronizes permission data for fast search and analytics
- Creates and maintains the `n5elastic` index
- Indexes permissions when they are retrieved, created, or modified
- Enables full-text search capabilities on employee names and permission data
- Kafka Integration:
- Logs all operations for audit and event tracking
- Publishes events to the `n5kafka` topic for each operation:
  - `"get"` - When retrieving permissions
  - `"request"` - When creating a new permission
  - `"modify"` - When updating an existing permission
- Enables event-driven architecture and real-time monitoring
- CQRS Pattern: Uses MediatR for command/query separation
- RESTful API: Provides endpoints for:
  - `GET /api/permission` - Retrieve all permissions (syncs to Elasticsearch)
  - `POST /api/permission` - Create a new permission (logs to Kafka, syncs to Elasticsearch)
  - `PUT /api/permission` - Update an existing permission (logs to Kafka, syncs to Elasticsearch)
  - `GET /api/permissionType` - Retrieve all permission types
  - `POST /api/permissionType` - Create a new permission type
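Each of the operations above also emits an audit event to the `n5kafka` topic. A minimal sketch of what such an event payload could look like, in Python rather than the project's C# (the field names `operation`, `timestamp`, and `payload` are assumptions, not the actual message schema):

```python
import json
from datetime import datetime, timezone

# The three operation names published by the API, per the list above.
VALID_OPERATIONS = {"get", "request", "modify"}

def build_audit_event(operation: str, payload: dict) -> str:
    """Serialize a hypothetical audit event for the n5kafka topic."""
    if operation not in VALID_OPERATIONS:
        raise ValueError(f"unknown operation: {operation}")
    event = {
        "operation": operation,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "payload": payload,
    }
    return json.dumps(event)

msg = build_audit_event("request", {"employeeName": "Ada", "permissionTypeId": 1})
```

Consumers downstream can then filter on the operation name to reconstruct the audit trail.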
Frontend (React + Vite):
- Modern React application built with Vite for fast development and optimized builds
- Material-UI components for a polished user interface
- React Router for navigation between pages (Get, Create, Modify)
- User interface for managing permissions
- Consumes the REST API endpoints
- Displays permission lists, forms for creating/editing permissions
- Real-time updates and validation
Create/Update Permission Flow:
- User submits permission data via frontend → Backend API
- Backend validates permission type exists in SQL Server
- Backend saves/updates data in SQL Server (primary storage)
- Backend publishes operation event to Kafka (audit trail)
- Backend synchronizes data to Elasticsearch (search index)
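The five steps above can be sketched as a single handler. This is Python pseudocode with stubbed in-memory dependencies, not the project's MediatR handler; the method names on `db`, `kafka`, and `es` are illustrative only:

```python
def create_permission(request, db, kafka, es):
    # 1-2. Validate that the referenced permission type exists in SQL Server
    if not db.permission_type_exists(request["permissionTypeId"]):
        raise ValueError("unknown permission type")
    # 3. Persist to the primary store first
    saved = db.save_permission(request)
    # 4. Publish the audit event ("request" for create, "modify" for update)
    kafka.publish("n5kafka", {"operation": "request", "id": saved["id"]})
    # 5. Mirror the record into the n5elastic search index
    es.index("n5elastic", saved["id"], saved)
    return saved

class FakeDb:
    def __init__(self):
        self.types = {1}   # seeded permission type IDs
        self.rows = {}
    def permission_type_exists(self, type_id):
        return type_id in self.types
    def save_permission(self, req):
        row = {"id": len(self.rows) + 1, **req}
        self.rows[row["id"]] = row
        return row

class FakeBus:
    def __init__(self):
        self.events = []
    def publish(self, topic, event):
        self.events.append((topic, event))

class FakeIndex:
    def __init__(self):
        self.docs = {}
    def index(self, index_name, doc_id, document):
        self.docs[doc_id] = document

db, bus, idx = FakeDb(), FakeBus(), FakeIndex()
saved = create_permission({"permissionTypeId": 1, "employeeName": "Ada"}, db, bus, idx)
```

Note the ordering: SQL Server is the source of truth, so the row is saved before the Kafka event and the Elasticsearch document are emitted.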
Retrieve Permissions Flow:
- User requests permissions via frontend → Backend API
- Backend publishes "get" event to Kafka (audit trail)
- Backend retrieves data from SQL Server
- Backend synchronizes all permissions to Elasticsearch (bulk sync)
- Backend returns enriched data (with permission type descriptions) to frontend
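The enrichment step above (attaching permission type descriptions before returning the data) amounts to a dictionary join. A sketch in Python; the DTO field names are assumptions:

```python
def enrich(permissions, permission_types):
    """Attach each permission type's description to its permission rows."""
    descriptions = {t["id"]: t["description"] for t in permission_types}
    return [
        {**p, "permissionTypeDescription": descriptions.get(p["permissionTypeId"], "unknown")}
        for p in permissions
    ]

types = [{"id": 1, "description": "Vacation"}, {"id": 2, "description": "Sick Leave"}]
perms = [{"id": 10, "employeeName": "Ada", "permissionTypeId": 2}]
enriched = enrich(perms, types)
# enriched[0]["permissionTypeDescription"] == "Sick Leave"
```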
Search/Analytics Flow:
- Elasticsearch provides fast search capabilities on indexed permission data
- Kafka events can be consumed by other services for real-time monitoring, analytics, or event-driven workflows
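A full-text search against the `n5elastic` index might be expressed as a `multi_match` query over the name fields. A sketch in Python (the indexed field names `employeeName` and `employeeLastName` are assumptions based on the record shape described earlier):

```python
import json

def name_search_query(term: str) -> dict:
    """Build an Elasticsearch multi_match query body over the employee name fields."""
    return {
        "query": {
            "multi_match": {
                "query": term,
                "fields": ["employeeName", "employeeLastName"],
            }
        }
    }

# Would be POSTed to http://localhost:9200/n5elastic/_search
body = json.dumps(name_search_query("Ada"))
```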
sequenceDiagram
participant User
participant Frontend
participant Backend API
participant SQL Server
participant Kafka
participant Elasticsearch
User->>Frontend: Submit permission form
Frontend->>Backend API: POST/PUT /api/permission
Backend API->>Kafka: Publish operation event ("request"/"modify")
Backend API->>SQL Server: Validate permission type exists
SQL Server-->>Backend API: Permission type validated
Backend API->>SQL Server: Save/Update permission
SQL Server-->>Backend API: Permission saved/updated
Backend API->>Elasticsearch: Check/create index
Backend API->>Elasticsearch: Index permission document
Elasticsearch-->>Backend API: Document indexed
Backend API-->>Frontend: Return permission data
Frontend-->>User: Display success message
sequenceDiagram
participant User
participant Frontend
participant Backend API
participant SQL Server
participant Kafka
participant Elasticsearch
User->>Frontend: Request permissions list
Frontend->>Backend API: GET /api/permission
Backend API->>Kafka: Publish "get" event
Backend API->>SQL Server: Query all permissions
Backend API->>SQL Server: Query all permission types
SQL Server-->>Backend API: Return permissions & types
Backend API->>Backend API: Enrich data (join types)
Backend API->>Elasticsearch: Check/create index
Backend API->>Elasticsearch: Bulk index all permissions
Elasticsearch-->>Backend API: Bulk index complete
Backend API-->>Frontend: Return enriched permissions
Frontend-->>User: Display permissions list
graph TB
subgraph ClientLayer["Client Layer"]
User[User]
Frontend[React Frontend]
end
subgraph APILayer["API Layer"]
API[".NET 10 Web API<br/>CQRS + MediatR"]
end
subgraph DataLayer["Data Layer"]
SQL[("SQL Server<br/>Primary Database")]
ES[Elasticsearch<br/>Search Index]
end
subgraph EventLayer["Event Layer"]
Kafka[Apache Kafka<br/>Event Streaming]
end
User -->|HTTP Requests| Frontend
Frontend -->|REST API| API
API -->|CRUD Operations| SQL
API -->|Sync Data| ES
API -->|Publish Events| Kafka
ES -.->|Fast Search| API
Kafka -.->|Event Log| External[External Services]
- .NET 10 SDK
- Docker (for SQL Server, Elasticsearch, Kafka, and Frontend)
- Node.js 20+ and npm (for React frontend development)
Note: If you already have a SQL Server instance running (either locally or in another Docker container), you can skip this section and proceed to Database Setup. The `docker-compose.yml` file does not include SQL Server to avoid port conflicts.
If you want to add SQL Server to your docker-compose.yml, add this service:
sqlserver:
image: mcr.microsoft.com/mssql/server:2022-latest
container_name: n5-sqlserver
environment:
- ACCEPT_EULA=Y
- SA_PASSWORD=StrongPassword123!
- MSSQL_PID=Express
ports:
- "1433:1433"
volumes:
- sqlserver_data:/var/opt/mssql
- ./N5.Scripts:/docker-entrypoint-initdb.d
networks:
- n5-network

Run with:

docker compose up -d sqlserver

Or, without Compose:

docker run -d \
--name n5-sqlserver \
-e "ACCEPT_EULA=Y" \
-e "SA_PASSWORD=StrongPassword123!" \
-e "MSSQL_PID=Express" \
-p 1433:1433 \
-v sqlserver_data:/var/opt/mssql \
-v "$(pwd)/N5.Scripts:/docker-entrypoint-initdb.d" \
mcr.microsoft.com/mssql/server:2022-latest

Verify the container:

# Check if container is running
docker ps | grep n5-sqlserver
# Check logs
docker logs n5-sqlserver
# Connect using sqlcmd (if installed)
sqlcmd -S localhost,1433 -U sa -P "StrongPassword123!" -Q "SELECT @@VERSION"

If using Docker SQL Server (from this guide):
Option A: Using Docker Exec (Recommended)
1. Copy the database script to the container:

   docker cp N5.Scripts/N5Database.sql n5-sqlserver:/tmp/

2. Execute the script:

   docker exec -it n5-sqlserver /opt/mssql-tools18/bin/sqlcmd \
     -S localhost -U sa -P "StrongPassword123!" -C \
     -i /tmp/N5Database.sql
Option B: Using SQL Server Management Studio (SSMS) or Azure Data Studio
1. Connect to SQL Server:
   - Server: `localhost,1433`
   - Authentication: SQL Server Authentication
   - Login: `sa`
   - Password: `StrongPassword123!`

2. Execute the script `N5.Scripts/N5Database.sql`
Option C: Using sqlcmd locally
sqlcmd -S localhost,1433 -U sa -P "StrongPassword123!" -i N5.Scripts/N5Database.sql

If using an existing SQL Server instance:

- Connect to your SQL Server (using SSMS, Azure Data Studio, or sqlcmd)
- Execute the script `N5.Scripts/N5Database.sql`
- Update the connection string in `N5.WebApi/appsettings.json` with your SQL Server details
Update N5.WebApi/appsettings.json with the appropriate connection string:
For Docker SQL Server:
"ConnectionStrings": {
"N5DB": "Server=localhost,1433;Database=N5;User Id=sa;Password=StrongPassword123!;TrustServerCertificate=True;"
}

For existing SQL Server (Windows Authentication):
"ConnectionStrings": {
"N5DB": "Server=YOUR_SERVER\\SQLEXPRESS;Database=N5;Trusted_Connection=True;"
}

For existing SQL Server (SQL Authentication):
"ConnectionStrings": {
"N5DB": "Server=YOUR_SERVER,1433;Database=N5;User Id=YOUR_USER;Password=YOUR_PASSWORD;TrustServerCertificate=True;"
}

Security Note: Replace the default password `StrongPassword123!` with a strong, unique password before using this in production.
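As a sanity check on the format, the SQL-authentication connection string above can be assembled from its parts (a hypothetical helper in Python, not part of the project):

```python
def sql_auth_connection_string(server: str, port: int, database: str,
                               user: str, password: str) -> str:
    """Build a SQL Server connection string in the same shape as appsettings.json."""
    return (
        f"Server={server},{port};Database={database};"
        f"User Id={user};Password={password};TrustServerCertificate=True;"
    )

cs = sql_auth_connection_string("localhost", 1433, "N5", "sa", "StrongPassword123!")
# Matches the Docker SQL Server example above
```

Note that SQL Server connection strings separate host and port with a comma, not a colon.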
Create a docker-compose.yml file in the project root:
services:
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:9.2.4
container_name: n5-elasticsearch
environment:
- discovery.type=single-node
- xpack.security.enabled=true
- ELASTIC_PASSWORD=migusanv
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
ports:
- "9200:9200"
- "9300:9300"
volumes:
- elasticsearch_data:/usr/share/elasticsearch/data
networks:
- n5-network
volumes:
elasticsearch_data:
networks:
n5-network:
driver: bridge

Run with:

docker compose up -d elasticsearch

Or, without Compose:

docker run -d \
--name n5-elasticsearch \
-p 9200:9200 \
-p 9300:9300 \
-e "discovery.type=single-node" \
-e "xpack.security.enabled=true" \
-e "ELASTIC_PASSWORD=migusanv" \
-e "ES_JAVA_OPTS=-Xms512m -Xmx512m" \
docker.elastic.co/elasticsearch/elasticsearch:9.2.4

Verify Elasticsearch is running:

curl -u elastic:migusanv http://localhost:9200

Or visit http://localhost:9200 (username: elastic, password: migusanv).
Update N5.WebApi/appsettings.json:
"ElasticSearch": {
"Host": "http://127.0.0.1",
"Port": "9200",
"Username": "elastic",
"Password": "migusanv",
"Indexname": "n5elastic"
}

Kafka is configured to use KRaft mode (Kafka Raft).
Add to your docker-compose.yml:
kafka:
image: confluentinc/cp-kafka:latest
container_name: n5-kafka
ports:
- "9092:9092"
environment:
KAFKA_PROCESS_ROLES: broker,controller
KAFKA_NODE_ID: 1
KAFKA_CONTROLLER_QUORUM_VOTERS: 1@localhost:9093
KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092,CONTROLLER://0.0.0.0:9093
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT
KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
KAFKA_AUTO_CREATE_TOPICS_ENABLE: true
CLUSTER_ID: MkU3OEVBNTcwNTJENDM2Qk
volumes:
- kafka_data:/var/lib/kafka/data
networks:
- n5-network

Run with:

docker compose up -d kafka

Or, without Compose:

docker run -d \
--name n5-kafka \
-p 9092:9092 \
-e KAFKA_PROCESS_ROLES=broker,controller \
-e KAFKA_NODE_ID=1 \
-e KAFKA_CONTROLLER_QUORUM_VOTERS=1@localhost:9093 \
-e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092,CONTROLLER://0.0.0.0:9093 \
-e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:9092 \
-e KAFKA_INTER_BROKER_LISTENER_NAME=PLAINTEXT \
-e KAFKA_CONTROLLER_LISTENER_NAMES=CONTROLLER \
-e KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT \
-e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
-e KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR=1 \
-e KAFKA_TRANSACTION_STATE_LOG_MIN_ISR=1 \
-e KAFKA_AUTO_CREATE_TOPICS_ENABLE=true \
-e CLUSTER_ID=MkU3OEVBNTcwNTJENDM2Qk \
-v kafka_data:/var/lib/kafka/data \
confluentinc/cp-kafka:latest

Verify Kafka:

# List topics
docker exec -it n5-kafka kafka-topics --list --bootstrap-server localhost:9092
# Create topic (if needed)
docker exec -it n5-kafka kafka-topics --create --topic n5kafka --bootstrap-server localhost:9092 --partitions 1 --replication-factor 1
# Consume messages
docker exec -it n5-kafka kafka-console-consumer --bootstrap-server localhost:9092 --topic n5kafka --from-beginning

Update `N5.WebApi/appsettings.json`:
"Kafka": {
"Host": "localhost:9092",
"Topic": "n5kafka"
}

The docker-compose.yml file includes the Elasticsearch, Kafka (KRaft mode), Web API, and Frontend services. Note: SQL Server is not included; run it separately or point the API at an existing instance.
services:
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:9.2.4
container_name: n5-elasticsearch
environment:
- discovery.type=single-node
- xpack.security.enabled=true
- ELASTIC_PASSWORD=migusanv
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
ports:
- "9200:9200"
- "9300:9300"
volumes:
- elasticsearch_data:/usr/share/elasticsearch/data
networks:
- n5-network
healthcheck:
test: ["CMD-SHELL", "curl -u elastic:migusanv http://localhost:9200 || exit 1"]
interval: 10s
timeout: 5s
retries: 5
kafka:
image: confluentinc/cp-kafka:latest
container_name: n5-kafka
ports:
- "9092:9092"
environment:
KAFKA_PROCESS_ROLES: broker,controller
KAFKA_NODE_ID: 1
KAFKA_CONTROLLER_QUORUM_VOTERS: 1@localhost:9093
KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092,CONTROLLER://0.0.0.0:9093
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT
KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
KAFKA_AUTO_CREATE_TOPICS_ENABLE: true
CLUSTER_ID: MkU3OEVBNTcwNTJENDM2Qk
volumes:
- kafka_data:/var/lib/kafka/data
networks:
- n5-network
healthcheck:
test: ["CMD-SHELL", "kafka-broker-api-versions --bootstrap-server localhost:9092 || exit 1"]
interval: 10s
timeout: 5s
retries: 5
volumes:
elasticsearch_data:
kafka_data:
networks:
n5-network:
driver: bridge

Start all services (Elasticsearch, Kafka, Web API, and Frontend):

docker compose up -d --build

Start only specific services:
# Start only Elasticsearch and Kafka
docker compose up -d elasticsearch kafka
# Start Web API and Frontend (after Elasticsearch and Kafka are running)
docker compose up -d webapi frontend

Stop all services:
docker compose down

Stop and remove volumes (this will delete all data):
docker compose down -v

View logs:
# All services
docker compose logs -f
# Specific service
docker compose logs -f frontend
docker compose logs -f webapi

1. Ensure SQL Server is running:
   - If you have an existing SQL Server container, make sure it's running on port 1433
   - If you need to create a new SQL Server container, see the Running SQL Server with Docker section above
   - Verify SQL Server is accessible:

     docker ps | grep sqlserver
     # or
     sqlcmd -S localhost,1433 -U sa -P "YourPassword" -Q "SELECT @@VERSION"
2. Create the database (if not already created):
   - Connect to your SQL Server instance
   - Execute the script `N5.Scripts/N5Database.sql`
   - See the Database Setup section for detailed instructions
3. Start all services:

   docker compose up -d --build
4. Wait for services to be ready:

   # Check Elasticsearch
   curl -u elastic:migusanv http://localhost:9200
   # Check Kafka
   docker exec -it n5-kafka kafka-topics --list --bootstrap-server localhost:9092
   # Check Web API
   curl http://localhost:8080/swagger
   # Check Frontend
   curl http://localhost:3000
5. Verify all services are running:

   docker ps

   You should see `n5-elasticsearch`, `n5-kafka`, `n5-webapi`, and `n5-frontend` (plus your SQL Server container if running in Docker).
6. Access the application:
- Frontend: http://localhost:3000
- API: http://localhost:8080
- Swagger: http://localhost:8080/swagger
- Open the solution `N5.sln` in Visual Studio or your preferred IDE
- Set `N5.WebApi` as the startup project
- Run the application (F5 or `dotnet run`)

The API will be available at the configured port (usually https://localhost:5001 or http://localhost:5000).
Build the Docker image:
# From the project root directory
docker build -f N5.WebApi/Dockerfile -t n5-webapi .

Run the container (Option A - Join the n5-network):
First, add SQL Server to the n5-network (if not already connected):
docker network connect n5_n5-network sqlserver

Then run the webapi container:
docker run -d \
--name n5-webapi \
--network n5_n5-network \
-p 8080:80 \
-p 8443:443 \
-e ASPNETCORE_ENVIRONMENT=Development \
-e ASPNETCORE_URLS=http://+:80 \
-e 'ConnectionStrings__N5DB=Server=sqlserver,1433;Database=N5;User Id=sa;Password=StrongPassword123!;TrustServerCertificate=True;' \
-e ElasticSearch__Host="http://n5-elasticsearch" \
-e ElasticSearch__Port="9200" \
-e ElasticSearch__Username="elastic" \
-e ElasticSearch__Password="migusanv" \
-e Kafka__Host="n5-kafka:9092" \
n5-webapi

Run the container (Option B - Use host.docker.internal):
docker run -d \
--name n5-webapi \
-p 8080:80 \
-p 8443:443 \
-e ASPNETCORE_ENVIRONMENT=Development \
-e ASPNETCORE_URLS=http://+:80 \
-e ConnectionStrings__N5DB="Server=host.docker.internal,1433;Database=N5;User Id=sa;Password=StrongPassword123!;TrustServerCertificate=True;" \
-e ElasticSearch__Host="http://host.docker.internal" \
-e ElasticSearch__Port="9200" \
-e ElasticSearch__Username="elastic" \
-e ElasticSearch__Password="migusanv" \
-e Kafka__Host="host.docker.internal:9092" \
n5-webapi

Note:
- Option A is recommended if your Elasticsearch and Kafka containers are in the `n5-network` (from docker-compose.yml)
- Option B works if the containers are running on the host machine
- Replace `StrongPassword123!` with your actual SQL Server password if different
Or using Docker Compose (already included in docker-compose.yml):
The docker-compose.yml file includes both webapi and frontend services:
webapi:
build:
context: .
dockerfile: N5.WebApi/Dockerfile
container_name: n5-webapi
ports:
- "8080:80"
- "8443:443"
environment:
- ASPNETCORE_ENVIRONMENT=Development
- ASPNETCORE_URLS=http://+:80
- ConnectionStrings__N5DB=Server=sqlserver,1433;Database=N5;User Id=sa;Password=StrongPassword123!;TrustServerCertificate=True;
- ElasticSearch__Host=http://n5-elasticsearch
- ElasticSearch__Port=9200
- ElasticSearch__Username=elastic
- ElasticSearch__Password=migusanv
- Kafka__Host=n5-kafka:9092
depends_on:
- elasticsearch
- kafka
networks:
- n5-network
frontend:
build:
context: ./N5.Presentation
dockerfile: Dockerfile
args:
VITE_API_END_POINT: http://n5-webapi:80
container_name: n5-frontend
ports:
- "3000:80"
depends_on:
- webapi
networks:
- n5-network

Note:
- When running in Docker, use container names (e.g., `sqlserver`, `n5-elasticsearch`, `n5-kafka`, `n5-webapi`) when containers are in the same network, or `host.docker.internal` (Windows/Mac only) for services on the host
- Important for WSL/Linux: `host.docker.internal` doesn't work on Linux/WSL. You must connect SQL Server to the same network (`docker network connect n5_n5-network sqlserver`) and use the container name (`sqlserver`) in the connection string
- The API will be available at http://localhost:8080 (HTTP) and https://localhost:8443 (HTTPS)
- The Frontend will be available at http://localhost:3000
- Swagger UI is available at http://localhost:8080/swagger (only in Development mode)
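The `ConnectionStrings__N5DB` and `ElasticSearch__Host` variables used above work because ASP.NET Core's environment-variable configuration provider treats a double underscore as the `:` hierarchy separator. A small Python mimic of that documented rule (not project code):

```python
def env_to_config_key(env_name: str) -> str:
    """Mimic ASP.NET Core's env-var-to-configuration-key mapping: '__' becomes ':'."""
    return env_name.replace("__", ":")

# ConnectionStrings__N5DB overrides the ConnectionStrings:N5DB key from appsettings.json
key = env_to_config_key("ConnectionStrings__N5DB")
```

This is why the Docker environment variables can override the nested JSON settings without editing `appsettings.json`.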
1. Navigate to the `N5.Presentation` folder:

   cd N5.Presentation

2. Install dependencies (if not already installed):

   npm install

3. Update the environment configuration file `N5.Presentation/environments/.dev.env` if needed:

   VITE_API_END_POINT=http://localhost:8080

4. Start the development server:

   npm run dev

   For the local environment (port 5000):

   npm run dev:local

5. Open http://localhost:5173 in your browser

Note: Vite uses port 5173 by default (not 3000 like Create React App).
Using Docker Compose (Recommended):
The docker-compose.yml includes a frontend service. To start all services including the frontend:
docker compose up -d --build

The frontend will be available at http://localhost:3000
Build the Docker image manually:
# From the N5.Presentation directory
docker build -t n5-frontend .

Run the container:
docker run -d \
--name n5-frontend \
-p 3000:80 \
--network n5_n5-network \
n5-frontend

Environment Variables for Docker:
The frontend Dockerfile accepts a build argument for the API endpoint:
docker build \
--build-arg VITE_API_END_POINT=http://n5-webapi:80 \
-t n5-frontend \
./N5.Presentation

Note:
- When running in Docker, the frontend connects to the API using the Docker service name (`n5-webapi`)
- The frontend is served via Nginx in production mode
- For development, use `npm run dev` locally
If you need to reset the database (delete from Permissions first so the foreign key to PermissionTypes is not violated):

DELETE FROM Permissions;
DELETE FROM PermissionTypes;
DBCC CHECKIDENT (Permissions, RESEED, 0);
DBCC CHECKIDENT (PermissionTypes, RESEED, 0);

N5/
├── N5.Application/ # Application layer (CQRS, MediatR)
├── N5.Domain/ # Domain entities and mappings
├── N5.Infrastructure/ # Infrastructure (Repositories, Services)
├── N5.WebApi/ # Web API (Controllers, Startup)
├── N5.Presentation/ # React frontend
├── N5.Test/ # Unit tests
└── N5.Scripts/ # Database scripts
- .NET 10 - Backend framework
- Entity Framework Core 10 - ORM
- MediatR - CQRS pattern implementation
- Elasticsearch 9.2.4 - Search and analytics engine
- Apache Kafka - Event streaming platform
- SQL Server - Database
- React 19 - Frontend framework
- Vite 6 - Build tool and development server
- Material-UI (MUI) 6 - UI component library
- React Router 7 - Client-side routing
- Axios - HTTP client for API requests
- Nginx - Web server for production builds (Docker)
1. List all Kafka topics:
docker exec -it n5-kafka kafka-topics --bootstrap-server localhost:9092 --list

2. Describe the Kafka topic (verify it exists and see details):
docker exec -it n5-kafka kafka-topics \
--bootstrap-server localhost:9092 \
--describe \
--topic n5kafka

3. Consume messages from Kafka in real-time:
docker exec -it n5-kafka kafka-console-consumer \
--bootstrap-server localhost:9092 \
--topic n5kafka \
--from-beginning

4. Check message count in the topic:
docker exec -it n5-kafka kafka-get-offsets \
--bootstrap-server localhost:9092 \
--topic n5kafka

(On older Kafka versions this tool was invoked as `kafka-run-class kafka.tools.GetOffsetShell --broker-list localhost:9092 --topic n5kafka`.)

To test Kafka integration:
- Open a terminal and run the consumer (step 3 above)
- Make an API request (e.g., `GET http://localhost:8080/api/permissionType`)
- You should see a JSON message in the consumer showing the operation that was logged
1. Check Elasticsearch cluster health:
curl -u elastic:migusanv "http://localhost:9200/_cluster/health?pretty"

2. List all indices:
curl -u elastic:migusanv "http://localhost:9200/_cat/indices?v"

3. Check if the n5elastic index exists:
curl -u elastic:migusanv "http://localhost:9200/n5elastic?pretty"

4. Search documents in the index:
curl -u elastic:migusanv "http://localhost:9200/n5elastic/_search?pretty"

- Ensure the Elasticsearch container is running: docker ps
- Check Elasticsearch logs: docker logs n5-elasticsearch
- Verify credentials match `appsettings.json`
- Ensure the Kafka container is running: docker ps | grep n5-kafka
- Check Kafka logs: docker logs n5-kafka
- Verify the topic exists: docker exec -it n5-kafka kafka-topics --list --bootstrap-server localhost:9092
- Verify topic details: docker exec -it n5-kafka kafka-topics --bootstrap-server localhost:9092 --describe --topic n5kafka
- Check if messages are being produced: use the consumer command in the "Verifying Services" section above
- If messages aren't appearing, check the webapi logs: docker logs n5-webapi | grep -i kafka
- Ensure the SQL Server container is running: docker ps | grep n5-sqlserver
- Check SQL Server logs: docker logs n5-sqlserver
- Verify the connection string in `appsettings.json` matches the Docker configuration
- Ensure the database and tables have been created from the scripts
- Wait for SQL Server to be fully ready (check its health status)
- On Windows, you might need to use `localhost,1433` instead of `localhost:1433` in connection strings
This project is licensed under the MIT License - see the LICENSE file for details.