By: Milad Roudgarian
GopherNet is a backend service built to manage the rental and lifecycle of gopher burrows in a structured, scalable, and concurrent environment. The system simulates a digital platform where gophers can "rent" underground burrows, each of which evolves over time.
Each burrow has:
- A depth that increases over time (only when rented)
- A fixed width
- A lifespan of 25 days
- One renter allowed per burrow (no sharing)
The system keeps burrow status up to date and allows gophers to rent available ones. It also runs background tasks every minute to update burrow depth and age.
- Go – Core programming language
- Gin – REST API framework
- gRPC – Internal service communication
- Swagger (Go Swag) – Auto-generated API documentation
- Hexagonal Architecture – Clean separation of core logic and infrastructure
- Zap – Fast and structured logging
- Goroutines + Worker Pool – Concurrency pattern for efficient background processing
- File I/O – JSON used for input data, text files for periodic reports
- PostgreSQL – Primary relational database for storing burrow data
- GORM – ORM for Database integration
- Redis – In-memory store for fast access to burrow state or caching
- Docker – Containerized deployment
- Docker Compose – Orchestration for multi-service setup (API, DB, Redis, etc.)
Every 10 minutes, GopherNet generates a report showing:
- Total combined depth
- Number of available burrows
- Largest and smallest burrows by volume
Reports are saved as plain text files.
On termination, the server stops background tasks, flushes logs, and exits cleanly.
- To start the service via Docker Compose:

  ```shell
  make up
  ```

  Note: wait a few seconds for all Docker Compose services to be up.
- The Swagger UI to access the APIs:
  http://localhost:8080/public/swagger/index.html
  The required username and password:
  - Username: admin
  - Password: admin
- All the log files and the reports (txt files) are available in the current path, under the ./docker directory.
The assessment is implemented as a mono-repo with two microservices.
- Startup Tasks:
  - Loads initial seed data from `data/initial.json`
  - Executes database migrations
- CronJobs:
  - Updates (runs every minute):
    - Dug depth
    - Burrow age states
    - Collapsed burrow status
  - Reports (runs every 10 minutes):
    - Generates and exports the insights report (`report.txt`)
- gRPC Server:
  - Implements a unary call that retrieves the latest report, parses its text, and sends it as the response to the gRPC client.
- Many APIs are implemented to handle the items below:
  - Create a new Burrow
  - Rent a free Burrow by the User
  - Vacate the Burrow
  - Show the Burrows list (some filters available via query params)
  - Show the Burrow info
  - Show the last available report (generated by the Scheduler) via gRPC
- All the Burrows' lifetimes (age), occupancy, and collapse are evaluated via Redis TTL in both services.
- If a burrow is vacated, the depth calculation is paused, and it resumes at the next rent.
- Each Burrow is allocated to only one User; likewise, each User can rent only one Burrow.
- The CronJobs are handled by a Goroutine scheduler based on a worker pool, implemented as:
  - Worker pool pattern with configurable concurrency (workers parameter)
  - Thread-safe task management using a mutex (sync.Mutex)
  - Wait group (sync.WaitGroup) for task completion tracking
  - Buffered channel for task distribution
  - Auto-cleanup of completed tasks
To run the services locally, follow these steps:
- Disable the Docker app configuration:
  - Comment out the `docker/app.yml` line in `docker-compose.yml`.
- Start Docker containers:
  - Run the following command to bring up the Docker environment:

    ```shell
    make up
    ```
- Create the required directories:
  - Run the following command to create any directories that may be missing:

    ```shell
    mkdir -p ./api/{docs,logs} && mkdir -p ./scheduler/{assets/reports,logs}
    ```
- Prepare environment files:
  - Generate `.env` files for both services:

    ```shell
    cd ./api/ && make env && cd ./../scheduler && make env && cd ..
    ```

  - In both services, set APP_DEBUG:

    ```shell
    APP_DEBUG=true
    ```
- Generate gRPC protobuf files:
  - In both services, execute:

    ```shell
    cd ./api/ && make proto && cd ./../scheduler && make proto && cd ..
    ```
- Install Go dependencies:
  - In both services, run:

    ```shell
    cd ./api/ && go mod tidy && cd ./../scheduler && go mod tidy && cd ..
    ```
- Generate Swagger documentation (API service only):
  - Run the following in the API service:

    ```shell
    cd ./api/ && make swag && cd ..
    ```
- Run the services:
  - Start each service manually:

    ```shell
    go run ./cmd/main.go
    ```

    Note: Start the Scheduler service before the API service.
To be used as the Kubernetes livenessProbe of the deployment:

```shell
curl -v 0.0.0.0:8080/handshake
```

Result:
```
* Trying 0.0.0.0:8080...
* Connected to 0.0.0.0 (0.0.0.0) port 8080
> GET /handshake HTTP/1.1
> Host: 0.0.0.0:8080
> User-Agent: curl/8.7.1
> Accept: */*
>
* Request completely sent off
< HTTP/1.1 200 OK
< Access-Control-Allow-Credentials: true
< Access-Control-Allow-Headers: Content-Type, Content-Length, Accept-Encoding, X-CSRF-Token, Authorization, accept, origin, Cache-Control, X-Requested-With
< Access-Control-Allow-Methods: POST, GET, OPTIONS, PUT, DELETE, UPDATE
< Access-Control-Allow-Origin: *
< Access-Control-Max-Age: 21600
< Content-Type: application/json; charset=utf-8
< Date: Mon, 26 May 2025 15:17:19 GMT
< Content-Length: 84
<
* Connection #0 to host 0.0.0.0 left intact
{"status":"OK","message":"connection established","timestamp":"2025-05-26 15:17:19"}
```
or

```json
{
  "status": "OK",
  "message": "connection established",
  "timestamp": "2025-05-26 15:16:14"
}
```

Note: Logs are handled with the zap logger and contain both stdout and log-file cores.