Pin is a local pipeline project built with Go and the Docker API. Run pipelines locally or as a daemon with real-time monitoring.
(terminal demo recording from terminalgif.com)
Pin can run as a long-running daemon service with SSE (Server-Sent Events) support for real-time pipeline monitoring and HTTP-triggered execution.
- Long-running service: Keep pin running as a daemon
- HTTP API: Trigger pipelines via REST endpoints
- Real-time events: Monitor pipeline execution via Server-Sent Events
- Remote monitoring: Connect from multiple clients simultaneously
- Production ready: Graceful shutdown and error handling
```bash
# Start daemon mode
pin apply --daemon

# Trigger pipeline from another terminal
curl -X POST -H "Content-Type: application/yaml" \
  --data-binary @pipeline.yaml \
  http://localhost:8081/trigger

# Monitor real-time events
curl -N http://localhost:8081/events
```

| Endpoint | Method | Description |
|---|---|---|
| `/events` | GET | Server-Sent Events stream for real-time updates |
| `/health` | GET | Health check and connected client count |
| `/trigger` | POST | Trigger pipeline execution with YAML config |
| `/` | GET | API information and available endpoints |
The daemon broadcasts various events during pipeline execution:
- daemon_start: Service started successfully
- pipeline_trigger: New pipeline execution requested
- job_container_start: Container started for job
- log: Real-time log messages from jobs
- job_completed: Job finished successfully
- job_failed: Job failed with error details
- pipeline_complete: Entire pipeline finished
- daemon_stop: Service shutting down
```javascript
// Connect to event stream
const eventSource = new EventSource("http://localhost:8081/events");

eventSource.onmessage = function (event) {
  const data = JSON.parse(event.data);
  console.log(`[${data.level}] ${data.message}`);
};

// Events received:
// {"level":"info","message":"Pipeline execution started","job":"build"}
// {"level":"info","message":"Container started","job":"build"}
// {"level":"success","message":"Job completed successfully","job":"build"}
```

```bash
# Run daemon with specific pipeline
pin apply --daemon -f production.yaml

# Run daemon without initial pipeline (HTTP-only mode)
pin apply --daemon

# Monitor from remote machine
curl -N http://your-server:8081/events

# Trigger deployments via API
curl -X POST -H "Content-Type: application/yaml" \
  --data-binary @deployment.yaml \
  http://your-server:8081/trigger
```

You can download the latest release from here.
Clone the repository

```bash
git clone https://github.com/muhammedikinci/pin
```

Download packages

```bash
go mod download
```

Build the executable

```bash
go build -o pin ./cmd/cli/.
```

Or run directly

```bash
go run ./cmd/cli/. apply -n "test" -f ./testdata/test.yaml
```

Pin includes built-in YAML validation to catch configuration errors before pipeline execution.
Pin automatically validates your pipeline configuration before execution:
- ✅ Required fields: Ensures either `image` or `dockerfile` is specified
- ✅ Field types: Validates all fields have correct data types
- ✅ Port formats: Checks port configurations match supported formats
- ✅ Script validation: Ensures scripts are not empty
- ✅ Boolean fields: Validates boolean configurations
```bash
# Valid configuration passes validation
$ pin apply -f pipeline.yaml
Pipeline validation successful
build Starting...

# Invalid configuration shows helpful errors
$ pin apply -f invalid.yaml
Pipeline validation failed: validation error in job 'build': either 'image' or 'dockerfile' must be specified
```

```yaml
workflow:
  - run

logsWithTime: true

# Optional: Specify custom Docker host
docker:
  host: "tcp://localhost:2375"

run:
  image: golang:alpine3.15
  copyFiles: true
  soloExecution: true
  script:
    - go mod download
    - go run .
    - ls
  port:
    - 8082:8080
```

You can create separate jobs like the `run` stage; if you want to run these jobs in the pipeline, you must add their names to `workflow`.
Configure Docker daemon connection settings.
default: system default (usually `unix:///var/run/docker.sock` on Linux/macOS)
Specify a custom Docker host to connect to a different Docker daemon. This is useful for:
- Remote Docker: Connect to Docker running on another machine
- Docker Desktop: Connect to Docker Desktop on different ports
- CI/CD environments: Connect to specific Docker instances
- Development: Switch between local and remote Docker instances
```yaml
# TCP connection to remote Docker daemon
docker:
  host: "tcp://192.168.1.100:2375"

# TCP connection with TLS (secure)
docker:
  host: "tcp://docker.example.com:2376"

# Unix socket (Linux/macOS default)
docker:
  host: "unix:///var/run/docker.sock"

# Windows named pipe
docker:
  host: "npipe://./pipe/docker_engine"

# SSH connection to remote host
docker:
  host: "ssh://user@docker-host"
```

```yaml
# Connect to local Docker Desktop
workflow:
  - build

docker:
  host: "tcp://localhost:2375"

build:
  image: golang:alpine
  script:
    - go build .
```

```yaml
# Connect to remote Docker daemon
workflow:
  - deploy

docker:
  host: "tcp://production-docker:2375"

deploy:
  image: alpine:latest
  script:
    - echo "Deploying to remote Docker"
```

- Use TLS (port 2376) for remote connections in production
- Ensure Docker daemon is properly secured when exposing TCP ports
- Consider using SSH tunneling for secure remote connections
default: false
If you want to copy all project files to the Docker container, you must set this configuration to true.
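A minimal sketch of a job that copies project files into the container (the job name and image here are illustrative, not required values):

```yaml
workflow:
  - build

build:
  image: golang:alpine
  copyFiles: true   # copy the project directory into the container
  script:
    - ls            # the copied project files are visible here
```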
default: false
When you add multiple commands to the script field, the commands run in the container as a single shell script. If soloExecution is set to true, each command runs in a separate shell script.
soloExecution: false

```bash
# shell#1
cd cmd
ls
```

soloExecution: true

```bash
# shell#1
cd cmd
```

```bash
# shell#2
ls
```

If you want to see all files in the `cmd` folder, you must set soloExecution to false, or you can combine the commands:

```bash
# shell#1
cd cmd && ls
```

default: false
logsWithTime => true

```
2022/05/08 11:36:30 Image is available
2022/05/08 11:36:30 Start creating container
2022/05/08 11:36:33 Starting the container
2022/05/08 11:36:35 Execute command: ls -a
```

logsWithTime => false

```
Image is available
Start creating container
Starting the container
Execute command: ls -a
```

default: empty mapping
You can use this feature for port forwarding from container to your machine with flexible host and port configuration.
- Standard format: `"hostPort:containerPort"`
- Custom host format: `"hostIP:hostPort:containerPort"`
```yaml
# Standard port mapping (binds to all interfaces)
port: "8080:80"

# Multiple ports with different configurations
port:
  - "8082:8080"               # Standard format
  - "127.0.0.1:8083:8080"     # Bind only to localhost
  - "192.168.1.100:8084:8080" # Bind to specific IP address

# Mix of standard and custom host formats
run:
  image: nginx:alpine
  port:
    - "8080:80"           # Available on all network interfaces
    - "127.0.0.1:8081:80" # Only accessible from localhost
    - "0.0.0.0:8082:80"   # Explicitly bind to all interfaces
```

- Security: Bind services only to localhost (`127.0.0.1:8080:80`)
- Network isolation: Bind to specific network interfaces (`192.168.1.100:8080:80`)
- Development: Expose different ports for different environments
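The two mapping formats differ only in an optional leading host IP. As a rough sketch (this is illustrative Go, not pin's actual parser), a spec could be split like this, defaulting the host IP to all interfaces when it is omitted:

```go
package main

import (
	"fmt"
	"strings"
)

// splitPortMapping handles the two formats described above:
// "hostPort:containerPort" and "hostIP:hostPort:containerPort".
// Illustrative only; pin's real parsing may differ.
func splitPortMapping(spec string) (hostIP, hostPort, containerPort string) {
	parts := strings.Split(spec, ":")
	switch len(parts) {
	case 2:
		// Standard format: bind to all interfaces by default.
		return "0.0.0.0", parts[0], parts[1]
	case 3:
		// Custom host format with an explicit bind address.
		return parts[0], parts[1], parts[2]
	}
	return "", "", ""
}

func main() {
	fmt.Println(splitPortMapping("8080:80"))             // 0.0.0.0 8080 80
	fmt.Println(splitPortMapping("127.0.0.1:8083:8080")) // 127.0.0.1 8083 8080
}
```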
default: empty mapping
You can use this feature to skip copying specific files in your project to the container.
Sample configuration yaml:

```yaml
run:
  image: node:current-alpine3.15
  copyFiles: true
  soloExecution: true
  port:
    - 8080:8080
  copyIgnore:
    - server.js
    - props
    - README.md
    - helper/.*/.py
```

Actual folder structure in the project:
```
index.js
server.js
README.md
helper:
  - test.py
  - mock:
      test2.py
- api:
    index.js
- props:
    index.js
```

Folder structure in the container:
```
index.js
helper:
  - mock (empty)
- api:
    index.js
```

default: false
If you want to run a job in parallel, you must add the parallel field, and the stage must be in workflow (its position doesn't matter).
```yaml
workflow:
  - testStage
  - parallelJob
  - run
```

```yaml
parallelJob:
  image: node:current-alpine3.15
  copyFiles: true
  soloExecution: true
  parallel: true
  script:
    - ls -a
```

You can specify environment variables for your jobs in the YAML configuration. These variables will be available inside the container during job execution.
Example:

```yaml
workflow:
  - run

run:
  image: golang:alpine3.15
  copyFiles: true
  soloExecution: true
  script:
    - go mod download
    - go run .
    - echo "Environment variables:"
    - echo "MY_VAR: $MY_VAR"
    - echo "ANOTHER_VAR: $ANOTHER_VAR"
  port:
    - 8082:8080
  env:
    - MY_VAR=value
    - ANOTHER_VAR=another_value
```

In this example, the environment variables MY_VAR and ANOTHER_VAR are set and printed during job execution.
Pin supports automatic job retries with configurable parameters for handling transient failures.
default: no retry (attempts: 1)
Configure automatic retry behavior for jobs that fail due to temporary issues like network problems, resource constraints, or external service unavailability.
```yaml
retry:
  attempts: 3  # Number of attempts (1-10, default: 1)
  delay: 5     # Initial delay in seconds (0-300, default: 1)
  backoff: 2.0 # Exponential backoff multiplier (0.1-10.0, default: 1.0)
```

```yaml
# Simple retry - 3 attempts with 2 second delays
workflow:
  - unstable-service

unstable-service:
  image: alpine:latest
  retry:
    attempts: 3
    delay: 2
  script:
    - echo "Attempting to connect to service..."
    - curl https://unstable-api.example.com/health

# Advanced retry with exponential backoff
workflow:
  - network-dependent

network-dependent:
  image: alpine:latest
  retry:
    attempts: 5   # Try 5 times total
    delay: 1      # Start with 1 second delay
    backoff: 2.0  # Double delay each retry (1s, 2s, 4s, 8s)
  script:
    - wget https://external-resource.com/data.zip
```

- Linear delays: With `backoff: 1.0`, delays remain constant
- Exponential backoff: With `backoff > 1.0`, delays increase exponentially
- Failure logging: Each retry attempt is logged with the reason and next attempt time
- Final failure: After all attempts fail, the job fails with the last error
- Network Operations: Downloads, API calls, external service connections
- Resource Competition: Database connections, file locks, temporary resource unavailability
- CI/CD Pipelines: Flaky tests, temporary infrastructure issues
- External Dependencies: Third-party services, cloud resources
You can specify conditions for job execution using the condition field. Jobs will only run if the condition evaluates to true.
Example:

```yaml
workflow:
  - build
  - test
  - deploy

build:
  image: golang:alpine3.15
  copyFiles: true
  script:
    - go build -o app .

test:
  image: golang:alpine3.15
  copyFiles: true
  script:
    - go test ./...

deploy:
  image: alpine:latest
  condition: $BRANCH == "main"
  script:
    - echo "Deploying to production..."
    - ./deploy.sh
```

- Equality: `$VAR == "value"` - Check if variable equals value
- Inequality: `$VAR != "value"` - Check if variable does not equal value
- AND: `$VAR1 == "value1" && $VAR2 == "value2"` - Both conditions must be true
- OR: `$VAR1 == "value1" || $VAR2 == "value2"` - At least one condition must be true
- Variable existence: `$VAR` - Check if variable exists and is not empty/false/0
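To make the operator semantics concrete, here is a minimal evaluator sketch for these expressions (illustrative Go; pin's actual condition parser may differ, e.g. in operator precedence or quoting rules). It splits on `||` first, so `&&` binds tighter:

```go
package main

import (
	"fmt"
	"strings"
)

// evalCondition evaluates a condition string against a variable map.
// Supported: $VAR == "v", $VAR != "v", bare $VAR, combined with && and ||.
func evalCondition(expr string, vars map[string]string) bool {
	for _, orPart := range strings.Split(expr, "||") {
		all := true
		for _, andPart := range strings.Split(orPart, "&&") {
			if !evalClause(strings.TrimSpace(andPart), vars) {
				all = false
				break
			}
		}
		if all {
			return true
		}
	}
	return false
}

func evalClause(clause string, vars map[string]string) bool {
	lookup := func(tok string) string {
		return vars[strings.TrimPrefix(strings.TrimSpace(tok), "$")]
	}
	unquote := func(tok string) string {
		return strings.Trim(strings.TrimSpace(tok), `"`)
	}
	switch {
	case strings.Contains(clause, "=="):
		parts := strings.SplitN(clause, "==", 2)
		return lookup(parts[0]) == unquote(parts[1])
	case strings.Contains(clause, "!="):
		parts := strings.SplitN(clause, "!=", 2)
		return lookup(parts[0]) != unquote(parts[1])
	default:
		// Variable existence: not empty/false/0.
		v := lookup(clause)
		return v != "" && v != "false" && v != "0"
	}
}

func main() {
	vars := map[string]string{"BRANCH": "main", "DEPLOY": "true"}
	fmt.Println(evalCondition(`$BRANCH == "main" && $DEPLOY == "true"`, vars)) // true
	fmt.Println(evalCondition(`$CLEANUP_ENABLED`, vars))                       // false (unset)
}
```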
```yaml
# Run only on main branch
deploy:
  condition: $BRANCH == "main"

# Run on main or develop branch
deploy:
  condition: $BRANCH == "main" || $BRANCH == "develop"

# Run only when both conditions are met
deploy:
  condition: $BRANCH == "main" && $DEPLOY == "true"

# Run when variable exists
cleanup:
  condition: $CLEANUP_ENABLED

# Run when environment is not test
deploy:
  condition: $ENV != "test"
```

You can set environment variables before running pin:

```bash
BRANCH=main pin apply -f pipeline.yaml
```

You can use a custom Dockerfile to build your own image for the job instead of pulling a pre-built image.
Example:

```yaml
workflow:
  - custom-build

custom-build:
  dockerfile: "./Dockerfile"
  copyFiles: true
  script:
    - echo "Hello from custom Docker image!"
    - ls -la
```

- dockerfile: Path to your custom Dockerfile
- Automatic image building: Pin will build the image from your Dockerfile before running the job
- Build context: The directory containing the Dockerfile will be used as the build context
- Image naming: Built images are automatically tagged as `<job-name>-custom:latest`
```dockerfile
FROM alpine:latest

RUN apk add --no-cache \
    bash \
    curl \
    git \
    make

WORKDIR /app

USER nobody

CMD ["/bin/bash"]
```

Note: When using dockerfile, you don't need to specify the image field. Pin will use the built image automatically.
```bash
go test ./...
```

For comprehensive documentation, examples, and guides:
- Complete Documentation - Full documentation index
- Examples - Practical examples and use cases
- API Reference - HTTP API documentation for daemon mode
- Troubleshooting - Common issues and solutions
- Use Cases - Real-world applications and workflows
Contributions are welcome! Please feel free to submit a Pull Request.
- GitHub Issues - Bug reports and feature requests
- GitHub Discussions - Community discussions
Muhammed İkinci - [email protected]

