BlaiseSegal/Philosophers

The Dining Philosophers Problem

The Dining Philosophers problem is an allegory serving as a case study to illustrate the complexities of concurrent programming, particularly the challenges related to resource sharing, process synchronization, and the prevention of systemic deadlock.

In its most common form, the scenario involves five philosophers engaged in deep thought, sitting around a circular table. In front of each is a plate of spaghetti, a dish that, by convention, requires the use of two forks to be eaten. In total, there are only five forks on the table, each placed between two adjacent philosophers. A philosopher's life cycle is simple and perpetual: they alternate between two fundamental states, thinking and eating. Thinking requires no external resources and may last an indeterminate, random amount of time. However, when a philosopher becomes hungry and wishes to eat, they must acquire both forks immediately to their left and right. A crucial constraint is that a fork can be used by only one philosopher at a time. Once the meal is finished, after a finite but non-deterministic time, the philosopher puts both forks back on the table, making them available to their neighbors, and resumes thinking.

A central element of the problem is the absence of direct communication between the philosophers; they are "silent." Their actions are asynchronous, and no philosopher can know the intentions or state of others, except for the availability of the adjacent forks. This constraint of incomplete information transforms the problem from a mere scheduling puzzle into a challenge of distributed algorithm design.

Problem Formalization

  • Agents and Processes: Each philosopher is an abstraction representing a process or a thread, an independent unit of execution within an operating system. These processes run concurrently, competing for system resources.

  • Shared Resources: The forks symbolize shared, critical, and limited resources. They have two essential properties: they are non-preemptible (once a process acquires a resource, it cannot be forcibly taken away), and their use is mutually exclusive (only one process may use a resource at a given time).

  • Allocation Constraints: The fundamental constraint is that a process (philosopher) must acquire a specific set of two resources (the left and right forks) to execute its critical section (eating). This requirement for multiple acquisitions is at the core of the complexity.

Algorithmic Goal: The challenge is to design a protocol---a single algorithm followed by every process---that guarantees the system keeps making progress. The algorithm must satisfy two crucial properties: absence of deadlock (where the entire system freezes) and absence of starvation (where one or more processes are indefinitely deprived of resources).

The problem is therefore not simply about competition for resources, but about competition for interdependent resources. Philosophers do not request just any two forks; they require a specific pair defined by their position at the table. This circular arrangement creates a topology of cyclic dependencies (philosopher 0 depends on shared resources with philosophers 1 and 4, philosopher 1 on those with 0 and 2, etc.), which is the structural cause of circular wait and, consequently, deadlock.
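The cyclic dependency topology described above can be made concrete with a short sketch (illustrative Python, not project code; `forks_for` is a hypothetical helper):

```python
# Illustrative sketch: the circular fork topology that creates the
# cyclic dependency (philosopher i shares a fork with i-1 and i+1).
from collections import Counter

N = 5  # number of philosophers (and forks)

def forks_for(philosopher: int, n: int = N) -> tuple[int, int]:
    """Return the (left, right) fork indices for the philosopher
    seated at the given position around a table of n seats."""
    return philosopher, (philosopher + 1) % n

# Every fork is contested by exactly two adjacent philosophers,
# which closes the dependency cycle:
usage = Counter(f for p in range(N) for f in forks_for(p))
```

Philosopher 4's right fork is fork 0, the same fork that philosopher 0 holds on their left, so the wait-for graph can form a cycle.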

Origins and Historical Context

The problem was first formulated in 1965 by Edsger W. Dijkstra. The initial formulation was rooted in very concrete system engineering challenges and made no mention of philosophers. It was presented as an exam exercise for his students, describing five computers competing for access to shared peripherals such as tape drives. Shortly afterward, Tony Hoare, another pioneer in computer science, reformulated the problem using the more accessible and memorable metaphor of philosophers at a table, giving it the form in which it is universally known today.

The true complexity of the problem, and what makes it such a rich subject of study, arises from the combination of asynchrony and incomplete information. If philosophers' actions were synchronized by a global clock or if they could communicate to coordinate, the solution would be trivial: a central scheduler could simply assign turns to eat. The challenge, as formulated, is to find a decentralized algorithm where each philosopher makes decisions based solely on purely local information (the state of their two forks) without any global view of the system's state. This constraint makes it highly relevant for modern distributed systems, where components must operate autonomously with limited knowledge of the overall system.

Algorithmic Solutions and Synchronization Primitives

The Dining Philosophers problem admits several algorithmic solutions, each illustrating a different approach to handling concurrency and preventing deadlock. These solutions rely on synchronization primitives such as semaphores and monitors and can be classified by which Coffman condition they invalidate.

Solution I: The Arbiter (The Waiter)

This solution is among the simplest to conceptualize. It introduces a central coordinating entity, an arbiter, who regulates access to the table itself.

Principle

The fundamental idea is to prevent the situation where all N philosophers attempt to pick up a fork simultaneously. To achieve this, the number of philosophers allowed to be "at the table" (i.e., attempting to acquire forks) is limited to a maximum of N-1.

We favored the "arbiter" approach for three main reasons:

  1. Multi-process modeling: In our implementation, each philosopher is a distinct process; having a central server via named pipes simplifies synchronization and avoids sharing locks between processes (something a simple C# monitor or a resource hierarchy does not easily solve).
  2. Observability and governance: The arbiter handles all acquisition/release requests, allowing us to publish metrics (latencies, timeouts, starvation) and finely trace every interaction. With a hierarchy or local monitor, this information would be fragmented across workers.
  3. Scalability and control: The central server can enforce policies (backoff, deadlines, priorities) and respond quickly to p99 spikes without major code refactoring. From a maintenance standpoint, iterating on a single orchestration point is simpler than with distributed locks.
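The arbiter principle can be sketched in a few lines. The project's real arbiter is a C# named-pipe server between processes; this minimal, language-agnostic Python version uses threads and a counting semaphore initialized to N-1 to play the waiter's role:

```python
# Hedged sketch of the arbiter (waiter) solution with Python threads.
# A counting semaphore admits at most N-1 philosophers "to the table",
# so at least one admitted philosopher can always grab both forks.
import threading

N, ROUNDS = 5, 10
forks = [threading.Lock() for _ in range(N)]
waiter = threading.Semaphore(N - 1)  # the arbiter: N-1 seats
meals = [0] * N

def philosopher(i: int) -> None:
    left, right = forks[i], forks[(i + 1) % N]
    for _ in range(ROUNDS):
        with waiter:           # ask the waiter for a seat first
            with left, right:  # deadlock is now impossible
                meals[i] += 1  # critical section: eat

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
```

With all N philosophers admitted, the classic all-grab-left deadlock could occur; capping admission at N-1 guarantees at least one philosopher finds both forks free.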

Solution II: Resource Hierarchy

This solution is particularly elegant because it is entirely decentralized and requires no central arbiter. It breaks the symmetry of the problem by imposing a global order on resource acquisition.

Principle

The goal is to directly break Coffman's circular wait condition. To do so, a global order is established among the resources (the forks are numbered), and every process must always acquire its lower-numbered fork before its higher-numbered one. The last philosopher, whose "left" fork carries a higher number than their "right" one, therefore reaches in the opposite order, which breaks the symmetry and makes a cycle of waits impossible.
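A minimal sketch of the hierarchy (illustrative Python with threads, not the project's multi-process C# code):

```python
# Hedged sketch of the resource-hierarchy solution: each philosopher
# always locks the lower-numbered fork first, so no cycle of waits
# can form (circular wait is structurally impossible).
import threading

N, ROUNDS = 5, 10
forks = [threading.Lock() for _ in range(N)]
meals = [0] * N

def philosopher(i: int) -> None:
    a, b = i, (i + 1) % N
    first, second = (a, b) if a < b else (b, a)  # global order: lowest first
    for _ in range(ROUNDS):
        with forks[first]:
            with forks[second]:
                meals[i] += 1  # critical section: eat

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
```

Note that only philosopher N-1 behaves differently: their pair is (N-1, 0), so they reach for fork 0 first, breaking the symmetry without any coordinator.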

Solution III: The Monitor (Dijkstra's State-Based Solution)

This solution uses a higher-level synchronization primitive, the monitor, which encapsulates shared data and the procedures that manipulate it, ensuring automatic mutual exclusion. The key idea is to transform the acquisition of two forks into a logically atomic operation: either a philosopher obtains both, or they wait without holding any.
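The state-based idea can be sketched with a Python `Condition` standing in for the monitor (an illustrative adaptation, not the project's code): a philosopher is allowed to transition to EATING only when neither neighbor is eating, so both forks are granted in one atomic step.

```python
# Hedged sketch of the monitor (state-based) solution: the Condition's
# lock provides mutual exclusion over the shared state array, and a
# hungry philosopher waits, holding no fork, until both are available.
import threading

N, ROUNDS = 5, 10
THINKING, HUNGRY, EATING = range(3)
state = [THINKING] * N
monitor = threading.Condition()  # monitor: lock + wait/notify
meals = [0] * N

def can_eat(i: int) -> bool:
    return (state[i] == HUNGRY
            and state[(i - 1) % N] != EATING
            and state[(i + 1) % N] != EATING)

def pick_up(i: int) -> None:
    with monitor:
        state[i] = HUNGRY
        monitor.wait_for(lambda: can_eat(i))  # wait holding no fork
        state[i] = EATING                     # both forks taken atomically

def put_down(i: int) -> None:
    with monitor:
        state[i] = THINKING
        monitor.notify_all()  # let neighbors re-check their condition

def philosopher(i: int) -> None:
    for _ in range(ROUNDS):
        pick_up(i)
        meals[i] += 1
        put_down(i)

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
```

Because a philosopher never holds one fork while waiting for the other, hold-and-wait is eliminated; the known caveat of this scheme is starvation if a philosopher's two neighbors alternate eating indefinitely.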

Discussion of Trade-Offs

The choice involves trade-offs: the Arbiter solution prioritizes simplicity and starvation prevention at the cost of maximal concurrency. The Resource Hierarchy solution maximizes concurrency and decentralization but risks starvation. The Monitor solution offers superior programming abstraction but requires careful design and remains a centralized approach.

🍽️ Philosophers

This project tackles the classic Dining Philosophers problem in C#/.NET 9. It demonstrates how to combine concurrency patterns, observability tooling, and architectural practices in a measurable, testable solution.

Highlights:

  • Concurrency & IPC: multi-process orchestration via typed named pipes, coordination policies, backoff strategies.
  • Observability: OpenTelemetry instrumentation (Prometheus, Grafana), business metrics (p95/p99 acquisition latency, starvation, active workers), structured logging with correlation IDs.
  • Modular Architecture: separation between core domain logic (Philosophers.Core), infrastructure (Philosophers.Infrastructure), application runtimes (PhilosophersDinner, PhilosopherWorker), and shared utilities (Philosophers.Shared).
  • Performance Engineering: automated load scenarios, CPU/memory tracking, retry policies that tame tail latencies.

🧩 Solution Layout

Philosophers.sln
├── Philosophers.Core           # Domain: simulator, retry policies, abstract metrics
├── Philosophers.Infrastructure # Typed IPC, fork coordinator, launcher telemetry, path resolver
├── Philosophers.Shared         # Structured logging, exit codes, shared utilities
├── PhilosophersDinner          # Launcher host: pipes, worker manager, Prometheus host
├── PhilosopherWorker           # Worker host: DI runtime, IPC client, observer, retry policy
└── Philosophers.Tests          # Unit tests (IPC parser, retry, metrics, simulator) + load scenarios

Key components:

  • NamedPipeServer consumes a strongly-typed contract (AcquireRequest, ReleaseRequest, PingRequest) and delegates validation / metrics to PhilosopherCommandHandler.
  • WorkerRuntime provisions a buffered pipe client, injects a configurable backoff policy (BackoffRetryPolicy), and pushes metrics via IWorkerMetrics.
  • LauncherRuntime orchestrates IWorkerPathResolver, IMetricsHost (Prometheus endpoint), IWorkerProcessManager (worker lifecycle), and the named pipe server.
  • PhilosopherSimulator combines IPhilosopherClient, IPhilosopherObserver, IWorkerMetrics, and IAcquisitionRetryPolicy to monitor every stage of a philosopher’s lifecycle.
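The exponential-backoff idea behind `BackoffRetryPolicy` can be illustrated with a short, language-agnostic sketch (the real policy is C#; the `backoff_delays` name and defaults here are hypothetical):

```python
# Hedged sketch of capped exponential backoff with optional jitter,
# illustrating the pattern behind the project's BackoffRetryPolicy.
import random

def backoff_delays(base_ms: float = 100, factor: float = 2.0,
                   max_ms: float = 2000, attempts: int = 6,
                   jitter: float = 0.0):
    """Yield one capped, optionally jittered delay per retry attempt."""
    delay = base_ms
    for _ in range(attempts):
        jittered = delay * (1 + random.uniform(-jitter, jitter))
        yield min(jittered, max_ms)           # never exceed the cap
        delay = min(delay * factor, max_ms)   # grow geometrically

# With jitter=0, the delays follow 100, 200, 400, ... capped at max_ms.
```

Jitter spreads out retries from workers that failed at the same instant, which is what tames the timeout bursts the dashboards track.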

🛠️ Getting Started

  1. Configure environment

    cp .env.example .env
    # Adjust variables (ports, Grafana credentials, number of philosophers, ...)
  2. Launch full stack (observability + app)

    docker compose up -d --build
    docker compose ps
  3. Explore the Grafana dashboard Philosophers – Observability, which includes:

    • p95 / p99 acquisition latency
    • Meals throughput
    • Acquisitions vs timeouts
    • Starvations (5m) & active IPC clients
    • CPU usage, GC pauses, .NET heap size
  4. Shutdown & cleanup

    docker compose down -v

📦 Software Stack

| Component | Responsibility |
| --- | --- |
| PhilosophersDinner | Launcher: arbitration, IPC server, worker processes, metrics |
| PhilosopherWorker | Philosopher runtime: simulation, retry policy, worker metrics |
| Philosophers.Core | Domain: simulator, backoff policies, metric abstractions |
| Philosophers.Infrastructure | Typed IPC, fork coordinator, launcher telemetry, path resolver |
| Prometheus | Metric scraping & storage |
| Grafana | Visualization (pre-provisioned dashboards) |
| dotnet-monitor (optional) | Runtime diagnostics (dumps, traces, metrics) |

🧪 Tests & Load Scenarios

  • Unit tests (dotnet test --configuration Release) cover:

    • Exponential backoff (BackoffRetryPolicyTests)
    • Typed IPC parser (IpcMessageParserTests)
    • OTEL worker metrics (PhiloWorkerMetricsTests)
    • Deterministic simulator behaviour (PhilosopherSimulatorTests)
    • Optional load scenario (CoordinatorLoadTests, marked Skip to avoid CI overhead)
  • Performance benchmarks

    • Run the full Docker stack and observe p95/p99 under sustained load.
    • Prometheus alerts monitor timeout bursts and starvation events.

⚙️ Useful Commands

# Build the full solution
dotnet build --configuration Release

# Run unit tests
dotnet test --configuration Release

# Launch the launcher locally without Docker
dotnet run --project PhilosophersDinner -- 5 2000 100 100 20

# Run a worker standalone
dotnet run --project PhilosopherWorker -- 1 2000 100 100 20

🧠 Concepts & Practices Showcased

  • Responsibility separation between domain, infrastructure, and app hosts.
  • Explicit DI scoping: root providers per executable, explicit injection of ILoggerFactory, IWorkerMetrics, IAcquisitionRetryPolicy.
  • Resilience engineering: configurable backoff, cancellation support, metrics for critical sections.
  • Structured logging with minimal allocations (scopes with correlation IDs on every request/connection).
  • Allocation tuning: ReadOnlySpan parsing, buffered streams, pooled tasks around SemaphoreSlim.
  • Complete observability: native OTEL instrumentation, dashboards, and Prometheus alerts ready out-of-the-box.
