This repository was archived by the owner on Mar 11, 2020. It is now read-only.

readme.md: fix typos #10

Open · wants to merge 1 commit into master
4 changes: 2 additions & 2 deletions README.md
@@ -1,14 +1,14 @@
# Swarm Example Cluster: Microservices App

-This is a sample Swarm cluster that illustrates how Swarm can be used as the foundation for a high-traffic microservice-architecture web application. It is based on the Docker Cats-vs-Dogs voting example application, but re-architected to accomodate arbitrarily large scale through the use of parallel vote capture frontends and asynchronous background workers processing each vote.
+This is a sample Swarm cluster that illustrates how Swarm can be used as the foundation for a high-traffic microservice-architecture web application. It is based on the Docker Cats-vs-Dogs voting example application, but re-architected to accommodate arbitrarily large scale through the use of parallel vote capture frontends and asynchronous background workers processing each vote.

# Use Case

Imagine that your company is planning to buy an ad during the Superbowl to drive people to a web survey about whether they prefer cats or dogs as pets. (Perhaps your company sells pet food.) You need to ensure that millions of people can vote nearly simultaneously without your website becoming unavailable. You don't need exact real time results because you will announce them the next day, but you do need confidence that every vote will eventually get counted.

# Architecture

-An Interlock load balancer (ha\_proxy plugin) sits in front of N web containers, each of which runs a simple Python (Flask) app that accepts votes and queues them into a redis container on the same node. These N web (+ redis) nodes capture votes quickly and can scale up to any value of N since they operate independently. Any level of expected voting traffic can thus be accomodated.
+An Interlock load balancer (ha\_proxy plugin) sits in front of N web containers, each of which runs a simple Python (Flask) app that accepts votes and queues them into a redis container on the same node. These N web (+ redis) nodes capture votes quickly and can scale up to any value of N since they operate independently. Any level of expected voting traffic can thus be accommodated.
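
For illustration only, here is a minimal sketch of what such a vote-capture frontend could look like: a Flask app that pushes each vote onto a Redis list on the same node. The `/vote` route, the `votes` list key, and the `REDIS_HOST` variable are assumed placeholders, not taken from this repository.

```python
# Hypothetical vote-capture frontend: accept a vote over HTTP and queue it
# into the local Redis container. Route and key names are illustrative.
import json
import os
import uuid

from flask import Flask, jsonify, request
import redis

app = Flask(__name__)
r = redis.StrictRedis(host=os.environ.get("REDIS_HOST", "redis"), port=6379)

@app.route("/vote", methods=["POST"])
def vote():
    # Attach an id to each ballot so background workers can de-duplicate later.
    ballot = {
        "voter_id": request.form.get("voter_id", str(uuid.uuid4())),
        "vote": request.form["vote"],  # e.g. "cats" or "dogs"
    }
    r.rpush("votes", json.dumps(ballot))  # enqueue; workers drain this list
    return jsonify(status="queued"), 202

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```

Because each frontend only talks to its own Redis, adding more N web (+ redis) nodes never creates contention on a shared queue.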

Asynchronously, M background workers running on separate nodes scan through those N redis containers, dequeueing votes, de-duplicating them (to prevent double voting) and committing the results to a single postgres container that runs on its own node.
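
As a rough sketch of the worker side, assuming each worker polls every frontend's Redis list, de-duplicates on a `voter_id` field, and upserts into a `votes` table in Postgres (hostnames, credentials, and table/column names below are placeholders, not taken from this repository):

```python
# Hypothetical background worker: drain queued votes from each frontend's
# Redis, de-duplicate by voter_id, and commit results to a single Postgres.
import json
import os
import time

import psycopg2
import redis

REDIS_HOSTS = os.environ.get("REDIS_HOSTS", "redis-1,redis-2").split(",")
clients = [redis.StrictRedis(host=h, port=6379) for h in REDIS_HOSTS]

pg = psycopg2.connect(host=os.environ.get("PG_HOST", "postgres"),
                      dbname="votes", user="postgres", password="postgres")
pg.autocommit = True

with pg.cursor() as cur:
    cur.execute("""CREATE TABLE IF NOT EXISTS votes (
                       voter_id TEXT PRIMARY KEY,
                       vote     TEXT NOT NULL)""")

def record(ballot):
    # PRIMARY KEY on voter_id plus ON CONFLICT DO NOTHING drops double votes.
    with pg.cursor() as cur:
        cur.execute("""INSERT INTO votes (voter_id, vote) VALUES (%s, %s)
                       ON CONFLICT (voter_id) DO NOTHING""",
                    (ballot["voter_id"], ballot["vote"]))

while True:
    drained = 0
    for r in clients:
        raw = r.lpop("votes")        # pull one queued vote, if any
        if raw is not None:
            record(json.loads(raw))
            drained += 1
    if drained == 0:
        time.sleep(1)                # all queues empty; back off briefly
```

Votes are counted eventually rather than in real time, which matches the use case above: capture must never drop a vote, but tallying can lag behind.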
