Sail is an open-source, distributed, multimodal computation framework created by LakeSail.
Our mission is to unify batch processing, stream processing, and compute-intensive AI workloads. Sail is a compute engine that is:
- Compatible with the Spark Connect protocol, supporting the Spark SQL and DataFrame API with no code rewrites required.
- ~4x faster than Spark in benchmarks (up to 8x in specific workloads).
- 94% cheaper on infrastructure costs.
- 100% Rust-native with no JVM overhead, delivering memory safety, instant startup, and predictable performance.
✨Using Sail? Tell us your story and get free merch!✨
The documentation for the latest Sail version can be found here.
Sail is available as a Python package on PyPI. You can install it along with PySpark in your Python environment.
pip install pysail
pip install "pyspark[connect]"
Alternatively, you can install the lightweight client package pyspark-client, available since Spark 4.0. The pyspark-connect package, which is equivalent to pyspark[connect], is also available since Spark 4.0.
You can install Sail from source to optimize performance for your specific hardware architecture. The detailed Installation Guide walks you through this process step-by-step.
If you need to deploy Sail in production environments, the Deployment Guide provides comprehensive instructions for deploying Sail on Kubernetes clusters and other infrastructure configurations.
Option 1: Command Line Interface. You can start the local Sail server using the sail command.
sail spark server --port 50051
Option 2: Python API. You can start the local Sail server using the Python API.
from pysail.spark import SparkConnectServer
server = SparkConnectServer(port=50051)
server.start(background=False)
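If you prefer to run the server and your PySpark code in the same process, you can pass background=True instead of background=False. Here is a minimal sketch based only on the constructor and start parameters shown above:

from pysail.spark import SparkConnectServer
from pyspark.sql import SparkSession

# Start the Sail server without blocking the current process.
server = SparkConnectServer(port=50051)
server.start(background=True)

# Connect a PySpark session to the in-process Sail server and run a query.
spark = SparkSession.builder.remote("sc://localhost:50051").getOrCreate()
spark.sql("SELECT 1 + 1").show()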
Option 3: Kubernetes. You can deploy Sail on Kubernetes and run Sail in cluster mode for distributed processing. Please refer to the Kubernetes Deployment Guide for instructions on building the Docker image and writing the Kubernetes manifest YAML file.
kubectl apply -f sail.yaml
kubectl -n sail port-forward service/sail-spark-server 50051:50051
Once you have a running Sail server, you can connect to it in PySpark. No changes are needed in your PySpark code!
from pyspark.sql import SparkSession
spark = SparkSession.builder.remote("sc://localhost:50051").getOrCreate()
spark.sql("SELECT 1 + 1").show()
Please refer to the Getting Started guide for further details.
Sail supports a variety of storage backends for reading and writing data. You can read more details in our Storage Guide.
The following storage options are supported (a short usage example follows the list):
- AWS S3
- Cloudflare R2
- Azure
- Google Cloud Storage
- Hugging Face
- HDFS
- File systems
- HTTP/HTTPS
- In-memory storage
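As an illustration, reading from these backends uses the familiar Spark path-based API once your session is connected to Sail. The bucket and path below are placeholders, and credentials are configured as described in the Storage Guide:

# Read a Parquet dataset from S3 through the connected Spark session.
# s3://example-bucket/events/ is a hypothetical location.
df = spark.read.parquet("s3://example-bucket/events/")
df.printSchema()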
Sail provides native support for Delta Lake, offering a reliable storage layer with strong data management guarantees and ensuring interoperability with existing Delta datasets.
For more details on usage and best practices, see the Delta Lake Guide.
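As a sketch (the table path below is a placeholder, and the calls follow standard Spark Delta conventions rather than anything Sail-specific), Delta tables are written and read through the usual format API:

# Write a DataFrame as a Delta table and read it back.
# file:///tmp/delta/events is a hypothetical local path.
df.write.format("delta").mode("overwrite").save("file:///tmp/delta/events")
spark.read.format("delta").load("file:///tmp/delta/events").show()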
Derived TPC-H results show that Sail outperforms Apache Spark in every query:
- Execution Time: ~4× faster across diverse SQL workloads.
- Hardware Cost: 94% lower with significantly lower peak memory usage and zero shuffle spill.
| Metric | Spark | Sail |
| --- | --- | --- |
| Total Query Time | 387.36 s | 102.75 s |
| Query Speed-Up | Baseline | 43% – 727% |
| Peak Memory Usage | 54 GB | 22 GB (1 s) |
| Disk Write (Shuffle Spill) | > 110 GB | 0 GB |
These results come from a derived TPC-H benchmark (22 queries, scale factor 100, Parquet format) on AWS r8g.4xlarge instances.
See the full analysis and graphs on our Benchmark Results page.
Contributions are more than welcome!
Please submit GitHub issues for bug reports and feature requests. You are also welcome to ask questions in GitHub discussions.
Feel free to create a pull request if you would like to make a code change. You can refer to the Development Guide to get started.
Additionally, please join our Slack Community if you haven’t already!
When Spark was invented over 15 years ago, it was revolutionary. It redefined distributed data processing and powered ETL, machine learning, and analytics pipelines across industries.
But Spark’s JVM-based architecture now struggles to meet modern demands for performance and cloud efficiency:
- Garbage collection pauses introduce latency spikes.
- Serialization overhead slows data exchange between JVM and Python.
- Heavy executors drive up cloud costs and complicate scaling.
- Row-based processing performs poorly on analytical workloads and leaves hardware efficiency untapped.
- Slow startup delays workloads, hurting interactive and on-demand use cases.
Sail solves these problems with a modern, Rust-native design.
Sail offers a drop-in replacement for Spark SQL and the Spark DataFrame API. Existing PySpark code works out of the box once you connect the Spark session to Sail over the Spark Connect protocol.
- Spark SQL Dialect Support. A custom Rust parser (built with parser combinators and Rust procedural macros) covers Spark SQL syntax with production-grade accuracy.
- DataFrame API Support. Spark DataFrame operations run on Sail with identical semantics.
- Python UDF, UDAF, UDWF, and UDTF Support. Python, Pandas, and Arrow UDFs follow the same conventions as in Spark (see the example after this list).
- Rust-Native Engine. No garbage collection pauses, no JVM memory tuning, and low memory footprint.
- Columnar Format and Vectorized Execution. Built on top of Apache Arrow and Apache DataFusion, the columnar in-memory format and SIMD instructions unlock blazing-fast query execution.
- Lightning-Fast Python UDFs. Python code runs inside Sail with zero serialization overhead as Arrow array pointers enable zero-copy data sharing.
- Performant Data Shuffling. Workers exchange Arrow columnar data directly, minimizing shuffle costs for joins and aggregations.
- Lightweight, Stateless Workers. Workers start in seconds, consume only a few megabytes of memory at idle, and scale elastically to cut cloud costs and simplify operations.
- Concurrency and Memory Safety You Can Trust. Rust’s ownership model prevents null pointers, race conditions, and unsafe memory access for unmatched reliability.
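As a small example of the Python UDF support mentioned in the list above, the following standard PySpark UDF runs as-is against a Sail server; nothing in the code is Sail-specific:

from pyspark.sql import functions as F
from pyspark.sql.types import LongType

# A plain Python UDF registered the same way as in Spark.
@F.udf(returnType=LongType())
def plus_one(x):
    return x + 1

spark.range(5).select("id", plus_one("id").alias("id_plus_one")).show()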
Curious about how Sail stacks up against Spark? Explore our Why Sail? page. Ready to bring your existing workloads over? Our Migration Guide shows you how.
- Architecture – Overview of Sail’s design for both local and cluster modes, and how it transitions seamlessly between them.
- Query Planning – Detailed explanation of how Sail parses SQL and Spark relations, builds logical and physical plans, and handles execution for local and cluster modes.
- SQL and DataFrame Features – Complete reference for Spark SQL and DataFrame API compatibility.
- LakeSail Blog – Updates on Sail releases, benchmarks, and technical insights.