Unity Catalog OSS provides an alternative to the PostgreSQL JDBC catalog, adding multi-format table support, credential vending, and broader engine interoperability.
| Feature | PostgreSQL JDBC | Unity Catalog OSS |
|---|---|---|
| Catalog Protocol | JDBC (direct SQL) | REST API (HTTP) |
| Formats | Iceberg only | Delta + Iceberg + Hudi (UniForm) |
| Auth | Hardcoded credentials | Credential vending |
| Interoperability | Spark only | Spark + DuckDB + Trino + Dremio |
| Governance | Manual | Built-in access control |
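Credential vending means clients never hold long-lived storage keys: at query time they ask the catalog for short-lived, table-scoped credentials. A minimal sketch of consuming such a payload — the field names follow the shape of Unity Catalog OSS's temporary-credential response, but verify them against your server version:

```python
def s3_options_from_vended(payload):
    """Turn a vended temporary-credential payload into S3 client kwargs.

    The `aws_temp_credentials` key layout is assumed from the UC OSS
    temporary-table-credentials response; adjust if your version differs.
    """
    aws = payload["aws_temp_credentials"]
    return {
        "aws_access_key_id": aws["access_key_id"],
        "aws_secret_access_key": aws["secret_access_key"],
        "aws_session_token": aws["session_token"],
    }

# Illustrative payload, as a server might return it
sample = {
    "aws_temp_credentials": {
        "access_key_id": "ASIA...",
        "secret_access_key": "secret...",
        "session_token": "tok...",
    }
}
print(s3_options_from_vended(sample)["aws_session_token"])  # tok...
```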
```
┌─────────────────────────────────────────────────────────────┐
│                      Your Applications                      │
│            (Spark, DuckDB, Trino, Python, etc.)             │
└─────────────────────────────┬───────────────────────────────┘
                              │ REST API
                              ▼
┌─────────────────────────────────────────────────────────────┐
│              Unity Catalog Server (port 8080)               │
│  ┌─────────────────┐        ┌─────────────────┐             │
│  │  UC REST API    │        │  Iceberg REST   │             │
│  │  /catalogs      │        │  Catalog API    │             │
│  │  /schemas       │        │  /iceberg/v1/   │             │
│  │  /tables        │        │                 │             │
│  └─────────────────┘        └─────────────────┘             │
│                       │                                     │
│           ┌───────────┴───────────┐                         │
│           │  Credential Vending   │                         │
│           └───────────┬───────────┘                         │
└───────────────────────┼─────────────────────────────────────┘
                        │
                        ▼
┌─────────────────────────────────────────────────────────────┐
│                  SeaweedFS (S3-compatible)                  │
│                  s3://lakehouse/warehouse                   │
└─────────────────────────────────────────────────────────────┘
```
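Both API surfaces hang off the same server process; a quick sketch of the base URLs clients use (paths taken from the diagram above):

```python
def uc_endpoints(host="localhost", port=8080):
    """Base URLs for the two API surfaces the Unity Catalog server exposes."""
    root = f"http://{host}:{port}/api/2.1/unity-catalog"
    return {
        "uc_rest": root,                    # /catalogs, /schemas, /tables
        "iceberg_rest": f"{root}/iceberg",  # Iceberg REST Catalog API (/v1/...)
    }

print(uc_endpoints()["iceberg_rest"])
# http://localhost:8080/api/2.1/unity-catalog/iceberg
```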
```bash
# Copy example config
cp config/unity-catalog/server.properties.example config/unity-catalog/server.properties

# Edit with your SeaweedFS credentials
nano config/unity-catalog/server.properties
```

Update the S3 settings:

```properties
s3.bucketPath.0=s3://lakehouse/warehouse
s3.region.0=us-east-1
s3.accessKey.0=your_seaweedfs_access_key
s3.secretKey.0=your_seaweedfs_secret_key
s3.endpoint.0=http://localhost:8333
```

```bash
./lakehouse start unity-catalog

# Check status
./lakehouse status

# Test the API
curl http://localhost:8080/api/2.1/unity-catalog/catalogs

# Run full tests
./lakehouse test
```

Copy the Unity Catalog Spark config:
```bash
cp config/spark/spark-defaults-uc.conf.example config/spark/spark-defaults.conf
```

Or add these settings to your existing config:

```properties
spark.sql.catalog.iceberg org.apache.iceberg.spark.SparkCatalog
spark.sql.catalog.iceberg.catalog-impl org.apache.iceberg.rest.RESTCatalog
spark.sql.catalog.iceberg.uri http://localhost:8080/api/2.1/unity-catalog/iceberg
spark.sql.catalog.iceberg.warehouse unity
spark.sql.catalog.iceberg.token not_used
```

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder \
    .appName("UC-Demo") \
    .getOrCreate()

# Create schema
spark.sql("CREATE SCHEMA IF NOT EXISTS iceberg.bronze")

# Create table
spark.sql("""
    CREATE TABLE iceberg.bronze.orders (
        order_id STRING,
        customer_id STRING,
        total DECIMAL(10,2),
        created_at TIMESTAMP
    ) USING ICEBERG
""")

# Insert data
spark.sql("""
    INSERT INTO iceberg.bronze.orders VALUES
        ('ord-001', 'cust-1', 99.99, current_timestamp()),
        ('ord-002', 'cust-2', 149.99, current_timestamp())
""")

# Query
spark.sql("SELECT * FROM iceberg.bronze.orders").show()
```
```bash
# Start Unity Catalog
./lakehouse start unity-catalog

# Stop Unity Catalog
./lakehouse stop unity-catalog

# View logs
./lakehouse logs unity-catalog

# Check status
./lakehouse status
```

List catalogs:

```bash
curl http://localhost:8080/api/2.1/unity-catalog/catalogs
```

Create a schema:

```bash
curl -X POST http://localhost:8080/api/2.1/unity-catalog/schemas \
  -H "Content-Type: application/json" \
  -d '{
    "name": "bronze",
    "catalog_name": "unity"
  }'
```

List tables:

```bash
curl "http://localhost:8080/api/2.1/unity-catalog/tables?catalog_name=unity&schema_name=bronze"
```

Unity Catalog OSS works with DuckDB for local analytics:
```sql
-- Install extensions
INSTALL uc_catalog FROM core_nightly;
LOAD uc_catalog;
INSTALL delta;
LOAD delta;

-- Connect to Unity Catalog
CREATE SECRET (
    TYPE UC,
    TOKEN 'not_used',
    ENDPOINT 'http://127.0.0.1:8080',
    AWS_REGION 'us-east-1'
);
ATTACH 'unity' AS unity (TYPE UC_CATALOG);

-- Query tables
SELECT * FROM unity.bronze.orders;
```

You can run both catalogs simultaneously:
- Keep your existing PostgreSQL JDBC catalog as `iceberg`
- Add Unity Catalog as a new catalog `unity`

```properties
# Existing JDBC catalog
spark.sql.catalog.iceberg org.apache.iceberg.spark.SparkCatalog
spark.sql.catalog.iceberg.type jdbc
spark.sql.catalog.iceberg.uri jdbc:postgresql://localhost:5432/iceberg_catalog

# New Unity Catalog (different name)
spark.sql.catalog.unity org.apache.iceberg.spark.SparkCatalog
spark.sql.catalog.unity.catalog-impl org.apache.iceberg.rest.RESTCatalog
spark.sql.catalog.unity.uri http://localhost:8080/api/2.1/unity-catalog/iceberg
```

Then migrate tables:
```python
# Read from old catalog
df = spark.table("iceberg.bronze.orders")

# Write to new catalog
df.writeTo("unity.bronze.orders").createOrReplace()
```

Once validated:
- Stop applications
- Update `spark-defaults.conf` to use the Unity Catalog config
- Restart Spark cluster
- Verify all queries work
- Decommission the PostgreSQL JDBC catalog
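Before decommissioning, it helps to confirm that every table made it across. A small helper for diffing table lists — you might feed it names gathered from `spark.catalog.listTables` on each catalog, or from the `/tables` REST endpoint; the function itself is an illustrative sketch, not part of the project:

```python
def unmigrated(old_tables, new_tables):
    """Return tables present in the old catalog but missing from the new one."""
    return sorted(set(old_tables) - set(new_tables))

# Hypothetical table lists from the two catalogs
old = ["bronze.orders", "bronze.customers", "silver.daily_sales"]
new = ["bronze.orders", "silver.daily_sales"]
print(unmigrated(old, new))  # ['bronze.customers']
```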
| Property | Description | Example |
|---|---|---|
| `server.port` | HTTP port | `8080` |
| `s3.bucketPath.N` | S3 bucket path | `s3://lakehouse/warehouse` |
| `s3.accessKey.N` | S3 access key | `your_key` |
| `s3.secretKey.N` | S3 secret key | `your_secret` |
| `s3.endpoint.N` | Custom S3 endpoint | `http://localhost:8333` |
| `s3.region.N` | AWS region | `us-east-1` |
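The `N` suffix indexes bucket configurations, so several buckets can be configured side by side (`s3.bucketPath.0`, `s3.bucketPath.1`, …). A sketch of how such indexed keys group together — a plain parser written for illustration, not Unity Catalog code:

```python
def parse_bucket_configs(text):
    """Group s3.<key>.<N> lines from server.properties by bucket index N."""
    buckets = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        parts = key.strip().split(".")
        # Only s3.* keys carry the numeric bucket index
        if parts[0] == "s3" and len(parts) == 3 and parts[2].isdigit():
            buckets.setdefault(int(parts[2]), {})[parts[1]] = value.strip()
    return buckets

# A second bucket (index 1) is hypothetical, for illustration
conf = """
s3.bucketPath.0=s3://lakehouse/warehouse
s3.region.0=us-east-1
s3.bucketPath.1=s3://archive/warehouse
"""
print(parse_bucket_configs(conf)[1]["bucketPath"])  # s3://archive/warehouse
```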
| Property | Description |
|---|---|
| `spark.sql.catalog.iceberg.catalog-impl` | Must be `org.apache.iceberg.rest.RESTCatalog` |
| `spark.sql.catalog.iceberg.uri` | Unity Catalog Iceberg endpoint |
| `spark.sql.catalog.iceberg.warehouse` | Catalog name in Unity Catalog |
| `spark.sql.catalog.iceberg.token` | Auth token (use `not_used` for local) |
```bash
# Check logs
docker logs unity-catalog

# Verify config exists
ls -la config/unity-catalog/server.properties

# Check port availability
nc -z localhost 8080 && echo "Port in use" || echo "Port available"
```

- Verify Unity Catalog is running:
  `curl http://localhost:8080/api/2.1/unity-catalog/catalogs`
- Check Spark config:
  `grep -i "catalog.iceberg" config/spark/spark-defaults.conf`
- Ensure correct endpoint in Spark:
  `http://localhost:8080/api/2.1/unity-catalog/iceberg`
- Check schema exists:
  `curl "http://localhost:8080/api/2.1/unity-catalog/schemas?catalog_name=unity"`
- Verify table was created:
  `curl "http://localhost:8080/api/2.1/unity-catalog/tables?catalog_name=unity&schema_name=bronze"`