GITBOOK-41: snap and batch updated
LexLuthr authored and gitbook-bot committed Feb 12, 2025
1 parent 3fce199 commit 0f0f3ff
Showing 2 changed files with 47 additions and 36 deletions.
65 changes: 39 additions & 26 deletions documentation/en/snap-deals.md
@@ -30,36 +30,23 @@ To enable the snap deals pipeline in a Curio cluster, the user needs to enable the s
Data can be ingested using either the Snap Deals pipeline or the PoRep pipeline at any given time, but not both simultaneously.
{% endhint %}

### Configuration

{% hint style="warning" %}
When switching between the Snap Deals and PoRep pipelines, you must ensure that no sectors are being sealed or snapped. All pipelines must be empty before making the switch.
{% endhint %}

#### Curio Market

1. Enable snap deals on the base layer, as shown in the snippet below.
2. Save the layer and exit. [Enable snap tasks](snap-deals.md#enable-snap-tasks) and restart all the nodes.

```
[Ingest]
DoSnap = true
```
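
One way to apply this edit is with Curio's built-in config editor; a minimal sketch, assuming the `curio config edit` subcommand is available in your build:

```bash
# Open the base layer in $EDITOR and add the [Ingest] section shown above
# (assumes `curio config edit`; adjust if your Curio build differs)
curio config edit base
```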

#### Boost Adapter (Deprecated)

1. Create or update the market layer ([if one is already created](enabling-market.md#enable-market-adapter-in-curio)) for the miner ID where you wish to use the snap deals pipeline.
@@ -108,4 +95,30 @@ Data can be ingested using either the Snap Deals pipeline or the PoRep pipeline

2. Add the new market configuration layer to the appropriate nodes based on the [best practices](best-practices.md).
3. Restart the Curio service.
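
For step 2, adding the layer to a node means appending its name to `CURIO_LAYERS` in that node's `/etc/curio.env`, following the same pattern as the env file shown below; a sketch, assuming the market layer was named `market` (hypothetical name):

```
CURIO_LAYERS=gui,seal,post,market <----- hypothetical "market" layer appended
```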

### Enable snap tasks

1. Add the `upgrade` layer, already shipped with Curio, to the `/etc/curio.env` file on the Curio nodes where GPU resources are available:

```
CURIO_LAYERS=gui,seal,post,upgrade <----- Add the "upgrade" layer
CURIO_ALL_REMAINING_FIELDS_ARE_OPTIONAL=true
CURIO_DB_HOST=yugabyte1,yugabyte2,yugabyte3
CURIO_DB_USER=yugabyte
CURIO_DB_PASSWORD=yugabyte
CURIO_DB_PORT=5433
CURIO_DB_NAME=yugabyte
CURIO_REPO_PATH=~/.curio
CURIO_NODE_NAME=ChangeMe
FIL_PROOFS_USE_MULTICORE_SDR=1
```

2. Restart the Curio services on the node:

```
systemctl restart curio
```
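
To confirm that the node came back up with the `upgrade` layer active, the standard systemd tooling is sufficient; a hedged example (exact log wording varies between versions):

```bash
# Follow the Curio service logs after the restart
journalctl -u curio -f
```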
18 changes: 8 additions & 10 deletions documentation/en/supraseal.md
@@ -5,7 +5,9 @@ description: This page explains how to set up the SupraSeal batch sealer in Curio
# Batch Sealing with SupraSeal

{% hint style="danger" %}
**Disclaimer:** SupraSeal batch sealing is currently in **BETA**. Use with caution and expect potential issues or changes in future versions. Currently, some additional manual system configuration is required.\
\
Batch sealing only supports "CC" sectors for now. Please make sure that SnapDeals are enabled in the cluster if you wish to onboard data with SupraSeal enabled. If SnapDeals are not enabled, deals will be routed to the SupraSeal pipeline, which will discard the actual data and seal empty sectors.
{% endhint %}

SupraSeal is an optimized batch sealing implementation for Filecoin that allows sealing multiple sectors in parallel. It can significantly improve sealing throughput compared to sealing sectors individually.
@@ -43,12 +45,10 @@ You need 2 sets of NVMe drives:
* Fast with sufficient capacity (\~70G x batchSize x pipelines)
* Can be remote storage if fast enough (\~500MiB/s/GPU)

The following table shows the number of NVMe drives required for different batch sizes. The drive count column indicates `N + M`, where `N` is the number of drives for layer data (SPDK) and `M` is the number of drives for P2 output (filesystem). The iops/drive column shows the minimum IOPS **per drive** required for the batch size. A batch size indicated with `2x` means a dual-pipeline drive setup. IOPS requirements are calculated simply by dividing the total target of 10M IOPS by the number of drives; in reality, depending on CPU core speed, this may be too low or higher than necessary. When ordering a system with barely enough IOPS, plan to have free drive slots in case you need to add more drives later.

| Batch Size   | 3.84TB | 7.68TB | 12.8TB | 15.36TB | 30.72TB |
| ------------ | ------ | ------ | ------ | ------- | ------- |
| 32           | 4 + 1  | 2 + 1  | 1 + 1  | 1 + 1   | 1 + 1   |
| ^ iops/drive | 2500K  | 5000K  | 10000K | 10000K  | 10000K  |
| 64 (2x 32)   | 7 + 2  | 4 + 1  | 2 + 1  | 2 + 1   | 1 + 1   |

@@ -58,7 +58,6 @@

| 2x 128       | 26 + 6 | 13 + 3 | 8 + 2  | 7 + 2   | 4 + 1   |
| ^ iops/drive | 385K   | 770K   | 1250K  | 1429K   | 2500K   |
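
As a worked example of that division, take the `2x 128` row in the 7.68TB column above (a `13 + 3` drive count); a sketch with purely illustrative numbers:

```bash
# Minimum per-drive IOPS for a 2x 128 batch on 13 layer-data (SPDK) drives
TOTAL_IOPS=10000000  # ~10M IOPS total target
LAYER_DRIVES=13      # the "N" in the table's "13 + 3" drive count
echo $(( TOTAL_IOPS / LAYER_DRIVES ))  # prints 769230, i.e. ~770K per drive
```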


## Hardware Recommendations

Currently, the community is trying to determine the best hardware configurations for batch sealing. Some general observations are:
@@ -131,7 +130,6 @@ CUDA 12.x is required, 11.x won't work. The build process depends on GCC 11.x sy
* On newer Ubuntu, install the `gcc-11` and `g++-11` packages
* In addition to the general build dependencies (listed on the [installation page](installation.md)), you need `libgmp-dev` and `libconfig++-dev`
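
On Ubuntu, that dependency list translates to something like the following (package names as listed above):

```bash
sudo apt install gcc-11 g++-11 libgmp-dev libconfig++-dev
```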


### Building

Build the batch-capable Curio binary:
@@ -163,7 +161,7 @@ env NRHUGE=36 ./scripts/setup.sh

### Benchmark NVMe IOPS

Please make sure to benchmark the raw NVMe IOPS before proceeding with further configuration, to verify that the IOPS requirements are fulfilled.

```bash
cd extern/supraseal/deps/spdk-v24.05/
```

@@ -194,7 +192,6 @@ Total : 8006785.90 31276.51 71.91 1

Ideally, the total across all devices should be >10M IOPS.


### PC2 output storage

Attach scratch space storage for the PC2 output (the batch sealer needs \~70GB per sector in a batch: 32GiB for the sealed sector and 36GiB for the cache directory with TreeC/TreeR and aux files).
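
To size that scratch space for a whole batch, multiply by the batch size; a back-of-the-envelope sketch (a batch of 128 is chosen purely as an illustration):

```bash
# Scratch space needed for the PC2 output of one batch, per the ~70GB/sector figure
BATCH_SIZE=128
GB_PER_SECTOR=70  # ~32GiB sealed sector + ~36GiB cache (TreeC/TreeR, aux files)
echo $(( BATCH_SIZE * GB_PER_SECTOR ))  # prints 8960, i.e. ~9TB per batch
```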
@@ -432,6 +429,7 @@ cd extern/supraseal/deps/spdk-v24.05/
```

Go through the menus like this:

```
NVMe Management Options
[1: list controllers]
@@ -481,10 +479,10 @@ y
```

Then you might see a difference in performance like this:

```
Latency(us)
Device Information : IOPS MiB/s Average min max
PCIE (0000:c1:00.0) NSID 1 from core 0: 721383.71 2817.91 88.68 11.20 591.51 ## before
PCIE (0000:86:00.0) NSID 1 from core 0: 1205271.62 4708.09 53.07 11.87 446.84 ## after
```
