Commit 0f0f3ff

LexLuthr authored and gitbook-bot committed
GITBOOK-41: snap and batch updated
1 parent 3fce199 commit 0f0f3ff

File tree

2 files changed (+47 -36 lines)

documentation/en/snap-deals.md

Lines changed: 39 additions & 26 deletions
@@ -30,36 +30,23 @@ To enable the snap deals pipeline in a Curio cluster, user needs to enable the s
 Data can be ingested using either the Snap Deals pipeline or the PoRep pipeline at any given time, but not both simultaneously.
 {% endhint %}
 
-### Enable snap tasks
-
-1. Add the `upgrade` layer already shipped with Curio to the `/etc/curio.env` file on the Curio nodes where GPU resources are available.
+### Configuration
 
-```
-CURIO_LAYERS=gui,seal,post,upgrade <----- Add the "upgrade" layer
-CURIO_ALL_REMAINING_FIELDS_ARE_OPTIONAL=true
-CURIO_DB_HOST=yugabyte1,yugabyte2,yugabyte3
-CURIO_DB_USER=yugabyte
-CURIO_DB_PASSWORD=yugabyte
-CURIO_DB_PORT=5433
-CURIO_DB_NAME=yugabyte
-CURIO_REPO_PATH=~/.curio
-CURIO_NODE_NAME=ChangeMe
-FIL_PROOFS_USE_MULTICORE_SDR=1
-```
-
-2. Restart the Curio services on the node.
+{% hint style="warning" %}
+When switching between the Snap and PoRep deal pipelines, you must ensure that no sectors are being sealed or snapped. All pipelines must be empty before making the switch.
+{% endhint %}
 
-```
-systemctl restart curio
-```
+#### Curio Market
 
+1. Enable snap deals on the base layer.
+2. Save the layer and exit. [Enable snap tasks](snap-deals.md#enable-snap-tasks) and restart all the nodes.
 
+```
+[Ingest]
+DoSnap = true
+```
 
-### Update the Curio market adapter
+#### Boost Adapter (Deprecated)
 
 1. Create or update the market layer ([if one is already created](enabling-market.md#enable-market-adapter-in-curio)) for the minerID where you wish to use the snap deals pipeline.
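Step 1 of the added "Curio Market" instructions sets `DoSnap` on the base layer. A minimal sketch of how that edit is typically made, assuming the `curio config edit` subcommand is available on the cluster (not shown in this diff):

```bash
# open the base layer in $EDITOR, then add the [Ingest] section shown above
curio config edit base
```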

@@ -108,4 +95,30 @@ Data can be ingested using either the Snap Deals pipeline or the PoRep pipeline
 
 2. Add the new market configuration layer to the appropriate nodes based on the [best practices](best-practices.md).
-3. Restart the Curio service.
+
+### Enable snap tasks
+
+1. Add the `upgrade` layer already shipped with Curio to the `/etc/curio.env` file on the Curio nodes where GPU resources are available.
+
+```
+CURIO_LAYERS=gui,seal,post,upgrade <----- Add the "upgrade" layer
+CURIO_ALL_REMAINING_FIELDS_ARE_OPTIONAL=true
+CURIO_DB_HOST=yugabyte1,yugabyte2,yugabyte3
+CURIO_DB_USER=yugabyte
+CURIO_DB_PASSWORD=yugabyte
+CURIO_DB_PORT=5433
+CURIO_DB_NAME=yugabyte
+CURIO_REPO_PATH=~/.curio
+CURIO_NODE_NAME=ChangeMe
+FIL_PROOFS_USE_MULTICORE_SDR=1
+```
+
+2. Restart the Curio services on the node.
+
+```
+systemctl restart curio
+```
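After the env-file edit in step 1, a quick way to confirm the change and watch the restart, assuming the systemd unit is named `curio` as in the snippet above:

```bash
grep CURIO_LAYERS /etc/curio.env   # should now include "upgrade"
sudo systemctl restart curio
journalctl -u curio -f             # watch the node come back up
```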

documentation/en/supraseal.md

Lines changed: 8 additions & 10 deletions
@@ -5,7 +5,9 @@ description: This page explains how to setup supraseal batch sealer in Curio
 # Batch Sealing with SupraSeal
 
 {% hint style="danger" %}
-**Disclaimer:** SupraSeal batch sealing is currently in **BETA**. Use with caution and expect potential issues or changes in future versions. Currently some additional manual system configuration is required.
+**Disclaimer:** SupraSeal batch sealing is currently in **BETA**. Use with caution and expect potential issues or changes in future versions. Currently, some additional manual system configuration is required.
+
+Batch sealing only supports "CC" sectors for now. Please make sure that "SnapDeals" are enabled in the cluster if you wish to onboard data with SupraSeal enabled. If SnapDeals are not enabled, deals will be routed to the SupraSeal pipeline, which will discard the actual data and seal empty sectors.
 {% endhint %}
 
 SupraSeal is an optimized batch sealing implementation for Filecoin that allows sealing multiple sectors in parallel. It can significantly improve sealing throughput compared to sealing sectors individually.
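The SnapDeals requirement in the new disclaimer is satisfied by the base-layer setting added in the snap-deals.md change earlier in this same commit, repeated here as a sketch:

```
[Ingest]
DoSnap = true
```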
@@ -43,12 +45,10 @@ You need 2 sets of NVMe drives:
 * Fast with sufficient capacity (\~70G x batchSize x pipelines)
 * Can be remote storage if fast enough (\~500MiB/s/GPU)
 
-The following table shows the number of NVMe drives required for different batch sizes. The drive count column indicates `N + M` where `N` is the number of drives for layer data (SPDK) and `M` is the number of drives for P2 output (filesystem).
-The iops/drive column shows the minimum iops **per drive** required for the batch size.
-Batch size indicated with `2x` means dual-pipeline drive setup. IOPS requirements are calculated simply by dividing total target 10M IOPS by the number of drives. In reality, depending on CPU core speed this may be too low or higher than neccesary. When ordering a system with barely enough IOPS plan to have free drive slots in case you need to add more drives later.
+The following table shows the number of NVMe drives required for different batch sizes. The drive count column indicates `N + M`, where `N` is the number of drives for layer data (SPDK) and `M` is the number of drives for P2 output (filesystem). The iops/drive column shows the minimum iops **per drive** required for the batch size. A batch size indicated with `2x` means a dual-pipeline drive setup. IOPS requirements are calculated simply by dividing the total target of 10M IOPS by the number of drives; in reality, depending on CPU core speed, this may be too low or higher than necessary. When ordering a system with barely enough IOPS, plan to have free drive slots in case you need to add more drives later.
 
 | Batch Size   | 3.84TB | 7.68TB | 12.8TB | 15.36TB | 30.72TB |
-|--------------|--------|--------|--------|---------|---------|
+| ------------ | ------ | ------ | ------ | ------- | ------- |
 | 32           | 4 + 1  | 2 + 1  | 1 + 1  | 1 + 1   | 1 + 1   |
 | ^ iops/drive | 2500K  | 5000K  | 10000K | 10000K  | 10000K  |
 | 64 (2x 32)   | 7 + 2  | 4 + 1  | 2 + 1  | 2 + 1   | 1 + 1   |
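A worked instance of the division rule stated in the paragraph above, for the batch-32 row with four 3.84TB layer-data drives:

```bash
# 10M total target IOPS split across the 4 SPDK layer-data drives
echo $((10000000 / 4))   # 2500000 -> the "2500K" iops/drive cell in the table
```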
@@ -58,7 +58,6 @@ Batch size indicated with `2x` means dual-pipeline drive setup. IOPS requirement
 | 2x 128       | 26 + 6 | 13 + 3 | 8 + 2  | 7 + 2   | 4 + 1   |
 | ^ iops/drive | 385K   | 770K   | 1250K  | 1429K   | 2500K   |
 
-
 ## Hardware Recommendations
 
 Currently, the community is trying to determine the best hardware configurations for batch sealing. Some general observations are:
@@ -131,7 +130,6 @@ CUDA 12.x is required, 11.x won't work. The build process depends on GCC 11.x sy
 * On newer Ubuntu install `gcc-11` and `g++-11` packages
 * In addition to general build dependencies (listed on the [installation page](installation.md)), you need `libgmp-dev` and `libconfig++-dev`
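A sketch of pulling in the packages named in the two bullets above on Ubuntu; the `update-alternatives` step is one assumed way to make GCC 11 the system-wide default the build expects:

```bash
sudo apt install gcc-11 g++-11 libgmp-dev libconfig++-dev
# assumption: the build picks up /usr/bin/gcc, so point it at gcc-11/g++-11
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-11 60
sudo update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-11 60
```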
 
-
 ### Building
 
 Build the batch-capable Curio binary:
@@ -163,7 +161,7 @@ env NRHUGE=36 ./scripts/setup.sh
 
 ### Benchmark NVME IOPS
 
-Please make sure to benchmark the raw NVME IOPS before proceeding with further configuration to verify that IOPS requirements are fulfilled.&#x20;
+Please make sure to benchmark the raw NVME IOPS before proceeding with further configuration to verify that IOPS requirements are fulfilled.
 
 ```bash
 cd extern/supraseal/deps/spdk-v24.05/
@@ -194,7 +192,6 @@ Total : 8006785.90 31276.51 71.91 1
 
 With ideally >10M IOPS total for all devices.
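The `Total` line quoted in the hunk header above is output from SPDK's bundled `perf` example. A sketch of an invocation that produces it, assuming SPDK's standard flags (queue depth, IO size in bytes, workload, duration in seconds); tune the values for your drives:

```bash
cd extern/supraseal/deps/spdk-v24.05/
# 4KiB random reads at queue depth 64 for 30s across all local NVMe devices
sudo ./build/examples/perf -q 64 -o 4096 -w randread -t 30
```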

-
 ### PC2 output storage
 
 Attach scratch space storage for PC2 output (the batch sealer needs \~70GB per sector in the batch: 32GiB for the sealed sector and 36GiB for the cache directory with TreeC/TreeR and aux files).
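A sketch of attaching that scratch space, assuming Curio exposes a lotus-miner-style storage attach subcommand; the command path and flags are assumptions, not confirmed by this diff:

```bash
# register a fast local path as seal scratch for PC2 output (hypothetical flags)
curio cli storage attach --init --seal /mnt/pc2-scratch
```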
@@ -432,6 +429,7 @@ cd extern/supraseal/deps/spdk-v24.05/
 ```
 
 Go through the menus like this
+
 ```
 NVMe Management Options
 [1: list controllers]
@@ -481,10 +479,10 @@ y
 ```
 
 Then you might see a difference in performance like this:
+
 ```
                                                         Latency(us)
 Device Information            :      IOPS      MiB/s    Average    min      max
 PCIE (0000:c1:00.0) NSID 1 from core 0:   721383.71    2817.91    88.68    11.20    591.51  ## before
 PCIE (0000:86:00.0) NSID 1 from core 0:  1205271.62    4708.09    53.07    11.87    446.84  ## after
 ```
-
