4 changes: 4 additions & 0 deletions .gitignore
@@ -15,3 +15,7 @@ Cargo.lock

# Ignore config files
**/*.json

# Figs
*.pdf
*.png
4 changes: 1 addition & 3 deletions Dockerfile
@@ -14,7 +14,5 @@ cp bin/release/driver exec
# https://hub.docker.com/_/debian/
FROM debian:stable-slim AS mastic
COPY --from=build /opt/mastic/exec /opt/mastic/bin
COPY --from=build /opt/mastic/src/configs/attribute-based-metrics.toml /opt/mastic/bin/
COPY --from=build /opt/mastic/src/configs/plain-metrics.toml /opt/mastic/bin/
COPY --from=build /opt/mastic/src/configs/weighted-heavy-hitters.toml /opt/mastic/bin/
COPY --from=build /opt/mastic/src/configs/*.toml /opt/mastic/bin/
WORKDIR /opt/mastic
22 changes: 21 additions & 1 deletion README.md
@@ -49,6 +49,9 @@ cargo 1.74.0
Next, build from sources using:
```bash
❯❯ cargo build --release

...
Finished `release` profile [optimized] target(s) in ...s
```

## Running
@@ -146,7 +149,7 @@ mode = "plain_metrics"
# ...
```

#### Plain Metrics with Prios: Aggregators
#### Plain Metrics with Prio: Aggregators
Run the aggregators in two separate shells. They will wait and be ready to
process client requests.
```bash
@@ -172,6 +175,23 @@ This branch can do Plain Heavy Hitters by setting the histogram size to 1, but a
more efficient implementation uses the `Count` circuit and is in the [`Count`
branch](https://github.com/TrustworthyComputing/mastic/tree/Count).

## Troubleshooting
Mastic relies on the [tarpc](https://github.com/google/tarpc) library, which has
a limit on the size of RPC messages. As such, you might see an error similar to
the following:
```shell
thread 'main' panicked at src/bin/driver.rs:335:
called `Result::unwrap()` on an `Err` value: Disconnected
```
This error is caused by RPC batch sizes that are too large.

To fix this, reduce the batch sizes of either the reports or the FLPs (or both).
```toml
add_report_share_batch_size = 1000
query_flp_batch_size = 100000
```
**Note:** the batch sizes do not affect the online running time, only the
upload time from the `driver` to the Mastic servers.

## Disclaimer

33 changes: 32 additions & 1 deletion ARTIFACT-EVALUATION.md → artifact/README.md
@@ -248,7 +248,10 @@ histogram as in the print message: `4 histogram buckets`. Each string e.g.,

Lastly, you can run both weighted heavy hitters and attribute-based metrics with malicious clients by passing the `--malicious` flag and a percentage.

### Experiments

# Experiments

## Understanding the configuration files
Our experiments can be reproduced using our config files (https://github.com/TrustworthyComputing/mastic/tree/main/src/configs) together with the parameter values provided in the paper.

For instance:
@@ -301,3 +304,31 @@ zipf_exponent = 1.03
```
etc. These parameters are sufficient to reproduce all our results -- all our
experiments in the paper specify the parameters used.
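
For reference, the snippet below reproduces one of the figure-3 configs shipped
with this artifact (`configs/figure-3/Mastic-m=10,n=128.toml`) with explanatory
comments added; the comments are explanatory only and are not part of the
shipped file:
```toml
# Weighted-heavy-hitters mode with a heavy-hitter threshold of 0.01.
mode.weighted_heavy_hitters.threshold = 0.01

data_bits = 128   # Length of each client input in bits ("n" in the figures).
hist_buckets = 9  # Number of histogram buckets.

# Addresses that the two aggregators listen on.
server_0 = "0.0.0.0:8000"
server_1 = "0.0.0.0:8001"

# RPC batch sizes (see the Troubleshooting section below).
add_report_share_batch_size = 50
query_flp_batch_size = 100000

# Zipf parameters for the synthetic client data.
zipf_unique_buckets = 1000
zipf_exponent = 1.03
```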

## Reproducing Experiments and Figures
To reproduce our experiments, use the configs from the [configs](./configs/)
directory and the scripts from the [plots](../plots/) directory.

## Troubleshooting
As mentioned in the **Troubleshooting** section of the [README](../README.md),
Mastic relies on the [tarpc](https://github.com/google/tarpc) library, which has
a limit on the size of RPC messages. As such, you might see an error similar to
the following:
```shell
thread 'main' panicked at src/bin/driver.rs:335:
called `Result::unwrap()` on an `Err` value: Disconnected
```
This error is caused by RPC batch sizes that are too large.

If you run into this issue, you can fix it by reducing the batch sizes of
either the reports or the FLPs (or both).
```toml
add_report_share_batch_size = 1000
query_flp_batch_size = 100000
```
**Note:** the batch sizes do not affect the online running time, only the
upload time from the `driver` to the Mastic servers. In other words, changing
them does not change the experiments themselves, only how quickly they are set
up. For this reason, most of the provided configs use the default batch sizes,
which may cause crashes with more clients or bits; this can be resolved simply
by reducing the batch sizes.
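For example, the figure-4 config for m = 30 in this artifact already ships with
reduced batch sizes compared to the other figure-4 configs:
```toml
# configs/figure-4/Mastic-m=30.toml
add_report_share_batch_size = 50
query_flp_batch_size = 10000
```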
14 changes: 14 additions & 0 deletions artifact/configs/figure-3/Mastic-m=1,n=128.toml
@@ -0,0 +1,14 @@
# For more efficiency, use the "Count" branch when m = 1.

mode.weighted_heavy_hitters.threshold = 0.01

data_bits = 128
hist_buckets = 1

server_0 = "0.0.0.0:8000"
server_1 = "0.0.0.0:8001"

add_report_share_batch_size = 100
query_flp_batch_size = 100000
zipf_unique_buckets = 1000
zipf_exponent = 1.03
14 changes: 14 additions & 0 deletions artifact/configs/figure-3/Mastic-m=1,n=256.toml
@@ -0,0 +1,14 @@
# For more efficiency, use the "Count" branch when m = 1.

mode.weighted_heavy_hitters.threshold = 0.01

data_bits = 256
hist_buckets = 1

server_0 = "0.0.0.0:8000"
server_1 = "0.0.0.0:8001"

add_report_share_batch_size = 100
query_flp_batch_size = 100000
zipf_unique_buckets = 1000
zipf_exponent = 1.03
14 changes: 14 additions & 0 deletions artifact/configs/figure-3/Mastic-m=1,n=64.toml
@@ -0,0 +1,14 @@
# For more efficiency, use the "Count" branch when m = 1.

mode.weighted_heavy_hitters.threshold = 0.01

data_bits = 64
hist_buckets = 1

server_0 = "0.0.0.0:8000"
server_1 = "0.0.0.0:8001"

add_report_share_batch_size = 100
query_flp_batch_size = 100000
zipf_unique_buckets = 1000
zipf_exponent = 1.03
12 changes: 12 additions & 0 deletions artifact/configs/figure-3/Mastic-m=10,n=128.toml
@@ -0,0 +1,12 @@
mode.weighted_heavy_hitters.threshold = 0.01

data_bits = 128
hist_buckets = 9

server_0 = "0.0.0.0:8000"
server_1 = "0.0.0.0:8001"

add_report_share_batch_size = 50
query_flp_batch_size = 100000
zipf_unique_buckets = 1000
zipf_exponent = 1.03
12 changes: 12 additions & 0 deletions artifact/configs/figure-3/Mastic-m=10,n=256.toml
@@ -0,0 +1,12 @@
mode.weighted_heavy_hitters.threshold = 0.01

data_bits = 256
hist_buckets = 9

server_0 = "0.0.0.0:8000"
server_1 = "0.0.0.0:8001"

add_report_share_batch_size = 50
query_flp_batch_size = 100000
zipf_unique_buckets = 1000
zipf_exponent = 1.03
12 changes: 12 additions & 0 deletions artifact/configs/figure-3/Mastic-m=10,n=64.toml
@@ -0,0 +1,12 @@
mode.weighted_heavy_hitters.threshold = 0.01

data_bits = 64
hist_buckets = 9

server_0 = "0.0.0.0:8000"
server_1 = "0.0.0.0:8001"

add_report_share_batch_size = 50
query_flp_batch_size = 100000
zipf_unique_buckets = 1000
zipf_exponent = 1.03
12 changes: 12 additions & 0 deletions artifact/configs/figure-3/Mastic-m=30,n=128.toml
@@ -0,0 +1,12 @@
mode.weighted_heavy_hitters.threshold = 0.01

data_bits = 128
hist_buckets = 29

server_0 = "0.0.0.0:8000"
server_1 = "0.0.0.0:8001"

add_report_share_batch_size = 50
query_flp_batch_size = 100000
zipf_unique_buckets = 1000
zipf_exponent = 1.03
12 changes: 12 additions & 0 deletions artifact/configs/figure-3/Mastic-m=30,n=256.toml
@@ -0,0 +1,12 @@
mode.weighted_heavy_hitters.threshold = 0.01

data_bits = 256
hist_buckets = 29

server_0 = "0.0.0.0:8000"
server_1 = "0.0.0.0:8001"

add_report_share_batch_size = 50
query_flp_batch_size = 100000
zipf_unique_buckets = 1000
zipf_exponent = 1.03
12 changes: 12 additions & 0 deletions artifact/configs/figure-3/Mastic-m=30,n=64.toml
@@ -0,0 +1,12 @@
mode.weighted_heavy_hitters.threshold = 0.01

data_bits = 64
hist_buckets = 29

server_0 = "0.0.0.0:8000"
server_1 = "0.0.0.0:8001"

add_report_share_batch_size = 50
query_flp_batch_size = 100000
zipf_unique_buckets = 1000
zipf_exponent = 1.03
12 changes: 12 additions & 0 deletions artifact/configs/figure-3/Mastic-m=5,n=128.toml
@@ -0,0 +1,12 @@
mode.weighted_heavy_hitters.threshold = 0.01

data_bits = 128
hist_buckets = 4

server_0 = "0.0.0.0:8000"
server_1 = "0.0.0.0:8001"

add_report_share_batch_size = 100
query_flp_batch_size = 100000
zipf_unique_buckets = 1000
zipf_exponent = 1.03
12 changes: 12 additions & 0 deletions artifact/configs/figure-3/Mastic-m=5,n=256.toml
@@ -0,0 +1,12 @@
mode.weighted_heavy_hitters.threshold = 0.01

data_bits = 256
hist_buckets = 4

server_0 = "0.0.0.0:8000"
server_1 = "0.0.0.0:8001"

add_report_share_batch_size = 100
query_flp_batch_size = 100000
zipf_unique_buckets = 1000
zipf_exponent = 1.03
12 changes: 12 additions & 0 deletions artifact/configs/figure-3/Mastic-m=5,n=64.toml
@@ -0,0 +1,12 @@
mode.weighted_heavy_hitters.threshold = 0.01

data_bits = 64
hist_buckets = 4

server_0 = "0.0.0.0:8000"
server_1 = "0.0.0.0:8001"

add_report_share_batch_size = 100
query_flp_batch_size = 100000
zipf_unique_buckets = 1000
zipf_exponent = 1.03
14 changes: 14 additions & 0 deletions artifact/configs/figure-4/Mastic-m=1.toml
@@ -0,0 +1,14 @@
# For more efficiency, use the "Count" branch when m = 1.

mode.weighted_heavy_hitters.threshold = 0.01

data_bits = 256
hist_buckets = 1

server_0 = "0.0.0.0:8000"
server_1 = "0.0.0.0:8001"

add_report_share_batch_size = 100
query_flp_batch_size = 100000
zipf_unique_buckets = 1000
zipf_exponent = 1.03
12 changes: 12 additions & 0 deletions artifact/configs/figure-4/Mastic-m=10.toml
@@ -0,0 +1,12 @@
mode.weighted_heavy_hitters.threshold = 0.01

data_bits = 256
hist_buckets = 9

server_0 = "0.0.0.0:8000"
server_1 = "0.0.0.0:8001"

add_report_share_batch_size = 100
query_flp_batch_size = 100000
zipf_unique_buckets = 1000
zipf_exponent = 1.03
12 changes: 12 additions & 0 deletions artifact/configs/figure-4/Mastic-m=30.toml
@@ -0,0 +1,12 @@
mode.weighted_heavy_hitters.threshold = 0.01

data_bits = 256
hist_buckets = 29

server_0 = "0.0.0.0:8000"
server_1 = "0.0.0.0:8001"

add_report_share_batch_size = 50
query_flp_batch_size = 10000
zipf_unique_buckets = 1000
zipf_exponent = 1.03
12 changes: 12 additions & 0 deletions artifact/configs/figure-4/Mastic-m=5.toml
@@ -0,0 +1,12 @@
mode.weighted_heavy_hitters.threshold = 0.01

data_bits = 256
hist_buckets = 4

server_0 = "0.0.0.0:8000"
server_1 = "0.0.0.0:8001"

add_report_share_batch_size = 100
query_flp_batch_size = 100000
zipf_unique_buckets = 1000
zipf_exponent = 1.03
14 changes: 14 additions & 0 deletions artifact/configs/figure-5/Mastic-m=1.toml
@@ -0,0 +1,14 @@
# For more efficiency, use the "Count" branch when m = 1.

mode.weighted_heavy_hitters.threshold = 0.01

data_bits = 256
hist_buckets = 1

server_0 = "0.0.0.0:8000"
server_1 = "0.0.0.0:8001"

add_report_share_batch_size = 100
query_flp_batch_size = 100000
zipf_unique_buckets = 1000
zipf_exponent = 1.03
12 changes: 12 additions & 0 deletions artifact/configs/figure-6/Mastic-A=1.toml
@@ -0,0 +1,12 @@
mode.attribute_based_metrics.num_attributes = 1

data_bits = 1
hist_buckets = 100

server_0 = "0.0.0.0:8000"
server_1 = "0.0.0.0:8001"

add_report_share_batch_size = 500
query_flp_batch_size = 10000
zipf_unique_buckets = 1000
zipf_exponent = 1.03
12 changes: 12 additions & 0 deletions artifact/configs/figure-6/Mastic-A=1024.toml
@@ -0,0 +1,12 @@
mode.attribute_based_metrics.num_attributes = 1024

data_bits = 10
hist_buckets = 100

server_0 = "0.0.0.0:8000"
server_1 = "0.0.0.0:8001"

add_report_share_batch_size = 50
query_flp_batch_size = 10000
zipf_unique_buckets = 1000
zipf_exponent = 1.03
12 changes: 12 additions & 0 deletions artifact/configs/figure-6/Mastic-A=128.toml
@@ -0,0 +1,12 @@
mode.attribute_based_metrics.num_attributes = 128

data_bits = 7
hist_buckets = 100

server_0 = "0.0.0.0:8000"
server_1 = "0.0.0.0:8001"

add_report_share_batch_size = 100
query_flp_batch_size = 100000
zipf_unique_buckets = 1000
zipf_exponent = 1.03
11 changes: 11 additions & 0 deletions artifact/configs/figure-6/Prio-A=1.toml
@@ -0,0 +1,11 @@
mode = "plain_metrics"

hist_buckets = 100

server_0 = "0.0.0.0:8000"
server_1 = "0.0.0.0:8001"

add_report_share_batch_size = 1000
query_flp_batch_size = 100000
zipf_unique_buckets = 1000
zipf_exponent = 1.03
11 changes: 11 additions & 0 deletions artifact/configs/figure-6/Prio-A=1024.toml
@@ -0,0 +1,11 @@
mode = "plain_metrics"

hist_buckets = 102400

server_0 = "0.0.0.0:8000"
server_1 = "0.0.0.0:8001"

add_report_share_batch_size = 5
query_flp_batch_size = 10000
zipf_unique_buckets = 1000
zipf_exponent = 1.03