
Proxy-based TLSN#1122

Open
th4s wants to merge 3 commits into feat/prf-ems-support from feat/proxy-approach

Conversation

Member

@th4s th4s commented Mar 10, 2026

This PR adds the proxy-based approach to TLSNotary.

  • Core config (crates/core): Extends TlsCommitProtocolConfig with a Proxy variant and adds ProxyTlsConfig
  • Dependency wiring (crates/tlsn/src/deps/): Replaces the old monolithic mpz.rs with a deps/ module that has ProverDeps and VerifierDeps enums, each with Mpc and Proxy variants, encapsulating setup/allocation logic per mode.
  • Proxy TLS client (crates/tlsn/src/prover/client/proxy/): A new ProxyTlsClient implementing the TlsClient trait. Uses custom rustls CryptoProvider wrappers (InterceptingKxGroup, InterceptingPrf) to intercept the pre-master secret, verify-data, and handshake hashes during a real TLS 1.2 handshake.
  • Proxy protocol logic (crates/tlsn/src/proxy/): Core proxy module with ProxyProver and ProxyVerifier (ZK-based PRF verification and key derivation), plus a TlsParser that parses raw TLS records to reconstruct the TlsTranscript.
  • Prover/Verifier refactoring: Both Prover and Verifier now branch on the protocol config to instantiate either MPC or proxy dependencies. Tag verification is moved out of the MPC client into the shared finish() path. ProverFuture gains a Finishing state to handle the now-async finalization.
  • Verifier proxy forwarding (crates/tlsn/src/verifier.rs): In proxy mode, the Verifier connects to the server, forwards TLS traffic bidirectionally, parses the TLS records, and then runs ZK verification of the verify_data.
  • Tests, examples, harness, WASM: Adds a proxy integration test, a proxy example, harness benchmark/test plugin support, and wasm bindings for proxy mode configuration.
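The config extension in the first bullet can be sketched as follows. This is a hypothetical shape for illustration only; the actual field names and variants of `TlsCommitProtocolConfig` and `ProxyTlsConfig` in crates/core will differ.

```rust
// Hypothetical sketch of the described config extension; real types differ.
#[derive(Debug, Clone)]
pub struct ProxyTlsConfig {
    /// Hostname of the TLS server the verifier proxies traffic to.
    pub server_name: String,
}

#[derive(Debug, Clone)]
pub enum TlsCommitProtocolConfig {
    /// Existing MPC-TLS mode.
    Mpc,
    /// New proxy mode carrying its own configuration.
    Proxy(ProxyTlsConfig),
}

impl TlsCommitProtocolConfig {
    /// Both Prover and Verifier branch on this to instantiate
    /// either MPC or proxy dependencies.
    pub fn is_proxy(&self) -> bool {
        matches!(self, TlsCommitProtocolConfig::Proxy(_))
    }
}
```

A single enum keeps mode selection in one place, so the `ProverDeps`/`VerifierDeps` wiring can branch on it without scattering mode checks.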

@th4s th4s force-pushed the feat/proxy-approach branch from 57f9b36 to 0eb1fe0 Compare March 11, 2026 10:39
@th4s th4s force-pushed the feat/prf-ems-support branch from 25c70f3 to d1e3098 Compare March 11, 2026 10:40
@th4s th4s force-pushed the feat/proxy-approach branch 3 times, most recently from 3182609 to 3e68ecd Compare March 12, 2026 11:42
@th4s th4s force-pushed the feat/proxy-approach branch from 6352912 to ddf2b04 Compare March 12, 2026 12:11
@th4s th4s marked this pull request as ready for review March 12, 2026 12:38
@th4s th4s requested review from heeckhau, sinui0 and themighty1 March 12, 2026 12:39
Member Author

th4s commented Mar 12, 2026

Benches

Config used: bench.toml (or bench_proxy.toml)

[[group]]
name = "cable"
bandwidth = 20
protocol_latency = 20
upload-size = 1024
download-size = 2048

[[bench]]
group = "cable"


[[group]]
name = "mobile_5g"
bandwidth = 30
protocol_latency = 30
upload-size = 1024
download-size = 2048

[[bench]]
group = "mobile_5g"


[[group]]
name = "fiber"
bandwidth = 100
protocol_latency = 15
upload-size = 1024
download-size = 2048

[[bench]]
group = "fiber"

Results

MPC Native

cable [mpc] (20 Mbps, 20ms latency, 1KB↑ 2KB↓):
Median: 14.51s

fiber [mpc] (100 Mbps, 15ms latency, 1KB↑ 2KB↓):
Median: 3.65s

mobile_5g [mpc] (30 Mbps, 30ms latency, 1KB↑ 2KB↓):
Median: 10.38s

MPC Browser

cable [mpc] (20 Mbps, 20ms latency, 1KB↑ 2KB↓):
Median: 15.79s

fiber [mpc] (100 Mbps, 15ms latency, 1KB↑ 2KB↓):
Median: 4.98s

mobile_5g [mpc] (30 Mbps, 30ms latency, 1KB↑ 2KB↓):
Median: 11.64s

Proxy Native

cable_proxy [proxy] (20 Mbps, 20ms latency, 1KB↑ 2KB↓):
Median: 1.57s

fiber_proxy [proxy] (100 Mbps, 15ms latency, 1KB↑ 2KB↓):
Median: 0.94s

mobile_5g_proxy [proxy] (30 Mbps, 30ms latency, 1KB↑ 2KB↓):
Median: 1.55s

Proxy Browser

cable_proxy [proxy] (20 Mbps, 20ms latency, 1KB↑ 2KB↓):
Median: 2.04s

fiber_proxy [proxy] (100 Mbps, 15ms latency, 1KB↑ 2KB↓):
Median: 1.40s

mobile_5g_proxy [proxy] (30 Mbps, 30ms latency, 1KB↑ 2KB↓):
Median: 1.95s

@th4s
Copy link
Copy Markdown
Member Author

th4s commented Mar 12, 2026

TODOs before merge

Comment on lines +121 to +129
tls_connection
.write_all(b"GET / HTTP/1.1\r\nConnection: close\r\n\r\n")
.await
.unwrap();

let mut response = Vec::new();
tls_connection.read_to_end(&mut response).await.unwrap();

tls_connection.close().await.unwrap();
Member Author

@th4s th4s Mar 12, 2026


Ideally we want to write, close, and then read. I did not find the right way to do this, so for now it is write, read, then close.

The problem: even though the packets arrive in the correct order on the server side (I checked), first app_data and then the close_notify alert, the server parses them in one go, and because it sees the close_notify it does NOT answer the app_data request.

I suspect the problem is not visible in the MPC test because the network topology adds async latency, so the server parses the records one after the other and sends a response.

It is not clear what the best approach is. Introducing artificial time-based delays seems wrong and would not be a proper solution. I think the right fix is on the server side, but the question is how we want to handle this on the client side.
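For comparison, the desired write-close-read pattern works at the plain TCP level via a half-close (shutting down only the write side), assuming the server reads to EOF before answering. This sketch is TCP only; at the TLS layer the analogue would be sending close_notify after the request, which is exactly where the server-side parsing issue described above shows up.

```rust
// TCP-level sketch of write -> half-close -> read. Plain sockets, no TLS.
use std::io::{Read, Write};
use std::net::{Shutdown, TcpListener, TcpStream};
use std::thread;

fn half_close_roundtrip() -> Vec<u8> {
    let listener = TcpListener::bind("127.0.0.1:0").unwrap();
    let addr = listener.local_addr().unwrap();

    // Server: read the whole request (the client's half-close yields EOF),
    // then answer. A server that bailed out on seeing the close would never
    // reach the write, mirroring the close_notify problem above.
    let server = thread::spawn(move || {
        let (mut conn, _) = listener.accept().unwrap();
        let mut req = Vec::new();
        conn.read_to_end(&mut req).unwrap();
        conn.write_all(b"response").unwrap();
    });

    let mut client = TcpStream::connect(addr).unwrap();
    client.write_all(b"GET / HTTP/1.1\r\nConnection: close\r\n\r\n").unwrap();
    // Half-close: no more writes, but the read side stays open.
    client.shutdown(Shutdown::Write).unwrap();

    let mut resp = Vec::new();
    client.read_to_end(&mut resp).unwrap();
    server.join().unwrap();
    resp
}
```

The key point is that a TCP half-close still lets the client read the response; a TLS close_notify has no standard half-close analogue, so the server must be willing to process buffered app_data records before acting on the alert.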
