Conversation
Benches

Config used: default. Results:

MPC Native
- cable [mpc] (20 Mbps, 20ms latency, 1KB↑ 2KB↓):
- fiber [mpc] (100 Mbps, 15ms latency, 1KB↑ 2KB↓):
- mobile_5g [mpc] (30 Mbps, 30ms latency, 1KB↑ 2KB↓):

MPC Browser
- cable [mpc] (20 Mbps, 20ms latency, 1KB↑ 2KB↓):
- fiber [mpc] (100 Mbps, 15ms latency, 1KB↑ 2KB↓):
- mobile_5g [mpc] (30 Mbps, 30ms latency, 1KB↑ 2KB↓):

Proxy Native
- cable_proxy [proxy] (20 Mbps, 20ms latency, 1KB↑ 2KB↓):
- fiber_proxy [proxy] (100 Mbps, 15ms latency, 1KB↑ 2KB↓):
- mobile_5g_proxy [proxy] (30 Mbps, 30ms latency, 1KB↑ 2KB↓):

Proxy Browser
- cable_proxy [proxy] (20 Mbps, 20ms latency, 1KB↑ 2KB↓):
- fiber_proxy [proxy] (100 Mbps, 15ms latency, 1KB↑ 2KB↓):
- mobile_5g_proxy [proxy] (30 Mbps, 30ms latency, 1KB↑ 2KB↓):
TODOs before merge
```rust
tls_connection
    .write_all(b"GET / HTTP/1.1\r\nConnection: close\r\n\r\n")
    .await
    .unwrap();

let mut response = vec![0u8; 1024];
tls_connection.read_to_end(&mut response).await.unwrap();

tls_connection.close().await.unwrap();
```
Ideally we want to write, close, and then read. I did not find the right way to do this, so it is left as write, read, close for now.

The problem: even though the packets arrive on the server side in the correct order (I checked), namely app_data first and then the close_notify alert, the server parses them in one go, and because it sees the close_notify it does NOT answer the app_data request.

The problem is not visible in the MPC test, I assume because the network topology adds asynchronous latency overhead: there the server parses the records one after the other and thus sends a response.

It is not clear what the best approach is. Introducing artificial time-based delays seems wrong and would not be a proper solution. I think the right fix is on the server side, but the question is how we want to handle this on the client side.
This PR adds the proxy-based approach to TLSNotary.
- `crates/core`: extends `TlsCommitProtocolConfig` with a `Proxy` variant and adds `ProxyTlsConfig`.
- `crates/tlsn/src/deps/`: replaces the old monolithic `mpz.rs` with a `deps/` module containing `ProverDeps` and `VerifierDeps` enums, each with Mpc and Proxy variants, encapsulating setup/allocation logic per mode.
- `crates/tlsn/src/prover/client/proxy/`: a new `ProxyTlsClient` implementing the `TlsClient` trait. It uses custom rustls `CryptoProvider` wrappers (`InterceptingKxGroup`, `InterceptingPrf`) to intercept the pre-master secret, verify-data, and handshake hashes during a real TLS 1.2 handshake.
- `crates/tlsn/src/proxy/`: the core proxy module with `ProxyProver` and `ProxyVerifier` (ZK-based PRF verification and key derivation), plus a `TlsParser` that parses raw TLS records to reconstruct the `TlsTranscript`. `ProverFuture` gains a Finishing state to handle the now-async finalization.
- `crates/tlsn/src/verifier.rs`: in proxy mode, the Verifier connects to the server, forwards TLS traffic bidirectionally, parses the TLS records, and then runs ZK verification of the `verify_data`.