
Normalise rpc debug trace format #839

Open · wants to merge 17 commits into base: main

Changes from 11 commits
5 changes: 5 additions & 0 deletions .changeset/famous-ties-know.md
@@ -0,0 +1,5 @@
---
"@nomicfoundation/edr": minor
---

Normalise JSON-RPC format for rpcDebugTrace on the EDR side.
30 changes: 30 additions & 0 deletions crates/edr_napi/index.d.ts
@@ -141,6 +141,36 @@ export interface DebugTraceLogItem {
/** Map of all stored values with keys and values encoded as hex strings. */
storage?: Record<string, string>
}
export interface RpcDebugTraceResult {
failed: boolean
gas: bigint
pass: boolean
gasUsed: bigint
returnValue: string
structLogs: Array<RpcDebugTraceLogItem>
}
export interface RpcDebugTraceLogItem {
/** Program Counter */
pc: bigint
/** Name of the operation */
op: string
/** Gas left before executing this operation as hex number. */
gas: bigint
/** Gas cost of this operation as hex number. */
gasCost: bigint
/** Array of all values (hex numbers) on the stack */
stack?: Array<string>
/** Depth of the call stack */
depth: bigint
/** Size of memory array */
memSize: bigint
/** Description of an error as a hex string. */
error?: string
/** Array of all allocated values as hex strings. */
memory?: Array<string>
/** Map of all stored values with keys and values encoded as hex strings. */
storage?: Record<string, string>
}
/** Ethereum execution log. */
export interface ExecutionLog {
address: Buffer
34 changes: 34 additions & 0 deletions crates/edr_napi/src/debug_trace.rs
@@ -36,3 +36,37 @@ pub struct DebugTraceLogItem {
/// Map of all stored values with keys and values encoded as hex strings.
pub storage: Option<HashMap<String, String>>,
}

#[napi(object)]
pub struct RpcDebugTraceResult {
Member:
As this type is no longer part of the API, do we really need to expose it?
I assume Hardhat has its own definition, given that it's a standard.
cc: @fvictorio

Author:
You are right, Hardhat has its own definition.

pub failed: bool,
pub gas: BigInt,
pub pass: bool,
pub gas_used: BigInt,
pub return_value: String,
pub struct_logs: Vec<RpcDebugTraceLogItem>,
}
Member:
Could you please add documentation for the struct and its pub member fields?

Author:
Done.
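For reference, a documented version of the struct could look like the sketch below. The doc comments added in the later commit are not visible at this point in the diff, so the wording here is an approximation inferred from the normalisation logic and the matching TypeScript interface above.

```rust
/// Hypothetical doc comments; the wording in the actual follow-up commit may differ.
#[napi(object)]
pub struct RpcDebugTraceResult {
    /// Whether the transaction failed.
    pub failed: bool,
    /// Total gas used by the transaction (mirrors `gas_used` under the legacy field name).
    pub gas: BigInt,
    /// Whether the transaction succeeded (the inverse of `failed`).
    pub pass: bool,
    /// Total gas used by the transaction.
    pub gas_used: BigInt,
    /// Return value of the transaction or call, as a hex string.
    pub return_value: String,
    /// One entry per executed opcode.
    pub struct_logs: Vec<RpcDebugTraceLogItem>,
}
```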


#[napi(object)]
pub struct RpcDebugTraceLogItem {
Member:
As this type is no longer part of the API, do we really need to expose it?
I assume Hardhat has its own definition, given that it's a standard.
cc: @fvictorio

/// Program Counter
pub pc: BigInt,
/// Name of the operation
pub op: String,
/// Gas left before executing this operation as hex number.
pub gas: BigInt,
/// Gas cost of this operation as hex number.
pub gas_cost: BigInt,
/// Array of all values (hex numbers) on the stack
pub stack: Option<Vec<String>>,
/// Depth of the call stack
pub depth: BigInt,
/// Size of memory array
pub mem_size: BigInt,
/// Description of an error as a hex string.
pub error: Option<String>,
/// Array of all allocated values as hex strings.
pub memory: Option<Vec<String>>,
/// Map of all stored values with keys and values encoded as hex strings.
pub storage: Option<HashMap<String, String>>,
}
117 changes: 113 additions & 4 deletions crates/edr_napi/src/provider.rs
@@ -2,7 +2,10 @@ mod config;

use std::sync::Arc;

use edr_provider::{time::CurrentTime, InvalidRequestReason};
use edr_evm::DebugTraceResult;
use edr_provider::{
time::CurrentTime, InvalidRequestReason, RpcDebugTraceLogItem, RpcDebugTraceResult,
};
use edr_rpc_eth::jsonrpc;
use edr_solidity::contract_decoder::ContractDecoder;
use napi::{
@@ -102,6 +105,9 @@ impl Provider {
#[doc = "Handles a JSON-RPC request and returns a JSON-RPC response."]
#[napi]
pub async fn handle_request(&self, json_request: String) -> napi::Result<Response> {
let method_name = serde_json::from_str::<JsonRpcMethodExtractor>(&json_request)
.map_or_else(|_| "".to_string(), |m| m.method);
Member:
Suggested change:
-     .map_or_else(|_| "".to_string(), |m| m.method);
+     .map_or_else(|_| String::new(), |m| m.method);

Author:
Removed this as well.


let provider = self.provider.clone();
let request = match serde_json::from_str(&json_request) {
Ok(request) => request,
@@ -204,16 +210,48 @@ impl Provider {
}
})
.map_err(|error| napi::Error::new(Status::GenericFailure, error.to_string()))
.map(|data| {
.and_then(|data| {
let solidity_trace = solidity_trace.map(|trace| SolidityTraceData {
trace,
contract_decoder: Arc::clone(&self.contract_decoder),
});
Response {

// Normalise JSON-RPC format for rpcDebugTrace
let data = if method_name == "debug_traceTransaction"
|| method_name == "debug_traceCall"
{
let parsed_data: Result<DebugTraceResult, _> = match &data {
Either::A(json_string) => {
serde_json::from_str::<JsonRpcResponse<DebugTraceResult>>(json_string)
.map(|r| r.result)
}
Either::B(json_value) => serde_json::from_value::<
JsonRpcResponse<DebugTraceResult>,
>(json_value.clone())
.map(|r| r.result),
};
Member:
It is incredibly inefficient to:

  1. serialize the result to JSON (string or object)
  2. deserialize the result from JSON (string or object)
  3. normalize the result
  4. serialize the result again

Could you please investigate making this change one level up, before we first serialize the result? That would be in edr_provider.

Author:
I have moved the normalisation into edr_provider.
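To make the reviewer's point concrete, here is a toy Rust sketch of the difference; the types are simplified stand-ins, not the real EDR types. When the normalisation happens on the typed value inside edr_provider, the response is serialized exactly once, with no intermediate parse.

```rust
use serde::Serialize;

// Simplified stand-ins for illustration only; not the real EDR types.
struct EvmTrace {
    pass: bool,
    gas_used: u64,
}

#[derive(Serialize)]
struct RpcTrace {
    failed: bool,
    gas: u64,
}

// Normalise the in-memory value directly...
fn normalise(trace: EvmTrace) -> RpcTrace {
    RpcTrace {
        failed: !trace.pass,
        gas: trace.gas_used,
    }
}

fn main() -> Result<(), serde_json::Error> {
    let trace = EvmTrace { pass: true, gas_used: 21_000 };

    // ...and serialize once. Compare with the napi-side code above, which goes
    // serialize -> deserialize -> normalise -> serialize again.
    let response = serde_json::to_string(&normalise(trace))?;
    println!("{response}");
    Ok(())
}
```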


match parsed_data {
Ok(trace_result) => {
let transformed = normalise_rpc_debug_trace(trace_result)
.map_err(|e| napi::Error::new(Status::GenericFailure, e))?;

let transformed = JsonRpcResponse {
result: transformed,
};
Either::B(serde_json::to_value(transformed)?)
}
Err(_) => data,
}
Member:
Could you please split this into a function with a descriptive name?

Author (@AmosAidoo, Mar 25, 2025):
No need for these lines now after moving normalisation to edr_provider.

} else {
data
};

Ok(Response {
solidity_trace,
data,
traces: traces.into_iter().map(Arc::new).collect(),
}
})
})
}

@@ -314,6 +352,16 @@ struct SolidityTraceData {
contract_decoder: Arc<ContractDecoder>,
}

#[derive(serde::Deserialize, serde::Serialize)]
struct JsonRpcResponse<T> {
result: T,
}

#[derive(serde::Deserialize)]
struct JsonRpcMethodExtractor {
method: String,
}

#[napi]
pub struct Response {
// N-API is known to be slow when marshalling `serde_json::Value`s, so we try to return a
@@ -374,3 +422,64 @@ impl Response {
}
}
}

// Rust port of https://github.com/NomicFoundation/hardhat/blob/024d72b09c6edefb00c012e9514a0948c255d0ab/v-next/hardhat/src/internal/builtin-plugins/network-manager/edr/utils/convert-to-edr.ts#L176
fn normalise_rpc_debug_trace(trace: DebugTraceResult) -> Result<serde_json::Value, String> {
let mut struct_logs = Vec::new();

for log in trace.logs {
let rpc_log = RpcDebugTraceLogItem {
pc: log.pc,
op: log.op_name.clone(),
op_name: log.op_name,
gas: u64::from_str_radix(log.gas.trim_start_matches("0x"), 16).unwrap_or(0),
gas_cost: u64::from_str_radix(log.gas_cost.trim_start_matches("0x"), 16).unwrap_or(0),
stack: log.stack.map(|values| {
values
.into_iter()
// Removing this trim temporarily as the Hardhat test assumes 0x is there
// .map(|value| value.trim_start_matches("0x").to_string())
.collect()
}),
depth: log.depth,
mem_size: log.mem_size,
error: log.error,
memory: log.memory,
storage: log.storage.map(|storage| {
storage
.into_iter()
// Removing this trim temporarily as the Hardhat test assumes 0x is there
// .map(|(key, value)| {
// let stripped_key = key.strip_prefix("0x").unwrap_or(&key).to_string();
// let stripped_value =
// value.strip_prefix("0x").unwrap_or(&value).to_string();
// (stripped_key, stripped_value)
// })
.collect()
}),
};

struct_logs.push(rpc_log);
}

// REVM trace adds initial STOP that Hardhat doesn't expect
if !struct_logs.is_empty() && struct_logs[0].op == "STOP" {
struct_logs.remove(0);
}

let return_value = trace
.output
.map(|b| b.to_string().trim_start_matches("0x").to_string())
.unwrap_or_default();

let rpc_debug_trace = RpcDebugTraceResult {
failed: !trace.pass,
gas: trace.gas_used,
pass: trace.pass,
gas_used: trace.gas_used,
return_value,
struct_logs,
};

serde_json::to_value(rpc_debug_trace).map_err(|e| e.to_string())
}
145 changes: 145 additions & 0 deletions crates/edr_napi/test/provider.ts
@@ -5,6 +5,8 @@ import {
ContractAndFunctionName,
MineOrdering,
Provider,
RpcDebugTraceLogItem,
RpcDebugTraceResult,
SpecId,
SubscriptionEvent,
} from "..";
@@ -405,6 +407,104 @@ describe("Provider", () => {
const rawTraces = traceCallResponse.traces;
assert.lengthOf(rawTraces, 1);
});

it("should have its JSON-RPC format normalised when debug_traceTransaction is used", async function () {
const provider = await Provider.withConfig(
context,
providerConfig,
loggerConfig,
{},
(_event: SubscriptionEvent) => {}
);

const sendTxResponse = await provider.handleRequest(
JSON.stringify({
id: 1,
jsonrpc: "2.0",
method: "eth_sendTransaction",
params: [
{
from: "0xf39fd6e51aad88f6f4ce6ab8827279cfffb92266",
// PUSH1 0x42
// PUSH0
// MSTORE
// PUSH1 0x20
// PUSH0
// RETURN
data: "0x60425f5260205ff3",
gas: "0x" + 1_000_000n.toString(16),
},
],
})
);

let responseData;

if (typeof sendTxResponse.data === "string") {
responseData = JSON.parse(sendTxResponse.data);
} else {
responseData = sendTxResponse.data;
}

const txHash = responseData.result;

const traceTransactionResponse = await provider.handleRequest(
JSON.stringify({
id: 1,
jsonrpc: "2.0",
method: "debug_traceTransaction",
params: [txHash],
})
);

let edrTrace: RpcDebugTraceResult;
if (typeof traceTransactionResponse.data === "string") {
edrTrace = JSON.parse(traceTransactionResponse.data).result;
} else {
edrTrace = traceTransactionResponse.data.result;
}

assertJsonRpcFormatNormalised(edrTrace);
});

it("should have its JSON-RPC format normalised when debug_traceCall is used", async function () {
const provider = await Provider.withConfig(
context,
providerConfig,
loggerConfig,
{},
(_event: SubscriptionEvent) => {}
);

const traceCallResponse = await provider.handleRequest(
JSON.stringify({
id: 1,
jsonrpc: "2.0",
method: "debug_traceCall",
params: [
{
from: "0xf39fd6e51aad88f6f4ce6ab8827279cfffb92266",
// PUSH1 0x42
// PUSH0
// MSTORE
// PUSH1 0x20
// PUSH0
// RETURN
data: "0x60425f5260205ff3",
gas: "0x" + 1_000_000n.toString(16),
},
],
})
);

let edrTrace: RpcDebugTraceResult;
if (typeof traceCallResponse.data === "string") {
edrTrace = JSON.parse(traceCallResponse.data).result;
} else {
edrTrace = traceCallResponse.data.result;
}
assertJsonRpcFormatNormalised(edrTrace);
});
});
});

@@ -415,3 +515,48 @@ function assertEqualMemory(stepMemory: Buffer | undefined, expected: Buffer) {

assert.isTrue(stepMemory.equals(expected));
}

function assertJsonRpcFormatNormalised(trace: RpcDebugTraceResult) {
Member:
I think these assertions can happen without knowing the type RpcDebugTraceResult, but I'm not 100% sure. If so, we can just remove the type from EDR's N-API, reducing our API surface.

assert.isBoolean(trace.failed);
assert.typeOf(trace.gas, "number");
assert.isString(trace.returnValue);
assert.isArray(trace.structLogs);

trace.structLogs.forEach((log: RpcDebugTraceLogItem) => {
assert.typeOf(log.pc, "number");
assert.typeOf(log.op, "string");
assert.typeOf(log.gas, "number");
assert.typeOf(log.gasCost, "number");
assert.typeOf(log.depth, "number");
assert.typeOf(log.memSize, "number");

if ("stack" in log) {
assert.isArray(log.stack);
log.stack?.forEach((item) => {
assert.isString(item);
// assert.isFalse(item.startsWith("0x"));
});
}

if ("memory" in log) {
assert.isArray(log.memory);
log.memory?.forEach((item) => {
assert.isString(item);
});
}

if ("storage" in log) {
assert.isObject(log.storage);
Object.entries(log.storage!).forEach(([key, value]) => {
assert.isString(key);
assert.isString(value);
// assert.isFalse(key.startsWith("0x"));
// assert.isFalse(value.startsWith("0x"));
Comment on lines +551 to +552

Member:
Why are these two disabled?

Author:
I disabled these two because, at this moment, the normalisation is still being done in Hardhat, so it is still correct to have 0x at the start of the strings.
Here is the reference in Hardhat: https://github.com/NomicFoundation/hardhat/blob/024d72b09c6edefb00c012e9514a0948c255d0ab/v-next/hardhat/src/internal/builtin-plugins/network-manager/edr/utils/convert-to-edr.ts#L197-L203

The Hardhat tests also operate on this assumption, so I believe these two asserts should stay disabled until the normalisation is completely removed from Hardhat.

Member:
There is a process for incorporating changes to Hardhat into your PR. It's explained here. That's what would be required in order to test your changes in Hardhat and make sure everything is working as expected.

Author:
Ah, perfect. Thank you.

});
}

if ("error" in log) {
assert.isString(log.error);
}
});
}