Conversation
📝 Walkthrough: Adds a new EvalTx RPC handler to the UTXORPC submit service.
🚥 Pre-merge checks: ✅ 4 passed.
Actionable comments posted: 10
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
go.sum is excluded by !**/*.sum
📒 Files selected for processing (2)
go.mod (3 hunks), internal/utxorpc/submit.go (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
internal/utxorpc/submit.go (1)
internal/node/node.go (1)
GetConnection(31-75)
🪛 GitHub Actions: golangci-lint
internal/utxorpc/submit.go
[error] 256-256: golangci-lint: Consider pre-allocating txEvalRedeemers (prealloc)
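The prealloc pattern the linter asks for is just sizing the destination slice up front when the final length (or a good capacity bound) is known. A minimal illustrative sketch, not code from this PR:

```go
package main

import "fmt"

// doubleAll returns a new slice with every element doubled, pre-allocating
// the destination so append never has to reallocate as the slice grows.
func doubleAll(src []int) []int {
	out := make([]int, 0, len(src)) // capacity known up front
	for _, v := range src {
		out = append(out, v*2)
	}
	return out
}

func main() {
	fmt.Println(doubleAll([]int{1, 2, 3, 4})) // [2 4 6 8]
}
```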
🪛 GitHub Actions: nilaway
internal/utxorpc/submit.go
[error] 229-229: nilaway: Potential nil panic detected. Observed nil flow from source to dereference point at internal/utxorpc/submit.go:229.
🪛 GitHub Check: lint
internal/utxorpc/submit.go
[failure] 91-91:
evaluateScript - costModel is unused (unparam)
[failure] 256-256:
Consider pre-allocating txEvalRedeemers (prealloc)
🪛 GitHub Check: nilaway
internal/utxorpc/submit.go
[failure] 257-257:
error: Potential nil panic detected. Observed nil flow from source to dereference point:
[failure] 229-229:
error: Potential nil panic detected. Observed nil flow from source to dereference point:
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: cubic · AI code reviewer
- GitHub Check: Analyze (go)
```go
// Get protocol parameters for cost models
oConn.LocalStateQuery().Client.Start()
protoParams, err := oConn.LocalStateQuery().Client.GetCurrentProtocolParams()
if err != nil {
	return connect.NewResponse(resp), err
}
```
Missing error handling for LocalStateQuery().Client.Start().
The Start() call may fail and should have its return value checked, similar to how other client operations are handled.
```diff
  // Get protocol parameters for cost models
- oConn.LocalStateQuery().Client.Start()
+ if err := oConn.LocalStateQuery().Client.Start(); err != nil {
+     return connect.NewResponse(resp), fmt.Errorf("start local state query: %w", err)
+ }
  protoParams, err := oConn.LocalStateQuery().Client.GetCurrentProtocolParams()
```
🤖 Prompt for AI Agents
In internal/utxorpc/submit.go around lines 220 to 225, the call
oConn.LocalStateQuery().Client.Start() is not checking its returned error;
update the code to capture the error from Start(), check if it's non-nil, and
return the same response/error pattern used elsewhere (e.g., return
connect.NewResponse(resp), err) so startup failures are propagated and handled
consistently.
2 issues found across 3 files
Prompt for AI agents (all 2 issues)
Check if these issues are valid — if so, understand the root cause of each and fix them.
<file name="internal/utxorpc/submit.go">
<violation number="1" location="internal/utxorpc/submit.go:156">
P1: Calling `Int64()` on a big integer can silently truncate values that exceed int64 range. Check if the value fits in int64 first, and use `BigInt_BigUInt` or `BigInt_BigNInt` for larger values.</violation>
<violation number="2" location="internal/utxorpc/submit.go:269">
P1: The redeemer index does not correspond to the script index in `allScripts`. The redeemer index refers to the input/policy index being validated, not the script position in the witness set. Scripts should be matched by their hash, not by array position.</violation>
</file>
Reply to cubic to teach it or ask questions. Re-run a review with @cubic-dev-ai review this PR
Force-pushed 3207ff0 to 75e427c
@cubic-dev-ai review this PR
@wolf31o2 I've started the AI code review. It'll take a few minutes to complete.
3 issues found across 8 files
Prompt for AI agents (all 3 issues)
Check if these issues are valid — if so, understand the root cause of each and fix them.
<file name="internal/utxorpc/submit.go">
<violation number="1" location="internal/utxorpc/submit.go:174">
P1: Potential integer overflow: `v.Inner.Int64()` will silently truncate big integers that exceed int64 range. Consider using `BigInt_BigUInt` or `BigInt_BigNInt` for arbitrary precision, or at minimum check if the value fits with `v.Inner.IsInt64()` before conversion.</violation>
<violation number="2" location="internal/utxorpc/submit.go:315">
P1: Incorrect script lookup: Redeemer indices are per-purpose and don't directly map to the concatenated `allScripts` array. Scripts should be looked up by their hash matching the script reference in the transaction input/policy, not by redeemer index.</violation>
<violation number="3" location="internal/utxorpc/submit.go:325">
P1: Incorrect datum source: For spending scripts, `datum` is incorrectly set to `value.Data.Data` (the redeemer). The datum should be retrieved from the UTxO being spent at the index specified by `key.Index`, not from the redeemer itself.</violation>
</file>
Reply to cubic to teach it or ask questions. Re-run a review with @cubic-dev-ai review this PR
Force-pushed 75e427c to c194d6c
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@go.mod`:
- Line 44: The go-ethereum dependency in go.mod is pinned to v1.16.7 which
contains HIGH severity DoS vulnerabilities; update the go-ethereum module to
v1.16.8 and ensure its consumer plutigo/cek is bumped (or resolves) to a release
that pulls v1.16.8 (suggest changing plutigo/cek to v0.0.21 or run go get to
upgrade transitive deps), then run go mod tidy and verify go.sum to ensure the
CVE-fixed version is present; target the go.mod entry for
github.com/ethereum/go-ethereum and the plutigo/cek requirement to locate and
update the versions.
In `@internal/utxorpc/submit.go`:
- Around line 185-207: The Integer branch in Submit's conversion incorrectly
serializes negative big integers as BigUInt; update the case in
internal/utxorpc/submit.go (the case *data.Integer block) to detect negative
values (e.g., v.Inner.Sign() < 0) and, when the value doesn't fit in int64,
return a cardano.PlutusData with BigInt using the BigNInt variant
(cardano.BigInt_BigNInt) and supply the appropriate byte representation for the
negative big.Int; keep the existing int64 path and positive BigUInt path, and
add the "math/big" import as noted.
♻️ Duplicate comments (3)
internal/utxorpc/submit.go (3)
270-275: Missing error handling for LocalStateQuery().Client.Start().

The Start() call may fail and its return value should be checked.

Proposed fix:

```diff
  // Get protocol parameters for cost models
- oConn.LocalStateQuery().Client.Start()
+ if err := oConn.LocalStateQuery().Client.Start(); err != nil {
+     return connect.NewResponse(resp), fmt.Errorf("start local state query: %w", err)
+ }
  protoParams, err := oConn.LocalStateQuery().Client.GetCurrentProtocolParams()
```
389-392: Script lookup logic remains incorrect.

The code uses the sorted redeemer index i to look up scripts from allScripts, but redeemer indices refer to input/policy positions, not script array positions. Scripts should be matched by their hash against the script reference in the UTxO being spent or the policy ID being minted. This will cause incorrect script-redeemer pairing when the transaction has multiple scripts or when scripts appear in different orders.
394-417: Placeholder ScriptContext will produce incorrect evaluation results.

The contextData is a dummy empty constructor (data.NewConstr(0)) rather than a proper Plutus ScriptContext containing transaction information, inputs, outputs, and signatories. Scripts that inspect the context will fail or produce incorrect ExUnits estimates. Consider either:
- Building a proper ScriptContext structure from transaction data
- Returning an explicit error indicating that full script evaluation is not yet supported
- Adding prominent documentation that ExUnits estimates may be inaccurate
🧹 Nitpick comments (1)
internal/utxorpc/submit.go (1)
419-428: Dead code: versionCostModel is built but never used.

The cost model is extracted from protocol parameters and filtered by version, but it's never passed to evaluateScript. The function currently uses a hardcoded defaultSlippage constant instead of the actual protocol cost model. Either remove the unused versionCostModel construction, or integrate it into evaluateScript for accurate budget calculation. The current implementation may produce inaccurate ExUnits estimates.

```diff
- // Filter cost model for this version
- versionCostModel := make(map[uint][]int64)
- if model, ok := costModels[version]; ok {
-     versionCostModel[version] = model
- }
-
  steps, memory, err := evaluateScript(
      scriptBytes,
      args,
  )
```
Force-pushed c194d6c to 63bbe8d
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@internal/utxorpc/submit.go`:
- Around line 405-408: The code dereferences tx.Inputs()[key.Index] without
bounds checking which can panic for malformed transactions; before assigning
input := tx.Inputs()[key.Index] add a guard that validates key.Index is within 0
<= key.Index < len(tx.Inputs()) and if not return connect.NewResponse(resp) with
a clear error (e.g., "redeemer index out of range" including key.Index and
len(tx.Inputs())); keep the subsequent resolvedUtxos lookup unchanged.
- Around line 117-121: The code fetches costModels but never applies them—update
evaluateScript to accept the fetched costModels (or a typed wrapper) and pass
them from the caller; inside evaluateScript, replace the call that constructs
the CEK machine with cek.NewMachineWithVersionCosts[syn.DeBruijn](...) by a
constructor that applies the provided cost models (or use a new helper to build
a machine from costModels), and if the provided costModels are
incompatible/unsupported, return an error immediately (fail fast). Specifically
modify the call sites that currently call evaluateScript and the machine
creation site (cek.NewMachineWithVersionCosts[syn.DeBruijn]) so the costModels
variable is threaded through and used to configure the machine rather than
relying only on version defaults.
♻️ Duplicate comments (3)
internal/utxorpc/submit.go (3)
285-287: Handle LocalStateQuery.Start() errors.

Line 286 ignores a possible startup failure, which can cascade into confusing query errors.

✅ Suggested fix:

```diff
- oConn.LocalStateQuery().Client.Start()
+ if err := oConn.LocalStateQuery().Client.Start(); err != nil {
+     return connect.NewResponse(resp), fmt.Errorf("start local state query: %w", err)
+ }
```
396-399: Redeemer index is not a script array index.

Using i (or the redeemer index) to pick from allScripts will mispair scripts and redeemers. Scripts must be matched by hash (e.g., from the UTxO's script reference or policy ID).
403-423: Placeholder ScriptContext will yield incorrect evaluation.

Most validators depend on a real ScriptContext; placeholders will skew or fail evaluation. Build the proper context or return a clear "unsupported until context is available" error.
Force-pushed 63bbe8d to addaefb
@cubic-dev-ai review please
@wolf31o2 I have started the AI code review. It will take a few minutes to complete.
1 issue found across 7 files
Prompt for AI agents (all issues)
Check if these issues are valid — if so, understand the root cause of each and fix them.
<file name="internal/utxorpc/submit.go">
<violation number="1" location="internal/utxorpc/submit.go:493">
P2: EvalTx builds script arguments with a placeholder ScriptContext, so ExUnits are computed against a dummy context rather than the real transaction. This makes the evaluation results unreliable for scripts that read ScriptContext/TxInfo.</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
Force-pushed addaefb to ea89c14
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@internal/utxorpc/submit.go`:
- Around line 201-206: The upper bound for the validity interval is incorrectly
marked closed; update the construction of upperBound (the data.NewConstr call
that uses upperExtended) to use an open/exclusive closure by replacing the
closure constructor from data.NewConstr(1) to data.NewConstr(0); specifically
locate the upperBound variable creation and change its closure parameter so the
interval follows the half-open pattern used by Cardano TTLs.
- Around line 769-803: The redeemer-to-script matching is wrong: you sort pairs
only by key.Index and then use loop index i to pick scripts from
allScripts/scriptVersions, which mismatches purposes (key.Tag) and indices;
instead, sort pairs by Tag then Index (e.g., compare key.Tag first, then
key.Index) and stop using i to index allScripts; build a lookup keyed by
purpose+index (e.g., a map with keys derived from key.Tag and key.Index) that
maps to the correct script bytes and version, then when constructing each
cardano.Redeemer (redeemerPair, convertPlutusData, cardano.Redeemer) fetch the
script and scriptVersion from that purpose-indexed map to evaluate/extract the
proper script for that redeemer.
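The first item above relies on Cardano's half-open validity interval: invalid-before is inclusive, invalid-hereafter (TTL) is exclusive, which is why the upper closure flag should be open. An illustrative slot check, not code from the PR:

```go
package main

import "fmt"

// validAtSlot reports whether a slot falls inside a Cardano-style validity
// interval: the lower bound (invalid-before) is inclusive, the upper bound
// (invalid-hereafter / TTL) is exclusive, i.e. [lower, upper).
func validAtSlot(slot, invalidBefore, invalidHereafter uint64) bool {
	return slot >= invalidBefore && slot < invalidHereafter
}

func main() {
	fmt.Println(validAtSlot(99, 10, 100))  // last valid slot: true
	fmt.Println(validAtSlot(100, 10, 100)) // the upper bound itself is excluded: false
}
```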
♻️ Duplicate comments (1)
internal/utxorpc/submit.go (1)
653-654: Missing error handling for LocalStateQuery().Client.Start().

The Start() call on line 654 does not check its return value. This was flagged in a past review but appears unaddressed.

🐛 Proposed fix:

```diff
  // Get protocol parameters for cost models
- oConn.LocalStateQuery().Client.Start()
+ if err := oConn.LocalStateQuery().Client.Start(); err != nil {
+     return connect.NewResponse(resp), fmt.Errorf("start local state query: %w", err)
+ }
  protoParams, err := oConn.LocalStateQuery().Client.GetCurrentProtocolParams()
```
🧹 Nitpick comments (4)
internal/utxorpc/submit.go (4)
99-138: Consider pre-allocating asset pair slices.

The assetPairs slice at line 123 is appended in a loop without pre-allocation. This is a minor optimization opportunity.

♻️ Suggested optimization:

```diff
  for _, policyId := range assets.Policies() {
-     assetPairs := [][2]data.PlutusData{}
+     assetNames := assets.Assets(policyId)
+     assetPairs := make([][2]data.PlutusData, 0, len(assetNames))
-     for _, assetName := range assets.Assets(policyId) {
+     for _, assetName := range assetNames {
          amount := assets.Asset(policyId, assetName)
```
241-250: Reward and Certifying script purposes are placeholders.

The Reward (tag 2) and Cert (tag 3) branches return placeholder data that won't match actual on-chain expectations. The default fallback also returns Constr(0) (Minting), which could be misleading if an unexpected tag is encountered. Would you like me to help implement proper Reward and Certifying script purpose construction, or should we add explicit error handling for unsupported purposes?
523-530: Potential overflow on Constr tag conversion.

The Constr tag is converted from uint64 (v.Tag) to uint32 with a nolint:gosec comment. While Plutus constructor tags are typically small, values exceeding uint32 max would silently truncate.

♻️ Suggested bounds check:

```diff
+ if v.Tag > math.MaxUint32 {
+     return nil, fmt.Errorf("constr tag %d exceeds uint32 range", v.Tag)
+ }
  return &cardano.PlutusData{
      PlutusData: &cardano.PlutusData_Constr{
          Constr: &cardano.Constr{
              Tag: uint32(v.Tag), //nolint:gosec
```
672-691: Consider batching UTxO queries for better performance.

The current implementation queries each input's UTxO individually (lines 675-677), resulting in N network round-trips for N inputs. If GetUTxOByTxIn supports multiple inputs, batching would improve performance.

♻️ Suggested batch query:

```diff
  // Resolve UTxOs for inputs to get datums
  resolvedUtxos := make(map[string]ledger.Utxo)
- for _, input := range tx.Inputs() {
-     utxos, err := oConn.LocalStateQuery().Client.GetUTxOByTxIn(
-         []ledger.TransactionInput{input},
-     )
-     if err != nil {
-         return connect.NewResponse(
-             resp,
-         ), fmt.Errorf(
-             "failed to query UTxO for input %s: %w",
-             input.String(),
-             err,
-         )
-     }
-     for k, v := range utxos.Results {
-         _ = k // k is UtxoId
-         resolvedUtxos[input.String()] = ledger.Utxo{Id: input, Output: v}
-     }
+ inputs := tx.Inputs()
+ utxos, err := oConn.LocalStateQuery().Client.GetUTxOByTxIn(inputs)
+ if err != nil {
+     return connect.NewResponse(resp), fmt.Errorf("failed to query UTxOs: %w", err)
+ }
+ for i, input := range inputs {
+     if output, ok := utxos.Results[/* appropriate key */]; ok {
+         resolvedUtxos[input.String()] = ledger.Utxo{Id: input, Output: output}
+     }
  }
```

Note: The exact API for mapping batch results back to inputs would need verification.
Force-pushed 89a9c55 to 13a5b81
@cubic-dev-ai review
@wolf31o2 I have started the AI code review. It will take a few minutes to complete.
2 issues found across 7 files
Prompt for AI agents (all issues)
Check if these issues are valid — if so, understand the root cause of each and fix them.
<file name="internal/utxorpc/submit.go">
<violation number="1" location="internal/utxorpc/submit.go:222">
P1: Spending inputs are not sorted before indexing by redeemer index. In Cardano, redeemer indices refer to lexicographically sorted inputs. The minting code correctly sorts policies, but spending inputs are used unsorted, causing incorrect input-redeemer mapping.</violation>
<violation number="2" location="internal/utxorpc/submit.go:262">
P1: Reference inputs are not resolved from the chain. The code only resolves UTxOs for `tx.Inputs()` but `buildTxInfo` also expects reference inputs in `resolvedUtxos`. Scripts using reference inputs will have incomplete context, causing evaluation failures.</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
Force-pushed 13a5b81 to e6b30fa
@cubic-dev-ai review this please
@wolf31o2 I have started the AI code review. It will take a few minutes to complete.
Force-pushed e6b30fa to 43f8224
@cubic-dev-ai review this PR
@wolf31o2 I have started the AI code review. It will take a few minutes to complete.
Force-pushed 43f8224 to a98ce1a
Actionable comments posted: 7
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
internal/api/localstatequery.go (1)
473-483: ⚠️ Potential issue | 🟠 Major — No pagination or result-size limit when fetching the entire UTxO set.

When no address is provided, GetUTxOWhole() fetches the entire chain UTxO set, and all matching results are returned in a single JSON response with no limit. On mainnet this could produce enormous responses, potentially causing OOM or timeouts. Consider adding a configurable result limit (e.g., max 100 items) with pagination support, or at minimum documenting this as a known limitation in the API docs.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/api/localstatequery.go` around lines 473 - 483, When no addresses are supplied the handler calls LocalStateQuery().Client.GetUTxOWhole() and returns an unbounded UTxO set; add a configurable result limit and basic pagination: read optional query params (e.g., limit and page or offset) in the same handler, enforce a default/cap (e.g., default 100, hard cap from a config value), validate and reject requests > cap, and pass the limit/offset into the fetch (or if GetUTxOWhole() has no paging, fetch then slice the result before marshalling) so the response never returns the entire chain; update any API docs to reflect the new query params.
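The limit/offset slicing the prompt describes is straightforward to do safely in the handler. A generic sketch (parameter names and the cap value are illustrative):

```go
package main

import "fmt"

// paginate returns items[offset : offset+limit], clamping limit to a hard
// cap and keeping every index in range so malformed query params can never
// panic the handler or return the whole set.
func paginate[T any](items []T, limit, offset, maxLimit int) []T {
	if limit <= 0 || limit > maxLimit {
		limit = maxLimit
	}
	if offset < 0 || offset >= len(items) {
		return nil
	}
	end := offset + limit
	if end > len(items) {
		end = len(items)
	}
	return items[offset:end]
}

func main() {
	utxos := []string{"a", "b", "c", "d", "e"}
	fmt.Println(paginate(utxos, 2, 1, 100)) // [b c]
	fmt.Println(paginate(utxos, 0, 3, 100)) // limit falls back to the cap: [d e]
	fmt.Println(paginate(utxos, 2, 9, 100)) // offset past end: []
}
```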
🧹 Nitpick comments (3)
internal/utxorpc/submit.go (2)
727-731: Minor: Double iteration over redeemers just to count.

The redeemers are iterated once solely to count (lines 728-731), then iterated again to collect pairs (lines 739-741). You can skip the counting pass and just append directly, collecting into a slice in a single pass.

♻️ Proposed fix:

```diff
- // Count redeemers for pre-allocation
- redeemerCount := 0
- for range redeemers.Iter() {
-     redeemerCount++
- }
-
  type redeemerPair struct {
      key   lcommon.RedeemerKey
      value lcommon.RedeemerValue
  }
- pairs := make([]redeemerPair, 0, redeemerCount)
+ var pairs []redeemerPair
  for key, value := range redeemers.Iter() {
      pairs = append(pairs, redeemerPair{key, value})
  }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/utxorpc/submit.go` around lines 727 - 731, Remove the redundant first pass that counts redeemers (the redeemerCount loop over redeemers.Iter()) and instead collect pairs in a single pass: delete the redeemerCount variable and its counting loop, then iterate once over redeemers.Iter() to append each item directly into the slice you later use (replace any pre-allocation that used redeemerCount with a simple nil slice and use append). Ensure the code paths that previously referenced redeemerCount now rely on the populated slice from the single Iter() pass (look for usages around redeemers.Iter(), redeemerCount and where the slice of pairs is built).
214-220: Minor: Hex-encoding for byte comparison is unnecessary overhead.

sortInputsCanonically converts TxId bytes to hex strings for comparison. Since hex encoding preserves byte ordering, this is correct but wasteful. You could compare the byte slices directly with bytes.Compare.

♻️ Proposed fix:

```diff
+ import "bytes"
+
  func sortInputsCanonically(
      inputs []lcommon.TransactionInput,
  ) []lcommon.TransactionInput {
      sorted := make([]lcommon.TransactionInput, len(inputs))
      copy(sorted, inputs)
      sort.Slice(sorted, func(i, j int) bool {
-         idI := hex.EncodeToString(sorted[i].Id().Bytes())
-         idJ := hex.EncodeToString(sorted[j].Id().Bytes())
-         if idI != idJ {
-             return idI < idJ
+         cmp := bytes.Compare(sorted[i].Id().Bytes(), sorted[j].Id().Bytes())
+         if cmp != 0 {
+             return cmp < 0
          }
          return sorted[i].Index() < sorted[j].Index()
      })
      return sorted
  }
```

The same applies to the policy sorting at lines 255-261 and 853-854.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/utxorpc/submit.go` around lines 214 - 220, Replace the unnecessary hex-encoding byte comparisons inside sort.Slice with direct byte-slice comparisons using bytes.Compare: in sortInputsCanonically (the sort.Slice closure comparing idI/idJ), compare sorted[i].Id().Bytes() and sorted[j].Id().Bytes() with bytes.Compare(...) < 0 instead of hex.EncodeToString(...); apply the same change to the policy sorting closures referenced around the other locations (where hex-encoding of Id().Bytes() is used) so all lexicographic ordering uses bytes.Compare on Id().Bytes() directly.
openapi/api/openapi.yaml (1)
161-202: No pagination on a potentially unbounded result set.

The /localstatequery/utxos/search-by-asset endpoint can return all UTxOs in the entire chain state matching a given asset. For popular tokens this could be a very large response. Consider adding limit/offset (or cursor-based) pagination parameters to protect the server and clients from excessively large payloads.

Based on learnings, fetching the entire UTxO set via GetUTxOWhole() is unavoidable due to cardano-node limitations, but server-side pagination of the response is still possible and recommended.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@openapi/api/openapi.yaml` around lines 161 - 202, The endpoint /localstatequery/utxos/search-by-asset can return unbounded results; add pagination by adding query parameters (e.g., limit:int with a sensible default/max and offset:int or cursor:string) to the OpenAPI operation and update the handler that currently uses GetUTxOWhole() to apply server-side slicing/cursoring before returning results; also update the response schema api.responseLocalStateQuerySearchUTxOsByAsset (or create a paginated wrapper schema) to return an items array plus pagination metadata (total/nextCursor/limit/offset) and enforce a server-side max limit to protect resources.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@internal/utxorpc/submit.go`:
- Around line 269-278: The function buildScriptPurpose currently returns a
malformed default data.NewConstr(0) when inputs are invalid; change its
signature to return (data.Constr, error) (or propagate an error type used in
this package), validate each branch (e.g., spend with index bounds, mint with
non-nil AssetMint) and return a descriptive error on the default/failure path
instead of a silent fallback, and update all callers of buildScriptPurpose to
handle the error; reference the buildScriptPurpose function and the switch
branches (cases 0/1/2/3) when making the changes so the invalid-path behavior is
replaced by explicit error propagation or at least logging plus an error return.
- Around line 958-967: The current evalErr handling in the EvalTx flow (the
block checking evalErr and the subsequent redeemer.ExUnits.Steps/Memory
assignments) only logs the error and leaves the redeemer with zero ExUnits,
which hides failures; instead, when evalErr != nil either return an overall RPC
error from EvalTx (propagating evalErr) or populate an error field on the
corresponding redeemer in the EvalTx response (e.g., redeemer.Error or similar)
and skip assigning zero ExUnits — locate the evalErr check near the
redeemer.ExUnits assignments, remove the silent log-only behavior, and implement
one clear propagation method so callers can distinguish evaluation failures from
genuine zero-resource scripts.
- Around line 47-73: The current conversion hardcodes slotLengthMs and the
slotToPOSIXTime function (and related systemStartToUnixMs) which ignores
Byron-era 20s slots and era transitions; update slotToPOSIXTime to query the
chain's era history (via GetEraHistory()) and compute POSIX millis by mapping
the given slot across era boundaries using each era's slot length and start
offset (replace the slotLengthMs constant and its use in slotToPOSIXTime), and
ensure systemStartToUnixMs still returns the correct Unix ms for the chain start
so the era-based calculation can add per-era slot durations to produce the
correct POSIXTimeRange for ScriptContext.
- Around line 619-658: The code makes a separate node call per input by
iterating tx.Inputs() and tx.ReferenceInputs() calling
oConn.LocalStateQuery().Client.GetUTxOByTxIn for each; batch these into at most
two calls (one for all regular inputs and one for all reference inputs) by
collecting the TransactionInput slices, calling GetUTxOByTxIn once per slice,
and then populate resolvedUtxos (the map[string]ledger.Utxo) by matching each
returned utxos.Results entry back to its original TransactionInput key (use the
result keys/IDs to map to input.String() or the TransactionInput value) instead
of doing per-input round-trips.
In `@openapi/api/openapi.yaml`:
- Around line 420-438: The amount property in the OpenAPI schema (api.utxoItem
-> properties -> amount) is currently just type: integer which the Go generator
maps to int32; update that property to include format: int64 so generated models
use int64 (fixing model_api_utxo_item.go where Amount is currently *int32), then
regenerate the client/models to pick up the change; ensure any usages of Amount
in code handle int64 pointers accordingly.
In `@openapi/docs/ApiUtxoItem.md`:
- Around line 7-11: The ApiUtxoItem schema's Amount property is defined as an
integer (generating int32) which cannot hold realistic lovelace values; edit the
openapi.yaml schema for ApiUtxoItem and change the Amount definition to support
larger values (either set type: integer with format: int64 or change type:
string), and update any related examples/nullable/pointer notation for the
Amount field so generated clients/models (ApiUtxoItem, Amount) use 64-bit or
string representation consistently.
---
Outside diff comments:
In `@internal/api/localstatequery.go`:
- Around line 473-483: When no addresses are supplied the handler calls
LocalStateQuery().Client.GetUTxOWhole() and returns an unbounded UTxO set; add a
configurable result limit and basic pagination: read optional query params
(e.g., limit and page or offset) in the same handler, enforce a default/cap
(e.g., default 100, hard cap from a config value), validate and reject requests
> cap, and pass the limit/offset into the fetch (or if GetUTxOWhole() has no
paging, fetch then slice the result before marshalling) so the response never
returns the entire chain; update any API docs to reflect the new query params.
---
Duplicate comments:
In `@openapi/model_api_utxo_item.go`:
- Around line 22-28: The generated ApiUtxoItem struct uses Amount *int32 which
will overflow; update the OpenAPI spec to add format: int64 for the amount field
and then regenerate the client, or if you need a quick manual fix change the
ApiUtxoItem Amount field to *int64 (and adjust any related usages/JSON tags) so
the Go model can hold 64-bit values; reference the ApiUtxoItem type and its
Amount field and ensure the openapi.yaml amount entry is marked with type:
integer and format: int64 before regenerating.
---
Nitpick comments:
In `@internal/utxorpc/submit.go`:
- Around line 727-731: Remove the redundant first pass that counts redeemers
(the redeemerCount loop over redeemers.Iter()) and instead collect pairs in a
single pass: delete the redeemerCount variable and its counting loop, then
iterate once over redeemers.Iter() to append each item directly into the slice
you later use (replace any pre-allocation that used redeemerCount with a simple
nil slice and use append). Ensure the code paths that previously referenced
redeemerCount now rely on the populated slice from the single Iter() pass (look
for usages around redeemers.Iter(), redeemerCount and where the slice of pairs
is built).
- Around line 214-220: Replace the unnecessary hex-encoding byte comparisons
inside sort.Slice with direct byte-slice comparisons using bytes.Compare: in
sortInputsCanonically (the sort.Slice closure comparing idI/idJ), compare
sorted[i].Id().Bytes() and sorted[j].Id().Bytes() with bytes.Compare(... ) < 0
instead of hex.EncodeToString(...); apply the same change to the policy sorting
closures referenced around the other locations (where hex-encoding of
Id().Bytes() is used) so all lexicographic ordering uses bytes.Compare on
Id().Bytes() directly.
In `@openapi/api/openapi.yaml`:
- Around line 161-202: The endpoint /localstatequery/utxos/search-by-asset can
return unbounded results; add pagination by adding query parameters (e.g.,
limit:int with a sensible default/max and offset:int or cursor:string) to the
OpenAPI operation and update the handler that currently uses GetUTxOWhole() to
apply server-side slicing/cursoring before returning results; also update the
response schema api.responseLocalStateQuerySearchUTxOsByAsset (or create a
paginated wrapper schema) to return an items array plus pagination metadata
(total/nextCursor/limit/offset) and enforce a server-side max limit to protect
resources.
internal/utxorpc/submit.go (outdated)
```go
// slotLengthMs is the slot length in milliseconds (1 second for post-Shelley)
const slotLengthMs = 1000

// systemStartToUnixMs converts SystemStartResult to Unix milliseconds
func systemStartToUnixMs(ss *localstatequery.SystemStartResult) int64 {
	// SystemStart contains: Year, Day (day of year 1-366), Picoseconds (within day)
	// Convert to Unix timestamp in milliseconds
	year := int(ss.Year.Int64())
	dayOfYear := ss.Day
	picoseconds := ss.Picoseconds.Int64()

	// Create time for January 1st of the year
	t := time.Date(year, time.January, 1, 0, 0, 0, 0, time.UTC)
	// Add days (dayOfYear is 1-indexed, so subtract 1)
	t = t.AddDate(0, 0, dayOfYear-1)
	// Add picoseconds (convert to nanoseconds first: pico = 10^-12, nano = 10^-9)
	nanoseconds := picoseconds / 1000
	t = t.Add(time.Duration(nanoseconds))

	return t.UnixMilli()
}

// slotToPOSIXTime converts a slot number to POSIXTime (milliseconds since Unix epoch)
func slotToPOSIXTime(slot uint64, systemStartMs int64) int64 {
	// Slot values on Cardano won't exceed int64 range for centuries
	return systemStartMs + int64(slot)*slotLengthMs //nolint:gosec
}
```
Slot-to-POSIX conversion ignores Byron era and era transitions.
slotLengthMs is hardcoded to 1000ms and slotToPOSIXTime computes systemStart + slot * 1000. This is incorrect because:
- Byron era had 20-second slots (slots 0 through ~4,492,799 on mainnet).
- The correct conversion requires the era history (era boundaries and slot lengths per era).
For transactions with validity intervals spanning slots from the Byron era (unlikely but possible) or any network with different slot configurations, this will produce wrong POSIXTimeRange values in the ScriptContext, causing script evaluation to fail or produce incorrect results.
Consider using the era history from GetEraHistory() to properly map slots to POSIX times, similar to how cardano-cli computes slot-to-time conversions.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@internal/utxorpc/submit.go` around lines 47 - 73, The current conversion
hardcodes slotLengthMs and the slotToPOSIXTime function (and related
systemStartToUnixMs) which ignores Byron-era 20s slots and era transitions;
update slotToPOSIXTime to query the chain's era history (via GetEraHistory())
and compute POSIX millis by mapping the given slot across era boundaries using
each era's slot length and start offset (replace the slotLengthMs constant and
its use in slotToPOSIXTime), and ensure systemStartToUnixMs still returns the
correct Unix ms for the chain start so the era-based calculation can add per-era
slot durations to produce the correct POSIXTimeRange for ScriptContext.
```go
if evalErr != nil {
	log.Printf(
		"Failed to evaluate script for redeemer %d: %v",
		key.Index,
		evalErr,
	)
} else {
	redeemer.ExUnits.Steps = uint64(exUnits.Steps)   //nolint:gosec
	redeemer.ExUnits.Memory = uint64(exUnits.Memory) //nolint:gosec
}
```
Script evaluation errors are silently swallowed.
When evalErr != nil, the error is logged but the redeemer is still added with zero ExUnits. Callers of EvalTx will receive a successful response with zero steps/memory, indistinguishable from a script that genuinely uses zero resources. This defeats the purpose of EvalTx as a pre-submission check.
Consider either returning the error in the response (e.g., via an error field per redeemer) or returning the overall RPC error so callers know evaluation failed.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@internal/utxorpc/submit.go` around lines 958 - 967, The current evalErr
handling in the EvalTx flow (the block checking evalErr and the subsequent
redeemer.ExUnits.Steps/Memory assignments) only logs the error and leaves the
redeemer with zero ExUnits, which hides failures; instead, when evalErr != nil
either return an overall RPC error from EvalTx (propagating evalErr) or populate
an error field on the corresponding redeemer in the EvalTx response (e.g.,
redeemer.Error or similar) and skip assigning zero ExUnits — locate the evalErr
check near the redeemer.ExUnits assignments, remove the silent log-only
behavior, and implement one clear propagation method so callers can distinguish
evaluation failures from genuine zero-resource scripts.
Signed-off-by: Chris Gianelloni <wolf31o2@blinklabs.io>
a98ce1a to daa7837
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
internal/api/localstatequery.go (1)
416-419: ⚠️ Potential issue | 🟡 Minor

Validation blocks valid Cardano tokens with empty asset names.

Empty asset names (`[]byte{}`) are legitimate in Cardano; `hex.DecodeString("")` returns an empty slice cleanly. The `assetNameHex == ""` guard rejects requests like `?asset_name=` even though the hex decoding below would succeed and produce a valid search key. Remove the empty-string check and let the hex decoder be the only validation gate.

🐛 Proposed fix

```diff
-	if assetNameHex == "" {
-		c.JSON(400, apiError("asset_name parameter is required"))
-		return
-	}
-
 	// Parse asset name
 	assetName, err := hex.DecodeString(assetNameHex)
 	if err != nil {
 		c.JSON(400, apiError("invalid asset_name hex: "+err.Error()))
 		return
 	}
```

Update the Godoc `@Param` line accordingly (e.g. "Asset name (hex), empty string for unnamed assets").
Verify each finding against the current code and only fix it if needed. In `@internal/api/localstatequery.go` around lines 416 - 419, Remove the artificial rejection of empty asset names by deleting the conditional that returns a 400 when assetNameHex == "" so that hex.DecodeString handles validation; adjust any surrounding logic that assumed non-empty (e.g., code using assetNameHex as input to hex.DecodeString) and update the Godoc `@Param` description to note that an empty string is allowed for unnamed Cardano assets (e.g., "Asset name (hex), empty string for unnamed assets"); ensure no other code paths rely on the previous non-empty guard.
🧹 Nitpick comments (4)
internal/api/localstatequery.go (1)
473-513: Unbounded memory consumption and response size when no address filter is provided.

When `addressStr` is empty, `GetUTxOWhole()` loads the entire UTxO set into memory (millions of entries on mainnet), and all matching results are returned in a single response with no limit. Consider adding a configurable result cap and documenting the performance implications of this endpoint to prevent accidental DoS from unauthenticated callers. Based on learnings, server-side filtering is unavailable in the Cardano node protocol, so client-side mitigations (pagination, max-result cap, or requiring an address filter) are the only options.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@internal/api/localstatequery.go` around lines 473 - 513, The handler currently calls LocalStateQuery().Client.GetUTxOWhole() and returns all matching UTxOs in responseLocalStateQuerySearchUTxOsByAsset, which can blow memory and response size; change the logic so GetUTxOWhole() is only allowed when a configured maxResults (e.g. config.MaxUTxOSearchResults) is set and enforce a hard cap: if no address filter (i.e. using GetUTxOWhole), require either a query param page/pagination or reject the request with a 400/422 unless a max-result limit is provided; when processing utxos.Results apply server-side truncation to at most config.MaxUTxOSearchResults, set Count accordingly, and return a hint (e.g. truncated flag or nextPage token) to the client; update use sites of responseLocalStateQuerySearchUTxOsByAsset, utxoItem and UTxOsResult handling to ensure assets and amounts are computed only for the returned slice to avoid scanning the whole set.

openapi/test/api_localstatequery_test.go (1)
21-122: Missing test stub for `LocalstatequeryUtxosSearchByAssetGet`.

Every other endpoint in this file has a corresponding (skipped) test stub. The new `LocalstatequeryUtxosSearchByAssetGet` endpoint is absent from the test suite, even as a placeholder. Since this is auto-generated code, regenerating should produce the stub; otherwise it can be added manually:

✏️ Suggested stub addition

```diff
 	)
+
+	t.Run(
+		"Test LocalstatequeryAPIService LocalstatequeryUtxosSearchByAssetGet",
+		func(t *testing.T) {
+
+			t.Skip("skip test") // remove to run test
+
+			resp, httpRes, err := apiClient.LocalstatequeryAPI.LocalstatequeryUtxosSearchByAssetGet(context.Background()).
+				PolicyId("policyId_example").
+				AssetName("assetName_example").
+				Execute()
+
+			require.Nil(t, err)
+			require.NotNil(t, resp)
+			assert.Equal(t, 200, httpRes.StatusCode)
+
+		},
+	)
+
 }
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@openapi/test/api_localstatequery_test.go` around lines 21 - 122, Add a skipped test stub for the missing LocalstatequeryUtxosSearchByAssetGet endpoint inside Test_openapi_LocalstatequeryAPIService: create a t.Run block named "Test LocalstatequeryAPIService LocalstatequeryUtxosSearchByAssetGet" that calls apiClient.LocalstatequeryAPI.LocalstatequeryUtxosSearchByAssetGet(context.Background()).Execute(), wraps the call with t.Skip("skip test") so it remains disabled, and includes the same assertions as other tests (require.Nil(t, err), require.NotNil(t, resp), assert.Equal(t, 200, httpRes.StatusCode)) to match the existing test patterns.

openapi/api/openapi.yaml (1)
332-353: `assets` example value is a JSON-encoded string, not an object — misleading for API clients.

The `assets` field is declared as `type: object` (line 435) but its example throughout the spec is `"{}"` — a string. This is likely a generator artifact from a Go source annotation of `example:"{}"` on an `any` field. Consider updating the source swag annotation to emit a real object literal (e.g., `example:{}`) so the generated docs match the schema type.

Based on learnings, files under `openapi/` are auto-generated — the fix belongs in the Go source annotation, not the generated YAML directly.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@openapi/api/openapi.yaml` around lines 332 - 353, The OpenAPI example shows assets as the JSON string "{}" while the schema for the assets field is an object; fix the Go source annotation that generates this by changing the struct tag on the UTXO item (the field used in the api.utxoItem schema and reflected in api.responseLocalStateQuerySearchUTxOsByAsset) so its swagger example is a real object literal (e.g., change example:"{}" to example:{} or provide a map/example object) and then re-generate the openapi YAML so the assets example is an object, not a string.

internal/utxorpc/submit.go (1)
735-751: Reference scripts stored in UTxO outputs are not collected.

Babbage/Conway transactions routinely reference scripts embedded in UTxO outputs rather than including them in the transaction witnesses. Scripts are only looked up in `witnesses.PlutusV{1,2,3}Scripts()`, so any transaction that uses reference scripts will reach the "Script not found for hash" branch and silently return zero ExUnits for every affected redeemer.

The fix is to iterate `resolvedUtxos` and extract any scripts from their `Output.Scripts()` (or equivalent) after the batch UTxO fetch.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@internal/utxorpc/submit.go` around lines 735 - 751, The current script maps (v1ScriptByHash, v2ScriptByHash, v3ScriptByHash) only include scripts from witnesses.PlutusV{1,2,3}Scripts(), so reference scripts embedded in resolved UTxOs are missed; after the batch UTxO fetch and before script lookup (i.e., near where v1ScriptByHash/v2ScriptByHash/v3ScriptByHash are populated in submit.go), iterate over resolvedUtxos and for each utxo inspect utxo.Output.Scripts() (or the equivalent method), extract any PlutusV1/PlutusV2/PlutusV3 scripts, compute their hash with s.Hash(), hex-encode it and insert them into the corresponding map (same keys/values as the witness loop) so reference scripts are available during redeemer script lookups.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@internal/utxorpc/submit.go`:
- Line 926: The hardcoded maxBudget (maxBudget := lcommon.ExUnits{Memory:
14_000_000, Steps: 10_000_000_000}) uses mainnet constants and must be replaced
with the protocol parameters fetched earlier in this function; locate the
variable that holds the fetched protocol parameters (the one fetched above this
line) and construct maxBudget from its execution-unit limits (e.g., use the
protocol params' max transaction/execution units field such as
MaxTxExUnits/MaxExecutionUnits or the equivalent names in this codebase) and
convert those values into lcommon.ExUnits instead of hardcoding Memory and
Steps. Ensure the created maxBudget uses those dynamic fields everywhere the
current maxBudget variable is used.
- Around line 973-986: The current code uses a V1/V2-shaped ScriptContext for V3
scripts (see v3ScriptByHash and scriptContext usage), which will cause
pattern-match failures; change the logic to reject V3 script evaluation
explicitly until proper builders exist: do not call v3Script.Evaluate with the
V1/V2 context or append incorrect exUnits—return an evaluation error for V3
scripts (and keep txEvalRedeemers handling consistent). Add TODO notes
referencing buildTxInfoV3 / buildScriptContextV3 and ensure future
implementation creates a V3-shaped ScriptContext (ScriptContext with TxInfo V3
fields) before calling v3Script.Evaluate.
- Around line 840-866: The code currently assumes utxo.Output.Datum()'s
Datum.Data is non-nil; add a check after obtaining datum := utxo.Output.Datum()
to handle the case where datum != nil but datum.Data == nil by attempting to
retrieve the actual datum from the witness datum table (e.g., lookup by datum
hash from the witness/on-chain witness map used in this flow), set datumData to
that retrieved data (or return a clear error if not found), then proceed to
buildScriptContext and set args; ensure the final args slice passed to Evaluate
contains a non-nil data.PlutusData for the datum position so we never pass a nil
datum into Evaluate.
In `@openapi/model_api_utxo_item.go`:
- Around line 122-127: GetAssetsOk deviates from the other getters by returning
a non-pointer map and an empty map when unset; change ApiUtxoItem.GetAssetsOk to
match the pattern used elsewhere by returning (*map[string]interface{}, bool),
return nil, false when o==nil or IsNil(o.Assets), and return &o.Assets, true
when set (ensure the IsNil check stays). This keeps the code/docs consistent and
preserves the nil sentinel behavior expected by callers.
---
Outside diff comments:
In `@internal/api/localstatequery.go`:
- Around line 416-419: Remove the artificial rejection of empty asset names by
deleting the conditional that returns a 400 when assetNameHex == "" so that
hex.DecodeString handles validation; adjust any surrounding logic that assumed
non-empty (e.g., code using assetNameHex as input to hex.DecodeString) and
update the Godoc `@Param` description to note that an empty string is allowed for
unnamed Cardano assets (e.g., "Asset name (hex), empty string for unnamed
assets"); ensure no other code paths rely on the previous non-empty guard.
---
Duplicate comments:
In `@internal/utxorpc/submit.go`:
- Line 629: The call to oConn.LocalStateQuery().Client.Start() ignores its
returned error; change the code to capture and handle that error before
proceeding to call GetCurrentProtocolParams() — call Start(), check the returned
error, log/contextualize it (or return it up the call stack) and only call
GetCurrentProtocolParams() if Start() succeeded; also ensure any started client
is cleaned up appropriately (stop/close) on early returns. Use the unique
symbols Client.Start() and GetCurrentProtocolParams() to locate and modify the
code path.
- Around line 47-50: The constant slotLengthMs = 1000 is incorrect for multi-era
support; replace the hardcoded value by querying the node's era history (via
GetEraHistory()) and use it with systemStartMs to derive per-era slot lengths
and convert slot numbers to wall-clock ms; update any logic that uses
slotLengthMs (e.g., slot-to-time conversions in submit.go) to compute slot
duration dynamically from the EraHistory object so Byron-era 20s slots and
non-mainnet configs are handled correctly.
- Around line 998-1007: When evalErr != nil you must not silently leave
redeemer.ExUnits at zero; change the logic in the block handling evalErr (check
of evalErr around key.Index) to surface the failure instead of writing zero
values. Either return or propagate an error (including key.Index and evalErr)
from the function that builds/appends the redeemer, or mark the redeemer with an
explicit failure indicator rather than assigning Steps/Memory = 0; ensure
callers of the function that calls this code (the method that appends redeemer
and currently sets redeemer.ExUnits.Steps and redeemer.ExUnits.Memory) handle
the returned error so script evaluation failures are distinguishable from
genuine zero-usage scripts.
In `@openapi/api/openapi.yaml`:
- Around line 420-440: The UTxO amount field was previously vulnerable to 32-bit
overflow; update the OpenAPI schema for api.utxoItem to ensure the amount
property is an integer with format int64 and that the example value matches that
format. In the api.utxoItem schema update the amount property (amount) to
include both type: integer and format: int64, keep the example as a valid int64,
and verify any consumers or generated clients that use api.utxoItem.amount are
regenerated or validated against this change.
In `@openapi/docs/ApiUtxoItem.md`:
- Around line 88-93: The documented signature for GetAssetsOk in ApiUtxoItem.md
incorrectly shows (*map[string]interface{}, bool) while the implementation in
openapi/model_api_utxo_item.go defines GetAssetsOk() as returning
(map[string]interface{}, bool); update the documentation to match the
implementation (use map[string]interface{} rather than *map[string]interface{})
for the GetAssetsOk return type, and ensure any generator template handling of
additionalProperties/free-form objects produces consistent signatures between
the generated model (GetAssetsOk) and the corresponding markdown.
- Around line 7-11: This is a duplicate review note about the ApiUtxoItem field
Amount now being int64; remove the duplicate comment marker and ensure only a
single resolved note remains referencing the Amount field in ApiUtxoItem.md
(verify the "Amount | Pointer to **int64**" line is correct and leave one
concise resolved comment), then update the PR/review thread to mark the
duplicate as resolved.
---
Nitpick comments:
In `@internal/api/localstatequery.go`:
- Around line 473-513: The handler currently calls
LocalStateQuery().Client.GetUTxOWhole() and returns all matching UTxOs in
responseLocalStateQuerySearchUTxOsByAsset, which can blow memory and response
size; change the logic so GetUTxOWhole() is only allowed when a configured
maxResults (e.g. config.MaxUTxOSearchResults) is set and enforce a hard cap: if
no address filter (i.e. using GetUTxOWhole), require either a query param
page/pagination or reject the request with a 400/422 unless a max-result limit
is provided; when processing utxos.Results apply server-side truncation to at
most config.MaxUTxOSearchResults, set Count accordingly, and return a hint (e.g.
truncated flag or nextPage token) to the client; update use sites of
responseLocalStateQuerySearchUTxOsByAsset, utxoItem and UTxOsResult handling to
ensure assets and amounts are computed only for the returned slice to avoid
scanning the whole set.
In `@internal/utxorpc/submit.go`:
- Around line 735-751: The current script maps (v1ScriptByHash, v2ScriptByHash,
v3ScriptByHash) only include scripts from witnesses.PlutusV{1,2,3}Scripts(), so
reference scripts embedded in resolved UTxOs are missed; after the batch UTxO
fetch and before script lookup (i.e., near where
v1ScriptByHash/v2ScriptByHash/v3ScriptByHash are populated in submit.go),
iterate over resolvedUtxos and for each utxo inspect utxo.Output.Scripts() (or
the equivalent method), extract any PlutusV1/PlutusV2/PlutusV3 scripts, compute
their hash with s.Hash(), hex-encode it and insert them into the corresponding
map (same keys/values as the witness loop) so reference scripts are available
during redeemer script lookups.
In `@openapi/api/openapi.yaml`:
- Around line 332-353: The OpenAPI example shows assets as the JSON string "{}"
while the schema for the assets field is an object; fix the Go source annotation
that generates this by changing the struct tag on the UTXO item (the field used
in the api.utxoItem schema and reflected in
api.responseLocalStateQuerySearchUTxOsByAsset) so its swagger example is a real
object literal (e.g., change example:"{}" to example:{} or provide a map/example
object) and then re-generate the openapi YAML so the assets example is an
object, not a string.
In `@openapi/test/api_localstatequery_test.go`:
- Around line 21-122: Add a skipped test stub for the missing
LocalstatequeryUtxosSearchByAssetGet endpoint inside
Test_openapi_LocalstatequeryAPIService: create a t.Run block named "Test
LocalstatequeryAPIService LocalstatequeryUtxosSearchByAssetGet" that calls
apiClient.LocalstatequeryAPI.LocalstatequeryUtxosSearchByAssetGet(context.Background()).Execute(),
wraps the call with t.Skip("skip test") so it remains disabled, and includes the
same assertions as other tests (require.Nil(t, err), require.NotNil(t, resp),
assert.Equal(t, 200, httpRes.StatusCode)) to match the existing test patterns.
```go
datum := utxo.Output.Datum()
if datum == nil {
	return connect.NewResponse(
		resp,
	), fmt.Errorf(
		"no datum found for input %s",
		input.String(),
	)
}
datumData := datum.Data
redeemerData := value.Data.Data
contextData, err := buildScriptContext(
	tx,
	resolvedUtxos,
	redeemers,
	key.Tag,
	key.Index,
	systemStartMs,
)
if err != nil {
	return connect.NewResponse(resp), fmt.Errorf(
		"build script context for spend redeemer %d: %w",
		key.Index,
		err,
	)
}
args = []data.PlutusData{datumData, redeemerData, contextData}
```
datum.Data can be nil for datum-hash outputs, bypassing the nil guard.
The datum == nil check (Line 841) only catches the case where the UTxO has no datum at all. For pre-Babbage or hash-based datum outputs, utxo.Output.Datum() returns a non-nil Datum struct with a hash but a nil Data field. datumData then becomes a nil data.PlutusData, which is passed directly to Evaluate as the datum argument, likely causing a panic or evaluation failure.
The fix is to fall back to the witness datum table when inline data is absent:
🐛 Proposed fix
```diff
 datum := utxo.Output.Datum()
-if datum == nil {
-	return connect.NewResponse(
-		resp,
-	), fmt.Errorf(
-		"no datum found for input %s",
-		input.String(),
-	)
-}
-datumData := datum.Data
+var datumData data.PlutusData
+if datum != nil && datum.Data != nil {
+	datumData = datum.Data
+} else {
+	// Datum hash output: look up data in transaction witnesses
+	datumHash := utxo.Output.DatumHash()
+	if datumHash != nil && witnesses != nil {
+		for _, wd := range witnesses.PlutusData() {
+			if bytes.Equal(wd.Hash().Bytes(), datumHash[:]) {
+				datumData = wd.Data
+				break
+			}
+		}
+	}
+	if datumData == nil {
+		return connect.NewResponse(resp), fmt.Errorf(
+			"no datum found for input %s", input.String(),
+		)
+	}
+}
```
+ }🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@internal/utxorpc/submit.go` around lines 840 - 866, The code currently
assumes utxo.Output.Datum()'s Datum.Data is non-nil; add a check after obtaining
datum := utxo.Output.Datum() to handle the case where datum != nil but
datum.Data == nil by attempting to retrieve the actual datum from the witness
datum table (e.g., lookup by datum hash from the witness/on-chain witness map
used in this flow), set datumData to that retrieved data (or return a clear
error if not found), then proceed to buildScriptContext and set args; ensure the
final args slice passed to Evaluate contains a non-nil data.PlutusData for the
datum position so we never pass a nil datum into Evaluate.
```go
var evalErr error

// Create a large budget for evaluation (will return actual usage)
maxBudget := lcommon.ExUnits{Memory: 14_000_000, Steps: 10_000_000_000}
```
Hardcoded max budget uses mainnet values — wrong for testnets and future parameter changes.
maxBudget := lcommon.ExUnits{Memory: 14_000_000, Steps: 10_000_000_000} matches current mainnet protocol parameters but will be incorrect on testnets or after a parameter update. The protocol parameters are already fetched above; use them instead:
♻️ Proposed fix
```diff
-// Create a large budget for evaluation (will return actual usage)
-maxBudget := lcommon.ExUnits{Memory: 14_000_000, Steps: 10_000_000_000}
+// Use max execution unit limits from protocol parameters
+// (extract from protoParams based on era, similar to costModels extraction above)
+maxBudget := lcommon.ExUnits{
+	Memory: protoMaxMemory,
+	Steps:  protoMaxSteps,
+}
```
+ }🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@internal/utxorpc/submit.go` at line 926, The hardcoded maxBudget (maxBudget
:= lcommon.ExUnits{Memory: 14_000_000, Steps: 10_000_000_000}) uses mainnet
constants and must be replaced with the protocol parameters fetched earlier in
this function; locate the variable that holds the fetched protocol parameters
(the one fetched above this line) and construct maxBudget from its
execution-unit limits (e.g., use the protocol params' max transaction/execution
units field such as MaxTxExUnits/MaxExecutionUnits or the equivalent names in
this codebase) and convert those values into lcommon.ExUnits instead of
hardcoding Memory and Steps. Ensure the created maxBudget uses those dynamic
fields everywhere the current maxBudget variable is used.
```go
} else if v3Script, found := v3ScriptByHash[scriptHashHex]; found {
	// V3 script: Evaluate(scriptContext, budget, evalContext)
	// For V3, datum and redeemer are embedded in the scriptContext
	costModelData := costModels[3]
	langVersion := cek.LanguageVersionV3
	evalContext, err := cek.NewEvalContext(langVersion, protoVersion, costModelData)
	if err != nil {
		log.Printf("Failed to create eval context for V3 script: %v", err)
		txEvalRedeemers = append(txEvalRedeemers, redeemer)
		continue
	}
	// V3 scriptContext should include datum and redeemer
	scriptContext := args[len(args)-1] // Last arg is always context
	exUnits, evalErr = v3Script.Evaluate(scriptContext, maxBudget, evalContext)
```
V3 scripts receive a V1/V2-shaped ScriptContext — evaluation will always fail or produce wrong results.
buildScriptContext constructs Constr(0, TxInfo_V1V2, ScriptPurpose), but the Plutus V3 ScriptContext has a different shape:
```haskell
ScriptContext
  { scriptContextTxInfo     :: TxInfo  -- V3 TxInfo: 15+ fields, including votes/proposals/treasury
  , scriptContextRedeemer   :: Redeemer
  , scriptContextScriptInfo :: ScriptInfo
  }
```
The V3 TxInfo also has additional fields (votes, proposalProcedures, currentTreasuryValue, treasuryDonation) compared to the 12-field V1/V2 struct. Any V3 script that deconstructs its context will immediately fail pattern matching.
A correct implementation requires a separate buildTxInfoV3 / buildScriptContextV3 path. Until then, V3 script evaluation should be rejected explicitly rather than silently returning wrong ExUnits:
🐛 Proposed immediate guard
```diff
 } else if v3Script, found := v3ScriptByHash[scriptHashHex]; found {
+	// TODO: V3 ScriptContext structure differs from V1/V2 (extra TxInfo fields,
+	// different ScriptContext constructor). Return an error until properly implemented.
+	return connect.NewResponse(resp), fmt.Errorf(
+		"Plutus V3 script evaluation not yet supported: requires V3-specific ScriptContext",
+	)
 	_ = v3Script
```
Verify each finding against the current code and only fix it if needed.
In `@internal/utxorpc/submit.go` around lines 973 - 986, The current code uses a
V1/V2-shaped ScriptContext for V3 scripts (see v3ScriptByHash and scriptContext
usage), which will cause pattern-match failures; change the logic to reject V3
script evaluation explicitly until proper builders exist: do not call
v3Script.Evaluate with the V1/V2 context or append incorrect exUnits—return an
evaluation error for V3 scripts (and keep txEvalRedeemers handling consistent).
Add TODO notes referencing buildTxInfoV3 / buildScriptContextV3 and ensure
future implementation creates a V3-shaped ScriptContext (ScriptContext with
TxInfo V3 fields) before calling v3Script.Evaluate.
```go
func (o *ApiUtxoItem) GetAssetsOk() (map[string]interface{}, bool) {
	if o == nil || IsNil(o.Assets) {
		return map[string]interface{}{}, false
	}
	return o.Assets, true
}
```
GetAssetsOk is inconsistent with all other Get*Ok methods in this struct.
Two deviations from the established pattern:

- Return type: All other `Get*Ok` methods return `(*T, bool)` (a pointer), but `GetAssetsOk` returns `(map[string]interface{}, bool)` — no pointer.
- Nil sentinel when unset: All other methods return `nil, false` when the field is not set; `GetAssetsOk` returns `map[string]interface{}{}, false` — an allocated empty map. Any caller that guards on the returned value being `nil` (rather than checking the boolean) will silently miss the unset case.
The documented signature in openapi/docs/ApiUtxoItem.md (line 90) also shows *map[string]interface{} as the first return type, creating an additional code/docs mismatch. Since both files are auto-generated, the template or OpenAPI spec may need a correction so regeneration produces consistent output.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@openapi/model_api_utxo_item.go` around lines 122 - 127, GetAssetsOk deviates
from the other getters by returning a non-pointer map and an empty map when
unset; change ApiUtxoItem.GetAssetsOk to match the pattern used elsewhere by
returning (*map[string]interface{}, bool), return nil, false when o==nil or
IsNil(o.Assets), and return &o.Assets, true when set (ensure the IsNil check
stays). This keeps the code/docs consistent and preserves the nil sentinel
behavior expected by callers.
Closes #399
Closes #500
Summary by CodeRabbit
New Features
Chores
Tests
Documentation