feat: Added Transaction size calculation methods #1795
manishdait wants to merge 19 commits into hiero-ledger:main
Conversation
Codecov Report

❌ Patch coverage is

```
@@            Coverage Diff             @@
##             main    #1795      +/-   ##
==========================================
+ Coverage   93.68%   93.72%   +0.03%
==========================================
  Files         144      144
  Lines        9350     9376      +26
==========================================
+ Hits         8760     8788      +28
+ Misses        590      588       -2
```
Force-pushed 0da2438 → 1825462
Hi, this is MergeConflictBot. Please resolve these conflicts locally and push the changes.

Quick fix for CHANGELOG.md conflicts: if your conflict is only in CHANGELOG.md, you can resolve it easily using the GitHub web editor.

For all other merge conflicts, please read:

Thank you for contributing!
Hi @manishdait, this pull request has had no commit activity for 10 days. Are you still working on it? If you're no longer working on this, please comment. Reach out on Discord or join our office hours if you need assistance.

From the Python SDK Team
Hi @manishdait, this is InactivityBot 👋 This pull request has had no new commits for 21 days, so I'm closing it and unassigning you from the linked issue to keep the backlog healthy. If you're no longer interested, no action is needed. If you'd like to continue working on this later, feel free to comment.
Force-pushed 1825462 → c6b94fe
Hi, this is WorkflowBot.
Force-pushed fbac757 → 1489840
Note: Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the settings, manage reviews with the usual commands, or use the checkboxes below for quick actions.

Walkthrough: Adds transaction size calculation properties.

Sequence Diagram(s): (mermaid diagram)

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (inconclusive)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches: 🧪 Generate unit tests (beta)
Actionable comments posted: 6
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
src/hiero_sdk_python/transaction/transaction.py (1)

419-432: ⚠️ Potential issue | 🟠 Major

Return type of build_transaction_body() must be transaction_pb2.TransactionBody. Line 419 declares the return type as transaction_pb2.Transaction, but the docstring and all subclass implementations use transaction_pb2.TransactionBody. This creates a type contract violation: every subclass override (transfer_transaction.py, batch_transaction.py, etc.) declares TransactionBody, making them incompatible with the base signature.

Suggested fix

```diff
- def build_transaction_body(self) -> transaction_pb2.Transaction:
+ def build_transaction_body(self) -> transaction_pb2.TransactionBody:
```

src/hiero_sdk_python/consensus/topic_message_submit_transaction.py (1)
273-287: ⚠️ Potential issue | 🔴 Critical

BLOCKER: carry seconds when chunk valid_start.nanos overflows. Lines 280-283 add i directly to base_timestamp.nanos without checking for overflow. Protobuf Timestamp.nanos must be in the range [0, 999_999_999]. With the default max_chunks=20, if a transaction is created with a valid_start near the end of a second (e.g., nanos=999_999_980), chunk indices approaching 20 will cause the nanos value to exceed the valid range, resulting in invalid or rejected transaction IDs on the network.

Suggested fix

```diff
- chunk_valid_start = timestamp_pb2.Timestamp(
-     seconds=base_timestamp.seconds,
-     nanos=base_timestamp.nanos + i
- )
+ total_nanos = base_timestamp.nanos + i
+ chunk_valid_start = timestamp_pb2.Timestamp(
+     seconds=base_timestamp.seconds + total_nanos // 1_000_000_000,
+     nanos=total_nanos % 1_000_000_000,
+ )
```
ℹ️ Review info

⚙️ Run configuration
- Configuration used: Path: .coderabbit.yaml
- Review profile: ASSERTIVE
- Plan: Pro
- Run ID: 9f37e713-f101-4e75-99e8-ac95e67c3a50

📒 Files selected for processing (8)
- CHANGELOG.md
- src/hiero_sdk_python/consensus/topic_message_submit_transaction.py
- src/hiero_sdk_python/file/file_append_transaction.py
- src/hiero_sdk_python/transaction/transaction.py
- tests/integration/file_append_transaction_e2e_test.py
- tests/integration/topic_message_submit_transaction_e2e_test.py
- tests/unit/topic_message_submit_transaction_test.py
- tests/unit/transaction_test.py
```python
if not self._transaction_ids:
    base_timestamp = self.transaction_id.valid_start

    for i in range(self.get_required_chunks()):
        if i == 0:
            # First chunk uses the original transaction ID
            chunk_transaction_id = self.transaction_id
        else:
            # Subsequent chunks get incremented timestamps
            # Add i nanoseconds to space out chunks
            chunk_valid_start = timestamp_pb2.Timestamp(
                seconds=base_timestamp.seconds,
                nanos=base_timestamp.nanos + i
            )
            chunk_transaction_id = TransactionId(
                account_id=self.transaction_id.account_id,
                valid_start=chunk_valid_start
            )
```
🧩 Analysis chain
🏁 Script executed:
# First, let's look at the file_append_transaction.py code around lines 307-310
head -n 320 src/hiero_sdk_python/file/file_append_transaction.py | tail -n 40Repository: hiero-ledger/hiero-sdk-python
Length of output: 1610
🏁 Script executed:
# Check the full freeze_with method and chunk ID generation
sed -n '270,320p' src/hiero_sdk_python/file/file_append_transaction.pyRepository: hiero-ledger/hiero-sdk-python
Length of output: 2038
🏁 Script executed:
# Look at the body_size_all_chunks implementation mentioned
sed -n '460,480p' src/hiero_sdk_python/file/file_append_transaction.pyRepository: hiero-ledger/hiero-sdk-python
Length of output: 593
🏁 Script executed:
# Check protobuf Timestamp definition - find where it's imported/defined
find src -name "timestamp_pb2.py" -o -name "*.proto" | head -20Repository: hiero-ledger/hiero-sdk-python
Length of output: 55
🏁 Script executed:
# Search for similar timestamp overflow handling in topic transactions
rg "chunk.*timestamp|nanos.*i|second.*total_nanos" src/hiero_sdk_python --type py -B 2 -A 2Repository: hiero-ledger/hiero-sdk-python
Length of output: 8576
🏁 Script executed:
# Check body_size property implementation
rg "def body_size|@property" src/hiero_sdk_python/file/file_append_transaction.py -A 5Repository: hiero-ledger/hiero-sdk-python
Length of output: 287
🏁 Script executed:
# Check how body_size is calculated - likely in parent Transaction class
rg "def body_size" src/hiero_sdk_python/transaction/transaction.py -A 10Repository: hiero-ledger/hiero-sdk-python
Length of output: 268
🏁 Script executed:
# Look for _current_chunk_index usage in FileAppendTransaction
rg "_current_chunk_index" src/hiero_sdk_python/file/file_append_transaction.py -B 2 -A 2Repository: hiero-ledger/hiero-sdk-python
Length of output: 1017
🏁 Script executed:
# Check the protobuf Timestamp field constraints
rg "class.*Timestamp|nanos.*999" src/hiero_sdk_python/hapi -r --type py | head -20Repository: hiero-ledger/hiero-sdk-python
Length of output: 181
🏁 Script executed:
# Check if there are any existing tests for chunk timestamp generation
fd "test.*file.*append" src/hiero_sdk_python --type fRepository: hiero-ledger/hiero-sdk-python
Length of output: 55
🏁 Script executed:
# Look at tests to see how chunk timestamps are validated
find . -path "*/test*" -name "*file*append*.py" -o -path "*/test*" -name "*append*.py" | head -5Repository: hiero-ledger/hiero-sdk-python
Length of output: 173
🏁 Script executed:
# Look at the chunk_index usage in build_transaction_body
rg "build_transaction_body|_current_chunk_index" src/hiero_sdk_python/file/file_append_transaction.py -B 3 -A 3Repository: hiero-ledger/hiero-sdk-python
Length of output: 1529
🏁 Script executed:
# Check what body_size calls and how it uses _current_chunk_index
sed -n '1,100p' src/hiero_sdk_python/file/file_append_transaction.py | grep -A 20 "def build_transaction_body"Repository: hiero-ledger/hiero-sdk-python
Length of output: 55
🏁 Script executed:
# Look at the full build_transaction_body implementation
sed -n '150,250p' src/hiero_sdk_python/file/file_append_transaction.pyRepository: hiero-ledger/hiero-sdk-python
Length of output: 3615
🏁 Script executed:
# Check if Timestamp class wraps protobuf Timestamp
cat src/hiero_sdk_python/timestamp.pyRepository: hiero-ledger/hiero-sdk-python
Length of output: 5051
🏁 Script executed:
# Look at what type transaction_id.valid_start is
rg "valid_start" src/hiero_sdk_python/transaction/transaction_id.py -B 2 -A 2 | head -30Repository: hiero-ledger/hiero-sdk-python
Length of output: 1155
🏁 Script executed:
# Check the unit tests for FileAppendTransaction
head -n 100 tests/unit/file_append_transaction_test.pyRepository: hiero-ledger/hiero-sdk-python
Length of output: 3737
🏁 Script executed:
# Check the exact protobuf Timestamp definition for nanos bounds
rg "nanos" src/hiero_sdk_python/hapi/services -l | head -5Repository: hiero-ledger/hiero-sdk-python
Length of output: 147
🏁 Script executed:
# Verify the protobuf constraints - check imported timestamp_pb2
python3 << 'EOF'
# Check what constraints protobuf enforces on Timestamp.nanos
from google.protobuf import descriptor
try:
from hiero_sdk_python.hapi.services import timestamp_pb2
print("Timestamp message fields:")
for field in timestamp_pb2.Timestamp.DESCRIPTOR.fields:
print(f" {field.name}: {field.type} (label: {field.label})")
except Exception as e:
print(f"Error: {e}")
EOFRepository: hiero-ledger/hiero-sdk-python
Length of output: 114
🏁 Script executed:
# Search for how body_size_all_chunks should work - check similar implementation in topic_message_submit_transaction
rg "body_size_all_chunks" src/hiero_sdk_python/consensus/topic_message_submit_transaction.py -A 15Repository: hiero-ledger/hiero-sdk-python
Length of output: 545
🏁 Script executed:
# Verify that _current_chunk_index is supposed to be used when calculating body_size
rg "_current_chunk_index|start_index.*chunk_size" src/hiero_sdk_python/file/file_append_transaction.py -B 2 -A 2Repository: hiero-ledger/hiero-sdk-python
Length of output: 1018
🏁 Script executed:
# Check if body_size_all_chunks in topic_message_submit_transaction also has the same issue
sed -n '450,480p' src/hiero_sdk_python/consensus/topic_message_submit_transaction.pyRepository: hiero-ledger/hiero-sdk-python
Length of output: 55
🏁 Script executed:
# Look at how execute() properly updates _current_chunk_index
sed -n '320,350p' src/hiero_sdk_python/file/file_append_transaction.pyRepository: hiero-ledger/hiero-sdk-python
Length of output: 979
🏁 Script executed:
# Verify protobuf Timestamp nanos maximum value by checking Google protobuf docs or imports
rg "timestamp_pb2" src/hiero_sdk_python -l | head -3 | xargs grep -l "Timestamp"Repository: hiero-ledger/hiero-sdk-python
Length of output: 203
🏁 Script executed:
# Check imports in file_append_transaction to see if using protobuf Timestamp or SDK Timestamp
head -n 30 src/hiero_sdk_python/file/file_append_transaction.pyRepository: hiero-ledger/hiero-sdk-python
Length of output: 1265
🏁 Script executed:
# Look at how the SDK's Timestamp.plus_nanos handles overflow properly
sed -n '95,110p' src/hiero_sdk_python/timestamp.pyRepository: hiero-ledger/hiero-sdk-python
Length of output: 558
🏁 Script executed:
# Check if there are any validation tests for timestamp bounds
rg "nanos.*99" tests/unit -l | head -3Repository: hiero-ledger/hiero-sdk-python
Length of output: 55
🏁 Script executed:
# Verify the actual behavior by checking what chunk_index should do
sed -n '195,220p' src/hiero_sdk_python/file/file_append_transaction.pyRepository: hiero-ledger/hiero-sdk-python
Length of output: 1016
🏁 Script executed:
# Check if there's loop update logic we might have missed
sed -n '468,476p' src/hiero_sdk_python/file/file_append_transaction.pyRepository: hiero-ledger/hiero-sdk-python
Length of output: 367
BLOCKER: Add nanosecond carry logic to prevent timestamp overflow.
Lines 307-310 add i directly to base_timestamp.nanos without handling second boundary rollover. When base_timestamp.nanos + i exceeds 999,999,999, the protobuf Timestamp.nanos field becomes invalid. Later chunks receive malformed transaction IDs.
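The carry arithmetic behind the fix can be sketched independently of the SDK's protobuf types (a minimal illustration; `add_nanos` is a hypothetical helper, not SDK API):

```python
NANOS_PER_SECOND = 1_000_000_000

def add_nanos(seconds: int, nanos: int, delta: int) -> tuple[int, int]:
    """Add delta nanoseconds to a (seconds, nanos) pair, carrying any
    overflow past 999_999_999 into the seconds component."""
    carry, new_nanos = divmod(nanos + delta, NANOS_PER_SECOND)
    return seconds + carry, new_nanos

# Near a second boundary the naive `nanos + i` leaves the valid range...
assert 999_999_980 + 25 > 999_999_999
# ...while the carried version stays within protobuf's [0, 999_999_999].
assert add_nanos(100, 999_999_980, 25) == (101, 5)
assert add_nanos(100, 1, 3) == (100, 4)  # no carry in the common case
```

The same `divmod` split is what the suggested `total_nanos // 1_000_000_000` / `total_nanos % 1_000_000_000` diff performs inline.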
BLOCKER: Advance _current_chunk_index in body_size_all_chunks loop.
Lines 468-476: The loop iterates over transaction IDs but never updates _current_chunk_index. Since _build_proto_body() uses _current_chunk_index to slice the correct chunk (line 182: start_index = self._current_chunk_index * self.chunk_size), all body size calculations operate on chunk 0. The method returns sizes for the same chunk repeatedly instead of one size per chunk.
Suggested fixes

For timestamp overflow:

```diff
- chunk_valid_start = timestamp_pb2.Timestamp(
-     seconds=base_timestamp.seconds,
-     nanos=base_timestamp.nanos + i
- )
+ total_nanos = base_timestamp.nanos + i
+ chunk_valid_start = timestamp_pb2.Timestamp(
+     seconds=base_timestamp.seconds + total_nanos // 1_000_000_000,
+     nanos=total_nanos % 1_000_000_000,
+ )
```

For body_size_all_chunks:

```diff
  original_index = self._current_chunk_index
  for transaction_id in self._transaction_ids:
      self.transaction_id = transaction_id
+     self._current_chunk_index = self._transaction_ids.index(transaction_id)
      sizes.append(self.body_size)
  self._current_chunk_index = original_index
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
if not self._transaction_ids:
    base_timestamp = self.transaction_id.valid_start
    for i in range(self.get_required_chunks()):
        if i == 0:
            # First chunk uses the original transaction ID
            chunk_transaction_id = self.transaction_id
        else:
            # Subsequent chunks get incremented timestamps
            # Add i nanoseconds to space out chunks
            total_nanos = base_timestamp.nanos + i
            chunk_valid_start = timestamp_pb2.Timestamp(
                seconds=base_timestamp.seconds + total_nanos // 1_000_000_000,
                nanos=total_nanos % 1_000_000_000,
            )
            chunk_transaction_id = TransactionId(
                account_id=self.transaction_id.account_id,
                valid_start=chunk_valid_start
            )
```
Force-pushed 1489840 → 2a2bd41
Actionable comments posted: 9
♻️ Duplicate comments (6)
src/hiero_sdk_python/transaction/transaction.py (1)

261-261: ⚠️ Potential issue | 🔴 Critical

Don't route freeze() through subclass freeze_with(None). This now dispatches manual freezing into every override with client=None. TokenCreateTransaction.freeze_with() still reads client.operator_account_id before delegating when auto_renew_period is set, so freeze() can now raise AttributeError on that path. Keep freeze() on a non-virtual path, or make every override null-safe before it touches client.

As per coding guidelines: Any change to public methods (freeze, freeze_with, sign, execute, to_bytes, from_bytes, etc.) includes backwards-compatibility analysis.

src/hiero_sdk_python/file/file_append_transaction.py (2)
300-314: ⚠️ Potential issue | 🔴 Critical

Carry nanosecond overflow into seconds for later chunks. base_timestamp.nanos + i can exceed 999_999_999, which produces an invalid protobuf timestamp for the chunk transaction ID. Multi-chunk append near a second boundary will then fail to freeze or submit correctly.

As per coding guidelines: valid_start timestamps for each chunk are spaced out correctly (e.g., incremented by at least 1 nanosecond).

🛠️ Proposed fix

```diff
- chunk_valid_start = timestamp_pb2.Timestamp(
-     seconds=base_timestamp.seconds,
-     nanos=base_timestamp.nanos + i
- )
+ total_nanos = base_timestamp.nanos + i
+ chunk_valid_start = timestamp_pb2.Timestamp(
+     seconds=base_timestamp.seconds + total_nanos // 1_000_000_000,
+     nanos=total_nanos % 1_000_000_000,
+ )
```
470-478: ⚠️ Potential issue | 🟠 Major

Advance _current_chunk_index while computing body_size_all_chunks. body_size slices self.contents using _current_chunk_index, but this loop only swaps transaction_id. Every entry is therefore measured against the same chunk payload, so a shorter final chunk reports the wrong size.

🛠️ Proposed fix

```diff
- for transaction_id in self._transaction_ids:
-     self.transaction_id = transaction_id
-     sizes.append(self.body_size)
-
- self._current_chunk_index = original_index
- self.transaction_id = original_transaction_id
- return sizes
+ try:
+     for index, transaction_id in enumerate(self._transaction_ids):
+         self._current_chunk_index = index
+         self.transaction_id = transaction_id
+         sizes.append(self.body_size)
+     return sizes
+ finally:
+     self._current_chunk_index = original_index
+     self.transaction_id = original_transaction_id
```

src/hiero_sdk_python/consensus/topic_message_submit_transaction.py (1)
434-442: ⚠️ Potential issue | 🟠 Major

MAJOR: body_size_all_chunks never advances _current_index. Both the message slice and chunkInfo.number depend on _current_index, but this loop only swaps transaction_id. As written, every entry is measured against the same chunk, so the returned size list is wrong for multi-chunk messages.

🛠️ Proposed fix

```diff
- for transaction_id in self._transaction_ids:
-     self.transaction_id = transaction_id
-     sizes.append(self.body_size)
-
- self._current_index = original_index
- self.transaction_id = original_transaction_id
- return sizes
+ try:
+     for index, transaction_id in enumerate(self._transaction_ids):
+         self._current_index = index
+         self.transaction_id = transaction_id
+         sizes.append(self.body_size)
+     return sizes
+ finally:
+     self._current_index = original_index
+     self.transaction_id = original_transaction_id
```

tests/integration/file_append_transaction_e2e_test.py (1)
328-339: ⚠️ Potential issue | 🟠 Major

This "manual freeze" test still exercises freeze_with(). The test name says manual freeze, but it uses freeze_with(env.client) and leaves a debug print() / commented sign call behind. That misses the regression surface this PR is trying to cover.

As per coding guidelines: Tests should be end-to-end: construct → freeze → sign → execute → verify.

🛠️ Proposed fix

```diff
+ node_account_id = env.client.network.nodes[0]._account_id
  tx = (
      FileAppendTransaction()
      .set_file_id(file_id)
      .set_chunk_size(1024)
      .set_contents(content)
-     .freeze_with(env.client)
+     .set_transaction_id(TransactionId.generate(env.client.operator_account_id))
+     .set_node_account_id(node_account_id)
+     .freeze()
  )
-
- # tx.sign(env.client.operator_private_key)
-
- print(tx.get_required_chunks())
+ tx.sign(env.client.operator_private_key)
```

tests/unit/transaction_test.py (1)
282-406: ⚠️ Potential issue | 🟠 Major

Chunk-sizing tests still miss high-risk regressions. These cases can still pass with incorrect implementations: Lines 285 and 322 use exact multiples (no short final chunk), and Line 376 relies on implicit defaults (without explicit set_chunk_size, chunking may not occur at all). Add at least one non-multiple payload and assert the last chunk body size differs.

Suggested test hardening

```diff
 def test_file_append_chunk_tx_should_return_list_of_body_sizes(file_id, account_id, transaction_id):
     """Test file append tx should return array of body sizes for multi-chunk transaction."""
     chunk_size = 1024
-    content = "a" * (chunk_size * 3)
+    content = "a" * (chunk_size * 2 + 137)  # force a short final chunk
@@
     sizes = tx.body_size_all_chunks
     assert isinstance(sizes, list)
     assert len(sizes) == 3
+    assert all(isinstance(s, int) and s > 0 for s in sizes)
+    assert sizes[-1] < sizes[0], "Final chunk should be smaller for non-multiple payload"

 def test_message_submit_chunk_tx_should_return_list_of_body_sizes(topic_id, account_id, transaction_id):
@@
     chunk_size = 1024
-    message= "a" * (chunk_size * 3)
+    message = "a" * (chunk_size * 2 + 137)
@@
     sizes = tx.body_size_all_chunks
     assert isinstance(sizes, list)
     assert len(sizes) == 3
+    assert all(isinstance(s, int) and s > 0 for s in sizes)
+    assert sizes[-1] < sizes[0], "Final chunk should be smaller for non-multiple payload"

 def test_chunked_tx_return_proper_sizes(file_id, account_id, transaction_id):
@@
     large_tx = (
         FileAppendTransaction()
         .set_file_id(file_id)
+        .set_chunk_size(1024)
         .set_contents(large_content)
@@
     )
+    assert len(large_tx.body_size_all_chunks) > 1, "Expected multi-chunk transaction"
@@
     small_tx = (
         FileAppendTransaction()
         .set_file_id(file_id)
+        .set_chunk_size(1024)
         .set_contents(small_content)
```

As per coding guidelines: Unit tests should be extensive - test even if we don't expect users to use it currently.
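Why exact-multiple payloads are weak test inputs can be seen from the chunk arithmetic alone. A sketch (function names are illustrative, not SDK API), where a representative slicing bug is invisible on a multiple but exposed by a short final chunk:

```python
def chunk_sizes(total: int, chunk_size: int) -> list[int]:
    """Correct per-chunk sizes when splitting `total` bytes into `chunk_size` pieces."""
    count = -(-total // chunk_size)  # ceil division
    return [min(chunk_size, total - i * chunk_size) for i in range(count)]

def buggy_chunk_sizes(total: int, chunk_size: int) -> list[int]:
    """A representative bug: always reports full-size chunks."""
    count = -(-total // chunk_size)
    return [chunk_size] * count

# With an exact multiple the bug is indistinguishable from correct behavior...
assert buggy_chunk_sizes(3 * 1024, 1024) == chunk_sizes(3 * 1024, 1024)
# ...but a non-multiple payload exposes it via the short final chunk.
assert buggy_chunk_sizes(2 * 1024 + 137, 1024) != chunk_sizes(2 * 1024 + 137, 1024)
assert chunk_sizes(2 * 1024 + 137, 1024)[-1] == 137
```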
ℹ️ Review info

⚙️ Run configuration
- Configuration used: Path: .coderabbit.yaml
- Review profile: ASSERTIVE
- Plan: Pro
- Run ID: 5c5fae07-ce46-46cd-8e61-093dfc735866

📒 Files selected for processing (9)
- CHANGELOG.md
- src/hiero_sdk_python/consensus/topic_message_submit_transaction.py
- src/hiero_sdk_python/file/file_append_transaction.py
- src/hiero_sdk_python/transaction/transaction.py
- tests/integration/file_append_transaction_e2e_test.py
- tests/integration/topic_message_submit_transaction_e2e_test.py
- tests/unit/mock_server.py
- tests/unit/topic_message_submit_transaction_test.py
- tests/unit/transaction_test.py
src/hiero_sdk_python/consensus/topic_message_submit_transaction.py (outdated comment, resolved)
```python
message_tx = (
    TopicMessageSubmitTransaction()
    .set_topic_id(topic_id)
    .set_message(message)
    .set_transaction_id(TransactionId.generate(env.client.operator_account_id))
    .set_node_account_id(AccountId(0,0,3))
    .freeze()
```
Use a node from env.client.network instead of hard-coding 0.0.3. Hard-coding a specific node account ID makes this manual-freeze test environment-dependent. If the configured integration network exposes a different node set, the test fails for the wrong reason instead of validating the manual-freeze path.

🛠️ Proposed fix

```diff
+ node_account_id = env.client.network.nodes[0]._account_id
  message_tx = (
      TopicMessageSubmitTransaction()
      .set_topic_id(topic_id)
      .set_message(message)
      .set_transaction_id(TransactionId.generate(env.client.operator_account_id))
-     .set_node_account_id(AccountId(0,0,3))
+     .set_node_account_id(node_account_id)
      .freeze()
  )
```
```python
def test_topic_submit_message_raises_error_on_freeze(topic_id):
    """Test transaction raises error on freeze when the transaction_id and node_id not set"""
    tx = (
        TopicMessageSubmitTransaction()
        .set_topic_id(topic_id)
        .set_message("Hello Hiero")
    )

    with pytest.raises(ValueError):
        tx.freeze()
```
Match the expected failure reason in this test. A bare pytest.raises(ValueError) will also pass on unrelated validation failures, so this doesn't pin the new manual-freeze contract. Please match the missing-transaction-id error text here, and ideally add the sibling missing-node-id case as well.

🛠️ Proposed fix

```diff
- with pytest.raises(ValueError):
+ with pytest.raises(
+     ValueError,
+     match="Transaction ID must be set before freezing",
+ ):
      tx.freeze()
```
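The difference `match=` makes can be sketched with a toy freeze (names and error strings are illustrative assumptions, not the SDK's exact text; `match` patterns are applied with `re.search`, so a stable prefix is enough):

```python
import pytest

def freeze(transaction_id=None, node_account_id=None):
    """Toy freeze mirroring the validation order under review (illustrative only)."""
    if transaction_id is None:
        raise ValueError("Transaction ID must be set before freezing")
    if node_account_id is None:
        raise ValueError("Node account ID must be set before freezing")
    return "frozen"

def test_freeze_requires_transaction_id():
    # A bare pytest.raises(ValueError) would also pass on the node-ID failure;
    # match= pins the exact failure reason.
    with pytest.raises(ValueError, match="Transaction ID must be set"):
        freeze()

def test_freeze_requires_node_account_id():
    # The sibling case: transaction ID present, node account ID still missing.
    with pytest.raises(ValueError, match="Node account ID must be set"):
        freeze(transaction_id="0.0.1234@1700000000.000000001")
```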
Actionable comments posted: 5

ℹ️ Review info

⚙️ Run configuration
- Configuration used: Path: .coderabbit.yaml
- Review profile: ASSERTIVE
- Plan: Pro
- Run ID: 70d640d3-78dc-4545-a5c8-db16b77403d8

📒 Files selected for processing (3)
- src/hiero_sdk_python/transaction/transaction.py
- tests/integration/file_append_transaction_e2e_test.py
- tests/unit/mock_server.py
Force-pushed a9fef8c → 31ddd71
Force-pushed a46d754 → 50854e3
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/hiero_sdk_python/consensus/topic_message_submit_transaction.py (1)

274-281: ⚠️ Potential issue | 🔴 Critical

BLOCKER: carry nanosecond overflow when deriving chunk TransactionIds. When i approaches the chunk count limit (default 20), base_timestamp.nanos + i can exceed 999_999_999, which violates the protobuf Timestamp.nanos constraint [0, 999_999_999]. For example, if base_timestamp.nanos = 999_999_999 and i = 19, the result is 1_000_000_018, which is invalid. Carry the overflow into seconds before constructing the protobuf Timestamp:

🐛 Proposed fix

```diff
- chunk_valid_start = timestamp_pb2.Timestamp(
-     seconds=base_timestamp.seconds,
-     nanos=base_timestamp.nanos + i
- )
+ total_nanos = base_timestamp.nanos + i
+ chunk_valid_start = timestamp_pb2.Timestamp(
+     seconds=base_timestamp.seconds + total_nanos // 1_000_000_000,
+     nanos=total_nanos % 1_000_000_000,
+ )
```
♻️ Duplicate comments (4)

src/hiero_sdk_python/transaction/transaction.py (1)

923-927: ⚠️ Potential issue | 🟠 Major

size still under-reports the default submission path. On a freshly frozen transaction, Lines 396-397 add the operator signature inside execute(), but this property only measures the signatures already attached to the object. That makes pre-submission size checks wrong on the common freeze(...).execute(client) path unless the caller signed manually first. Either document this as the current encoded size, or require the transaction to be signed before exposing size.

src/hiero_sdk_python/consensus/topic_message_submit_transaction.py (1)
420-436: ⚠️ Potential issue | 🟠 Major

MAJOR: advance _current_index while sizing each chunk. body_size rebuilds the protobuf from the active chunk state, so this loop currently measures the same chunk repeatedly. A shorter final chunk will report the wrong size and break chunk-planning logic.

♻️ Proposed fix

```diff
  self._require_frozen()
  sizes = []
  original_index = self._current_index
  original_transaction_id = self.transaction_id
- for transaction_id in self._transaction_ids:
-     self.transaction_id = transaction_id
-     sizes.append(self.body_size)
-
- self._current_index = original_index
- self.transaction_id = original_transaction_id
- return sizes
+ try:
+     for index, transaction_id in enumerate(self._transaction_ids):
+         self._current_index = index
+         self.transaction_id = transaction_id
+         sizes.append(self.body_size)
+     return sizes
+ finally:
+     self._current_index = original_index
+     self.transaction_id = original_transaction_id
```

src/hiero_sdk_python/file/file_append_transaction.py (2)
294-310: ⚠️ Potential issue | 🔴 Critical

Carry nanosecond overflow into seconds when generating chunk IDs. Adding i directly to base_timestamp.nanos can produce invalid Timestamp values for later chunks near a second boundary. Carry the overflow into seconds before building the next TransactionId. As per coding guidelines: valid_start timestamps for each chunk are spaced out correctly (e.g., incremented by at least 1 nanosecond).

🐛 Proposed fix

```diff
- chunk_valid_start = timestamp_pb2.Timestamp(
-     seconds=base_timestamp.seconds,
-     nanos=base_timestamp.nanos + i
- )
+ total_nanos = base_timestamp.nanos + i
+ chunk_valid_start = timestamp_pb2.Timestamp(
+     seconds=base_timestamp.seconds + total_nanos // 1_000_000_000,
+     nanos=total_nanos % 1_000_000_000,
+ )
```
456-472: ⚠️ Potential issue | 🟠 Major

Advance _current_chunk_index while sizing each chunk. Line 189 slices contents from _current_chunk_index, but this helper only swaps transaction_id. For multi-chunk appends, it will report the active chunk's size for every entry instead of measuring each chunk body.

♻️ Proposed fix

```diff
  self._require_frozen()
  sizes = []
  original_index = self._current_chunk_index
  original_transaction_id = self.transaction_id
- for transaction_id in self._transaction_ids:
-     self.transaction_id = transaction_id
-     sizes.append(self.body_size)
-
- self._current_chunk_index = original_index
- self.transaction_id = original_transaction_id
- return sizes
+ try:
+     for index, transaction_id in enumerate(self._transaction_ids):
+         self._current_chunk_index = index
+         self.transaction_id = transaction_id
+         sizes.append(self.body_size)
+     return sizes
+ finally:
+     self._current_chunk_index = original_index
+     self.transaction_id = original_transaction_id
```
ℹ️ Review info

⚙️ Run configuration
- Configuration used: Path: .coderabbit.yaml
- Review profile: ASSERTIVE
- Plan: Pro
- Run ID: afa76868-5e45-4fbb-8bb5-70a1b0407d61

📒 Files selected for processing (6)
- src/hiero_sdk_python/consensus/topic_message_submit_transaction.py
- src/hiero_sdk_python/file/file_append_transaction.py
- src/hiero_sdk_python/node.py
- src/hiero_sdk_python/transaction/transaction.py
- tests/integration/file_append_transaction_e2e_test.py
- tests/unit/mock_server.py
```python
if self._root_certificates:
    # Use the certificate that is provided
    self._node_pem_cert = self._root_certificates
    print("node cert ", self._node_pem_cert)
```
Remove the certificate debug dump. This branch runs in normal SDK usage whenever custom root certificates are configured, and it writes the full PEM bundle to stdout. That leaks certificate material into application/test logs and bypasses the SDK logger configuration.

🛡️ Proposed fix

```diff
  if self._address._is_transport_security():
      if self._root_certificates:
          # Use the certificate that is provided
          self._node_pem_cert = self._root_certificates
-         print("node cert ", self._node_pem_cert)
      else:
```
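If a diagnostic is still wanted on this path, routing it through a module logger keeps it out of stdout and under the SDK's logging configuration. A sketch under stated assumptions: `select_root_certificates` is an illustrative helper, not the SDK's actual API, and it deliberately logs only the bundle size, never the certificate material itself:

```python
import logging
from typing import Optional

logger = logging.getLogger(__name__)

def select_root_certificates(root_certificates: Optional[bytes]) -> Optional[bytes]:
    """Pick the PEM bundle without dumping its contents (illustrative helper)."""
    if root_certificates:
        # Log a size-level fact, never the certificate bytes themselves.
        logger.debug("using caller-provided root certificates (%d bytes)", len(root_certificates))
        return root_certificates
    logger.debug("no custom root certificates configured; using system trust store")
    return None
```

Debug-level records are suppressed by default, so applications only see this when they opt in via their own logging setup.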
```python
tx = (
    FileAppendTransaction()
    .set_file_id(file_id)
    .set_chunk_size(1024)
    .set_contents(content)
    .set_transaction_id(TransactionId.generate(env.client.operator_account_id))
    .set_node_account_id(AccountId(0,0,3))
    .freeze()
```
Use a configured network node instead of hardcoding 0.0.3. This only covers the manual-freeze path on environments that expose that exact node account. Pulling the node ID from env.client.network keeps the test deterministic across CI, localnet, and any future network config.

🧪 Proposed fix

```diff
+ node_account_id = next(node._account_id for node in env.client.network.nodes)
+
  tx = (
      FileAppendTransaction()
      .set_file_id(file_id)
      .set_chunk_size(1024)
      .set_contents(content)
      .set_transaction_id(TransactionId.generate(env.client.operator_account_id))
-     .set_node_account_id(AccountId(0,0,3))
+     .set_node_account_id(node_account_id)
      .freeze()
  )
```
```python
network.set_transport_security(False)
network.set_verify_certificates(False)
client = Client(network)

# Force non-tls channel
for node in client.network.nodes:
    node._address._is_transport_security = lambda: False
    node._set_verify_certificates(False)
    node._close()
```
Remove the per-node TLS monkey patch from the fixture.
Both Network.set_transport_security() and Network.set_verify_certificates() already fan out to existing _Nodes, so overriding node._address._is_transport_security here just makes the helper pass even if that real propagation path breaks later. Apply the configuration on client.network and drop the direct _Node mutations so the tests keep exercising the real behavior.
🐛 Proposed fix
```diff
 # Create network and client
 network = Network(nodes=nodes)
-network.set_transport_security(False)
-network.set_verify_certificates(False)
 client = Client(network)
-
-# Force non-tls channel
-for node in client.network.nodes:
-    node._address._is_transport_security = lambda: False
-    node._set_verify_certificates(False)
-    node._close()
+client.network.set_transport_security(False)
+client.network.set_verify_certificates(False)
```
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/hiero_sdk_python/consensus/topic_message_submit_transaction.py (1)
274-281: ⚠️ Potential issue | 🔴 Critical — BLOCKER: carry nanosecond overflow when deriving chunk transaction IDs.

`base_timestamp.nanos + i` can exceed the protobuf `Timestamp.nanos` ceiling of `999_999_999`, producing invalid `TransactionId`s for later chunks. This needs second rollover before multi-chunk freeze completes.

🧩 Suggested fix

```diff
-chunk_valid_start = timestamp_pb2.Timestamp(
-    seconds=base_timestamp.seconds,
-    nanos=base_timestamp.nanos + i
-)
+total_nanos = base_timestamp.nanos + i
+chunk_valid_start = timestamp_pb2.Timestamp(
+    seconds=base_timestamp.seconds + total_nanos // 1_000_000_000,
+    nanos=total_nanos % 1_000_000_000,
+)
```

As per coding guidelines:
`base_timestamp.nanos + i` MUST be checked for overflow and carried into `seconds`.
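The carry rule quoted above can be sketched independently of the SDK. This is an illustrative helper, not SDK code — the function name and tuple return are assumptions:

```python
NANOS_PER_SECOND = 1_000_000_000

def offset_timestamp(seconds: int, nanos: int, i: int) -> tuple[int, int]:
    """Add i nanoseconds to a (seconds, nanos) timestamp, carrying any
    overflow into seconds so nanos stays within [0, 999_999_999]."""
    total_nanos = nanos + i
    return (seconds + total_nanos // NANOS_PER_SECOND,
            total_nanos % NANOS_PER_SECOND)

# Without the carry, this chunk would get nanos=1_000_000_000, which the
# protobuf Timestamp type rejects.
print(offset_timestamp(1_700_000_000, 999_999_999, 1))  # (1700000001, 0)
```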
♻️ Duplicate comments (12)
src/hiero_sdk_python/node.py (1)
127-128: ⚠️ Potential issue | 🟠 Major — Remove the certificate dump from `_get_channel()`.

This writes the configured PEM bundle to stdout on a normal TLS path and bypasses the SDK logger configuration. Please delete it rather than logging certificate material.
🛡️ Proposed fix
```diff
 if self._root_certificates:
     # Use the certificate that is provided
     self._node_pem_cert = self._root_certificates
-    print("node cert ", self._node_pem_cert)
```

As per coding guidelines:

REVIEW FOCUS 3 — CERTIFICATE HANDLING (SECURITY CRITICAL).

tests/unit/mock_server.py (2)
131-142: ⚠️ Potential issue | 🟠 Major — Reserve the test port at bind time, not with a probe socket.

This helper releases the socket before `MockServer` binds, so another process can claim the returned port and make the fixture flaky under parallel runs. The `50212` retry only changes which unreserved port gets returned.

As per coding guidelines:
CRITICAL PRINCIPLES - Tests Must Fail Loudly & Deterministically.
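The race the comment describes is conventionally avoided by binding the real server socket to port 0 and reading the kernel-assigned port back, never releasing the socket in between. A minimal stdlib sketch under that assumption (the helper name is invented, not the fixture's API):

```python
import socket

def bind_ephemeral_server() -> tuple[socket.socket, int]:
    """Bind a listening socket to an OS-assigned ephemeral port and keep it
    open, so no other process can claim the port before the server uses it."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind(("127.0.0.1", 0))   # port 0: the kernel picks a free port
    sock.listen()
    port = sock.getsockname()[1]  # the reserved port, still held by `sock`
    return sock, port

sock, port = bind_ephemeral_server()
print(port)  # a nonzero port, guaranteed ours for as long as `sock` is open
sock.close()
```

A probe-then-close helper only proves the port *was* free; holding the bound socket is what makes the reservation deterministic.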
182-190: ⚠️ Potential issue | 🟠 Major — Drop the per-node TLS monkey patch and configure `client.network` directly.

Because `_find_free_port()` already avoids `50212`, these mock addresses are non-TLS without overriding `_is_transport_security()`. Rebinding the method here bypasses the real `Network` → `_Node` propagation path and can hide TLS configuration regressions; the explicit `_close()` is redundant as well.

🐛 Proposed fix

```diff
-network = Network(nodes=nodes)
-network.set_transport_security(False)
-network.set_verify_certificates(False)
-client = Client(network)
-
-# Force non-tls channel
-for node in client.network.nodes:
-    node._address._is_transport_security = lambda: False
-    node._set_verify_certificates(False)
-    node._close()
+client = Client(Network(nodes=nodes))
+client.network.set_transport_security(False)
+client.network.set_verify_certificates(False)
```

As per coding guidelines:
PRIORITY 1 - Protect Against Breaking Changes, and Prefer testing real functionality over mocked behavior.

tests/unit/topic_message_submit_transaction_test.py (1)
643-652: ⚠️ Potential issue | 🟡 Minor — Match the expected manual-freeze error.

`pytest.raises(ValueError)` here will also pass on unrelated validation failures. Match the missing-transaction-id message, and add the sibling missing-node-id case separately so this locks down the new `freeze()` contract.

🧪 Tighten the assertion

```diff
-with pytest.raises(ValueError):
+with pytest.raises(
+    ValueError,
+    match="Transaction ID must be set before freezing",
+):
     tx.freeze()
```

As per coding guidelines:
Tests must provide useful error messages when they fail for future debugging.

src/hiero_sdk_python/consensus/topic_message_submit_transaction.py (1)
428-436: ⚠️ Potential issue | 🟠 Major — MAJOR: `body_size_all_chunks` is still measuring the same chunk repeatedly.

The loop swaps only `transaction_id`; `_current_index` never changes, so a shorter final chunk reports the wrong size. If `body_size` raises, the transaction state also stays mutated unless restoration runs in a `finally` block.

🧮 Suggested fix

```diff
 original_index = self._current_index
 original_transaction_id = self.transaction_id
-for transaction_id in self._transaction_ids:
-    self.transaction_id = transaction_id
-    sizes.append(self.body_size)
-
-self._current_index = original_index
-self.transaction_id = original_transaction_id
-return sizes
+try:
+    for index, transaction_id in enumerate(self._transaction_ids):
+        self._current_index = index
+        self.transaction_id = transaction_id
+        sizes.append(self.body_size)
+    return sizes
+finally:
+    self._current_index = original_index
+    self.transaction_id = original_transaction_id
```

tests/integration/file_append_transaction_e2e_test.py (1)
328-335: ⚠️ Potential issue | 🟠 Major — Use a configured node instead of hardcoding `0.0.3`.

This manual-freeze path becomes environment-dependent when the active network does not expose that account. Pick a node from `env.client.network` so the test stays deterministic across CI and localnet setups.

🧪 Suggested fix

```diff
+node_account_id = next(node._account_id for node in env.client.network.nodes)
+
 tx = (
     FileAppendTransaction()
     .set_file_id(file_id)
     .set_chunk_size(1024)
     .set_contents(content)
     .set_transaction_id(TransactionId.generate(env.client.operator_account_id))
-    .set_node_account_id(AccountId(0,0,3))
+    .set_node_account_id(node_account_id)
     .freeze()
 )
```

As per coding guidelines:
Integration tests validate real network behavior - they must be reliable and safe.

tests/integration/topic_message_submit_transaction_e2e_test.py (1)
363-369: ⚠️ Potential issue | 🟠 Major — Use a node from `env.client.network` instead of hardcoding `0.0.3`.

Hardcoding a specific node account ID makes this manual-freeze test environment-dependent. If the configured integration network exposes a different node set, the test fails for the wrong reason.

🧪 Suggested fix

```diff
+node_account_id = next(node._account_id for node in env.client.network.nodes)
+
 message_tx = (
     TopicMessageSubmitTransaction()
     .set_topic_id(topic_id)
     .set_message(message)
     .set_transaction_id(TransactionId.generate(env.client.operator_account_id))
-    .set_node_account_id(AccountId(0,0,3))
+    .set_node_account_id(node_account_id)
     .freeze()
 )
```

As per coding guidelines:
Integration tests validate real network behavior - they must be reliable and safe.

tests/unit/transaction_test.py (2)
204-205: 🛠️ Refactor suggestion | 🟠 Major — Assert the sizing API types explicitly.

These tests compare values, but they do not lock down `size`/`body_size` as `int` or each `body_size_all_chunks` entry as `int`. Add explicit type assertions so API drift fails loudly.

🧪 Suggested assertions

```diff
 assert tx1.size == tx2.size
+assert isinstance(tx1.size, int)
+assert isinstance(tx2.size, int)
@@
 assert tx1.body_size < tx2.body_size
+assert isinstance(tx1.body_size, int)
+assert isinstance(tx2.body_size, int)
@@
 sizes = tx.body_size_all_chunks
 assert isinstance(sizes, list)
+assert all(isinstance(size, int) for size in sizes)
```

As per coding guidelines:
Assert return types where relevant (e.g., `assert isinstance(result, AccountId)`).

Also applies to: 230-230, 251-252, 297-299, 314-316, 334-336, 351-353, 369-371
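The drift these assertions guard against can be shown with a stand-in; `FakeTransaction` below is invented for illustration and is not SDK code:

```python
class FakeTransaction:
    """Toy object exposing the same sizing surface as the reviewed API."""
    def __init__(self, chunks: list[bytes]) -> None:
        self._chunks = chunks

    @property
    def body_size(self) -> int:
        # Size of the currently selected (here: first) chunk.
        return len(self._chunks[0])

    @property
    def body_size_all_chunks(self) -> list[int]:
        return [len(chunk) for chunk in self._chunks]

tx = FakeTransaction([b"a" * 10, b"b" * 4])

# Value comparisons alone would keep passing if body_size started returning
# a float; the isinstance checks are what pin the contract down.
assert isinstance(tx.body_size, int)
assert isinstance(tx.body_size_all_chunks, list)
assert all(isinstance(size, int) for size in tx.body_size_all_chunks)
```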
282-299: ⚠️ Potential issue | 🟠 Major — These chunk-sizing tests still miss the cases that would catch a broken implementation.

The multi-chunk cases use exact multiples of `chunk_size`, `test_tx_with_no_content_should_return_single_body_chunk()` uses `" "` instead of empty content, and `large_content = 2048` never chunks because `FileAppendTransaction` defaults to 4096-byte chunks. A helper that always returns the first chunk's size would still pass here.

Use a non-even final chunk, assert the last entry differs from the earlier ones, switch the empty-content case to `""`, and set `chunk_size` explicitly in `test_chunked_tx_return_proper_sizes()`.

As per coding guidelines:
Cover happy paths AND unhappy paths/edge cases.

Also applies to: 319-336, 356-371, 374-406
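Why a non-even final chunk matters is easy to see with plain slicing; this helper is illustrative only, not the SDK's chunker:

```python
def chunk_sizes(content: bytes, chunk_size: int) -> list[int]:
    """Split content into chunk_size pieces and report each piece's length."""
    return [len(content[i:i + chunk_size])
            for i in range(0, len(content), chunk_size)]

# 2500 bytes with 1024-byte chunks: the last chunk is shorter than the rest,
# so a sizer that always measures the first chunk gets caught by the 452.
sizes = chunk_sizes(b"x" * 2500, 1024)
print(sizes)  # [1024, 1024, 452]
```

With an exact multiple (e.g. 2048 bytes), every entry equals `chunk_size` and a first-chunk-only bug is invisible.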
src/hiero_sdk_python/file/file_append_transaction.py (2)
294-310: ⚠️ Potential issue | 🔴 Critical — Carry nanoseconds into seconds when deriving chunk transaction IDs.

`base_timestamp.nanos + i` can exceed protobuf `Timestamp.nanos` bounds and generate invalid `TransactionId`s for later chunks. The rollover needs to be handled here, not left to protobuf validation.

🧩 Suggested fix

```diff
-chunk_valid_start = timestamp_pb2.Timestamp(
-    seconds=base_timestamp.seconds,
-    nanos=base_timestamp.nanos + i
-)
+total_nanos = base_timestamp.nanos + i
+chunk_valid_start = timestamp_pb2.Timestamp(
+    seconds=base_timestamp.seconds + total_nanos // 1_000_000_000,
+    nanos=total_nanos % 1_000_000_000,
+)
```

As per coding guidelines:
valid_start timestamps for each chunk are spaced out correctly (e.g., incremented by at least 1 nanosecond).
464-472: ⚠️ Potential issue | 🟠 Major — Advance `_current_chunk_index` while sizing each chunk.

The current loop only swaps `transaction_id`, so every entry uses whatever `_current_chunk_index` was before the call. If `body_size` raises, restoration also needs to happen in a `finally` block.

🧮 Suggested fix

```diff
 original_index = self._current_chunk_index
 original_transaction_id = self.transaction_id
-for transaction_id in self._transaction_ids:
-    self.transaction_id = transaction_id
-    sizes.append(self.body_size)
-
-self._current_chunk_index = original_index
-self.transaction_id = original_transaction_id
-return sizes
+try:
+    for index, transaction_id in enumerate(self._transaction_ids):
+        self._current_chunk_index = index
+        self.transaction_id = transaction_id
+        sizes.append(self.body_size)
+    return sizes
+finally:
+    self._current_chunk_index = original_index
+    self.transaction_id = original_transaction_id
```

As per coding guidelines:
Chunk payload slicing in build_transaction_body() accurately extracts the current chunk before signing and execution.

src/hiero_sdk_python/transaction/transaction.py (1)
923-927: ⚠️ Potential issue | 🟠 Major — `size` under-reports what `execute()` will send until the transaction is signed.

On a freshly frozen transaction this serializes an empty `sigMap`, but `execute()` adds the operator signature before submission when needed. That means the new pre-submission size helper can approve a transaction that exceeds the limit once it is actually sent. Either require fully signed transactions before reading `size`, or document/rename this as the current encoded size.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: fa1f012b-c199-4b7c-9911-59296d55056c
📒 Files selected for processing (10)
- CHANGELOG.md
- src/hiero_sdk_python/consensus/topic_message_submit_transaction.py
- src/hiero_sdk_python/file/file_append_transaction.py
- src/hiero_sdk_python/node.py
- src/hiero_sdk_python/transaction/transaction.py
- tests/integration/file_append_transaction_e2e_test.py
- tests/integration/topic_message_submit_transaction_e2e_test.py
- tests/unit/mock_server.py
- tests/unit/topic_message_submit_transaction_test.py
- tests/unit/transaction_test.py
Signed-off-by: Manish Dait <daitmanish88@gmail.com>
Up to standards ✅

🟢 Issues
| Metric | Results |
|---|---|
| Complexity | 13 |
TIP: This summary will be updated as you push new changes.
Description:
This PR introduces new helper properties to allow calculating transaction size prior to submission.
Added Properties
- `size`: Returns the total encoded transaction size in bytes (including signatures).
- `body_size`: Returns the encoded `TransactionBody` size in bytes.
- `body_size_all_chunks`: Returns a list of encoded body sizes for each chunk in multi-chunk transactions (e.g., `FileAppendTransaction`, `TopicMessageSubmitTransaction`).

Changes Made
- Added the new `@property` helpers: `size`, `body_size`, `body_size_all_chunks`
- Refactored `freeze()` to properly support chunked transaction size calculations
- Fixed an issue in `FileAppendTransaction.execute()` caused by improper indentation
- Added comprehensive unit and e2e tests
Related issue(s):
Fixes #1794
Notes for reviewer:
Checklist