
feat: Added Transaction size calculation methods#1795

Open
manishdait wants to merge 19 commits into hiero-ledger:main from manishdait:feat/tx-size

Conversation

@manishdait
Contributor

@manishdait manishdait commented Feb 13, 2026

Description:
This PR introduces new helper properties to allow calculating transaction size prior to submission.

Added Properties

  • size: Returns the total encoded transaction size in bytes (including signatures).
  • body_size: Returns the encoded TransactionBody size in bytes.
  • body_size_all_chunks: Returns a list of encoded body sizes for each chunk in multi-chunk transactions (e.g., FileAppendTransaction, TopicMessageSubmitTransaction).
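
The chunking arithmetic behind body_size_all_chunks can be sketched with ceiling division. This is a minimal, self-contained illustration; required_chunks is a hypothetical helper, not part of the SDK:

```python
# Hypothetical helper: how many chunks a payload of a given length occupies.
def required_chunks(content_len: int, chunk_size: int) -> int:
    # Ceiling division without importing math.ceil
    return -(-content_len // chunk_size)

print(required_chunks(3 * 1024, 1024))        # 3 full chunks
print(required_chunks(2 * 1024 + 137, 1024))  # 3, with a short final chunk
```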

Changes Made

  • Added the new @property helpers:

    • size
    • body_size
    • body_size_all_chunks
  • Refactored freeze() to properly support chunked transaction size calculations

  • Fixed an issue in FileAppendTransaction.execute() caused by improper indentation

  • Added comprehensive unit and e2e tests

Related issue(s):

Fixes #1794

Notes for reviewer:

Checklist

  • Documented (Code comments, README, etc.)
  • Tested (unit, integration, etc.)

@codecov

codecov bot commented Feb 13, 2026

Codecov Report

❌ Patch coverage is 98.41270% with 1 line in your changes missing coverage. Please review.

Impacted file tree graph

@@            Coverage Diff             @@
##             main    #1795      +/-   ##
==========================================
+ Coverage   93.68%   93.72%   +0.03%     
==========================================
  Files         144      144              
  Lines        9350     9376      +26     
==========================================
+ Hits         8760     8788      +28     
+ Misses        590      588       -2     

@github-actions

Hi, this is MergeConflictBot.
Your pull request cannot be merged because it contains merge conflicts.

Please resolve these conflicts locally and push the changes.

Quick Fix for CHANGELOG.md Conflicts

If your conflict is only in CHANGELOG.md, you can resolve it easily using the GitHub web editor:

  1. Click on the "Resolve conflicts" button in the PR
  2. Accept both changes (keep both changelog entries)
  3. Click "Mark as resolved"
  4. Commit the merge

For all other merge conflicts, please read:

Thank you for contributing!

@github-actions

Hi @manishdait,

This pull request has had no commit activity for 10 days. Are you still working on it?
To keep the PR active, you can:

  • Push a new commit.
  • Comment /working on the linked issue (not this PR).

If you're no longer working on this, please comment /unassign on the linked issue to release it for others. Otherwise, this PR may be closed due to inactivity.

Reach out on discord or join our office hours if you need assistance.

From the Python SDK Team

@github-actions

github-actions bot commented Mar 9, 2026

Hi @manishdait, this is InactivityBot 👋

This pull request has had no new commits for 21 days, so I'm closing it and unassigning you from the linked issue to keep the backlog healthy.

If you're no longer interested, no action is needed.

Tip: You can comment /unassign on any issue to proactively step away before this bot kicks in.

If you'd like to continue working on this later, feel free to comment /assign on the issue to get re-assigned, and open a new PR when you're ready. 🚀

@github-actions github-actions bot closed this Mar 9, 2026
@manishdait manishdait reopened this Mar 10, 2026
@github-actions

Hi @manishdait, this is InactivityBot 👋

This pull request has had no new commits for 22 days, so I'm closing it and unassigning you from the linked issue to keep the backlog healthy.

If you're no longer interested, no action is needed.

Tip: You can comment /unassign on any issue to proactively step away before this bot kicks in.

If you'd like to continue working on this later, feel free to comment /assign on the issue to get re-assigned, and open a new PR when you're ready. 🚀

@github-actions

Hi, this is WorkflowBot.
Your pull request cannot be merged as it is not passing all our workflow checks.
Please click on each check to review the logs and resolve issues so all checks pass.
To help you:

@manishdait manishdait force-pushed the feat/tx-size branch 2 times, most recently from fbac757 to 1489840 (March 22, 2026 08:00)
@manishdait manishdait marked this pull request as ready for review March 23, 2026 17:29
@manishdait manishdait requested review from a team as code owners March 23, 2026 17:29
@coderabbitai
Contributor

coderabbitai bot commented Mar 23, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.


Walkthrough

Adds transaction size calculation properties (size, body_size, body_size_all_chunks), centralizes transaction ID and node resolution during freeze, adjusts multi-chunk freeze behavior, introduces tests validating sizing/chunking, and updates CHANGELOG.

Changes

  • Core transaction logic (src/hiero_sdk_python/transaction/transaction.py): Added size and body_size properties; freeze() delegates to freeze_with(None); new helpers _resolve_transaction_id and _resolve_node_ids; tightened freeze validation; build_transaction_body() annotated.
  • File append multi-chunk (src/hiero_sdk_python/file/file_append_transaction.py): freeze_with simplified to rely on parent freeze and to ensure _transaction_ids generation; added body_size_all_chunks property that iterates chunks by temporarily swapping transaction_id; minor TYPE_CHECKING import change.
  • Topic message multi-chunk (src/hiero_sdk_python/consensus/topic_message_submit_transaction.py): freeze_with now resolves transaction IDs unconditionally when needed; added body_size_all_chunks property to collect per-chunk body_size by temporarily mutating transaction_id and restoring state.
  • Node / channel plumbing (src/hiero_sdk_python/node.py): Added a debug print emitting _node_pem_cert when root certificates exist in _get_channel().
  • Tests — unit (tests/unit/transaction_test.py, tests/unit/topic_message_submit_transaction_test.py, tests/unit/mock_server.py): Added sizing and chunking tests and fixtures; new freeze-related unit test; mock server port retry behavior changed and TLS handling moved to network-level with additional per-node overrides.
  • Tests — integration (tests/integration/file_append_transaction_e2e_test.py, tests/integration/topic_message_submit_transaction_e2e_test.py): Added E2E tests that manually set TransactionId/NodeAccountId, call .freeze(), sign/execute multi-chunk FileAppend and TopicMessageSubmit flows, and assert expected ledger state.
  • Changelog (CHANGELOG.md): Added Unreleased entry documenting new transaction size properties: size, body_size, and body_size_all_chunks.

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant Transaction
    participant Node
    participant Ledger
    Client->>Transaction: configure (payload, chunk_size, optional ids)
    Client->>Transaction: freeze_with(client)
    Transaction->>Transaction: _resolve_transaction_id(client) / _resolve_node_ids(client)
    Transaction->>Transaction: build per-chunk TransactionBody (compute body_size / size)
    Transaction->>Node: assign/send signed transaction chunk(s)
    Node->>Ledger: submit transaction chunk(s)
    Ledger-->>Node: return receipt(s)
    Node-->>Client: return execution result
```

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 inconclusive)

  • Out of Scope Changes check (❓ Inconclusive): Most changes are directly scoped to transaction size calculation. However, modifications to mock_server.py (TLS setup refactoring) and node.py (debug print statement) appear tangential to the primary objective. Resolution: Clarify whether the TLS/mock server and debug logging changes in node.py are necessary for the size calculation feature or should be in separate pull requests.
✅ Passed checks (4 passed)
  • Title check (✅ Passed): The title accurately describes the main feature addition—transaction size calculation methods—and clearly summarizes the primary changeset.
  • Description check (✅ Passed): The description is directly related to the changeset, explaining the new properties added, refactoring performed, and tests included.
  • Linked Issues check (✅ Passed): All three required properties (size, body_size, body_size_all_chunks) are implemented across Transaction and its subclasses, with comprehensive unit and integration tests validating the functionality.
  • Docstring Coverage (✅ Passed): Docstring coverage is 90.70%, which is sufficient. The required threshold is 80.00%.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 6

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
src/hiero_sdk_python/transaction/transaction.py (1)

419-432: ⚠️ Potential issue | 🟠 Major

Return type of build_transaction_body() must be transaction_pb2.TransactionBody.

Line 419 declares the return type as transaction_pb2.Transaction, but the docstring and all subclass implementations use transaction_pb2.TransactionBody. This creates a type contract violation: every subclass override (transfer_transaction.py, batch_transaction.py, etc.) declares TransactionBody, making them incompatible with the base signature.

Suggested fix
-    def build_transaction_body(self) -> transaction_pb2.Transaction:
+    def build_transaction_body(self) -> transaction_pb2.TransactionBody:
src/hiero_sdk_python/consensus/topic_message_submit_transaction.py (1)

273-287: ⚠️ Potential issue | 🔴 Critical

BLOCKER: carry seconds when chunk valid_start.nanos overflows.

Lines 280-283 add i directly to base_timestamp.nanos without checking for overflow. Protobuf Timestamp.nanos must be in the range [0, 999_999_999]. With the default max_chunks=20, if a transaction is created with a valid_start near the end of a second (e.g., nanos=999_999_980), chunk indices approaching 20 will cause the nanos value to exceed the valid range, resulting in invalid or rejected transaction IDs on the network.

Suggested fix
-                    chunk_valid_start = timestamp_pb2.Timestamp(
-                        seconds=base_timestamp.seconds,
-                        nanos=base_timestamp.nanos + i
-                    )
+                    total_nanos = base_timestamp.nanos + i
+                    chunk_valid_start = timestamp_pb2.Timestamp(
+                        seconds=base_timestamp.seconds + total_nanos // 1_000_000_000,
+                        nanos=total_nanos % 1_000_000_000,
+                    )
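
The carry arithmetic in this fix can be checked in isolation. A standalone sketch follows; normalize is an illustrative helper, not SDK code:

```python
def normalize(seconds: int, nanos: int) -> tuple:
    # Carry nanosecond overflow into seconds so nanos stays within
    # protobuf Timestamp's valid range [0, 999_999_999].
    return seconds + nanos // 1_000_000_000, nanos % 1_000_000_000

# A valid_start near the end of a second plus a chunk offset of 25 ns
print(normalize(1_700_000_000, 999_999_980 + 25))  # (1700000001, 5)
```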

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

Run ID: 9f37e713-f101-4e75-99e8-ac95e67c3a50

📥 Commits

Reviewing files that changed from the base of the PR and between 2898bf9 and 1489840.

📒 Files selected for processing (8)
  • CHANGELOG.md
  • src/hiero_sdk_python/consensus/topic_message_submit_transaction.py
  • src/hiero_sdk_python/file/file_append_transaction.py
  • src/hiero_sdk_python/transaction/transaction.py
  • tests/integration/file_append_transaction_e2e_test.py
  • tests/integration/topic_message_submit_transaction_e2e_test.py
  • tests/unit/topic_message_submit_transaction_test.py
  • tests/unit/transaction_test.py

Comment on lines +297 to +314
if not self._transaction_ids:
    base_timestamp = self.transaction_id.valid_start

    for i in range(self.get_required_chunks()):
        if i == 0:
            # First chunk uses the original transaction ID
            chunk_transaction_id = self.transaction_id
        else:
            # Subsequent chunks get incremented timestamps
            # Add i nanoseconds to space out chunks
            chunk_valid_start = timestamp_pb2.Timestamp(
                seconds=base_timestamp.seconds,
                nanos=base_timestamp.nanos + i
            )
            chunk_transaction_id = TransactionId(
                account_id=self.transaction_id.account_id,
                valid_start=chunk_valid_start
            )
Contributor


⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

# First, let's look at the file_append_transaction.py code around lines 307-310
head -n 320 src/hiero_sdk_python/file/file_append_transaction.py | tail -n 40

Repository: hiero-ledger/hiero-sdk-python

Length of output: 1610


🏁 Script executed:

# Check the full freeze_with method and chunk ID generation
sed -n '270,320p' src/hiero_sdk_python/file/file_append_transaction.py

Repository: hiero-ledger/hiero-sdk-python

Length of output: 2038


🏁 Script executed:

# Look at the body_size_all_chunks implementation mentioned
sed -n '460,480p' src/hiero_sdk_python/file/file_append_transaction.py

Repository: hiero-ledger/hiero-sdk-python

Length of output: 593


🏁 Script executed:

# Check protobuf Timestamp definition - find where it's imported/defined
find src -name "timestamp_pb2.py" -o -name "*.proto" | head -20

Repository: hiero-ledger/hiero-sdk-python

Length of output: 55


🏁 Script executed:

# Search for similar timestamp overflow handling in topic transactions
rg "chunk.*timestamp|nanos.*i|second.*total_nanos" src/hiero_sdk_python --type py -B 2 -A 2

Repository: hiero-ledger/hiero-sdk-python

Length of output: 8576


🏁 Script executed:

# Check body_size property implementation
rg "def body_size|@property" src/hiero_sdk_python/file/file_append_transaction.py -A 5

Repository: hiero-ledger/hiero-sdk-python

Length of output: 287


🏁 Script executed:

# Check how body_size is calculated - likely in parent Transaction class
rg "def body_size" src/hiero_sdk_python/transaction/transaction.py -A 10

Repository: hiero-ledger/hiero-sdk-python

Length of output: 268


🏁 Script executed:

# Look for _current_chunk_index usage in FileAppendTransaction
rg "_current_chunk_index" src/hiero_sdk_python/file/file_append_transaction.py -B 2 -A 2

Repository: hiero-ledger/hiero-sdk-python

Length of output: 1017


🏁 Script executed:

# Check the protobuf Timestamp field constraints
rg "class.*Timestamp|nanos.*999" src/hiero_sdk_python/hapi -r --type py | head -20

Repository: hiero-ledger/hiero-sdk-python

Length of output: 181


🏁 Script executed:

# Check if there are any existing tests for chunk timestamp generation
fd "test.*file.*append" src/hiero_sdk_python --type f

Repository: hiero-ledger/hiero-sdk-python

Length of output: 55


🏁 Script executed:

# Look at tests to see how chunk timestamps are validated
find . -path "*/test*" -name "*file*append*.py" -o -path "*/test*" -name "*append*.py" | head -5

Repository: hiero-ledger/hiero-sdk-python

Length of output: 173


🏁 Script executed:

# Look at the chunk_index usage in build_transaction_body
rg "build_transaction_body|_current_chunk_index" src/hiero_sdk_python/file/file_append_transaction.py -B 3 -A 3

Repository: hiero-ledger/hiero-sdk-python

Length of output: 1529


🏁 Script executed:

# Check what body_size calls and how it uses _current_chunk_index
sed -n '1,100p' src/hiero_sdk_python/file/file_append_transaction.py | grep -A 20 "def build_transaction_body"

Repository: hiero-ledger/hiero-sdk-python

Length of output: 55


🏁 Script executed:

# Look at the full build_transaction_body implementation
sed -n '150,250p' src/hiero_sdk_python/file/file_append_transaction.py

Repository: hiero-ledger/hiero-sdk-python

Length of output: 3615


🏁 Script executed:

# Check if Timestamp class wraps protobuf Timestamp
cat src/hiero_sdk_python/timestamp.py

Repository: hiero-ledger/hiero-sdk-python

Length of output: 5051


🏁 Script executed:

# Look at what type transaction_id.valid_start is
rg "valid_start" src/hiero_sdk_python/transaction/transaction_id.py -B 2 -A 2 | head -30

Repository: hiero-ledger/hiero-sdk-python

Length of output: 1155


🏁 Script executed:

# Check the unit tests for FileAppendTransaction
head -n 100 tests/unit/file_append_transaction_test.py

Repository: hiero-ledger/hiero-sdk-python

Length of output: 3737


🏁 Script executed:

# Check the exact protobuf Timestamp definition for nanos bounds
rg "nanos" src/hiero_sdk_python/hapi/services -l | head -5

Repository: hiero-ledger/hiero-sdk-python

Length of output: 147


🏁 Script executed:

# Verify the protobuf constraints - check imported timestamp_pb2
python3 << 'EOF'
# Check what constraints protobuf enforces on Timestamp.nanos
from google.protobuf import descriptor
try:
    from hiero_sdk_python.hapi.services import timestamp_pb2
    print("Timestamp message fields:")
    for field in timestamp_pb2.Timestamp.DESCRIPTOR.fields:
        print(f"  {field.name}: {field.type} (label: {field.label})")
except Exception as e:
    print(f"Error: {e}")
EOF

Repository: hiero-ledger/hiero-sdk-python

Length of output: 114


🏁 Script executed:

# Search for how body_size_all_chunks should work - check similar implementation in topic_message_submit_transaction
rg "body_size_all_chunks" src/hiero_sdk_python/consensus/topic_message_submit_transaction.py -A 15

Repository: hiero-ledger/hiero-sdk-python

Length of output: 545


🏁 Script executed:

# Verify that _current_chunk_index is supposed to be used when calculating body_size
rg "_current_chunk_index|start_index.*chunk_size" src/hiero_sdk_python/file/file_append_transaction.py -B 2 -A 2

Repository: hiero-ledger/hiero-sdk-python

Length of output: 1018


🏁 Script executed:

# Check if body_size_all_chunks in topic_message_submit_transaction also has the same issue
sed -n '450,480p' src/hiero_sdk_python/consensus/topic_message_submit_transaction.py

Repository: hiero-ledger/hiero-sdk-python

Length of output: 55


🏁 Script executed:

# Look at how execute() properly updates _current_chunk_index
sed -n '320,350p' src/hiero_sdk_python/file/file_append_transaction.py

Repository: hiero-ledger/hiero-sdk-python

Length of output: 979


🏁 Script executed:

# Verify protobuf Timestamp nanos maximum value by checking Google protobuf docs or imports
rg "timestamp_pb2" src/hiero_sdk_python -l | head -3 | xargs grep -l "Timestamp"

Repository: hiero-ledger/hiero-sdk-python

Length of output: 203


🏁 Script executed:

# Check imports in file_append_transaction to see if using protobuf Timestamp or SDK Timestamp
head -n 30 src/hiero_sdk_python/file/file_append_transaction.py

Repository: hiero-ledger/hiero-sdk-python

Length of output: 1265


🏁 Script executed:

# Look at how the SDK's Timestamp.plus_nanos handles overflow properly
sed -n '95,110p' src/hiero_sdk_python/timestamp.py

Repository: hiero-ledger/hiero-sdk-python

Length of output: 558


🏁 Script executed:

# Check if there are any validation tests for timestamp bounds
rg "nanos.*99" tests/unit -l | head -3

Repository: hiero-ledger/hiero-sdk-python

Length of output: 55


🏁 Script executed:

# Verify the actual behavior by checking what chunk_index should do
sed -n '195,220p' src/hiero_sdk_python/file/file_append_transaction.py

Repository: hiero-ledger/hiero-sdk-python

Length of output: 1016


🏁 Script executed:

# Check if there's loop update logic we might have missed
sed -n '468,476p' src/hiero_sdk_python/file/file_append_transaction.py

Repository: hiero-ledger/hiero-sdk-python

Length of output: 367


BLOCKER: Add nanosecond carry logic to prevent timestamp overflow.

Lines 307-310 add i directly to base_timestamp.nanos without handling second boundary rollover. When base_timestamp.nanos + i exceeds 999,999,999, the protobuf Timestamp.nanos field becomes invalid. Later chunks receive malformed transaction IDs.

BLOCKER: Advance _current_chunk_index in body_size_all_chunks loop.

Lines 468-476: The loop iterates over transaction IDs but never updates _current_chunk_index. Since _build_proto_body() uses _current_chunk_index to slice the correct chunk (line 182: start_index = self._current_chunk_index * self.chunk_size), all body size calculations operate on chunk 0. The method returns sizes for the same chunk repeatedly instead of one size per chunk.

Suggested fixes

For timestamp overflow:

-                    chunk_valid_start = timestamp_pb2.Timestamp(
-                        seconds=base_timestamp.seconds,
-                        nanos=base_timestamp.nanos + i
-                    )
+                    total_nanos = base_timestamp.nanos + i
+                    chunk_valid_start = timestamp_pb2.Timestamp(
+                        seconds=base_timestamp.seconds + total_nanos // 1_000_000_000,
+                        nanos=total_nanos % 1_000_000_000,
+                    )

For body_size_all_chunks:

         original_index = self._current_chunk_index
         for transaction_id in self._transaction_ids:
             self.transaction_id = transaction_id
+            self._current_chunk_index = self._transaction_ids.index(transaction_id)
             sizes.append(self.body_size)
         
         self._current_chunk_index = original_index
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
Before:

if not self._transaction_ids:
    base_timestamp = self.transaction_id.valid_start
    for i in range(self.get_required_chunks()):
        if i == 0:
            # First chunk uses the original transaction ID
            chunk_transaction_id = self.transaction_id
        else:
            # Subsequent chunks get incremented timestamps
            # Add i nanoseconds to space out chunks
            chunk_valid_start = timestamp_pb2.Timestamp(
                seconds=base_timestamp.seconds,
                nanos=base_timestamp.nanos + i
            )
            chunk_transaction_id = TransactionId(
                account_id=self.transaction_id.account_id,
                valid_start=chunk_valid_start
            )

After:

if not self._transaction_ids:
    base_timestamp = self.transaction_id.valid_start
    for i in range(self.get_required_chunks()):
        if i == 0:
            # First chunk uses the original transaction ID
            chunk_transaction_id = self.transaction_id
        else:
            # Subsequent chunks get incremented timestamps
            # Add i nanoseconds to space out chunks
            total_nanos = base_timestamp.nanos + i
            chunk_valid_start = timestamp_pb2.Timestamp(
                seconds=base_timestamp.seconds + total_nanos // 1_000_000_000,
                nanos=total_nanos % 1_000_000_000,
            )
            chunk_transaction_id = TransactionId(
                account_id=self.transaction_id.account_id,
                valid_start=chunk_valid_start
            )
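
The two blockers interact, which a toy model makes concrete. ChunkedTx below is hypothetical, not the SDK's FileAppendTransaction, but it mirrors the review findings: the loop must advance the chunk index, and try/finally guarantees the swapped state is restored even if sizing raises:

```python
class ChunkedTx:
    # Toy model of per-chunk body sizing (illustrative only).
    def __init__(self, contents: bytes, chunk_size: int):
        self.contents = contents
        self.chunk_size = chunk_size
        self._current_chunk_index = 0

    @property
    def body_size(self) -> int:
        # Slice the current chunk, as _build_proto_body() is described to do
        start = self._current_chunk_index * self.chunk_size
        return len(self.contents[start:start + self.chunk_size])

    @property
    def body_size_all_chunks(self) -> list:
        original = self._current_chunk_index
        sizes = []
        try:
            n_chunks = -(-len(self.contents) // self.chunk_size)
            for i in range(n_chunks):
                self._current_chunk_index = i  # advance per chunk
                sizes.append(self.body_size)
            return sizes
        finally:
            self._current_chunk_index = original  # always restore

tx = ChunkedTx(b"a" * (1024 * 2 + 137), chunk_size=1024)
print(tx.body_size_all_chunks)  # [1024, 1024, 137]
```

Without the advance, every entry would report chunk 0's size and the short final chunk would be invisible.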

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 9

♻️ Duplicate comments (6)
src/hiero_sdk_python/transaction/transaction.py (1)

261-261: ⚠️ Potential issue | 🔴 Critical

Don't route freeze() through subclass freeze_with(None).

This now dispatches manual freezing into every override with client=None. TokenCreateTransaction.freeze_with() still reads client.operator_account_id before delegating when auto_renew_period is set, so freeze() can now raise AttributeError on that path. Keep freeze() on a non-virtual path, or make every override null-safe before it touches client.

As per coding guidelines: Any change to public methods (freeze, freeze_with, sign, execute, to_bytes, from_bytes, etc.) includes backwards-compatibility analysis.

src/hiero_sdk_python/file/file_append_transaction.py (2)

300-314: ⚠️ Potential issue | 🔴 Critical

Carry nanosecond overflow into seconds for later chunks.

base_timestamp.nanos + i can exceed 999_999_999, which produces an invalid protobuf timestamp for the chunk transaction ID. Multi-chunk append near a second boundary will then fail to freeze or submit correctly.

🛠️ Proposed fix
-                    chunk_valid_start = timestamp_pb2.Timestamp(
-                        seconds=base_timestamp.seconds,
-                        nanos=base_timestamp.nanos + i
-                    )
+                    total_nanos = base_timestamp.nanos + i
+                    chunk_valid_start = timestamp_pb2.Timestamp(
+                        seconds=base_timestamp.seconds + total_nanos // 1_000_000_000,
+                        nanos=total_nanos % 1_000_000_000,
+                    )
As per coding guidelines: `valid_start timestamps for each chunk are spaced out correctly (e.g., incremented by at least 1 nanosecond)`.

470-478: ⚠️ Potential issue | 🟠 Major

Advance _current_chunk_index while computing body_size_all_chunks.

body_size slices self.contents using _current_chunk_index, but this loop only swaps transaction_id. Every entry is therefore measured against the same chunk payload, so a shorter final chunk reports the wrong size.

🛠️ Proposed fix
-        for transaction_id in self._transaction_ids:
-            self.transaction_id = transaction_id
-            sizes.append(self.body_size)
-        
-        self._current_chunk_index = original_index
-        self.transaction_id = original_transaction_id
-        return sizes
+        try:
+            for index, transaction_id in enumerate(self._transaction_ids):
+                self._current_chunk_index = index
+                self.transaction_id = transaction_id
+                sizes.append(self.body_size)
+            return sizes
+        finally:
+            self._current_chunk_index = original_index
+            self.transaction_id = original_transaction_id
src/hiero_sdk_python/consensus/topic_message_submit_transaction.py (1)

434-442: ⚠️ Potential issue | 🟠 Major

MAJOR: body_size_all_chunks never advances _current_index.

Both the message slice and chunkInfo.number depend on _current_index, but this loop only swaps transaction_id. As written, every entry is measured against the same chunk, so the returned size list is wrong for multi-chunk messages.

🛠️ Proposed fix
-        for transaction_id in self._transaction_ids:
-            self.transaction_id = transaction_id
-            sizes.append(self.body_size)
-        
-        self._current_index = original_index
-        self.transaction_id = original_transaction_id
-        return sizes
+        try:
+            for index, transaction_id in enumerate(self._transaction_ids):
+                self._current_index = index
+                self.transaction_id = transaction_id
+                sizes.append(self.body_size)
+            return sizes
+        finally:
+            self._current_index = original_index
+            self.transaction_id = original_transaction_id
tests/integration/file_append_transaction_e2e_test.py (1)

328-339: ⚠️ Potential issue | 🟠 Major

This "manual freeze" test still exercises freeze_with().

The test name says manual freeze, but it uses freeze_with(env.client) and leaves a debug print() / commented sign call behind. That misses the regression surface this PR is trying to cover.

🛠️ Proposed fix
+    node_account_id = env.client.network.nodes[0]._account_id
     tx = (
         FileAppendTransaction()
         .set_file_id(file_id)
         .set_chunk_size(1024)
         .set_contents(content)
-        .freeze_with(env.client)
+        .set_transaction_id(TransactionId.generate(env.client.operator_account_id))
+        .set_node_account_id(node_account_id)
+        .freeze()
     )
-
-    # tx.sign(env.client.operator_private_key)
-
-    print(tx.get_required_chunks())
+    tx.sign(env.client.operator_private_key)
As per coding guidelines: `Tests should be end-to-end: construct → freeze → sign → execute → verify`.
tests/unit/transaction_test.py (1)

282-406: ⚠️ Potential issue | 🟠 Major

Chunk-sizing tests still miss high-risk regressions

These cases can still pass with incorrect implementations: Lines 285 and 322 use exact multiples (no short final chunk), and Line 376 relies on implicit defaults (without explicit set_chunk_size, chunking may not occur at all). Add at least one non-multiple payload and assert the last chunk body size differs.

Suggested test hardening
 def test_file_append_chunk_tx_should_return_list_of_body_sizes(file_id, account_id, transaction_id):
     """Test file append tx should return array of body sizes for multi-chunk transaction."""
     chunk_size = 1024
-    content = "a" * (chunk_size * 3)
+    content = "a" * (chunk_size * 2 + 137)  # force a short final chunk

@@
     sizes = tx.body_size_all_chunks
     assert isinstance(sizes, list)
     assert len(sizes) == 3
+    assert all(isinstance(s, int) and s > 0 for s in sizes)
+    assert sizes[-1] < sizes[0], "Final chunk should be smaller for non-multiple payload"


 def test_message_submit_chunk_tx_should_return_list_of_body_sizes(topic_id, account_id, transaction_id):
@@
     chunk_size = 1024
-    message= "a" * (chunk_size * 3)
+    message = "a" * (chunk_size * 2 + 137)

@@
     sizes = tx.body_size_all_chunks
     assert isinstance(sizes, list)
     assert len(sizes) == 3
+    assert all(isinstance(s, int) and s > 0 for s in sizes)
+    assert sizes[-1] < sizes[0], "Final chunk should be smaller for non-multiple payload"


 def test_chunked_tx_return_proper_sizes(file_id, account_id, transaction_id):
@@
     large_tx = (
         FileAppendTransaction()
         .set_file_id(file_id)
+        .set_chunk_size(1024)
         .set_contents(large_content)
@@
     )
+    assert len(large_tx.body_size_all_chunks) > 1, "Expected multi-chunk transaction"

@@
     small_tx = (
         FileAppendTransaction()
         .set_file_id(file_id)
+        .set_chunk_size(1024)
         .set_contents(small_content)

As per coding guidelines: Unit tests should be extensive - test even if we don't expect users to use it currently.
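
The point about exact multiples can be demonstrated standalone: a loop that never advances the chunk index produces a correct-looking result for an exact-multiple payload and only fails on an uneven one. These are toy functions for illustration, not the SDK code under test:

```python
chunk_size = 1024

def buggy_sizes(data: bytes, n_chunks: int) -> list:
    # Simulates a loop that never advances the chunk index:
    # every iteration measures chunk 0.
    return [len(data[:chunk_size]) for _ in range(n_chunks)]

def correct_sizes(data: bytes) -> list:
    return [len(data[i:i + chunk_size]) for i in range(0, len(data), chunk_size)]

exact = b"a" * (chunk_size * 3)
uneven = b"a" * (chunk_size * 2 + 137)

print(buggy_sizes(exact, 3) == correct_sizes(exact))    # True: bug hidden
print(buggy_sizes(uneven, 3) == correct_sizes(uneven))  # False: bug exposed
```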


ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

Run ID: 5c5fae07-ce46-46cd-8e61-093dfc735866

📥 Commits

Reviewing files that changed from the base of the PR and between 1489840 and 2a2bd41.

📒 Files selected for processing (9)
  • CHANGELOG.md
  • src/hiero_sdk_python/consensus/topic_message_submit_transaction.py
  • src/hiero_sdk_python/file/file_append_transaction.py
  • src/hiero_sdk_python/transaction/transaction.py
  • tests/integration/file_append_transaction_e2e_test.py
  • tests/integration/topic_message_submit_transaction_e2e_test.py
  • tests/unit/mock_server.py
  • tests/unit/topic_message_submit_transaction_test.py
  • tests/unit/transaction_test.py

Comment on lines +363 to +369
message_tx = (
    TopicMessageSubmitTransaction()
    .set_topic_id(topic_id)
    .set_message(message)
    .set_transaction_id(TransactionId.generate(env.client.operator_account_id))
    .set_node_account_id(AccountId(0,0,3))
    .freeze()
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major

Use a node from env.client.network instead of hard-coding 0.0.3.

Hard-coding a specific node account ID makes this manual-freeze test environment-dependent. If the configured integration network exposes a different node set, the test fails for the wrong reason instead of validating the manual-freeze path.

🛠️ Proposed fix
+    node_account_id = env.client.network.nodes[0]._account_id
     message_tx = (
         TopicMessageSubmitTransaction()
         .set_topic_id(topic_id)
         .set_message(message)
         .set_transaction_id(TransactionId.generate(env.client.operator_account_id))
-        .set_node_account_id(AccountId(0,0,3))
+        .set_node_account_id(node_account_id)
         .freeze()
     )
As per coding guidelines: `Integration tests validate real network behavior - they must be reliable and safe.`

Comment on lines +643 to +652
def test_topic_submit_message_raises_error_on_freeze(topic_id):
"""Test transaction raises error on freeze when the transaction_id and node_id not set"""
tx = (
TopicMessageSubmitTransaction()
.set_topic_id(topic_id)
.set_message("Hello Hiero")
)

with pytest.raises(ValueError):
tx.freeze()
⚠️ Potential issue | 🟡 Minor

Match the expected failure reason in this test.

A bare pytest.raises(ValueError) will also pass on unrelated validation failures, so this doesn't pin the new manual-freeze contract. Please match the missing-transaction-id error text here, and ideally add the sibling missing-node-id case as well.

🛠️ Proposed fix
-    with pytest.raises(ValueError):
+    with pytest.raises(
+        ValueError,
+        match="Transaction ID must be set before freezing",
+    ):
         tx.freeze()
As per coding guidelines: `Tests must provide useful error messages when they fail for future debugging`.

@manishdait manishdait marked this pull request as draft March 24, 2026 19:51
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 5


ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

Run ID: 70d640d3-78dc-4545-a5c8-db16b77403d8

📥 Commits

Reviewing files that changed from the base of the PR and between 2a2bd41 and 75650a0.

📒 Files selected for processing (3)
  • src/hiero_sdk_python/transaction/transaction.py
  • tests/integration/file_append_transaction_e2e_test.py
  • tests/unit/mock_server.py

@manishdait manishdait force-pushed the feat/tx-size branch 2 times, most recently from a9fef8c to 31ddd71 on March 25, 2026, 15:45
@manishdait manishdait marked this pull request as ready for review March 25, 2026 17:31
@manishdait manishdait requested a review from a team as a code owner March 25, 2026 17:31
@manishdait manishdait requested a review from stoqnkpL March 25, 2026 17:31
@manishdait manishdait force-pushed the feat/tx-size branch 2 times, most recently from a46d754 to 50854e3 on March 25, 2026, 17:34
@manishdait manishdait added the `status: needs committer review` and `status: needs maintainer review` labels and removed `status: in initial development` on Mar 25, 2026
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 4

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/hiero_sdk_python/consensus/topic_message_submit_transaction.py (1)

274-281: ⚠️ Potential issue | 🔴 Critical

BLOCKER: carry nanosecond overflow when deriving chunk TransactionIds.

When i approaches the chunk count limit (default 20), base_timestamp.nanos + i can exceed 999_999_999, which violates the protobuf Timestamp.nanos constraint [0, 999_999_999]. For example, if base_timestamp.nanos = 999_999_999 and i = 19, the result is 1_000_000_018, which is invalid. Carry the overflow into seconds before constructing the protobuf Timestamp:

🐛 Proposed fix
-                    chunk_valid_start = timestamp_pb2.Timestamp(
-                        seconds=base_timestamp.seconds,
-                        nanos=base_timestamp.nanos + i
-                    )
+                    total_nanos = base_timestamp.nanos + i
+                    chunk_valid_start = timestamp_pb2.Timestamp(
+                        seconds=base_timestamp.seconds + total_nanos // 1_000_000_000,
+                        nanos=total_nanos % 1_000_000_000,
+                    )
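The carry arithmetic in the proposed fix can be exercised standalone. A small sketch in plain Python, mirroring the suggested arithmetic rather than the SDK's protobuf types:

```python
NANOS_PER_SECOND = 1_000_000_000

def offset_timestamp(seconds: int, nanos: int, i: int) -> tuple[int, int]:
    """Offset a (seconds, nanos) timestamp by i nanoseconds, carrying overflow into seconds."""
    total_nanos = nanos + i
    return seconds + total_nanos // NANOS_PER_SECOND, total_nanos % NANOS_PER_SECOND

# Near a second boundary the carry kicks in: 999_999_999 + 19 rolls over.
print(offset_timestamp(1_700_000_000, 999_999_999, 19))  # (1700000001, 18)
```

Without the carry, the nanos field would be 1_000_000_018, outside protobuf's [0, 999_999_999] range.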
♻️ Duplicate comments (4)
src/hiero_sdk_python/transaction/transaction.py (1)

923-927: ⚠️ Potential issue | 🟠 Major

size still under-reports the default submission path.

On a freshly frozen transaction, Lines 396-397 add the operator signature inside execute(), but this property only measures the signatures already attached to the object. That makes pre-submission size checks wrong on the common freeze(...).execute(client) path unless the caller signed manually first. Either document this as the current encoded size or require the transaction to be signed before exposing .size.

src/hiero_sdk_python/consensus/topic_message_submit_transaction.py (1)

420-436: ⚠️ Potential issue | 🟠 Major

MAJOR: advance _current_index while sizing each chunk.

body_size rebuilds the protobuf from the active chunk state, so this loop currently measures the same chunk repeatedly. A shorter final chunk will report the wrong size and break chunk-planning logic.

♻️ Proposed fix
         self._require_frozen()
         sizes = []
 
         original_index = self._current_index
         original_transaction_id = self.transaction_id
-        for transaction_id in self._transaction_ids:
-            self.transaction_id = transaction_id
-            sizes.append(self.body_size)
-        
-        self._current_index = original_index
-        self.transaction_id = original_transaction_id
-        return sizes
+        try:
+            for index, transaction_id in enumerate(self._transaction_ids):
+                self._current_index = index
+                self.transaction_id = transaction_id
+                sizes.append(self.body_size)
+            return sizes
+        finally:
+            self._current_index = original_index
+            self.transaction_id = original_transaction_id
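The restore-on-exception pattern the fix relies on generalizes to a small context manager. A sketch, independent of the transaction classes (the `Tx` stand-in and attribute names are illustrative only):

```python
from contextlib import contextmanager

@contextmanager
def restored_attrs(obj, *names):
    """Temporarily mutate attributes; restore the originals even if the body raises."""
    saved = {name: getattr(obj, name) for name in names}
    try:
        yield obj
    finally:
        for name, value in saved.items():
            setattr(obj, name, value)

class Tx:  # minimal stand-in for a chunked transaction
    def __init__(self):
        self.current_index = 0
        self.transaction_id = "tx-0"

tx = Tx()
try:
    with restored_attrs(tx, "current_index", "transaction_id"):
        tx.current_index = 5
        tx.transaction_id = "tx-5"
        raise RuntimeError("body_size failed mid-loop")
except RuntimeError:
    pass

print(tx.current_index, tx.transaction_id)  # 0 tx-0 — state restored despite the error
```

This is the same guarantee the try/finally in the proposed fix provides: the transaction is never left pointing at an intermediate chunk.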
src/hiero_sdk_python/file/file_append_transaction.py (2)

294-310: ⚠️ Potential issue | 🔴 Critical

Carry nanosecond overflow into seconds when generating chunk IDs.

Adding i directly to base_timestamp.nanos can produce invalid Timestamp values for later chunks near a second boundary. Carry the overflow into seconds before building the next TransactionId. As per coding guidelines: valid_start timestamps for each chunk are spaced out correctly (e.g., incremented by at least 1 nanosecond).

🐛 Proposed fix
-                    chunk_valid_start = timestamp_pb2.Timestamp(
-                        seconds=base_timestamp.seconds,
-                        nanos=base_timestamp.nanos + i
-                    )
+                    total_nanos = base_timestamp.nanos + i
+                    chunk_valid_start = timestamp_pb2.Timestamp(
+                        seconds=base_timestamp.seconds + total_nanos // 1_000_000_000,
+                        nanos=total_nanos % 1_000_000_000,
+                    )

456-472: ⚠️ Potential issue | 🟠 Major

Advance _current_chunk_index while sizing each chunk.

Line 189 slices contents from _current_chunk_index, but this helper only swaps transaction_id. For multi-chunk appends, it will report the active chunk's size for every entry instead of measuring each chunk body.

♻️ Proposed fix
         self._require_frozen()
         sizes = []
 
         original_index = self._current_chunk_index
         original_transaction_id = self.transaction_id
-        for transaction_id in self._transaction_ids:
-            self.transaction_id = transaction_id
-            sizes.append(self.body_size)
-        
-        self._current_chunk_index = original_index
-        self.transaction_id = original_transaction_id
-        return sizes
+        try:
+            for index, transaction_id in enumerate(self._transaction_ids):
+                self._current_chunk_index = index
+                self.transaction_id = transaction_id
+                sizes.append(self.body_size)
+            return sizes
+        finally:
+            self._current_chunk_index = original_index
+            self.transaction_id = original_transaction_id

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

Run ID: afa76868-5e45-4fbb-8bb5-70a1b0407d61

📥 Commits

Reviewing files that changed from the base of the PR and between 75650a0 and 50854e3.

📒 Files selected for processing (6)
  • src/hiero_sdk_python/consensus/topic_message_submit_transaction.py
  • src/hiero_sdk_python/file/file_append_transaction.py
  • src/hiero_sdk_python/node.py
  • src/hiero_sdk_python/transaction/transaction.py
  • tests/integration/file_append_transaction_e2e_test.py
  • tests/unit/mock_server.py

Comment on lines 124 to +128
if self._root_certificates:
# Use the certificate that is provided
self._node_pem_cert = self._root_certificates
print("node cert ", self._node_pem_cert)

⚠️ Potential issue | 🟠 Major

Remove the certificate debug dump.

This branch runs in normal SDK usage whenever custom root certificates are configured, and it writes the full PEM bundle to stdout. That leaks certificate material into application/test logs and bypasses the SDK logger configuration.

🛡️ Proposed fix
         if self._address._is_transport_security():
             if self._root_certificates:
                 # Use the certificate that is provided
                 self._node_pem_cert = self._root_certificates
-                print("node cert ", self._node_pem_cert)
             
             else:
As per coding guidelines: `REVIEW FOCUS 3 — CERTIFICATE HANDLING (SECURITY CRITICAL)`.

Comment on lines +328 to +335
tx = (
FileAppendTransaction()
.set_file_id(file_id)
.set_chunk_size(1024)
.set_contents(content)
.set_transaction_id(TransactionId.generate(env.client.operator_account_id))
.set_node_account_id(AccountId(0,0,3))
.freeze()
⚠️ Potential issue | 🟠 Major

Use a configured network node instead of hardcoding 0.0.3.

This only covers the manual-freeze path on environments that expose that exact node account. Pulling the node ID from env.client.network keeps the test deterministic across CI, localnet, and any future network config.

🧪 Proposed fix
+    node_account_id = next(node._account_id for node in env.client.network.nodes)
+
     tx = (
         FileAppendTransaction()
         .set_file_id(file_id)
         .set_chunk_size(1024)
         .set_contents(content)
         .set_transaction_id(TransactionId.generate(env.client.operator_account_id))
-        .set_node_account_id(AccountId(0,0,3))
+        .set_node_account_id(node_account_id)
         .freeze()
     )

Comment on lines +181 to +189
network.set_transport_security(False)
network.set_verify_certificates(False)
client = Client(network)

# Force non-tls channel
for node in client.network.nodes:
node._address._is_transport_security = lambda: False
node._set_verify_certificates(False)
node._close()
⚠️ Potential issue | 🟠 Major

Remove the per-node TLS monkey patch from the fixture.

Both Network.set_transport_security() and Network.set_verify_certificates() already fan out to existing _Nodes, so overriding node._address._is_transport_security here just makes the helper pass even if that real propagation path breaks later. Apply the configuration on client.network and drop the direct _Node mutations so the tests keep exercising the real behavior.

🐛 Proposed fix
         # Create network and client
         network = Network(nodes=nodes)
-        network.set_transport_security(False)
-        network.set_verify_certificates(False)
         client = Client(network)
-
-        # Force non-tls channel
-        for node in client.network.nodes:
-            node._address._is_transport_security = lambda: False
-            node._set_verify_certificates(False)
-            node._close()
+        client.network.set_transport_security(False)
+        client.network.set_verify_certificates(False)
As per coding guidelines: `PRIORITY 1 - Protect Against Breaking Changes` and `Prefer testing real functionality over mocked behavior.`

@coderabbitai coderabbitai bot left a comment

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/hiero_sdk_python/consensus/topic_message_submit_transaction.py (1)

274-281: ⚠️ Potential issue | 🔴 Critical

BLOCKER: carry nanosecond overflow when deriving chunk transaction IDs.

base_timestamp.nanos + i can exceed protobuf Timestamp.nanos's 999_999_999 ceiling, producing invalid TransactionIds for later chunks. This needs second rollover before multi-chunk freeze completes.

🧩 Suggested fix
-                    chunk_valid_start = timestamp_pb2.Timestamp(
-                        seconds=base_timestamp.seconds,
-                        nanos=base_timestamp.nanos + i
-                    )
+                    total_nanos = base_timestamp.nanos + i
+                    chunk_valid_start = timestamp_pb2.Timestamp(
+                        seconds=base_timestamp.seconds + total_nanos // 1_000_000_000,
+                        nanos=total_nanos % 1_000_000_000,
+                    )

As per coding guidelines: base_timestamp.nanos + i MUST be checked for overflow and carried into seconds.

♻️ Duplicate comments (12)
src/hiero_sdk_python/node.py (1)

127-128: ⚠️ Potential issue | 🟠 Major

Remove the certificate dump from _get_channel().

This writes the configured PEM bundle to stdout on a normal TLS path and bypasses the SDK logger configuration. Please delete it rather than logging certificate material.

🛡️ Proposed fix
             if self._root_certificates:
                 # Use the certificate that is provided
                 self._node_pem_cert = self._root_certificates
-                print("node cert ", self._node_pem_cert)

As per coding guidelines: REVIEW FOCUS 3 — CERTIFICATE HANDLING (SECURITY CRITICAL).

tests/unit/mock_server.py (2)

131-142: ⚠️ Potential issue | 🟠 Major

Reserve the test port at bind time, not with a probe socket.

This helper releases the socket before MockServer binds, so another process can claim the returned port and make the fixture flaky under parallel runs. The 50212 retry only changes which unreserved port gets returned.

As per coding guidelines: CRITICAL PRINCIPLES - Tests Must Fail Loudly & Deterministically.
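The race the comment describes disappears if the port is assigned at bind time and the socket is handed to the server rather than probed and released. A sketch using only the standard library (with gRPC specifically, `server.add_insecure_port("localhost:0")` is believed to return the OS-assigned port directly, which avoids the probe entirely):

```python
import socket

def reserve_port() -> tuple[socket.socket, int]:
    """Bind port 0 so the OS assigns a free port atomically; the port stays
    reserved for as long as the returned socket is kept open."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("127.0.0.1", 0))
    port = sock.getsockname()[1]
    return sock, port

sock, port = reserve_port()
print(0 < port < 65536)  # True — no second process can bind it while `sock` is open
sock.close()
```

The key difference from the probe-socket helper: nothing runs between "pick a port" and "own the port", so parallel test runs cannot steal it.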


182-190: ⚠️ Potential issue | 🟠 Major

Drop the per-node TLS monkey patch and configure client.network directly.

Because _find_free_port() already avoids 50212, these mock addresses are non-TLS without overriding _is_transport_security(). Rebinding the method here bypasses the real Network → _Node propagation path and can hide TLS configuration regressions; the explicit _close() is redundant as well.

🐛 Proposed fix
-        network = Network(nodes=nodes)
-        network.set_transport_security(False)
-        network.set_verify_certificates(False)
-        client = Client(network)
-
-        # Force non-tls channel
-        for node in client.network.nodes:
-            node._address._is_transport_security = lambda: False
-            node._set_verify_certificates(False)
-            node._close()
+        client = Client(Network(nodes=nodes))
+        client.network.set_transport_security(False)
+        client.network.set_verify_certificates(False)

As per coding guidelines: PRIORITY 1 - Protect Against Breaking Changes and Prefer testing real functionality over mocked behavior.

tests/unit/topic_message_submit_transaction_test.py (1)

643-652: ⚠️ Potential issue | 🟡 Minor

Match the expected manual-freeze error.

pytest.raises(ValueError) here will also pass on unrelated validation failures. Match the missing-transaction-id message, and add the sibling missing-node-id case separately so this locks down the new freeze() contract.

🧪 Tighten the assertion
-    with pytest.raises(ValueError):
+    with pytest.raises(
+        ValueError,
+        match="Transaction ID must be set before freezing",
+    ):
         tx.freeze()

As per coding guidelines: Tests must provide useful error messages when they fail for future debugging.

src/hiero_sdk_python/consensus/topic_message_submit_transaction.py (1)

428-436: ⚠️ Potential issue | 🟠 Major

MAJOR: body_size_all_chunks is still measuring the same chunk repeatedly.

The loop swaps only transaction_id; _current_index never changes, so a shorter final chunk reports the wrong size. If body_size raises, the transaction state also stays mutated unless restoration runs in a finally block.

🧮 Suggested fix
         original_index = self._current_index
         original_transaction_id = self.transaction_id
-        for transaction_id in self._transaction_ids:
-            self.transaction_id = transaction_id
-            sizes.append(self.body_size)
-        
-        self._current_index = original_index
-        self.transaction_id = original_transaction_id
-        return sizes
+        try:
+            for index, transaction_id in enumerate(self._transaction_ids):
+                self._current_index = index
+                self.transaction_id = transaction_id
+                sizes.append(self.body_size)
+            return sizes
+        finally:
+            self._current_index = original_index
+            self.transaction_id = original_transaction_id
tests/integration/file_append_transaction_e2e_test.py (1)

328-335: ⚠️ Potential issue | 🟠 Major

Use a configured node instead of hardcoding 0.0.3.

This manual-freeze path becomes environment-dependent when the active network does not expose that account. Pick a node from env.client.network so the test stays deterministic across CI and localnet setups.

🧪 Suggested fix
+    node_account_id = next(node._account_id for node in env.client.network.nodes)
+
     tx = (
         FileAppendTransaction()
         .set_file_id(file_id)
         .set_chunk_size(1024)
         .set_contents(content)
         .set_transaction_id(TransactionId.generate(env.client.operator_account_id))
-        .set_node_account_id(AccountId(0,0,3))
+        .set_node_account_id(node_account_id)
         .freeze()
     )

As per coding guidelines: Integration tests validate real network behavior - they must be reliable and safe.

tests/integration/topic_message_submit_transaction_e2e_test.py (1)

363-369: ⚠️ Potential issue | 🟠 Major

Use a node from env.client.network instead of hardcoding 0.0.3.

Hardcoding a specific node account ID makes this manual-freeze test environment-dependent. If the configured integration network exposes a different node set, the test fails for the wrong reason.

🧪 Suggested fix
+    node_account_id = next(node._account_id for node in env.client.network.nodes)
+
     message_tx = (
         TopicMessageSubmitTransaction()
         .set_topic_id(topic_id)
         .set_message(message)
         .set_transaction_id(TransactionId.generate(env.client.operator_account_id))
-        .set_node_account_id(AccountId(0,0,3))
+        .set_node_account_id(node_account_id)
         .freeze()
     )

As per coding guidelines: Integration tests validate real network behavior - they must be reliable and safe.

tests/unit/transaction_test.py (2)

204-205: 🛠️ Refactor suggestion | 🟠 Major

Assert the sizing API types explicitly.

These tests compare values, but they do not lock down size/body_size as int or each body_size_all_chunks entry as int. Add explicit type assertions so API drift fails loudly.

🧪 Suggested assertions
     assert tx1.size == tx2.size
+    assert isinstance(tx1.size, int)
+    assert isinstance(tx2.size, int)
@@
     assert tx1.body_size < tx2.body_size
+    assert isinstance(tx1.body_size, int)
+    assert isinstance(tx2.body_size, int)
@@
     sizes = tx.body_size_all_chunks
     assert isinstance(sizes, list)
+    assert all(isinstance(size, int) for size in sizes)

As per coding guidelines: Assert return types where relevant (e.g., assert isinstance(result, AccountId)).

Also applies to: 230-230, 251-252, 297-299, 314-316, 334-336, 351-353, 369-371


282-299: ⚠️ Potential issue | 🟠 Major

These chunk-sizing tests still miss the cases that would catch a broken implementation.

The multi-chunk cases use exact multiples of chunk_size, test_tx_with_no_content_should_return_single_body_chunk() uses " " instead of empty content, and large_content = 2048 never chunks because FileAppendTransaction defaults to 4096-byte chunks. A helper that always returns the first chunk's size would still pass here.

Use a non-even final chunk, assert the last entry differs from the earlier ones, switch the empty-content case to "", and set chunk_size explicitly in test_chunked_tx_return_proper_sizes().

As per coding guidelines: Cover happy paths AND unhappy paths/edge cases.

Also applies to: 319-336, 356-371, 374-406

src/hiero_sdk_python/file/file_append_transaction.py (2)

294-310: ⚠️ Potential issue | 🔴 Critical

Carry nanoseconds into seconds when deriving chunk transaction IDs.

base_timestamp.nanos + i can exceed protobuf Timestamp.nanos bounds and generate invalid TransactionIds for later chunks. The rollover needs to be handled here, not left to protobuf validation.

🧩 Suggested fix
-                    chunk_valid_start = timestamp_pb2.Timestamp(
-                        seconds=base_timestamp.seconds,
-                        nanos=base_timestamp.nanos + i
-                    )
+                    total_nanos = base_timestamp.nanos + i
+                    chunk_valid_start = timestamp_pb2.Timestamp(
+                        seconds=base_timestamp.seconds + total_nanos // 1_000_000_000,
+                        nanos=total_nanos % 1_000_000_000,
+                    )

As per coding guidelines: valid_start timestamps for each chunk are spaced out correctly (e.g., incremented by at least 1 nanosecond).


464-472: ⚠️ Potential issue | 🟠 Major

Advance _current_chunk_index while sizing each chunk.

The current loop only swaps transaction_id, so every entry uses whatever _current_chunk_index was before the call. If body_size raises, restoration also needs to happen in a finally block.

🧮 Suggested fix
         original_index = self._current_chunk_index
         original_transaction_id = self.transaction_id
-        for transaction_id in self._transaction_ids:
-            self.transaction_id = transaction_id
-            sizes.append(self.body_size)
-        
-        self._current_chunk_index = original_index
-        self.transaction_id = original_transaction_id
-        return sizes
+        try:
+            for index, transaction_id in enumerate(self._transaction_ids):
+                self._current_chunk_index = index
+                self.transaction_id = transaction_id
+                sizes.append(self.body_size)
+            return sizes
+        finally:
+            self._current_chunk_index = original_index
+            self.transaction_id = original_transaction_id

As per coding guidelines: Chunk payload slicing in build_transaction_body() accurately extracts the current chunk before signing and execution.

src/hiero_sdk_python/transaction/transaction.py (1)

923-927: ⚠️ Potential issue | 🟠 Major

size under-reports what execute() will send until the transaction is signed.

On a freshly frozen transaction this serializes an empty sigMap, but execute() adds the operator signature before submission when needed. That means the new pre-submission size helper can approve a transaction that exceeds the limit once it is actually sent. Either require fully signed transactions before reading size, or document/rename this as the current encoded size.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

Run ID: fa1f012b-c199-4b7c-9911-59296d55056c

📥 Commits

Reviewing files that changed from the base of the PR and between 50854e3 and 7d89c78.

📒 Files selected for processing (10)
  • CHANGELOG.md
  • src/hiero_sdk_python/consensus/topic_message_submit_transaction.py
  • src/hiero_sdk_python/file/file_append_transaction.py
  • src/hiero_sdk_python/node.py
  • src/hiero_sdk_python/transaction/transaction.py
  • tests/integration/file_append_transaction_e2e_test.py
  • tests/integration/topic_message_submit_transaction_e2e_test.py
  • tests/unit/mock_server.py
  • tests/unit/topic_message_submit_transaction_test.py
  • tests/unit/transaction_test.py

Signed-off-by: Manish Dait <daitmanish88@gmail.com>
@codacy-production
codacy-production bot commented Mar 31, 2026

Up to standards ✅

🟢 Issues 0 issues

Results:
0 new issues

View in Codacy

🟢 Metrics

Complexity: 13

View in Codacy



Labels

status: needs committer review · status: needs maintainer review

Projects

None yet

Development

Successfully merging this pull request may close these issues.

Add Transaction Size Calculation Methods to Python SDK

1 participant