
ExtendedHookState#406

Merged
RichardAH merged 40 commits into Xahau:dev from tequdev:ExtendedHookState
Oct 23, 2025

Conversation


@tequdev tequdev commented Dec 11, 2024

  • Extended the size of HookStateData; each additional 256 bytes of data costs one incremental owner reserve
  • The maximum HookStateData size is tentatively set to 2048 bytes (256 × 8) / Discussion needed
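The per-entry reserve under this proposal could be sketched as a ceiling division; this is a hypothetical illustration of the model described above, not code from the PR, and all names are invented:

```cpp
#include <cstdint>

// Hypothetical sketch of the incremental-reserve model proposed above:
// one reserve unit per started 256-byte block of HookStateData.
// Function and constant names are illustrative, not from the Xahau codebase.
constexpr std::uint32_t kReserveBlockBytes = 256;

inline std::uint32_t
reserveUnitsForData(std::uint32_t dataSize)
{
    // ceil(dataSize / 256); assume an empty entry still costs one unit
    if (dataSize == 0)
        return 1;
    return (dataSize + kReserveBlockBytes - 1) / kReserveBlockBytes;
}
```

Under this model the tentative maximum of 2048 bytes corresponds to 8 reserve units.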

High Level Overview of Change

Context of Change

Type of Change

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • Refactor (non-breaking change that only restructures code)
  • Tests (you added tests for code that already exists, or your new feature included in this PR)
  • Documentation update
  • Chore (no impact to binary, e.g. .gitignore, formatting, dropping support for older tooling)
  • Release

API Impact

  • Public API: New feature (new methods and/or new fields)
  • Public API: Breaking change (in general, breaking changes should only impact the next api_version)
  • libxrpl change (any change that may affect libxrpl or dependents of libxrpl)
  • Peer protocol change (must be backward compatible or bump the peer protocol version)

tequdev

This comment was marked as resolved.

@tequdev tequdev changed the title from ExtendedHookState to [DO NOT MERGE] ExtendedHookState on Mar 30, 2025
@tequdev tequdev added the Amendment and feature labels on Mar 30, 2025

RichardAH commented Jun 18, 2025

I think we should not add the complexity of incremental reserve increase, rather just increase the allowable size for the same existing reserve cost. I think 1 reserve for 2048 bytes is probably ok.

An alternative simpler fee model would be to nominate your account for "large hook data" whereupon all your hook states cost more to store but you have access to the larger size. I really dislike the idea of a variable length field claiming different reserves depending on its current size.


tequdev commented Jun 19, 2025

> I think we should not add the complexity of incremental reserve increase, rather just increase the allowable size for the same existing reserve cost. I think 1 reserve for 2048 bytes is probably ok.
>
> An alternative simpler fee model would be to nominate your account for "large hook data" whereupon all your hook states cost more to store but you have access to the larger size. I really dislike the idea of a variable length field claiming different reserves depending on its current size.

Can you explain “large hook data” in more detail?

@RichardAH

> Can you explain “large hook data” in more detail?

Sure, so you do an account set flag which is like the equivalent of specifying large pages in your OS. All your hook states cost more but they're allowed to be bigger. It makes the computation easier. So let's say in large mode you get 2048 bytes per hook state but in computing reserve requirements for your hook state we take your HookStateCount and multiply it by 8. Once the flag is enabled it can only be disabled if HookStateCount is 0.
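The flag-based model described here could be sketched as follows; all names are illustrative (the multiplier 8 is 2048 / 256), and this is not code from the PR:

```cpp
#include <cstdint>

// Hypothetical sketch of the "large hook data" account flag model:
// in large mode each hook state entry may hold 2048 bytes, and the
// owner reserve is HookStateCount multiplied by 8. Illustrative names only.
constexpr std::uint32_t kBaseEntryBytes = 256;
constexpr std::uint32_t kLargeEntryBytes = 2048;
constexpr std::uint32_t kLargeMultiplier = kLargeEntryBytes / kBaseEntryBytes; // 8

inline std::uint32_t
hookStateReserveUnits(std::uint32_t hookStateCount, bool largeMode)
{
    return largeMode ? hookStateCount * kLargeMultiplier : hookStateCount;
}
```

Because the flag could only be cleared once HookStateCount is 0, the multiplier would never need to be re-applied to existing entries.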


tequdev commented Jun 22, 2025

> Can you explain “large hook data” in more detail?
>
> Sure, so you do an account set flag which is like the equivalent of specifying large pages in your OS. All your hook states cost more but they're allowed to be bigger. It makes the computation easier. So let's say in large mode you get 2048 bytes per hook state but in computing reserve requirements for your hook state we take your HookStateCount and multiply it by 8. Once the flag is enabled it can only be disabled if HookStateCount is 0.

That's interesting.
Instead of using the flag, how about using a numeric field to represent the maximum size the account allows?

If we use the flag, whenever we need to increase the protocol's maximum allowed size, we end up having to add a new flag.

@RichardAH

> That's interesting. Instead of using the flag, how about using a numeric field to represent the maximum size the account allows?
>
> If we use the flag, whenever we need to increase the protocol's maximum allowed size, we end up having to add a new flag.

I think that's reasonable. We could call it HookStateScale or something: 0, 1, ...

```cpp
uint32_t const oldOwnerCount = sle->getFieldU32(sfOwnerCount);

uint32_t const newOwnerCount =
    oldOwnerCount - (oldScale * stateCount) + (scale * stateCount);
```
Contributor

sanity check newOwnerCount > oldOwnerCount
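One way to realize the suggested sanity check, as a sketch with hypothetical names; widening to 64 bits makes underflow and overflow explicit checks instead of silent unsigned wrap-around:

```cpp
#include <cstdint>
#include <stdexcept>

// Hypothetical sketch of the sanity check suggested above. Recomputes the
// owner count in 64-bit arithmetic so underflow/overflow are detected
// rather than wrapping a uint32_t. Illustrative names only.
inline std::uint32_t
adjustedOwnerCount(
    std::uint32_t oldOwnerCount,
    std::uint32_t stateCount,
    std::uint32_t oldScale,
    std::uint32_t newScale)
{
    std::uint64_t const removed = std::uint64_t{oldScale} * stateCount;
    std::uint64_t const added = std::uint64_t{newScale} * stateCount;
    if (removed > oldOwnerCount)  // owner count under-counts hook state
        throw std::logic_error("owner count underflow");
    std::uint64_t const result = oldOwnerCount - removed + added;
    if (result > UINT32_MAX)  // would not fit the sfOwnerCount field
        throw std::overflow_error("owner count overflow");
    return static_cast<std::uint32_t>(result);
}
```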

```cpp
MAKE_ERROR(tecINSUF_RESERVE_SELLER, "The seller of an object has insufficient reserves, and thus cannot complete the sale."),
MAKE_ERROR(tecIMMUTABLE, "The remark is marked immutable on the object, and therefore cannot be updated."),
MAKE_ERROR(tecTOO_MANY_REMARKS, "The number of remarks on the object would exceed the limit of 32."),
MAKE_ERROR(tecHAS_HOOK_STATE, "The account has hook state. Delete all existing state first."),
```
Contributor

Change comment to something like: Delete all hook state before reducing scale.

Member Author

Since the number of TER codes is limited, we used a generic error message. However, we will change it as you pointed out; the message can be revised later if this code is reused for other purposes.


```cpp
uint16_t scale = tx.getFieldU16(sfHookStateScale);
if (scale == 0 ||
    scale > 16)  // Min: 1, Max: 16 (256 * 16 = 4096 bytes)
```
Collaborator

Extract a constant for this?
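The suggested extraction could look like this sketch, so the bounds are named once instead of hard-coded at each call site; the constant and function names are illustrative, not from the PR:

```cpp
#include <cstdint>

// Hypothetical sketch of the constant extraction suggested above.
// Names are illustrative, not from the Xahau codebase.
constexpr std::uint16_t kMinHookStateScale = 1;
constexpr std::uint16_t kMaxHookStateScale = 16;  // 256 * 16 = 4096 bytes

inline bool
isValidHookStateScale(std::uint16_t scale)
{
    return scale >= kMinHookStateScale && scale <= kMaxHookStateScale;
}
```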

```cpp
        // should not happen, but just in case
        return 256U;
    }
    return 256U * hookStateScale;
```
Collaborator

we could add an extra defensive check here with a constant, I guess

Member Author

like this?

```cpp
inline uint32_t
maxHookStateDataSize(uint16_t hookStateScale)
{
    if (hookStateScale == 0)
    {
        // should not happen, but just in case
        return 256U;
    }
    if (hookStateScale > maxHookStateScale())
    {
        // should not happen, but just in case
        return 256U * maxHookStateScale();
    }
    return 256U * hookStateScale;
}
```

Collaborator

I think assert/throw ("fail fast"), because it's a violated invariant indicating something went awry?

Member Author

I think it's too much to add that kind of processing here.

Collaborator

Well, we can do "return 256U * hookStateScale;" if it's just a simple calculation, but if you want to handle strange cases (0 || > max_scale) this way, it means you're explicitly washing over bad state. If it should not happen, assert that it doesn't happen. This is some weird middle ground, no?
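A sketch of the fail-fast alternative under discussion, assuming a maxHookStateScale() of 16; this is illustrative of the Collaborator's suggestion, not the code that was merged:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical fail-fast variant: an out-of-range scale is treated as a
// violated invariant (assert) rather than silently clamped. Illustrative
// names; maxHookStateScale() is assumed from the surrounding discussion.
constexpr std::uint16_t
maxHookStateScale()
{
    return 16;
}

inline std::uint32_t
maxHookStateDataSize(std::uint16_t hookStateScale)
{
    // A scale outside [1, maxHookStateScale()] should never reach here.
    assert(hookStateScale >= 1 && hookStateScale <= maxHookStateScale());
    return 256U * hookStateScale;
}
```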

```cpp
BEAST_EXPECT(env.le(alice)->getFieldU32(sfHookStateCount) == 10);
BEAST_EXPECT(env.le(alice)->getFieldU32(sfOwnerCount) == 110);
}
```

Collaborator

  1. Decrease WITHOUT state (the escape hatch)

```cpp
applyCount(5, 0, 50);  // ← stateCount=0 (all state deleted)
jt[sfHookStateScale.fieldName] = 2;
env(jt);  // ← Should succeed! This is how you escape high scale
BEAST_EXPECT(env.le(alice)->getFieldU16(sfHookStateScale) == 2);
```

This is important because it tests the only way to reduce scale: by deleting all state first. The spec calls this the "Scale Commitment Trap" escape route.

  2. Insufficient reserves on increase

```cpp
// Alice has exactly enough for current reserves, but not for 16x increase
applyCount(1, 100, 200);  // 100 entries at scale=1
// Manually set balance to just cover current reserves
jt[sfHookStateScale.fieldName] = 16;  // Needs 1600 reserves (100×16)
env(jt, ter(tecINSUFFICIENT_RESERVE));  // Should fail - can't afford it
```

sublimator previously approved these changes Oct 23, 2025
@RichardAH RichardAH merged commit 6f148a8 into Xahau:dev Oct 23, 2025
14 checks passed