
Conversation

@mxiao-cll
Contributor

@mxiao-cll mxiao-cll commented Dec 17, 2025

Closes OPDATA-4147

  • First, we add two inputs to the EA: the transition points and the timezone
    • A transition point is a time at which the market session changes; for the US market these are '04:00', '09:30', '16:00', '20:00'
    • The list doesn't have to be sorted, but each entry must follow the HH:MM format
  • We then calculate how far the current time is from the nearest transition point (see the sketch after this list)
    • This becomes the secondsFromTransition input to the smoother logic
    • Times before the point are negative (i.e. 03:59 would be -60 and 04:01 would be 60)
  • The smoother logic is taken directly from the code DS provided in the doc (linked in the ticket)
    • No changes/optimizations were made other than converting number into bigint
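
For reference, the distance calculation is roughly the following (a minimal sketch with hypothetical names; it assumes the current time has already been converted to seconds-of-day in sessionBoundariesTimeZone, and it ignores wrap-around at midnight for brevity):

    function secondsFromNearestTransition(
      nowSecondsOfDay: number, // seconds since midnight in the market timezone
      boundaries: string[], // e.g. ['04:00', '09:30', '16:00', '20:00'], order doesn't matter
    ): number {
      let best = Number.POSITIVE_INFINITY
      for (const hhmm of boundaries) {
        const [hh, mm] = hhmm.split(':').map(Number)
        const diff = nowSecondsOfDay - (hh * 3600 + mm * 60) // negative before the point
        if (Math.abs(diff) < Math.abs(best)) best = diff
      }
      return best
    }

    // 03:59 => -60, 04:01 => 60 (both relative to the '04:00' boundary)
    secondsFromNearestTransition(3 * 3600 + 59 * 60, ['04:00', '09:30', '16:00', '20:00'])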

DS -> Please validate the logic and provide better test cases for smoother.test.ts

Example request:

{
    "data": {
        "registry": "0x8F993d593A1beA05FF776cf63b1A61757d845008",
        "asset": "0x14c3abF95Cb9C93a8b82C1CdCB76D72Cb87b2d4c",
        "regularStreamId": "0x000bc7e431fcd497f06b9e1dea869bcda3d05049d0601f3d1e56e64c8cdd05ac",
        "extendedStreamId": "0x000b1a79f503e9b236bc13b01524d1955b5a4d5ff2e22d5a08404c35d23d7301",
        "overnightStreamId": "0x000b0580ced23f9ae7fd77f817ad7e6aec23a04314454365cfe93060a9041bd3",
        "sessionBoundaries": [
            "04:00",
            "09:30",
            "16:00",
            "20:00"
        ],
        "sessionBoundariesTimeZone": "America/New_York",
        "decimals": 8
    }
}
Example response:

{
    "data": {
        "result": "67373206039",
        "rawPrice": "672745000000000000000",
        "decimals": 8,
        "registry": {
            "sValue": "1001467213276124104",
            "paused": false
        },
        "stream": {
            "regular": {
                "mid": "672745000000000000000",
                "lastSeenTimestampNs": "1766000659570000000",
                "bid": "672739400000000000000",
                "bidVolume": 150933715003899900,
                "ask": "672770000000000000000",
                "askVolume": 4384654083551658000,
                "lastTradedPrice": "672765000000000000000",
                "marketStatus": 2,
                "decimals": 18
            },
            "extended": {
                "mid": "672745000000000000000",
                "lastSeenTimestampNs": "1766000659757000000",
                "bid": "672739400000000000000",
                "bidVolume": 15532559262904484000,
                "ask": "672750600000000000000",
                "askVolume": 12426047410323587000,
                "lastTradedPrice": "672750700000000000000",
                "marketStatus": 2,
                "decimals": 18
            },
            "overnight": {
                "mid": "672730000000000000000",
                "lastSeenTimestampNs": "1766000659000000000",
                "bid": "672720000000000000000",
                "bidVolume": 9319535557742690000,
                "ask": "672740000000000000000",
                "askVolume": 3106511852580897000,
                "lastTradedPrice": "672770000000000000000",
                "marketStatus": 2,
                "decimals": 18
            }
        }
    },
    "statusCode": 200,
    "result": "67373206039",
    "timestamps": {
        "providerDataRequestedUnixMs": 1766000661091,
        "providerDataReceivedUnixMs": 1766000661332
    },
    "meta": {
        "adapterName": "ONDO_CALCULATED",
        "metrics": {
            "feedId": "pgvdPr8ESLbeFlhv9vIrqJ/QNnE="
        }
    }
}

Quality Assurance

  • If a new adapter was made, or an existing one was modified so that its environment variables have changed, update the relevant infra-k8s configuration file.
  • If a new adapter was made, or an existing one was modified so that its environment variables have changed, update the relevant adapter-secrets configuration file.
  • If a new adapter was made, or a new endpoint was added, update the test-payload.json file with relevant requests.
  • The branch naming follows git flow (feature/x, chore/x, release/x, hotfix/x, fix/x) or is created from Jira.
  • This is related to a maximum of one Jira story or GitHub issue.
  • Types are safe (avoid TypeScript/TSLint features like any and disable, instead use more specific types).
  • All code changes have 100% unit and integration test coverage. If testing is not applicable or too difficult to justify doing, the reasoning should be documented explicitly in the PR.

@changeset-bot

changeset-bot bot commented Dec 17, 2025

🦋 Changeset detected

Latest commit: 47fa038

The changes in this PR will be included in the next version bump.

This PR includes changesets to release 1 package
Name                                 Type
@chainlink/ondo-calculated-adapter   Minor



@mxiao-cll mxiao-cll requested a review from a team December 17, 2025 21:17
}

/**
* Savitzky-Golay Filter Implementation
Contributor


I'm no expert, but I don't think a Savitzky-Golay filter is appropriate here.
First of all, a Savitzky-Golay filter requires that the data points are equally spaced. But since we're making a request every second, variable request latency means the data points are not equally spaced in time.
Second, a Savitzky-Golay filter typically assumes that the window includes both past and future data points, so it's not really applicable in real time.
Third, a Savitzky-Golay filter tries to fit the data points to a polynomial, which makes sense for a physical system with things like velocity and momentum, but stock prices much more resemble a random walk/Brownian motion, where you can't expect the value to continue going up just because it has been going up.
Using a Savitzky-Golay filter here also feels like severe overkill. If all we want is to prevent large spikes, we should just clamp the price to the maximum spike we want to allow, or use an exponential decay or something.
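
For example, a minimal clamp could look something like this (names and the basis-point bound are placeholders, not a worked-out proposal):

    // Allow the price to move at most maxSpikeBps basis points away from the
    // previously accepted price per update; anything beyond that is clamped.
    function clampPrice(prev: bigint, next: bigint, maxSpikeBps: bigint): bigint {
      const maxDelta = (prev * maxSpikeBps) / 10_000n
      if (next > prev + maxDelta) return prev + maxDelta
      if (next < prev - maxDelta) return prev - maxDelta
      return next
    }

    // e.g. with a 1% bound (100 bps), a $500 -> $510 spike is capped at $505
    clampPrice(500n * 10n ** 18n, 510n * 10n ** 18n, 100n)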

I also did some simple tests to see the effect of the smoother.

  1. With a completely constant value of 100n, the smoothed value drops to 91n during the transition.
  2. With a constant value of 100n and then a 200n, 300n, 200n "bump" just after the transition, the smoothed value shows a delayed bump followed by a rebound, dropping as low as 85n.

I don't know if these are bugs in the implementation or conceptual problems with the Savitzky-Golay Filter, but it definitely doesn't look good. I've pushed my tests to a branch kloet/Savitzky-Golay-Filter.

Contributor Author


I will let DS address this


We can add test cases that validate the above concerns. Maybe something like this:

  describe('Constant Value Stability', () => {
    it('should maintain constant output for constant input during transition', () => {
      const smoother = new SessionAwareSmoother()
      const CONSTANT_PRICE = 100n
      const WINDOW_SIZE = 23

      // Fill the buffer with constant values (outside transition window)
      for (let i = 0; i < WINDOW_SIZE; i++) {
        smoother.processUpdate(CONSTANT_PRICE, 100) // t=100 is outside window
      }

      // Now test during transition (t=0 means fully smoothed, w=1.0)
      const resultAtTransition = smoother.processUpdate(CONSTANT_PRICE, 0)

      // BUG CHECK: If smoother is correct, constant input should produce constant output
      // The review mentions this drops to 91n, which indicates a normalization bug
      expect(resultAtTransition).toBe(CONSTANT_PRICE)
    })

    it('should produce constant output at various transition points for constant input', () => {
      const smoother = new SessionAwareSmoother()
      const CONSTANT_PRICE = 100n
      const WINDOW_SIZE = 23

      // Fill buffer with constant values
      for (let i = 0; i < WINDOW_SIZE; i++) {
        smoother.processUpdate(CONSTANT_PRICE, 100)
      }

      // Test at different points in the transition window
      const testPoints = [-5, 0, 10, 30, 45]
      for (const t of testPoints) {
        const result = smoother.processUpdate(CONSTANT_PRICE, t)
        expect(result).toBe(CONSTANT_PRICE)
      }
    })
  })
  
  
  describe('Savitzky-Golay Coefficient Normalization', () => {
    it('should have coefficients that sum to 1.0 (normalization check)', () => {
      // This is a unit test for the mathematical correctness of the filter
      // The 91n bug suggests coefficients may not sum to 1.0
      const smoother = new SessionAwareSmoother()
      const CONSTANT_PRICE = 1000000n // Use larger value to reduce rounding error impact
      const WINDOW_SIZE = 23

      // Fill with constant
      for (let i = 0; i < WINDOW_SIZE; i++) {
        smoother.processUpdate(CONSTANT_PRICE, 100)
      }

      // At full smoothing (t=0, w=1.0), output should equal input for constant signal
      const result = smoother.processUpdate(CONSTANT_PRICE, 0)

      // Allow small precision tolerance due to bigint/number conversion
      const tolerance = CONSTANT_PRICE / 1000n // 0.1% tolerance
      expect(result).toBeGreaterThanOrEqual(CONSTANT_PRICE - tolerance)
      expect(result).toBeLessThanOrEqual(CONSTANT_PRICE + tolerance)
    })
  })

As for why 100n dropped to 91n, there seems to be an issue with numberTimesBigInt due to precision loss.
If we use integer math we should be able to resolve it. It should be something like:

    // Use integer math for blending to avoid precision loss
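    // (assumption: parseUnits here is ethers' parseUnits, i.e. import { parseUnits } from 'ethers')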
    const precision = BigInt(CONFIG.PRECISION)
    const multiplier = 10n ** precision
    const wBI = parseUnits(w.toFixed(CONFIG.PRECISION), CONFIG.PRECISION)
    const oneMinusWBI = parseUnits((1 - w).toFixed(CONFIG.PRECISION), CONFIG.PRECISION)

    return (smoothedPrice * wBI + rawPrice * oneMinusWBI) / multiplier
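
A quick sanity check of that blend (with a hypothetical constant P standing in for both prices):

    // With rawPrice = smoothedPrice = P:
    //   (P * wBI + P * oneMinusWBI) / multiplier ≈ P * (w + (1 - w)) = P
    // so a constant input stays constant instead of drifting down to 91n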


Need more time to look into why it dropped to 85

Contributor Author


I added your test and used real price values (which are all in 18 decimals); take a look at the result and let me know if it makes sense.

I can't figure out how I am losing precision in numberTimesBigInt; I tried to use your code and got the same result.


@kalanyuz kalanyuz Dec 20, 2025


Hi @mxiao-cll
I have created natt/ondo_test to demonstrate what I found, but it's a quickly whipped-up PoC, so it's probably not suitable to merge onto this branch.

The issue in numberTimesBigInt itself is relatively small:

numberTimesBigInt(100n, 0.043) // you'd get 4n, which is slightly truncated

but when this gets added 23 times (the window size), we start to lose a lot of precision. What I attempted on the branch is to do the division one time, so we truncate very minimally, if not at all.
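
Roughly what I did, as a simplified sketch (not the exact code on the branch; the scale factor is chosen so the float-to-bigint conversion stays within double precision, e.g. 0.043 * 1e12 = 4.3e10):

    const SCALE = 10n ** 12n

    function weightedSum(prices: bigint[], weights: number[]): bigint {
      let acc = 0n
      for (let i = 0; i < prices.length; i++) {
        // scale each float weight to an integer before multiplying, so
        // truncation can only happen once, in the single division below
        acc += prices[i] * BigInt(Math.round(weights[i] * 1e12))
      }
      return acc / SCALE // one truncating division instead of 23
    }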

I also looked into addressing point 2. The formula provided is what we used in hypothesis testing and was center-point based, which causes the lag behavior observed here. I have adjusted the formula to be causal and added additional tests for the bump-test concerns.
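
Schematically, the difference (not the tuned coefficients):

    // centered (non-causal): y[t] = sum_{i=-k..k} c[i] * x[t+i]
    //   needs future samples x[t+1..t+k], so in real time the output lags by k samples
    // causal:                y[t] = sum_{i=0..2k} c'[i] * x[t-i]
    //   uses only past samples, so it can be applied as updates arrive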

Please have a look if this looks okay.


Also, I'd like to thank you for the callout on the polynomial attribute. That's something we'll definitely investigate and possibly add to the hyperparameter-tuning objective function.

For now, we've had additional eyes from research and there seems to be no strong pushback. This is probably not final and may be updated in the future. Also, having data from transmitted answers should help us improve the solution.

Contributor Author


Merged your branch; I've updated some tests to be more accurate.

Contributor


This point is still not addressed, as far as I can tell:

First of all, a Savitzky-Golay filter requires that the data points are equally spaced. But since we're making a request every second, variable request latency means the data points are not equally spaced in time.

I also did some more manual testing and the oscillation is still there.
Also, if you have an elevated price just after the transition point, after just 4 seconds the smoothed price is actually higher than the elevated price, and after just 7 seconds the additional increase is as much as 29% more than the original increase. For example, if a price is constant at $500 but very briefly shoots up to $510 during the transition, only to come back to $500 a few seconds later, the smoothed price will peak as high as $512.9. There's a similar effect downwards if the price goes down for a short time.

But I really want to make a broader point: this smoothing function is pretty complex. To justify adding such complexity to the code, it's not enough that there seems to be no strong pushback; we actually need strong reasons to add something so complex rather than something simpler. Given the issues we've seen so far, I see no reason to believe the Savitzky-Golay filter will do a better job than something simpler.

Contributor Author


@kalanyuz to respond

@kalanyuz

kalanyuz commented Dec 21, 2025 via email

@mxiao-cll mxiao-cll requested a review from kalanyuz December 21, 2025 22:49
kalanyuz
kalanyuz previously approved these changes Dec 22, 2025
// Inject bump
const bump = [200n, 300n, 200n].map((p) => p * 10n ** 18n)
for (const price of bump) {
  smoother.processUpdate(price, 1) // During transition
Contributor


Why do we pass t = 1 multiple times?

Contributor Author


@kalanyuz to respond

for (let i = 0; i < WINDOW_SIZE; i++) {
  const { smoothedPrice: result } = smoother.processUpdate(BASELINE, i + 1)
  convergenceResults.push(result)
}
Contributor


This is missing:

expect(result).toBeGreaterThanOrEqual(BASELINE)

If you add this, you'll see the price still drop significantly below baseline.
It goes as low as 77% of baseline by t = 14.

Contributor Author

@mxiao-cll mxiao-cll Dec 22, 2025


@kalanyuz to respond, is this expected to be lower or higher during the bump?
