Ondo price smoothing #4465
base: main
Conversation
🦋 Changeset detected. Latest commit: 47fa038. The changes in this PR will be included in the next version bump. This PR includes changesets to release 1 package.
In packages/composites/ondo-calculated/src/lib/smoother.ts:

/**
 * Savitzky-Golay Filter Implementation
I'm no expert but I don't think a Savitzky-Golay Filter is appropriate here.
First of all, a Savitzky-Golay Filter requires that the data points are equally spaced. But since we're making a request every second, variable request latency means the data points are not actually equally spaced in time.
Second, a Savitzky-Golay Filter typically assumes that the window includes both past and future data points, so it's not really applicable in real time.
Third, the Savitzky-Golay Filter tries to fit the data points to a polynomial, which makes sense for a physical system with things like velocity and momentum, but stock prices much more resemble something like a random walk/Brownian motion where you can't expect the value to continue going up just because it has been going up.
Using a Savitzky-Golay Filter here also feels like severe overkill. If all we want is to prevent large spikes, we should just clamp the price to the maximum spike we want to allow. Or use an exponential decay or something.
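To make the simpler alternative concrete, here is a minimal sketch (the function names, the basis-point cap, and the bigint scaling are illustrative assumptions, not code from this PR):

```ts
// Illustrative sketch only: two simpler smoothing options for spike control.
// Prices are bigint with 18 decimals, as elsewhere in this PR.

const SCALE = 10n ** 18n

// Option 1: clamp each update so the price moves at most maxDeltaBps
// basis points away from the previous value.
function clampSpike(prev: bigint, next: bigint, maxDeltaBps: bigint): bigint {
  const maxDelta = (prev * maxDeltaBps) / 10_000n
  if (next > prev + maxDelta) return prev + maxDelta
  if (next < prev - maxDelta) return prev - maxDelta
  return next
}

// Option 2: exponential moving average in integer math, with the weight
// expressed as a fraction alphaNum/alphaDen (e.g. 1/4 for alpha = 0.25).
function ema(prev: bigint, next: bigint, alphaNum: bigint, alphaDen: bigint): bigint {
  return (alphaNum * next + (alphaDen - alphaNum) * prev) / alphaDen
}

// A constant input stays constant, unlike the behavior reported below:
// ema(100n * SCALE, 100n * SCALE, 1n, 4n) === 100n * SCALE
```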
I also did some simple tests to see the effect of the smoother.

- With a completely constant value of `100n`, the smoothed value drops to `91n` during the transition.
- With a constant value of `100n` and then a `200n, 300n, 200n` "bump" just after the transition, the smoothed value has a delayed bump followed by a rebound, dropping as low as `85n`.

I don't know if these are bugs in the implementation or conceptual problems with the Savitzky-Golay Filter, but it definitely doesn't look good. I've pushed my tests to a branch kloet/Savitzky-Golay-Filter.
I will let DS address this
We can add test cases that validate the above concerns. Maybe something like this:
describe('Constant Value Stability', () => {
  it('should maintain constant output for constant input during transition', () => {
    const smoother = new SessionAwareSmoother()
    const CONSTANT_PRICE = 100n
    const WINDOW_SIZE = 23

    // Fill the buffer with constant values (outside transition window)
    for (let i = 0; i < WINDOW_SIZE; i++) {
      smoother.processUpdate(CONSTANT_PRICE, 100) // t=100 is outside window
    }

    // Now test during transition (t=0 means fully smoothed, w=1.0)
    const resultAtTransition = smoother.processUpdate(CONSTANT_PRICE, 0)

    // BUG CHECK: If smoother is correct, constant input should produce constant output
    // The review mentions this drops to 91n, which indicates a normalization bug
    expect(resultAtTransition).toBe(CONSTANT_PRICE)
  })

  it('should produce constant output at various transition points for constant input', () => {
    const smoother = new SessionAwareSmoother()
    const CONSTANT_PRICE = 100n
    const WINDOW_SIZE = 23

    // Fill buffer with constant values
    for (let i = 0; i < WINDOW_SIZE; i++) {
      smoother.processUpdate(CONSTANT_PRICE, 100)
    }

    // Test at different points in the transition window
    const testPoints = [-5, 0, 10, 30, 45]
    for (const t of testPoints) {
      const result = smoother.processUpdate(CONSTANT_PRICE, t)
      expect(result).toBe(CONSTANT_PRICE)
    }
  })
})

describe('Savitzky-Golay Coefficient Normalization', () => {
  it('should have coefficients that sum to 1.0 (normalization check)', () => {
    // This is a unit test for the mathematical correctness of the filter
    // The 91n bug suggests coefficients may not sum to 1.0
    const smoother = new SessionAwareSmoother()
    const CONSTANT_PRICE = 1000000n // Use larger value to reduce rounding error impact
    const WINDOW_SIZE = 23

    // Fill with constant
    for (let i = 0; i < WINDOW_SIZE; i++) {
      smoother.processUpdate(CONSTANT_PRICE, 100)
    }

    // At full smoothing (t=0, w=1.0), output should equal input for constant signal
    const result = smoother.processUpdate(CONSTANT_PRICE, 0)

    // Allow small precision tolerance due to bigint/number conversion
    const tolerance = CONSTANT_PRICE / 1000n // 0.1% tolerance
    expect(result).toBeGreaterThanOrEqual(CONSTANT_PRICE - tolerance)
    expect(result).toBeLessThanOrEqual(CONSTANT_PRICE + tolerance)
  })
})

As for why `100n` dropped to `91n`: there seems to be an issue with `numberTimesBigInt` due to precision loss.
If we use integer math we should be able to resolve it. It should be something like:

// Use integer math for blending to avoid precision loss
const precision = BigInt(CONFIG.PRECISION)
const multiplier = 10n ** precision
const wBI = parseUnits(w.toFixed(CONFIG.PRECISION), CONFIG.PRECISION)
const oneMinusWBI = parseUnits((1 - w).toFixed(CONFIG.PRECISION), CONFIG.PRECISION)
return (smoothedPrice * wBI + rawPrice * oneMinusWBI) / multiplier
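For reference, a self-contained sketch of that blend (assuming ethers v6's parseUnits and 18-decimal prices; the blend wrapper and the exact-complement trick are additions for illustration, not code from this PR):

```ts
import { parseUnits } from 'ethers'

const PRECISION = 18
const MULTIPLIER = 10n ** BigInt(PRECISION)

// Blend two 18-decimal bigint prices with weight w in [0, 1].
// Computing (1 - w) as MULTIPLIER - wBI keeps the two weights summing to
// exactly 1.0 in fixed point, so a constant input maps to itself exactly.
function blend(smoothedPrice: bigint, rawPrice: bigint, w: number): bigint {
  const wBI = parseUnits(w.toFixed(PRECISION), PRECISION)
  const oneMinusWBI = MULTIPLIER - wBI
  return (smoothedPrice * wBI + rawPrice * oneMinusWBI) / MULTIPLIER
}

// Example: blend(100n * MULTIPLIER, 100n * MULTIPLIER, 0.043) === 100n * MULTIPLIER
```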
Need more time to look into why it dropped to 85
I added your test and used real price values (which are all in 18 decimals); take a look at the result and let me know if it makes sense.
I can't figure out how I am losing precision in numberTimesBigInt. I tried to use your code and got the same result.
Hi @mxiao-cll
I have created natt/ondo_test to demonstrate what I found, but it's a quickly whipped-up PoC, so it's probably not suitable to merge onto this branch.
The issue in numberTimesBigInt itself is relatively small:

numberTimesBigInt(100n, 0.043) // you'd get 4n, which is slightly truncated

but when this gets added 23 times (the window size), that's where we start to lose a lot of value. What I attempted on the branch is to do the division once, so we truncate very minimally, if at all.
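As an illustration of the single-division idea (the uniform coefficients and function names below are hypothetical, not the actual Savitzky-Golay weights in smoother.ts):

```ts
// Hypothetical illustration: per-term division truncates once per sample,
// while accumulating first and dividing once truncates at most once overall.
function convolveNaive(prices: bigint[], nums: bigint[], den: bigint): bigint {
  return prices.reduce((acc, p, i) => acc + (p * nums[i]) / den, 0n)
}

function convolveExact(prices: bigint[], nums: bigint[], den: bigint): bigint {
  const numerator = prices.reduce((acc, p, i) => acc + p * nums[i], 0n)
  return numerator / den
}

// Example: 23 equal weights of 1/23 over a constant price of 100n.
const prices: bigint[] = Array(23).fill(100n)
const nums: bigint[] = Array(23).fill(1n)
convolveNaive(prices, nums, 23n) // 92n - each 100n/23n truncates down to 4n
convolveExact(prices, nums, 23n) // 100n - exact
```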
I also looked into addressing point 2. The formula provided is what we used in hypothesis testing and was center-point based, which causes the lag behavior observed here. I have adjusted the formula to be causal and added additional tests to check for the bump concerns.
Please have a look and let me know if this looks okay.
Also, I'd like to thank you for the callout on the polynomial attribute. That's something we'll definitely investigate and possibly add to the hyperparameter tuning objective function.
For now, we've had additional eyes from research and there seems to be no strong pushback. This is probably not final and may be updated in the future. Also, having data from transmitted answers should help us improve the solution.
Merged your branch; I've updated some tests to be more accurate.
This point is still not addressed, as far as I can tell:
> First of all, a Savitzky-Golay Filter requires that the data points are equally spaced. But since we're making a request every second, variable request latency means the data points are not actually equally spaced in time.
I also did some more manual testing and the oscillation is still there.
Also, if you have an elevated price just after the transition point, after just 4 seconds, the smoothed price is actually higher than the elevated price and after just 7 seconds, the additional increase is as much as 29% more than the original increase. For example, if a price is constant at $500 but very briefly shoots up to $510 during the transition just to come back to $500 a few seconds later, the smoothed price will peak as high as $512.9. There's a similar effect downwards if the price goes down for a short time.
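A probe along these lines could reproduce the effect (the constants and the warm-up pattern are assumptions based on the tests above, not code from this PR):

```ts
// Hypothetical probe: constant $500, a brief $510 bump just after the
// transition, then track the peak of the smoothed output (18-decimal values).
const SCALE = 10n ** 18n
const smoother = new SessionAwareSmoother()
const BASE = 500n * SCALE
const BUMP = 510n * SCALE

// Warm up with constant prices outside the transition window.
for (let i = 0; i < 23; i++) smoother.processUpdate(BASE, 100)

let peak = 0n
const inputs = [BUMP, BUMP, BASE, BASE, BASE, BASE, BASE, BASE, BASE, BASE]
inputs.forEach((price, t) => {
  const { smoothedPrice } = smoother.processUpdate(price, t)
  if (smoothedPrice > peak) peak = smoothedPrice
})

// Observed in manual testing: peak is near $512.9, i.e. peak > BUMP,
// even though the raw input never exceeds $510.
console.log(peak)
```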
But I really want to make a broader point: this smoothing function is pretty complex. In order to justify adding such complexity to the code, it's not enough that there seems to be no strong pushback. We actually need strong reasons to add something so complex rather than something simpler. Given the issues we've seen so far, I see no reason to believe the Savitzky-Golay filter will do a better job than something simpler.
@kalanyuz to respond
Thank you Michael.
We can remove the w <= 0.001 check too.
In packages/composites/ondo-calculated/src/lib/smoother.ts:
+import { parseUnits } from 'ethers'
+
+const CONFIG = {
+  SAVGOL: {
+    WINDOW_SIZE: 23,
+    POLY_ORDER: 2, // Quadratic
+  },
+  TRANSITION: {
+    WINDOW_BEFORE: 10, // seconds
+    WINDOW_AFTER: 60, // seconds
+  },
+  PRECISION: 18, // Keep 18 decimals when converting number to bigint
+}
+
+/**
+ * Savitzky-Golay Filter Implementation
In packages/composites/ondo-calculated/test/unit/lib/smoother.test.ts:

// Inject bump
const bump = [200n, 300n, 200n].map((p) => p * 10n ** 18n)
for (const price of bump) {
  smoother.processUpdate(price, 1) // During transition
}
Why do we pass t = 1 multiple times?
@kalanyuz to respond
In packages/composites/ondo-calculated/test/unit/lib/smoother.test.ts:

for (let i = 0; i < WINDOW_SIZE; i++) {
  const { smoothedPrice: result } = smoother.processUpdate(BASELINE, i + 1)
  convergenceResults.push(result)
}
This is missing:

expect(result).toBeGreaterThanOrEqual(BASELINE)

If you add this, you'll see the price still drop significantly below baseline. It goes as low as 77% of baseline by t = 14.
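Concretely, the loop above would become (a sketch; smoother, BASELINE, WINDOW_SIZE, and convergenceResults come from the surrounding test in smoother.test.ts):

```ts
// Sketch: the convergence loop with the missing lower-bound assertion added.
for (let i = 0; i < WINDOW_SIZE; i++) {
  const { smoothedPrice: result } = smoother.processUpdate(BASELINE, i + 1)
  // Review finding: this currently fails, dropping to ~77% of BASELINE by t = 14.
  expect(result).toBeGreaterThanOrEqual(BASELINE)
  convergenceResults.push(result)
}
```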
@kalanyuz to respond, is this expected to be lower or higher during the bump?
Closes OPDATA-4147

- secondsFromTransition input to smoother logic (i.e. 03:59 would be -60 and 04:01 would be 60)
- DS -> Please validate logic as well as provide better test cases for smoother.test.ts
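For reference, a minimal sketch of how the secondsFromTransition input could be computed (the 04:00 UTC transition time and the function name are assumptions inferred from the example above, not taken from this PR):

```ts
// Hypothetical sketch: seconds relative to a fixed daily transition time.
// Negative before the transition, positive after.
function secondsFromTransition(now: Date, transitionHourUtc = 4): number {
  const transition = new Date(now)
  transition.setUTCHours(transitionHourUtc, 0, 0, 0)
  return Math.round((now.getTime() - transition.getTime()) / 1000)
}

// 03:59:00 UTC -> -60 (one minute before the transition)
// 04:01:00 UTC ->  60 (one minute after)
```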
Quality Assurance

- … infra-k8s configuration file.
- … adapter-secrets configuration file.
- … test-payload.json file with relevant requests.
- Branch follows the naming convention (feature/x, chore/x, release/x, hotfix/x, fix/x) or is created from Jira.