Add alternative provider retrieval measurement #571
base: main
Conversation
Copilot reviewed 2 out of 3 changed files in this pull request and generated 1 comment.
Files not reviewed (1)
- migrations/067.do.measurement-network-retrieval.sql: Language not supported
Comments suppressed due to low confidence (1)
api/index.js:144
- Ensure that the updated SQL parameter count and ordering correctly match the columns in your INSERT statement to avoid mismatches during query execution.
```js
$1, $2, ... $26,
```
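To illustrate what Copilot is pointing at, here is a minimal hypothetical sketch (not the actual spark-api statement): with node-postgres, the column list, the `$n` placeholders, and the values array must line up one-to-one, and a mismatch surfaces only at query time.

```js
// Hypothetical, trimmed-down INSERT: one value per placeholder,
// in the same order as the columns. Adding a column means adding
// both a new $n placeholder and a new entry in the values array.
await client.query(
  `INSERT INTO measurements (
     spark_version,
     status_code,
     alternative_provider_check_provider_id
   ) VALUES ($1, $2, $3)`,
  [
    measurement.sparkVersion, // $1
    measurement.statusCode, // $2
    measurement.alternativeProviderCheck?.providerId // $3
  ]
)
```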
api/index.js
Outdated
```diff
@@ -190,7 +209,14 @@ const getMeasurement = async (req, res, client, measurementId) => {
     endAt: resultRow.end_at,
     byteLength: resultRow.byte_length,
     carTooLarge: resultRow.car_too_large,
-    attestation: resultRow.attestation
+    attestation: resultRow.attestation,
+    networkRetrieval: {
```
Kudos for re-creating this structure in the JSON data uploaded to Storacha! 👏🏻
Let's use the same property name as we use in spark-checker, see CheckerNetwork/spark-checker#132 (comment)
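For concreteness, a hypothetical sketch of the nested object in the GET response; the column names are assumed from this PR's `alternative_provider_check_*` columns, and the top-level property name should match whatever spark-checker settles on:

```js
const responseBody = {
  // ...existing measurement fields...
  attestation: resultRow.attestation,
  networkRetrieval: {
    statusCode: resultRow.alternative_provider_check_status_code,
    timeout: resultRow.alternative_provider_check_timeout,
    carTooLarge: resultRow.alternative_provider_check_car_too_large,
    endAt: resultRow.alternative_provider_check_end_at,
    protocol: resultRow.alternative_provider_check_protocol,
    providerId: resultRow.alternative_provider_check_provider_id
  }
}
```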
Copilot reviewed 2 out of 3 changed files in this pull request and generated no comments.
Files not reviewed (1)
- migrations/067.do.measurement-alternative-provider-check.sql: Language not supported
Comments suppressed due to low confidence (1)
api/index.js:144
- Ensure that the updated prepared statement's parameter placeholders exactly match the number of provided values and the expected database column mappings to avoid runtime errors.
```diff
+ $1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12, $13, $14, $15, $16, $17, $18, $19, $20, $21, $22, $23, $24, $25, $26,
```
```sql
alternative_provider_check_car_too_large,
alternative_provider_check_end_at,
alternative_provider_check_protocol,
alternative_provider_check_provider_id,
```
Throwing this out there: we could alternatively implement this in such a way that after the alternative provider check has completed, two measurements will have been created, one linking to the other. This would save us from having to duplicate the measurement schema inside itself. I don't think this is worth it yet, though.
No, that suggestion is really not important, since spark-api's business is just buffering measurements until it flushes them again. If anything, we should discuss this in a repo that's further down the data processing pipeline.
I was thinking about this a bit and you are right. What started off as adding one field (the status code for the alternative retrieval) turned into a lot of code duplication. We might be better off adding a relationship in the measurements table between the regular measurement and the alternative provider check. That way we could avoid duplicating code down the processing and evaluation pipeline.
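A hypothetical sketch of that relationship, with illustrative names only (nothing below exists in the schema yet):

```js
// Two rows in `measurements` instead of one row with duplicated columns.
const mainMeasurement = {
  id: 101,
  providerId: 'somePeerId',
  statusCode: 200
  // ...the full measurement schema...
}
const alternativeProviderCheck = {
  id: 102,
  providerId: 'someOtherPeerId',
  statusCode: 502,
  // hypothetical link back to the regular measurement
  checkOfMeasurementId: 101
  // ...same schema, no nested duplication...
}
```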
…ilecoin-station/spark-api into add/network-wide-retrieval-measurement
The change looks reasonable; I have a few comments to discuss.
api/bin/spark.js
Outdated
```diff
@@ -27,7 +27,7 @@ assert(DEAL_INGESTER_TOKEN, 'DEAL_INGESTER_TOKEN is required')
 const client = new pg.Pool({
   connectionString: DATABASE_URL,
   // allow the pool to close all connections and become empty
-  min: 0,
+  // min: 0,
```
I am curious - why do we need this change? Can you please add a code comment to capture the reasoning?
Should we update all other Spark services communicating with PG to use the same settings, too?
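For illustration, if the change were intentional, the requested reasoning comment could look something like this sketch (the reasoning text is entirely hypothetical):

```js
const client = new pg.Pool({
  connectionString: DATABASE_URL,
  // Keep a minimum number of warm connections instead of letting the pool
  // become empty. Hypothetical reasoning: reopening connections on every
  // burst of traffic was adding latency.
  // min: 0,
})
```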
Great catch, I must've committed this accidentally.
api/index.js
Outdated
```diff
@@ -98,6 +98,16 @@ const createMeasurement = async (req, res, client) => {
   validate(measurement, 'stationId', { type: 'string', required: true })
   assert(measurement.stationId.match(/^[0-9a-fA-F]{88}$/), 400, 'Invalid Station ID')

+  if (measurement.alternativeProviderCheck) {
+    validate(measurement, 'alternativeProviderCheck', { type: 'object', required: false })
+    validate(measurement.alternativeProviderCheck, 'statusCode', { type: 'number', required: false })
```
This will produce a confusing error message when the validation fails: the message will mention only `statusCode` but not `alternativeProviderCheck`.

I think the nicest solution would be to improve `validate` to support nested parameters, but that feels like too much work to me.
```diff
- validate(measurement.alternativeProviderCheck, 'statusCode', { type: 'number', required: false })
+ validate(measurement, 'alternativeProviderCheck.statusCode', { type: 'number', required: false })
```
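A minimal sketch of what dotted-path support in `validate` could look like (hypothetical; spark-api's actual `validate` helper may look different):

```js
// Hypothetical wrapper adding 'a.b.c' path support on top of the existing
// validate(obj, field, rules) helper. Errors are re-thrown with the full
// dotted path so clients see which nested field failed.
const validatePath = (obj, path, rules) => {
  const keys = path.split('.')
  const field = keys.pop()
  // walk down to the parent object, stopping if a segment is missing
  const parent = keys.reduce((o, key) => (o == null ? undefined : o[key]), obj)
  if (parent === undefined) return // optional nested object not provided
  try {
    validate(parent, field, rules)
  } catch (err) {
    // crude but illustrative: mention the full path in the error message
    err.message = String(err.message).replace(field, path)
    throw err
  }
}

// validatePath(measurement, 'alternativeProviderCheck.statusCode', { type: 'number', required: false })
```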
An even better solution would be to use the JSON Schema validation offered by Fastify, but we don't use Fastify here yet. Should we take over #549 to land it first and then continue with this pull request? That does not feel like the fastest way to ship this feature either :(

How much work would it be to rework the validation to use JSON Schema in this route only, as an interim solution until we land the migration to Fastify?
```js
// Assumed import for this sketch; Ajv is a standalone JSON Schema validator.
// getRawBody, assert, ethAddressFromDelegated are already imported in api/index.js.
import Ajv from 'ajv'

const ajv = new Ajv()

const MEASUREMENT_SCHEMA = {
  // JSON Schema of the measurement.
  // We will be able to use this with Fastify later.
}

const createMeasurement = async (req, res, client) => {
  const body = await getRawBody(req, { limit: '100kb' })
  const measurement = JSON.parse(body.toString())

  const valid = ajv.validate(MEASUREMENT_SCHEMA, measurement)
  if (!valid) {
    // report errors back to the client
    // console.log(ajv.errors)
  }

  // Some validations cannot be described in JSON Schema
  if (typeof measurement.participantAddress === 'string' && measurement.participantAddress.startsWith('f4')) {
    try {
      measurement.participantAddress = ethAddressFromDelegated(measurement.participantAddress)
    } catch (err) {
      assert.fail(400, 'Invalid .participantAddress - doesn\'t convert to 0x address')
    }
  }
  // etc.
}
```
Let's discuss!
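If we go this way, a hypothetical first cut of the schema for the fields this PR adds could look like the sketch below (the `stationId` pattern is taken from the existing `validate` calls; everything else is an assumption to be adjusted):

```js
const MEASUREMENT_SCHEMA = {
  type: 'object',
  properties: {
    stationId: { type: 'string', pattern: '^[0-9a-fA-F]{88}$' },
    alternativeProviderCheck: {
      type: 'object',
      properties: {
        statusCode: { type: 'number' }
      }
    }
    // ...remaining measurement fields...
  },
  required: ['stationId']
}
```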
publish/index.js
Outdated
```sql
alternative_provider_check_status_code,
alternative_provider_check_timeout,
alternative_provider_check_car_too_large,
alternative_provider_check_end_at,
alternative_provider_check_protocol,
alternative_provider_check_provider_id
```
Do we have any concerns about the size of the JSON measurements uploaded to Storacha? The prefix `alternative_provider_check_` is 27 characters long, it's repeated 6x per measurement, and we have ~200K measurements/round - that's roughly 32 MB/round just to store the prefix.
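For reference, a quick back-of-the-envelope check of those numbers:

```js
const prefixBytes = 'alternative_provider_check_'.length // 27
const fieldsPerMeasurement = 6
const measurementsPerRound = 200_000
const mbPerRound = (prefixBytes * fieldsPerMeasurement * measurementsPerRound) / 1e6
console.log(mbPerRound) // 32.4
```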
Have you considered re-building the nested `alternative_provider_check` object instead?
Example measurement to show what I mean:
```jsonc
{
  "miner": "f01...",
  "provider_id": "somePeerId",
  // ...
  "alternative_provider_check": {
    "provider_id": "someOtherPeerId",
    // ..
  }
}
```
Maybe this is not a concern right now? Let's discuss!
I haven't been thinking about the size itself. What started as adding a single field to the measurement structure quickly turned into adding more and more of them.

I like your suggestion 👍🏻

Another alternative could be to store the alternative provider check as a separate measurement with an additional flag. WDYT?
This PR adds support for the new alternative provider check measurement fields added in CheckerNetwork/spark-checker#132. The new fields are saved alongside the others in the `measurements` table and are published later on.

Closes #570

Relates to: