feat: initial support for scheduled measurements #797

MartinKolarik wants to merge 38 commits into master
Conversation
```
# Conflicts:
#	src/lib/server.ts
```
> **Important**: Review skipped. Draft detected. Please check the settings in the CodeRabbit UI; you can disable this status message via the corresponding setting. Use the checkbox below for a quick retry.
Walkthrough

This PR introduces stream schedule execution infrastructure for automated periodic measurements. It adds a separate time-series database alongside the existing measurement store, with new migration files and schema tables for schedule definitions (gp_schedule, gp_schedule_configuration) and time-series hypertables. A new ScheduleLoader syncs schedules from the dashboard database, and a StreamScheduleExecutor manages timer-based probe dispatching with location filtering. Measurements are augmented with scheduleId and configurationId fields, and time-series records are written to separate DNS and HTTP hypertables. Configuration files are updated to support two PostgreSQL instances, test infrastructure is refactored with a new Mocha config, and extensive test coverage is added for the schedule execution flow.
🚥 Pre-merge checks: ✅ 4 checks passed.
Actionable comments posted: 10
🤖 Fix all issues with AI agents
In `@migrations/measurement-store-1/20260130135153_add-schedules.js`:
- Line 41: The down() migration is currently a no-op which prevents safe
rollback after the up migration that adds schedule-related columns and replaces
the export_measurement() function; implement down() to reverse those changes by
removing the added columns (the new schedule/cron columns added in up) from each
altered table and restoring the prior export_measurement() function definition
(recreate the previous SQL function body or load it from a backup/original
migration). Locate the up migration changes and mirror them in down(): drop each
column added by up and execute SQL to recreate the previous export_measurement()
implementation so schema and function behavior are fully reverted.
In `@migrations/time-series-1/20251204163412_create-tables.js`:
- Line 81: The current export const down = () => {} is a no-op; replace it with
a real async down(knex) migration that undoes the up() changes by dropping the
tables/indexes created in this migration (use knex.schema.dropTableIfExists or
conditional drops) in reverse dependency order to avoid FK issues, and return
the Promise (e.g., export const down = async (knex) => { await
knex.schema.dropTableIfExists('child_table'); await
knex.schema.dropTableIfExists('parent_table'); } ), ensuring the function
signature accepts the knex instance and cleans up all objects created by up().
In `@migrations/time-series-1/README.md`:
- Around line 5-13: The fenced code blocks containing the npm/knex commands
(e.g., the blocks with "npm run knex:time-series-1 migrate:latest" and "npm run
knex:time-series-1 migrate:make <migration_name>") should include a language
identifier; change their opening fences to ```bash so both blocks read as bash
code fences to satisfy markdownlint MD040 and ensure consistent rendering.
In `@src/measurement/store-offloader.ts`:
- Around line 168-205: The time-series writes (writeDnsRecords and
writeHttpRecords) can throw and prevent setOffloadedExpiration from running;
make those writes best-effort so Redis expirations always run: wrap the
Promise.all([...]) call in a try/catch or use Promise.allSettled for
writeDnsRecords/writeHttpRecords inside insertBatchToDb (or the surrounding
method shown) so any failures are caught/logged but do not throw, and then
always call
this.primaryMeasurementStore.setOffloadedExpiration(measurements.map(m => m.id))
(or place it in a finally block) so expirations are set regardless of
time-series write outcome.
In `@src/schedule/executor.ts`:
- Around line 18-21: stableSecond currently assumes intervalSeconds is positive,
which can cause modulo errors or a 0ms timer; add a guard: if intervalSeconds is
not a finite number or <= 0, log an error/warning and return a safe sentinel (or
throw) so callers won't create a setInterval with 0ms. Also update the
executor's scheduling code that actually calls setInterval (the scheduling loop
that creates timers later in this file) to check the interval before creating
the timer and skip/log invalid schedules instead of scheduling them. Ensure you
use processLogger (or the module logger) for consistent logging and reference
stableSecond and the scheduling loop when making changes.
- Around line 118-139: The code chunks localProbes into probesChunks without
applying schedule.probe_limit, so measurements can exceed the configured probe
limit; before computing chunkSize and probesChunks in executor.ts, enforce the
limit by slicing localProbes to schedule.probe_limit (if set) and then proceed
to compute chunkSize and _.chunk; update references around the
chunkSize/probesChunks creation and ensure store.createMeasurement continues to
receive the already-truncated probes list so the actual probes sent respects
schedule.probe_limit.
In `@src/schedule/loader.ts`:
- Around line 8-10: The import of MeasurementOptions and Location are regular
imports but they are used only as types, which causes runtime imports to be
emitted; change their imports to type-only imports—replace "import {
MeasurementOptions } from '../measurement/types.js';" and "import { Location }
from '../lib/location/types.js';" with "import type { MeasurementOptions } ..."
and "import type { Location } ..." so the compiler treats them as type-only and
no runtime require is emitted.
In `@src/schedule/types.ts`:
- Around line 1-2: Change the two current imports to be type-only: replace the
runtime imports with "import type { MeasurementOptions } from
'../measurement/types.js'" and "import type { Location } from
'../lib/location/types.js'" so MeasurementOptions and Location are only imported
as types (removing any runtime import) to avoid module load failures under
strict TS settings.
In `@test/tests/integration/schedule/stream-schedule.test.ts`:
- Around line 86-91: The forEach callback in insertSchedule is implicitly
returning the result of configurationIds.add (triggering
lint/suspicious/useIterableCallbackReturn); update the callback to use a
statement block so it doesn't return a value (e.g., change the arrow to use {
configurationIds.add(config.id); } ), or replace the forEach with an explicit
for...of loop; ensure you modify the insertSchedule function and its use of
schedule.configurations.forEach so the callback has no implicit return.
In `@test/utils/clock.ts`:
- Around line 31-35: The helper tickAsyncStepped has no guard against a
non-positive step which can cause an infinite loop; update the function
(tickAsyncStepped) to validate that step > 0 at the start (throw an error or
clamp to a minimum of 1) and when decrementing time subtract the actual amount
passed to clock.tickAsync (e.g., const delta = Math.min(step, time); await
clock.tickAsync(delta); time -= delta;) so time always decreases and the loop
can terminate.
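The first two prompts above both ask for a real `down()` migration. As an illustration of the requested pattern, here is a minimal, hedged sketch of a reversible `down()` that drops the schedule tables named in the walkthrough, children before parents to avoid FK violations. The `Knex`/`SchemaApi` types and the recording stub below are stand-ins so the sketch runs without a database; only `dropTableIfExists` mirrors the real knex schema API.

```typescript
// Minimal stand-in types so the sketch is self-contained (not the real knex types).
type SchemaApi = { dropTableIfExists: (name: string) => Promise<void> };
type Knex = { schema: SchemaApi };

const down = async (knex: Knex): Promise<void> => {
	// gp_schedule_configuration references gp_schedule, so drop it first.
	await knex.schema.dropTableIfExists('gp_schedule_configuration');
	await knex.schema.dropTableIfExists('gp_schedule');
};

// Recording stub that captures the drop order instead of touching a database.
const dropped: string[] = [];
const fakeKnex: Knex = {
	schema: { dropTableIfExists: async (name) => { dropped.push(name); } },
};

await down(fakeKnex);
console.log(dropped); // child table first, then the parent
```

The same shape applies to the hypertable migration: enumerate everything `up()` created and drop it in reverse dependency order.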
🧹 Nitpick comments (1)
migrations/measurement-store-1/20260130135153_add-schedules.js (1)
9-12: Consider adding indexes on `scheduleId` if queries will filter by it.

The new `scheduleId` and `configurationId` columns are added without indexes. If scheduled measurements will be queried or aggregated by these fields (e.g., for reporting or cleanup), missing indexes could degrade performance on these potentially large tables.

Also applies to: 31-38
```ts
const tsDnsRecords: TimeSeriesDnsRecord[] = [];
const tsHttpRecords: TimeSeriesHttpRecord[] = [];

for (const [ index, measurement ] of measurements.entries()) {
	const meta = storedMeta[index];

	if (!meta?.timeSeriesEnabled || !measurement.configurationId) {
		continue;
	}

	for (const [ index, result ] of measurement.results.entries()) {
		if (measurement.type === 'dns') {
			tsDnsRecords.push({
				measurementId: measurement.id,
				testId: index.toString(),
				configurationId: measurement.configurationId,
				probe: result.probe,
				result: result.result as DnsResult,
			});
		} else if (measurement.type === 'http') {
			tsHttpRecords.push({
				measurementId: measurement.id,
				testId: index.toString(),
				configurationId: measurement.configurationId,
				probe: result.probe,
				result: result.result as HttpResult,
			});
		}
	}
}

await Promise.all([
	writeDnsRecords(tsDnsRecords),
	writeHttpRecords(tsHttpRecords),
]);

this.primaryMeasurementStore.setOffloadedExpiration(measurements.map(m => m.id)).catch(() => {});
```
Time-series write failure blocks Redis expiration.
If writeDnsRecords/writeHttpRecords throws, insertBatchToDb aborts before setOffloadedExpiration, leaving Redis entries at full TTL and triggering repeated fallback retries even though the primary insert already succeeded. Make time‑series writes best‑effort (or retry separately) so expiration still runs.
🛠️ Suggested fix (best-effort time-series writes)

```diff
-await Promise.all([
-	writeDnsRecords(tsDnsRecords),
-	writeHttpRecords(tsHttpRecords),
-]);
+try {
+	await Promise.all([
+		writeDnsRecords(tsDnsRecords),
+		writeHttpRecords(tsHttpRecords),
+	]);
+} catch (error) {
+	logger.error('Failed to write time-series records; continuing offload.', error);
+}
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
const tsDnsRecords: TimeSeriesDnsRecord[] = [];
const tsHttpRecords: TimeSeriesHttpRecord[] = [];

for (const [ index, measurement ] of measurements.entries()) {
	const meta = storedMeta[index];

	if (!meta?.timeSeriesEnabled || !measurement.configurationId) {
		continue;
	}

	for (const [ index, result ] of measurement.results.entries()) {
		if (measurement.type === 'dns') {
			tsDnsRecords.push({
				measurementId: measurement.id,
				testId: index.toString(),
				configurationId: measurement.configurationId,
				probe: result.probe,
				result: result.result as DnsResult,
			});
		} else if (measurement.type === 'http') {
			tsHttpRecords.push({
				measurementId: measurement.id,
				testId: index.toString(),
				configurationId: measurement.configurationId,
				probe: result.probe,
				result: result.result as HttpResult,
			});
		}
	}
}

try {
	await Promise.all([
		writeDnsRecords(tsDnsRecords),
		writeHttpRecords(tsHttpRecords),
	]);
} catch (error) {
	logger.error('Failed to write time-series records; continuing offload.', error);
}

this.primaryMeasurementStore.setOffloadedExpiration(measurements.map(m => m.id)).catch(() => {});
```
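As an alternative to the try/catch above, `Promise.allSettled` makes the best-effort intent explicit: both writes always run to completion and failures are only logged. A standalone sketch, where `writeDns`/`writeHttp` are hypothetical stand-ins for the real `writeDnsRecords`/`writeHttpRecords` (one fails on purpose to show the behavior):

```typescript
// Stand-ins for the real write functions; the DNS one fails deliberately.
const writeDns = async () => { throw new Error('hypertable unavailable'); };
const writeHttp = async () => 'ok';

const results = await Promise.allSettled([ writeDns(), writeHttp() ]);

// Collect and log failures without rethrowing, so the code after this
// point (the expiration step in insertBatchToDb) is always reached.
const failures = results.filter((r): r is PromiseRejectedResult => r.status === 'rejected');

for (const failure of failures) {
	console.error('Time-series write failed; continuing offload.', failure.reason);
}

// In store-offloader.ts, setOffloadedExpiration would be called here
// unconditionally; this flag just demonstrates that control flow arrives.
const expirationStepReached = true;
```

Either variant works; `allSettled` additionally reports per-write outcomes, which is useful if only one of the two tables is failing.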
```ts
const stableSecond = (scheduleId: string, intervalSeconds: number) => {
	const hash = crypto.createHash('sha1').update(scheduleId).digest();
	return hash.readUInt32BE(0) % intervalSeconds;
};
```
Validate schedule intervals to avoid NaN/0ms timers.
A non‑positive interval makes the modulo invalid and can create a 0ms setInterval, which can spin hot. Add a guard and skip/log invalid schedules.
🛡️ Suggested guard

```diff
 private createTimer (scheduleId: string, intervalSeconds: number) {
+	if (intervalSeconds <= 0) {
+		logger.warn(`Skipping schedule ${scheduleId}: invalid interval ${intervalSeconds}s.`);
+		return;
+	}
 	const sec = stableSecond(scheduleId, intervalSeconds);
 	const intervalMs = intervalSeconds * 1000;
```

Also applies to: 75-88
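Put together, a guarded `stableSecond` might look like the sketch below. Returning `null` as the sentinel is a choice made here for illustration, not necessarily what the PR does; the point is that the hash-based offset is deterministic, so the same schedule always fires on the same second within its interval, and invalid intervals are rejected before any timer exists.

```typescript
import crypto from 'node:crypto';

// Guarded variant: invalid intervals yield null so the caller can skip the
// schedule instead of creating a 0 ms timer.
const stableSecond = (scheduleId: string, intervalSeconds: number): number | null => {
	if (!Number.isFinite(intervalSeconds) || intervalSeconds <= 0) {
		return null;
	}

	const hash = crypto.createHash('sha1').update(scheduleId).digest();
	return hash.readUInt32BE(0) % intervalSeconds;
};

const first = stableSecond('schedule-1', 60);
const second = stableSecond('schedule-1', 60); // same input, same offset
const invalid = stableSecond('schedule-1', 0); // rejected by the guard
```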
```ts
const chunkSize = config.get<number>('measurement.limits.authenticatedTestsPerMeasurement');
const probesChunks = _.chunk(localProbes, chunkSize);

for (const configuration of schedule.configurations) {
	if (!configuration.enabled) {
		continue;
	}

	const requestBase = {
		type: configuration.measurement_type,
		target: configuration.measurement_target,
		measurementOptions: configuration.measurement_options,
		locations: schedule.locations,
		limit: schedule.probe_limit ?? undefined,
		scheduleId: schedule.id,
		configurationId: configuration.id,
		inProgressUpdates: false,
	};

	for (const probesChunk of probesChunks) {
		const probesMap = new Map<number, ServerProbe>(probesChunk.map((s, idx) => [ idx, s ]));
		const measurementId = await this.store.createMeasurement(requestBase, probesMap, probesChunk, 'special', {
```
🧩 Analysis chain (repository: jsdelivr/globalping)

🏁 Script executed:

```bash
# Inspect createMeasurement signature and limit enforcement
rg -n "createMeasurement\(" src/measurement/store.ts -C 3
rg -n "\blimit\b" src/measurement/store.ts -C 3
```

🏁 Script executed:

```bash
# Get the full createMeasurement function signature and implementation
rg -n "createMeasurement\s*\(" src/measurement/store.ts -A 50 | head -100
```
Enforce probe_limit before chunking probes.

The limit is stored in measurement metadata but never used to cap the actual probes. Since createMeasurement accepts all probes passed to it, schedules will exceed their limit. Slice localProbes by schedule.probe_limit before chunking:

Suggested fix

```diff
-const probesChunks = _.chunk(localProbes, chunkSize);
+const limitedProbes = schedule.probe_limit
+	? localProbes.slice(0, schedule.probe_limit)
+	: localProbes;
+const probesChunks = _.chunk(limitedProbes, chunkSize);
```
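The ordering matters: cap first, then chunk. A self-contained sketch of the proposed flow follows; the inline `chunk()` is a stand-in for lodash's `_.chunk`, and the sample probe names, limit, and chunk size are made up for illustration.

```typescript
// Minimal stand-in for lodash's _.chunk.
const chunk = <T>(arr: T[], size: number): T[][] => {
	const out: T[][] = [];

	for (let i = 0; i < arr.length; i += size) {
		out.push(arr.slice(i, i + size));
	}

	return out;
};

const localProbes = [ 'p1', 'p2', 'p3', 'p4', 'p5' ];
const probeLimit: number | null = 3; // stands in for schedule.probe_limit
const chunkSize = 2; // stands in for authenticatedTestsPerMeasurement

// Cap the probe list first, then split it into per-measurement chunks.
const limitedProbes = probeLimit ? localProbes.slice(0, probeLimit) : localProbes;
const probesChunks = chunk(limitedProbes, chunkSize);
// probesChunks → [["p1","p2"],["p3"]]: only 3 probes dispatched in total
```

Chunking before slicing would instead dispatch all five probes across three measurements, which is exactly the bug the review describes.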
```ts
import type { ConfigurationRow, Schedule, ScheduleRow } from './types.js';
import { MeasurementOptions } from '../measurement/types.js';
import { Location } from '../lib/location/types.js';
```
🧩 Analysis chain

🏁 Script executed:

```bash
# Check TypeScript configuration for type-only import handling
fd -g 'tsconfig*.json' | head -5
```

🏁 Script executed:

```bash
# Examine the file to see how MeasurementOptions and Location are used
cat -n src/schedule/loader.ts | head -30
```

🏁 Script executed:

```bash
# Check the exports in the source type files
rg -n "export.*MeasurementOptions" src/measurement/types.ts
rg -n "export.*Location" src/lib/location/types.ts
```

🏁 Script executed:

```bash
# Search for all uses of MeasurementOptions and Location in loader.ts
rg -n "MeasurementOptions|Location" src/schedule/loader.ts
```

🏁 Script executed:

```bash
# Check the full tsconfig.json content
cat tsconfig.json
```
Use import type for MeasurementOptions and Location to avoid runtime import errors.
Both symbols are type-only exports and used only in type positions (type assertions). With @tsconfig/strictest (which includes verbatimModuleSyntax: true), regular imports of type-only exports will remain in the emitted JavaScript and fail at module load.
🔧 Proposed change

```diff
-import { MeasurementOptions } from '../measurement/types.js';
-import { Location } from '../lib/location/types.js';
+import type { MeasurementOptions } from '../measurement/types.js';
+import type { Location } from '../lib/location/types.js';
```

📝 Committable suggestion
```ts
import type { ConfigurationRow, Schedule, ScheduleRow } from './types.js';
import type { MeasurementOptions } from '../measurement/types.js';
import type { Location } from '../lib/location/types.js';
```
```ts
import { MeasurementOptions } from '../measurement/types.js';
import { Location } from '../lib/location/types.js';
```
🧩 Analysis chain

🏁 Script executed:

```bash
# Check TS compiler settings that preserve type-only imports
echo "=== Checking tsconfig.json for import-related settings ==="
fd -g 'tsconfig*.json' -x cat {} \;
echo ""
echo "=== Checking MeasurementOptions export in src/measurement/types.ts ==="
cat -n src/measurement/types.ts | head -50
echo ""
echo "=== Checking Location export in src/lib/location/types.ts ==="
cat -n src/lib/location/types.ts | head -50
echo ""
echo "=== Checking usage in src/schedule/types.ts ==="
cat -n src/schedule/types.ts
```

🏁 Script executed:

```bash
# Search for MeasurementOptions in measurement/types.ts
echo "=== Searching for MeasurementOptions export ==="
rg -n "MeasurementOptions" src/measurement/types.ts
# Also check for any type exports that might be an alias
echo ""
echo "=== Full content of measurement/types.ts (checking all lines) ==="
wc -l src/measurement/types.ts
```
Use import type for MeasurementOptions and Location to prevent runtime import errors.
Both are exported as type-only exports and only used in type positions. With the strictest TypeScript configuration, these must use import type syntax to avoid module load failures.
🔧 Proposed change

```diff
-import { MeasurementOptions } from '../measurement/types.js';
-import { Location } from '../lib/location/types.js';
+import type { MeasurementOptions } from '../measurement/types.js';
+import type { Location } from '../lib/location/types.js';
```

📝 Committable suggestion
```ts
import type { MeasurementOptions } from '../measurement/types.js';
import type { Location } from '../lib/location/types.js';
```
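For reference, the compiler behavior at play can be pinned explicitly in the project configuration. A minimal fragment, assuming the project extends @tsconfig/strictest as the review above notes; with `verbatimModuleSyntax` enabled, a plain `import { Location }` is emitted verbatim into the output JavaScript even when `Location` only names a type, while `import type { Location }` is erased entirely:

```json
{
	"extends": "@tsconfig/strictest/tsconfig.json",
	"compilerOptions": {
		"verbatimModuleSyntax": true
	}
}
```

This is why the fix must happen at the import site rather than in the config: the setting makes the type/value distinction explicit in the source.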
```ts
const insertSchedule = async (schedule: ScheduleEntry) => {
	console.log(`Inserting schedule ${schedule.id}`);

	scheduleIds.add(schedule.id);
	schedule.configurations.forEach(config => configurationIds.add(config.id));
```
Avoid returning a value from the forEach callback (Biome error).
Line 90 implicitly returns the result of Set.add, which triggers lint/suspicious/useIterableCallbackReturn. Wrap the callback in a block to avoid returning a value.
🛠️ Suggested fix

```diff
-schedule.configurations.forEach(config => configurationIds.add(config.id));
+schedule.configurations.forEach((config) => {
+	configurationIds.add(config.id);
+});
```

🧰 Tools
🪛 Biome (2.3.14)
[error] 90-90: This callback passed to forEach() iterable method should not return a value.
Either remove this return or remove the returned value.
(lint/suspicious/useIterableCallbackReturn)
```ts
const tickAsyncStepped = async (time: number, step = 20) => {
	while (time > 0) {
		await clock.tickAsync(Math.min(step, time));
		time -= step;
	}
```
Guard against non-positive step to avoid infinite loops.
If step is 0 or negative, time never decreases and the loop never exits.
🛠️ Suggested fix

```diff
 const tickAsyncStepped = async (time: number, step = 20) => {
+	if (step <= 0) {
+		throw new RangeError('step must be > 0');
+	}
 	while (time > 0) {
-		await clock.tickAsync(Math.min(step, time));
-		time -= step;
+		const delta = Math.min(step, time);
+		await clock.tickAsync(delta);
+		time -= delta;
 	}
 };
```

📝 Committable suggestion
```ts
const tickAsyncStepped = async (time: number, step = 20) => {
	if (step <= 0) {
		throw new RangeError('step must be > 0');
	}

	while (time > 0) {
		const delta = Math.min(step, time);
		await clock.tickAsync(delta);
		time -= delta;
	}
};
```
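The fixed loop can be exercised without sinon at all. In this sketch, a recording `tickAsync` stands in for sinon's `clock.tickAsync`, which makes the step sequence directly observable:

```typescript
// Recording stand-in for sinon's clock.tickAsync.
const ticks: number[] = [];
const tickAsync = async (ms: number): Promise<void> => { ticks.push(ms); };

const tickAsyncStepped = async (time: number, step = 20): Promise<void> => {
	if (step <= 0) {
		throw new RangeError('step must be > 0');
	}

	while (time > 0) {
		const delta = Math.min(step, time); // never tick past the remaining time
		await tickAsync(delta);
		time -= delta;
	}
};

await tickAsyncStepped(50);
console.log(ticks); // three ticks: 20, 20, 10
```

Note that subtracting `delta` rather than `step` also fixes a subtle accounting bug in the original: with `time -= step`, the final partial tick over-decrements, so the loop happened to terminate only because `step` was positive.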
Part of #291, but the system is intentionally generic to support custom user-scheduled measurements as well in the future.
Key concepts
- **Schedule** defines a single logical target, how often it's tested, and which probes it runs on.
- **Configuration** defines a measurement template. E.g., if we wanted to test CDN providers with three different file sizes, that would be three configurations within a single schedule.

Scheduling is handled by the API in the end, as it is much more efficient (in terms of storage, API requests for multiple results, etc.) to group multiple probe results into a single measurement. With probe-level scheduling, that would be rather difficult. Two scheduling modes: