Releases: streamingfast/firehose-ethereum
v1.4.10
Fixes
- Fixed: jobs would hang when flags `--substreams-state-bundle-size` and `--substreams-tier1-subrequests-size` had different values. The latter flag has been completely removed; subrequests will be bound to the state bundle size.
Added
- Added support for continuous authentication via the grpc auth plugin (allowing cutoff triggered by the auth system).
v1.4.9
Highlights
Substreams State Store Selection
The substreams server now accepts the `X-Sf-Substreams-Cache-Tag` header to select which Substreams state store URL should be used by the request. When performing a Substreams request, the servers will pick the state store based on the header. This enables consumers to stay on the same cache version when the operator needs to bump the data version (reasons for this could be a bug in the Substreams software that caused some cached data to be corrupted or invalid).
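The selection behaviour described above can be pictured with a small sketch. This is illustrative only, not the actual firehose-ethereum implementation; the function and variable names are hypothetical:

```python
# Illustrative sketch of header-based state store selection.
# Names and structure are hypothetical, not the actual implementation.

def select_state_store(base_url: str, default_tag: str, headers: dict) -> str:
    """Pick the Substreams state store URL for a request.

    The tag from the `X-Sf-Substreams-Cache-Tag` header (lowercased on the
    wire) wins when present; otherwise the operator-configured default tag
    (--substreams-state-store-default-tag) is used.
    """
    tag = headers.get("x-sf-substreams-cache-tag", default_tag)
    return f"{base_url.rstrip('/')}/{tag}"

# A request without the header stays on the operator's default cache version:
print(select_state_store("/data/substreams-states", "v3", {}))
# → /data/substreams-states/v3

# A consumer can pin an older cache version while the operator bumps the default:
print(select_state_store("/data/substreams-states", "v4",
                         {"x-sf-substreams-cache-tag": "v3"}))
# → /data/substreams-states/v3
```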
To benefit from this, operators that have a version currently in their state store URL should move the version part from `--substreams-state-store-url` to the new flag `--substreams-state-store-default-tag`. For example, if today you have this in your config:

```yaml
start:
  ...
  flags:
    substreams-state-store-url: /<some>/<path>/v3
```

You should convert it to:

```yaml
start:
  ...
  flags:
    substreams-state-store-url: /<some>/<path>
    substreams-state-store-default-tag: v3
```

Substreams Scheduler Improvements for Parallel Processing
The substreams scheduler has been improved to reduce the number of required jobs for parallel processing. This affects backprocessing (preparing the states of modules up to a "start-block") and forward processing (preparing the states and the outputs to speed up streaming in production-mode).
Jobs on tier2 workers are now divided into "stages", each stage generating the partial states for all the modules that have the same dependencies. A Substreams package with a single store won't be affected, but one with 3 top-level stores, which used to run 3 jobs for every segment, now runs only a single job per segment to get all the states ready.
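The effect on job counts can be sketched as follows. This simplification assumes independent top-level stores collapsing into a single stage; the real scheduler groups modules by their full dependency graph, and all names here are hypothetical:

```python
# Illustrative sketch of the scheduling change; names are hypothetical and
# the real scheduler operates on a full module dependency graph.

def jobs_old(num_stores: int, num_segments: int) -> int:
    # Old behaviour: one tier2 job per (store module, segment) pair.
    return num_stores * num_segments

def jobs_staged(num_stores: int, num_segments: int) -> int:
    # New behaviour: modules sharing the same dependencies are grouped into
    # one stage, so each segment needs a single job covering all of them.
    # With independent top-level stores, that is one stage in total.
    num_stages = 1
    return num_stages * num_segments

# 3 top-level stores over 10 segments: 30 jobs before, 10 jobs after.
print(jobs_old(3, 10), jobs_staged(3, 10))
# → 30 10
```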
Operators Upgrade
The apps `substreams-tier1` and `substreams-tier2` should be upgraded concurrently. Some calls will fail while versions are misaligned.
Backend Changes
- Substreams bumped to version v1.1.9
- Authentication plugin `trust` can now specify an exclusive list of `allowed` headers (all lowercase), ex: `trust://?allowed=x-sf-user-id,x-sf-api-key-id,x-real-ip,x-sf-substreams-cache-tag`
- The `tier2` app no longer uses the `common-auth-plugin`; `trust` will always be used, so that `tier1` can pass down its headers (ex: `X-Sf-Substreams-Cache-Tag`).
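As a config illustration, the allowed-headers list could appear in an operator config like this. The plugin URL value is taken from the note above; the surrounding `start`/`flags` layout follows the state-store example earlier in these notes and is otherwise an assumption:

```yaml
start:
  ...
  flags:
    common-auth-plugin: "trust://?allowed=x-sf-user-id,x-sf-api-key-id,x-real-ip,x-sf-substreams-cache-tag"
```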
v1.4.8
Fixed
- Fixed a bug in `substreams-tier1` and `substreams-tier2` which caused "live" blocks to be sent while the blocks previously received by the stream were historical.
Added
- Added a check for readiness of the `dauth` provider when answering `/healthz` on firehose and substreams
Changed
- Changed `--substreams-tier1-debug-request-stats` to `--substreams-tier1-request-stats`, which enables request stats logging on Substreams tier1
- Changed `--substreams-tier2-debug-request-stats` to `--substreams-tier2-request-stats`, which enables request stats logging on Substreams tier2
v1.4.7
- Fixed an occasional panic in substreams-tier1 caused by a race condition
- Fixed the grpc error codes for substreams tier1: Unauthenticated on bad auth, Canceled (endpoint is shutting down, please reconnect) on shutdown
- Fixed the grpc healthcheck method on substreams-tier1 (regression)
- Fixed the default value for flag `common-auth-plugin`: now set to `trusted://` instead of panicking on removed `null://`
v1.4.6
Changed
- Substreams (@v1.1.6) is now out of the `firehose` app, and must be started using the `substreams-tier1` and `substreams-tier2` apps!
- Most substreams-related flags have been changed:
  - common: `--substreams-rpc-cache-chunk-size`, `--substreams-rpc-cache-store-url`, `--substreams-rpc-endpoints`, `--substreams-state-bundle-size`, `--substreams-state-store-url`
  - tier1: `--substreams-tier1-debug-request-stats`, `--substreams-tier1-discovery-service-url`, `--substreams-tier1-grpc-listen-addr`, `--substreams-tier1-max-subrequests`, `--substreams-tier1-subrequests-endpoint`, `--substreams-tier1-subrequests-insecure`, `--substreams-tier1-subrequests-plaintext`, `--substreams-tier1-subrequests-size`
  - tier2: `--substreams-tier2-discovery-service-url`, `--substreams-tier2-grpc-listen-addr`
- Some auth plugins have been removed; the new available plugins for `--common-auth-plugins` are `trust://` and `grpc://`. See https://github.com/streamingfast/dauth for details
- Metering features have been added; the available plugins for `--common-metering-plugin` are `null://`, `logger://`, `grpc://`. See https://github.com/streamingfast/dmetering for details
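For operators, the new two-app layout might look like the following config sketch. The app and flag names come from the list above, but the listen addresses, endpoint values, and overall layout are placeholders and assumptions, not a recommended production configuration:

```yaml
start:
  args:
    - substreams-tier1
    - substreams-tier2
  flags:
    # common flags shared by both tiers
    substreams-state-store-url: /<some>/<path>
    substreams-state-bundle-size: 1000
    # tier1: client-facing gRPC endpoint and scheduling limits
    substreams-tier1-grpc-listen-addr: ":9000"
    substreams-tier1-max-subrequests: 4
    substreams-tier1-subrequests-endpoint: "localhost:9001"
    substreams-tier1-subrequests-plaintext: true
    # tier2: internal worker endpoint
    substreams-tier2-grpc-listen-addr: ":9001"
```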
Added
- Support for firehose protocol 2.3 (for parallel processing of transactions, added to polygon 'bor' v0.4.0)
Removed
- Removed the `tools upgrade-merged-blocks` command. Normalization is now part of the console reader within 'codec', not the 'types' package, and cannot be done a posteriori.
v1.4.5
- Updated metering (bumped versions of `dmetering`, `dauth`, and `firehose` libraries)
- Fixed firehose service healthcheck on shutdown
- Fixed panic on download-blocks-from-firehose tool
v1.4.4
Operators
- When upgrading a substreams server to this version, you should delete all existing module caches to benefit from deterministic output
Substreams changes
- Switch default engine from `wasmtime` to `wazero`
- Prevent reusing memory between blocks in wasm engine to fix determinism
- Switch our store operations from bigdecimal to fixed point decimal to fix determinism
- Sort the store deltas from `DeletePrefixes()` to fix determinism
- Implement staged module execution within a single block
- "Fail fast" on repeating requests with deterministic failures for a "blacklist period", preventing waste of resources
- `SessionInit` protobuf message now includes `resolvedStartBlock` and `MaxWorkers`, sent back to the client
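The "fail fast" behaviour above can be sketched as a small cache of recent deterministic failures. The class, its names, and the fingerprint format are hypothetical; the real implementation lives inside the Substreams engine:

```python
import time

# Illustrative sketch of "fail fast" on deterministic failures.
# Names and blacklist policy details are hypothetical.

class DeterministicFailureCache:
    def __init__(self, blacklist_period_s: float):
        self.blacklist_period_s = blacklist_period_s
        self._failures = {}  # request fingerprint -> (error message, expiry time)

    def record(self, fingerprint, error, now=None):
        """Remember a deterministic failure for the blacklist period."""
        now = time.monotonic() if now is None else now
        self._failures[fingerprint] = (error, now + self.blacklist_period_s)

    def check(self, fingerprint, now=None):
        """Return the cached error if this request is still blacklisted, else None."""
        now = time.monotonic() if now is None else now
        entry = self._failures.get(fingerprint)
        if entry is None:
            return None
        error, expiry = entry
        if now >= expiry:
            del self._failures[fingerprint]  # blacklist period elapsed
            return None
        return error

cache = DeterministicFailureCache(blacklist_period_s=600)
cache.record("module-hash:block-range", "wasm panic at block 42", now=0)
print(cache.check("module-hash:block-range", now=10))   # fails fast, no re-execution
print(cache.check("module-hash:block-range", now=700))  # period elapsed, retried
```

A repeated request with the same fingerprint is answered from the cache instead of re-executing the failing modules, which is what prevents the wasted resources mentioned above.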
v1.4.3
Highlights
- This release brings an update to `substreams` to `v1.1.4`, which includes the following:
  - Changes the module hash computation implementation to allow reusing caches across substreams that 'import' other substreams as a dependency
  - Faster shutdown of requests that fail deterministically
  - Fixed memory leak in RPC calls
Note for Operators
Note: This upgrade procedure applies to you if your Substreams deployment topology includes both `tier1` and `tier2` processes. If you have defined the config value `substreams-tier2: true` somewhere, then this applies to you; otherwise, you can ignore the upgrade procedure.
- The components should be deployed simultaneously to `tier1` and `tier2`, or users will end up with backend error(s) saying that some partial files are not found. These errors will be resolved when both tiers are upgraded.
Added
- Added Substreams scheduler tracing support. Enable tracing by setting the ENV variable `SF_TRACING` to one of the following:
  - `stdout://`
  - `cloudtrace://[host:port]?project_id=<project_id>&ratio=<0.25>`
  - `jaeger://[host:port]?scheme=<http|https>`
  - `zipkin://[host:port]?scheme=<http|https>`
  - `otelcol://[host:port]`
v1.4.2
Highlights
- This release brings an update to `substreams` to `v1.1.3`, which includes the following:
  - Fixes an important bug that could have generated corrupted store state files. This is important for developers and operators.
  - Fixes for race conditions that would return a failure when multiple identical requests are backprocessing.
  - Fixes and speed/scaling improvements around the engine.
Note for Operators
Note: This upgrade procedure applies if your Substreams deployment topology includes both `tier1` and `tier2` processes. If you have defined the config value `substreams-tier2: true` somewhere, then this applies to you; otherwise, you can ignore the upgrade procedure.
This release includes a small change in the internal RPC layer between tier1 processes and tier2 processes. This change requires an ordered upgrade of the processes to avoid errors.
The components should be deployed in this order:
- Deploy and roll out `tier1` processes first
- Deploy and roll out `tier2` processes second
If you upgrade in the wrong order, or if somehow tier2 processes start using the new protocol without tier1 being aware, users will end up with backend error(s) saying that some partial files are not found. Those will be resolved only when tier1 processes have been upgraded successfully.
v1.4.1
Fixed
- Substreams running without a specific tier2 `substreams-client-endpoint` will now expose the tier2 service `sf.substreams.internal.v2.Substreams` so it can be used internally.
Warning
If you don't use dedicated tier2 nodes, make sure that you don't expose `sf.substreams.internal.v2.Substreams` to the public (from your load-balancer or using a firewall).
Breaking changes
- flag `substreams-partial-mode-enabled` renamed to `substreams-tier2`
- flag `substreams-client-endpoint` now defaults to an empty string, which means it is its own client-endpoint (as it was before the change to protocol V2)