Releases: streamingfast/firehose-ethereum

v1.4.10

27 Jul 17:51

Fixes

  • Fixed: jobs would hang when the flags --substreams-state-bundle-size and --substreams-tier1-subrequests-size had different values. The latter flag has been completely removed; subrequests are now bound to the state bundle size.
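Operator configurations that still set the removed flag should drop it. A minimal sketch of the resulting config (the bundle-size value is illustrative, not a recommendation):

```yaml
start:
  flags:
    # --substreams-tier1-subrequests-size was removed; subrequests
    # now follow the state bundle size below (value illustrative)
    substreams-state-bundle-size: 1000
```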

Added

  • Added support for continuous authentication via the grpc auth plugin (allowing cutoff triggered by the auth system).

v1.4.9

24 Jul 18:14

Highlights

Substreams State Store Selection

The Substreams server now accepts an X-Sf-Substreams-Cache-Tag header to select which Substreams state store URL should be used by the request. When performing a Substreams request, the servers pick the state store based on this header. This enables consumers to stay on the same cache version when the operator needs to bump the data version (a reason for this could be a bug in the Substreams software that caused some cached data to be corrupted or invalid).

To benefit from this, operators that currently have a version in their state store URL should move the version part from --substreams-state-store-url to the new flag --substreams-state-store-default-tag. For example, if today you have this in your config:

start:
  ...
  flags:
    substreams-state-store-url: /<some>/<path>/v3

You should convert to:

start:
  ...
  flags:
    substreams-state-store-url: /<some>/<path>
    substreams-state-store-default-tag: v3
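Consumers can then pin a request to a given cache version by sending the header. A hedged sketch using grpcurl (the endpoint and request fields are illustrative assumptions, not taken from this release):

```shell
# Illustrative only: pin this request to the "v3" state store tag.
# The endpoint and request body are hypothetical examples.
grpcurl -H "X-Sf-Substreams-Cache-Tag: v3" \
  -d '{"start_block_num": 12000000, "output_module": "map_events"}' \
  substreams.example.com:443 sf.substreams.rpc.v2.Stream/Blocks
```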

Substreams Scheduler Improvements for Parallel Processing

The substreams scheduler has been improved to reduce the number of required jobs for parallel processing. This affects backprocessing (preparing the states of modules up to a "start-block") and forward processing (preparing the states and the outputs to speed up streaming in production-mode).

Jobs on tier2 workers are now divided into "stages", each stage generating the partial states for all the modules that share the same dependencies. A Substreams with a single store is not affected, but one with 3 top-level stores, which used to run 3 jobs for every segment, now runs a single job per segment to get all the states ready.

Operators Upgrade

The substreams-tier1 and substreams-tier2 apps should be upgraded together. Some calls will fail while the versions are misaligned.

Backend Changes

  • Substreams bumped to version v1.1.9
  • Authentication plugin trust can now specify an exclusive list of allowed headers (all lowercase), ex: trust://?allowed=x-sf-user-id,x-sf-api-key-id,x-real-ip,x-sf-substreams-cache-tag
  • The tier2 app no longer uses the common-auth-plugin, trust will always be used, so that tier1 can pass down its headers (ex: X-Sf-Substreams-Cache-Tag).
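As a sketch, a tier1 operator config reflecting the allowed-headers change might look like this (the header list mirrors the example in the bullet above; everything else is illustrative):

```yaml
start:
  flags:
    # Restrict the trust plugin to an explicit list of lowercase headers,
    # including the new cache-tag header so tier1 can pass it down to tier2
    common-auth-plugin: "trust://?allowed=x-sf-user-id,x-sf-api-key-id,x-real-ip,x-sf-substreams-cache-tag"
```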

v1.4.8

06 Jul 17:53

Fixed

  • Fixed a bug in substreams-tier1 and substreams-tier2 that caused "live" blocks to be sent while the blocks previously received on the stream were historical.

Added

  • Added a check for readiness of the dauth provider when answering "/healthz" on firehose and substreams

Changed

  • Changed --substreams-tier1-debug-request-stats to --substreams-tier1-request-stats, which enables request stats logging on Substreams Tier1
  • Changed --substreams-tier2-debug-request-stats to --substreams-tier2-request-stats, which enables request stats logging on Substreams Tier2
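Configurations using the old flag names need the rename applied; a minimal sketch:

```yaml
start:
  flags:
    # formerly substreams-tier1-debug-request-stats / substreams-tier2-debug-request-stats
    substreams-tier1-request-stats: true
    substreams-tier2-request-stats: true
```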

v1.4.7

23 Jun 20:02


  • Fixed an occasional panic in substreams-tier1 caused by a race condition
  • Fixed the grpc error codes for substreams tier1: Unauthenticated on bad auth, Canceled (endpoint is shutting down, please reconnect) on shutdown
  • Fixed the grpc healthcheck method on substreams-tier1 (regression)
  • Fixed the default value for flag common-auth-plugin: now set to 'trusted://' instead of panicking on removed 'null://'

v1.4.6

22 Jun 15:49

Changed

  • Substreams (@v1.1.6) is now out of the firehose app, and must be started using substreams-tier1 and substreams-tier2 apps!
  • Most substreams-related flags have been changed:
    • common: --substreams-rpc-cache-chunk-size, --substreams-rpc-cache-store-url, --substreams-rpc-endpoints, --substreams-state-bundle-size, --substreams-state-store-url
    • tier1: --substreams-tier1-debug-request-stats, --substreams-tier1-discovery-service-url, --substreams-tier1-grpc-listen-addr, --substreams-tier1-max-subrequests, --substreams-tier1-subrequests-endpoint, --substreams-tier1-subrequests-insecure, --substreams-tier1-subrequests-plaintext, --substreams-tier1-subrequests-size
    • tier2: --substreams-tier2-discovery-service-url, --substreams-tier2-grpc-listen-addr
  • Some auth plugins have been removed; the available plugins for --common-auth-plugin are now trust:// and grpc://. See https://github.com/streamingfast/dauth for details
  • Metering features have been added; the available plugins for --common-metering-plugin are null://, logger:// and grpc://. See https://github.com/streamingfast/dmetering for details
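A hedged sketch of a config running the split apps with the new plugin flags (the paths, endpoints and listen addresses are illustrative assumptions):

```yaml
start:
  args:
    - substreams-tier1
    - substreams-tier2
  flags:
    # plugin URLs below are examples only
    common-auth-plugin: "grpc://dauth.example.com:9000"
    common-metering-plugin: "logger://"
    substreams-state-store-url: /data/substreams-states
    substreams-tier1-grpc-listen-addr: ":9001"
    substreams-tier2-grpc-listen-addr: ":9002"
```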

Added

  • Support for Firehose protocol 2.3 (for parallel processing of transactions; added to Polygon 'bor' v0.4.0)

Removed

  • Removed the tools upgrade-merged-blocks command. Normalization is now part of consolereader within 'codec', not the 'types' package, and cannot be done a posteriori.

v1.4.5

22 Jun 15:55

  • Updated metering (bumped versions of dmetering, dauth, and firehose libraries.)
  • Fixed firehose service healthcheck on shutdown
  • Fixed panic on download-blocks-from-firehose tool

v1.4.4

02 Jun 14:44

Operators

  • When upgrading a substreams server to this version, you should delete all existing module caches to benefit from deterministic output

Substreams changes

  • Switch default engine from wasmtime to wazero
  • Prevent reusing memory between blocks in wasm engine to fix determinism
  • Switch our store operations from bigdecimal to fixed point decimal to fix determinism
  • Sort the store deltas from DeletePrefixes() to fix determinism
  • Implement staged module execution within a single block.
  • "Fail fast" on repeating requests with deterministic failures for a "blacklist period", preventing waste of resources
  • SessionInit protobuf message now includes resolvedStartBlock and MaxWorkers, sent back to the client

v1.4.3

26 May 14:57

Highlights

  • This release brings an update to substreams to v1.1.4 which includes the following:
    • Changes the module hash computation implementation to allow reusing caches across Substreams that 'import' other Substreams as a dependency.
    • Faster shutdown of requests that fail deterministically
    • Fixed memory leak in RPC calls

Note for Operators

Note: This upgrade procedure applies to you if your Substreams deployment topology includes both tier1 and tier2 processes. If you have defined the config value substreams-tier2: true somewhere, then this applies to you; otherwise you can ignore the upgrade procedure.

  • The components should be deployed to tier1 and tier2 simultaneously, or users will end up with backend errors saying that some partial files are not found. These errors will be resolved once both tiers are upgraded.

Added

  • Added Substreams scheduler tracing support. Enable tracing by setting the ENV variables SF_TRACING to one of the following:
    • stdout://
    • cloudtrace://[host:port]?project_id=<project_id>&ratio=<0.25>
    • jaeger://[host:port]?scheme=<http|https>
    • zipkin://[host:port]?scheme=<http|https>
    • otelcol://[host:port]
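For example, to send scheduler traces to a local Jaeger agent before starting the app (host and port are illustrative):

```shell
# Illustrative only: enable Substreams scheduler tracing via Jaeger
export SF_TRACING="jaeger://localhost:6831?scheme=http"
```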

v1.4.2

18 May 16:14

Highlights

  • This release brings an update to substreams to v1.1.3 which includes the following:
    • Fixes an important bug that could have generated corrupted store state files. This is important for developers and operators.
    • Fixes for race conditions that would return a failure when multiple identical requests are backprocessing.
    • Fixes and speed/scaling improvements around the engine.

Note for Operators

Note: This upgrade procedure applies if your Substreams deployment topology includes both tier1 and tier2 processes. If you have defined the config value substreams-tier2: true somewhere, then this applies to you; otherwise you can ignore the upgrade procedure.

This release includes a small change in the internal RPC layer between tier1 processes and tier2 processes. This change requires an ordered upgrade of the processes to avoid errors.

The components should be deployed in this order:

  1. Deploy and roll out tier1 processes first
  2. Deploy and roll out tier2 processes second

If you upgrade in the wrong order, or if tier2 processes somehow start using the new protocol without tier1 being aware, users will end up with backend errors saying that some partial files are not found. Those errors will be resolved only when the tier1 processes have been upgraded successfully.

v1.4.1

09 May 19:43
368e493

Fixed

  • Substreams running without a specific tier2 substreams-client-endpoint will now expose tier2 service sf.substreams.internal.v2.Substreams so it can be used internally.

Warning
If you don't use dedicated tier2 nodes, make sure that you don't expose sf.substreams.internal.v2.Substreams to the public (block it at your load balancer or with a firewall).

Breaking changes

  • flag substreams-partial-mode-enabled renamed to substreams-tier2
  • flag substreams-client-endpoint now defaults to empty string, which means it is its own client-endpoint (as it was before the change to protocol V2)
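A sketch of the renamed flag in a tier2 node's config (other flags omitted):

```yaml
start:
  flags:
    # formerly: substreams-partial-mode-enabled: true
    substreams-tier2: true
```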