The TRUF.NETWORK SDK provides a comprehensive interface for stream management, offering powerful primitives for data streaming, composition, and on-chain interactions.
Initializes a TrufNetwork client with specified configuration.
- config: Object
  - privateKey: string - Ethereum private key (securely managed)
  - network: Object
    - endpoint: string - RPC endpoint URL
    - chainId: string - Network chain identifier
  - timeout?: number - Optional request timeout (default: 30000ms)
```typescript
import { createClient } from '@trufnetwork/sdk-js';

const client = createClient({
  privateKey: process.env.PRIVATE_KEY,
  network: {
    endpoint: 'http://localhost:8484',
    chainId: 'tn-v2.1' // Or left empty for local nodes
  },
  timeout: 45000 // Optional custom timeout
});
```

All network calls have a timeout. You can override it with the timeout option:
```typescript
const client = new NodeTNClient({
  // ...other options...
  timeout: 45000, // Example of setting timeout to 45 seconds
});
```

Generates a deterministic, unique stream identifier.
- name: string - Descriptive name for the stream

Promise<StreamId> - Unique stream identifier
```typescript
const marketIndexStreamId = await StreamId.generate('market_index');
```

Deploys a new stream to the TRUF.NETWORK.
- streamId: StreamId - Unique stream identifier
- type: StreamType - Stream type (Primitive or Composed)

Promise<DeploymentResult> with:
- txHash: string - Transaction hash
- streamLocator: StreamLocator - Stream location details
```typescript
const deploymentResult = await client.deployStream(
  marketIndexStreamId,
  StreamType.Composed
);
```

Permanently removes a stream from the network.
- streamLocator: Object
  - streamId: StreamId
  - dataProvider: EthereumAddress
```typescript
await client.destroyStream({
  streamId: marketIndexStreamId,
  dataProvider: wallet.address
});
```

Inserts a single record into a stream.
- options: Object
  - stream: StreamLocator - Target stream
  - eventTime: number - UNIX timestamp of the record in seconds
  - value: string - Record value
```typescript
const insertResult = await primitiveAction.insertRecord({
  stream: streamLocator,
  eventTime: Math.floor(Date.now() / 1000), // UNIX seconds, not milliseconds
  value: "100.50"
});
```

Batch inserts multiple records for efficiency.
- records: Array<InsertRecordOptions> - Array of record insertion options
```typescript
const batchResult = await primitiveAction.insertRecords([
  {
    stream: stockStream,
    eventTime: Math.floor(Date.now() / 1000),
    value: "150.25",
  },
  {
    stream: commodityStream,
    eventTime: Math.floor(Date.now() / 1000),
    value: "75.10",
  },
]);
```

Retrieves the raw numeric values recorded in a stream for each timestamp. For primitive streams this is a direct read of the stored events; for composed streams the engine performs an on-the-fly aggregation of all underlying child streams using the active taxonomy and weights at each point in time.
The call is the foundation on which getIndex and getIndexChange are built: use it whenever you need the exact original numbers without any normalisation.
Key behaviours
- Time window - from and to are inclusive UNIX epoch timestamps in seconds.
- LOCF gap-filling - If no event exists exactly at from, the service automatically carries forward the last known value so that downstream analytics have a continuous series.
- Time-travel (frozenAt) - Supply a block-height timestamp to query the database as it looked in the past (i.e. ignore records created after that height).
- Access control - Internally calls is_allowed_to_read_all, ensuring the caller has permission to view every sub-stream referenced by a composed stream.
- Performance - For large ranges, prefer batching or add tighter from/to filters.
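The LOCF (last observation carried forward) behaviour can be sketched client-side; this is an illustration with hypothetical data, not an SDK call (the service performs this server-side):

```typescript
// Hypothetical event series, sorted ascending by eventTime.
type Event = { eventTime: number; value: number };

// If no event exists exactly at `from`, carry the last known value
// forward and re-stamp it at `from` so the series starts there.
function locfAt(events: Event[], from: number): Event | undefined {
  let last: Event | undefined;
  for (const e of events) {
    if (e.eventTime > from) break;
    last = e;
  }
  return last ? { eventTime: from, value: last.value } : undefined;
}

const events = [
  { eventTime: 100, value: 10 },
  { eventTime: 200, value: 12 },
];

// No event exactly at t=150, so the value from t=100 is carried forward.
console.log(locfAt(events, 150)); // { eventTime: 150, value: 10 }
```

Note that before the first event there is nothing to carry forward, so the series simply starts at the first available record.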
- input: Object
  - stream: StreamLocator - Target stream (primitive or composed)
  - from?: number - Optional start timestamp (UNIX seconds). If omitted, returns the latest value.
  - to?: number - Optional end timestamp (UNIX seconds). Must be ≥ from.
  - frozenAt?: number - Optional created-at cut-off for historical queries.
  - baseTime?: number - Ignored by getRecord; present only for signature compatibility with other helpers.
```typescript
const nowInSeconds = Math.floor(Date.now() / 1000);
const { data: records } = await streamAction.getRecord(
  marketIndexLocator,
  { from: nowInSeconds - 86400, to: nowInSeconds }
);
```

Transforms raw stream values into an "index" series normalised to a base value of 100 at a reference time. This is useful for turning any price/metric into a percentage-based index so that unrelated streams can be compared on the same scale.
The underlying formula (applied server-side, see get_index action) is:
index_t = (value_t * 100) / baseValue
where baseValue is the stream value obtained at baseTime (or the closest available value before/after that time if no exact sample exists).
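As a quick sanity check of the formula, the normalisation can be reproduced client-side (illustrative helper, not part of the SDK):

```typescript
// index_t = (value_t * 100) / baseValue
function toIndex(value: number, baseValue: number): number {
  return (value * 100) / baseValue;
}

// A stream worth 120 at baseTime and 150 now indexes to 125,
// i.e. a 25% rise relative to the base point.
console.log(toIndex(150, 120)); // 125
console.log(toIndex(120, 120)); // 100 (the base point always indexes to 100)
```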
- input: Object
  - stream: StreamLocator - Target stream (primitive or composed)
  - from?: number - Optional start timestamp (UNIX seconds).
  - to?: number - Optional end timestamp (UNIX seconds).
  - frozenAt?: number - Optional timestamp for "time-travel" queries (only records created at or before frozenAt)
  - baseTime?: number - Reference timestamp (UNIX seconds) used for normalisation. If omitted, the SDK will try, in order:
    1. The default_base_time metadata on the stream
    2. The first available record in the stream
Promise<StreamRecord[]> - Array of { eventTime: number, value: string } representing indexed values.
```typescript
const nowInSeconds = Math.floor(Date.now() / 1000);
const { data: indexSeries } = await streamAction.getIndex(
  marketIndexLocator,
  {
    from: nowInSeconds - 30 * 24 * 60 * 60, // 30 days ago
    to: nowInSeconds,
    baseTime: nowInSeconds - 365 * 24 * 60 * 60, // One year ago
  }
);
```

Computes the percentage change of the index value over a fixed rolling window timeInterval.
For each returned eventTime the engine looks backwards by timeInterval seconds and picks the closest index value at or before that point. The change is then calculated as:
change_t = ((index_t - index_{t-Δ}) / index_{t-Δ}) * 100

This is equivalent to the classic Δ% formula used in financial analytics.
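The window calculation can likewise be reproduced client-side (illustrative helper, not part of the SDK):

```typescript
// change_t = ((current - previous) / previous) * 100,
// where `previous` is the index value timeInterval seconds earlier.
function indexChange(current: number, previous: number): number {
  return ((current - previous) / previous) * 100;
}

// An index moving from 100 to 110 over the window is a 10% change.
console.log(indexChange(110, 100)); // 10
console.log(indexChange(90, 100)); // -10 (declines are negative)
```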
- input: Object
  - All properties from GetRecordInput (stream, from, to, frozenAt, baseTime)
  - timeInterval: number - Window size in seconds (e.g. 86400 for daily change, 31536000 for yearly change). Required.
Promise<StreamRecord[]> - Array of { eventTime: number, value: string } where value is the percentage change over timeInterval.
```typescript
const nowInSeconds = Math.floor(Date.now() / 1000);
const { data: yearlyChange } = await streamAction.getIndexChange(
  marketIndexLocator,
  {
    from: nowInSeconds - 2 * 365 * 24 * 60 * 60, // Last 2 years
    to: nowInSeconds,
    timeInterval: 31536000, // 1 year in seconds
  }
);
console.log("Year-on-year % change", yearlyChange);
```

streamAction.customProcedureWithArgs(procedure: string, args: Record<string, ValueType | ValueType[]>): Promise<StreamRecord[]>
Allows you to invoke any stored procedure defined in the underlying Kwil database and receive the results in StreamRecord format. Use this when the built-in helpers (getRecord, getIndex, getIndexChange) don't meet a specialised analytics need.
- procedure: string - Name of the stored procedure.
- args: Record<string, ValueType | ValueType[]> - Named parameters, including the leading $ expected by Kwil.

Promise<StreamRecord[]> - Each row emitted by the procedure must expose event_time and value columns for automatic mapping.
```typescript
const result = await streamAction.customProcedureWithArgs(
  "get_divergence_index_change",
  {
    $from: 1704067200,
    $to: 1746316800,
    $frozen_at: null,
    $base_time: null,
    $time_interval: 31536000,
  },
);
```

The SDK can transparently use a node-side cache layer (when the node has the tn_cache extension enabled). The feature is opt-in: you simply pass useCache: true inside the options object of any read helper, and the same function now returns a wrapper that includes cache metadata.
- useCache (boolean) - optional flag in all data-retrieval helpers (getRecord, getIndex, getIndexChange, getFirstRecord).
- The return type becomes CacheAwareResponse<T>, which contains:
  - data - the normal payload you used to receive.
  - cache - { hit: boolean; height?: number } when the node emitted cache metadata.
  - logs - raw NOTICE logs (useful for debugging).
- Legacy signatures are still available but deprecated; a one-time console.warn is printed if you call them.
The cache metadata includes both node-provided and SDK-enhanced fields:
```typescript
interface CacheMetadata {
  // Node-provided fields
  hit: boolean;            // Whether data came from cache
  cacheDisabled?: boolean; // Whether cache was disabled for this query

  // SDK-provided context fields
  streamId?: string;       // Stream ID used in the query
  dataProvider?: string;   // Data provider address
  from?: number;           // Start time of the query range
  to?: number;             // End time of the query range
  frozenAt?: number;       // Frozen time for historical queries
  rowsServed?: number;     // Number of rows returned
}
```

For batch operations or analytics, use CacheMetadataParser.aggregate() to combine multiple cache metadata entries:
```typescript
import { CacheMetadataParser } from '@trufnetwork/sdk-js';

const metadataList: CacheMetadata[] = [
  { hit: true, rowsServed: 10, streamId: 'stream-1' },
  { hit: false, rowsServed: 5, streamId: 'stream-2' },
  { hit: true, rowsServed: 15, streamId: 'stream-3' }
];

const aggregated = CacheMetadataParser.aggregate(metadataList);
// Returns: CacheMetadataCollection
// {
//   totalQueries: 3,
//   cacheHits: 2,
//   cacheMisses: 1,
//   cacheHitRate: 0.67,
//   totalRowsServed: 30,
//   entries: [...metadataList]
// }
```

```typescript
// Enhanced call - identical parameters plus the flag
const { data: records, cache } = await streamAction.getRecord(
  aiIndexLocator,
  { from: now - 86400, to: now, useCache: true },
);

if (cache?.hit) {
  console.log('Cache hit!');
}
```

Configures stream composition and weight distribution.
- options: Object
  - stream: StreamLocator - Composed stream
  - taxonomyItems: Array<{ childStream: StreamLocator, weight: string }>
  - startDate: number - Effective date for the taxonomy
```typescript
await composedAction.setTaxonomy({
  stream: composedMarketIndexLocator,
  taxonomyItems: [
    { childStream: stockStream, weight: "0.6" },
    { childStream: commodityStream, weight: "0.4" },
  ],
  startDate: Math.floor(Date.now() / 1000),
});
```

composedAction.listTaxonomiesByHeight(params?: ListTaxonomiesByHeightParams): Promise<TaxonomyQueryResult[]>
Queries taxonomies within a specific block height range for efficient incremental synchronization. This method enables detecting taxonomy changes since a specific block height without expensive full-stream scanning.
- params?: Object - Optional query parameters
  - fromHeight?: number - Start height (inclusive). If null, uses the earliest available.
  - toHeight?: number - End height (inclusive). If null, uses the current height.
  - limit?: number - Maximum number of results to return. Default: 1000
  - offset?: number - Number of results to skip for pagination. Default: 0
  - latestOnly?: boolean - If true, returns only the latest group_sequence per stream. Default: false

Promise<TaxonomyQueryResult[]> - Array of taxonomy entries with:
- dataProvider: EthereumAddress - Parent stream data provider
- streamId: StreamId - Parent stream ID
- childDataProvider: EthereumAddress - Child stream data provider
- childStreamId: StreamId - Child stream ID
- weight: string - Weight of the child stream in the taxonomy
- createdAt: number - Block height when the taxonomy was created
- groupSequence: number - Group sequence number for this taxonomy set
- startTime: number - Start time timestamp for this taxonomy
```typescript
// Get taxonomies created between blocks 1000 and 2000
const taxonomies = await composedAction.listTaxonomiesByHeight({
  fromHeight: 1000,
  toHeight: 2000,
  limit: 100,
  latestOnly: true
});

// Get latest taxonomies with pagination
const latestTaxonomies = await composedAction.listTaxonomiesByHeight({
  latestOnly: true,
  limit: 50,
  offset: 100
});
```

composedAction.getTaxonomiesForStreams(params: GetTaxonomiesForStreamsParams): Promise<TaxonomyQueryResult[]>
Batch fetches taxonomies for specific streams. This is the primary method for discovering stream composition relationships. Useful for validating taxonomy data for known streams or processing multiple streams efficiently.
- params: Object - Query parameters (required)
  - streams: StreamLocator[] - Array of stream locators to query
  - latestOnly?: boolean - If true, returns only the latest group_sequence per stream. Default: false

Promise<TaxonomyQueryResult[]> - Array of taxonomy entries containing:
- dataProvider: EthereumAddress - Parent stream data provider
- streamId: StreamId - Parent stream ID
- childDataProvider: EthereumAddress - Child stream data provider
- childStreamId: StreamId - Child stream ID
- weight: string - Weight of the child stream (0.0 to 1.0)
- createdAt: number - Block height when the taxonomy was created
- groupSequence: number - Group sequence number for this taxonomy set
- startTime: number - Start time timestamp for this taxonomy
```typescript
const streams = [
  { dataProvider: provider1, streamId: streamId1 },
  { dataProvider: provider2, streamId: streamId2 }
];

const taxonomies = await composedAction.getTaxonomiesForStreams({
  streams,
  latestOnly: true
});

// Process results for each stream
taxonomies.forEach(taxonomy => {
  console.log(`Stream ${taxonomy.streamId.getId()} has child ${taxonomy.childStreamId.getId()} with weight ${taxonomy.weight}`);
});

// Example: Build a taxonomy map for visualization
const taxonomyMap = new Map();
taxonomies.forEach(taxonomy => {
  const parentId = taxonomy.streamId.getId();
  if (!taxonomyMap.has(parentId)) {
    taxonomyMap.set(parentId, []);
  }
  taxonomyMap.get(parentId).push({
    childId: taxonomy.childStreamId.getId(),
    weight: parseFloat(taxonomy.weight)
  });
});
```

The new taxonomy querying methods are also available directly on the client for convenience:
```typescript
// Equivalent to composedAction.listTaxonomiesByHeight()
const taxonomies = await client.listTaxonomiesByHeight({
  fromHeight: 1000,
  toHeight: 2000,
  limit: 100,
  latestOnly: true
});

// Equivalent to composedAction.getTaxonomiesForStreams()
const streamTaxonomies = await client.getTaxonomiesForStreams({
  streams: [streamLocator1, streamLocator2],
  latestOnly: true
});
```

Controls stream read access.
```typescript
await streamAction.setReadVisibility(
  streamLocator,
  visibility.private
);
```

Grants read permissions to specific wallets.
```typescript
await streamAction.allowReadWallet(
  streamLocator,
  EthereumAddress.fromString("0x...")
);
```

Critical Understanding: TN operations return success when transactions enter the mempool, NOT when they're executed on-chain. For operations where order matters, you must wait for transactions to be mined before proceeding.
💡 See Complete Example: For a comprehensive demonstration of transaction lifecycle patterns, see the Transaction Lifecycle Example.
```typescript
// ❌ DANGEROUS - Race condition possible
const deployResult = await client.deployStream(streamId, StreamType.Primitive);
// Stream might not be ready yet!
await primitiveAction.insertRecord({ stream: client.ownStreamLocator(streamId), ... }); // Could fail

const destroyResult = await client.destroyStream(client.ownStreamLocator(streamId));
// Stream might not be destroyed yet!
await primitiveAction.insertRecord({ stream: client.ownStreamLocator(streamId), ... }); // Could succeed unexpectedly
```

Waits for transaction confirmation with optional timeout. Use this for operations where order matters.
- txHash: string - Transaction hash from the operation result
- timeout?: number - Maximum wait time in milliseconds (default: 30000)

Promise<TransactionReceipt> - Transaction receipt with confirmation status
```typescript
// ✅ SAFE - Explicit transaction confirmation
const deployResult = await client.deployStream(streamId, StreamType.Primitive);
if (!deployResult.data) {
  throw new Error('Deploy failed');
}

// Wait for deployment to complete
await client.waitForTx(deployResult.data.tx_hash);

// Now safe to proceed
await primitiveAction.insertRecord({
  stream: client.ownStreamLocator(streamId),
  eventTime: Math.floor(Date.now() / 1000),
  value: "100.50"
});
```

- ✅ Stream deployment before data insertion
- ✅ Stream deletion before cleanup verification
- ✅ Sequential operations with dependencies
- ✅ Testing and development scenarios
- ⚡ High-throughput data insertion (independent records)
- ⚡ Fire-and-forget operations (with proper error handling)
- Use batch record insertions
- Implement client-side caching
- Handle errors with specific catch blocks
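As a sketch of the "specific catch blocks" recommendation, one pattern is to classify errors before deciding whether to retry. The error-message checks and helper names below are assumptions for illustration, not a documented SDK error taxonomy:

```typescript
// Hypothetical heuristic: treat transient network failures as retryable,
// permission/validation failures as permanent. Adapt the string checks
// to the errors your node actually returns.
function isRetryable(err: unknown): boolean {
  const message = err instanceof Error ? err.message : String(err);
  return message.includes('timeout') || message.includes('ECONNRESET');
}

// Illustrative wrapper: retry a record insertion only for transient errors,
// with a simple linear backoff between attempts.
async function insertWithRetry(
  insert: () => Promise<unknown>,
  retries = 3,
): Promise<void> {
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      await insert();
      return;
    } catch (err) {
      if (!isRetryable(err) || attempt === retries) throw err;
      await new Promise((resolve) => setTimeout(resolve, 250 * attempt));
    }
  }
}
```

Pairing this with batch insertions keeps retries cheap: a failed batch is retried as a unit instead of record by record.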
- Minimum Node.js Version: 18.x