Comparing changes

base repository: hirosystems/stacks-blockchain-api
base: v0.32.0
head repository: hirosystems/stacks-blockchain-api
compare: develop
Showing 775 changed files with 161,296 additions and 77,846 deletions.
195 changes: 195 additions & 0 deletions .env
@@ -3,21 +3,216 @@ PG_PORT=5490
PG_USER=postgres
PG_PASSWORD=postgres
PG_DATABASE=stacks_blockchain_api
PG_SCHEMA=public
PG_SSL=false
# Idle connection timeout in seconds, defaults to 30
# PG_IDLE_TIMEOUT=30
# Max connection lifetime in seconds, defaults to 60
# PG_MAX_LIFETIME=60
# Seconds before force-ending running queries on connection close, defaults to 5
# PG_CLOSE_TIMEOUT=5

# Can be any string; used to identify a use case specific to a deployment
PG_APPLICATION_NAME=stacks-blockchain-api

# The connection URI below can be used in place of the PG variables above;
# if it is set, the individual PG variables must be omitted.
# PG_CONNECTION_URI=
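For illustration only, a single connection URI equivalent to the individual variables above might look like the sketch below (host, credentials, and database are placeholders based on the defaults in this file, not required values):

```
# Hypothetical example — use either this URI or the individual PG_* variables, never both:
# PG_CONNECTION_URI=postgres://postgres:postgres@127.0.0.1:5490/stacks_blockchain_api
```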

# If your PG deployment uses a combination of a primary server and read replicas, you should
# specify the values below to point to the primary server. The API will use the primary when
# issuing LISTEN/NOTIFY postgres messages for websocket/socket.io support.
# To avoid any data inconsistencies across replicas, make sure to set `synchronous_commit` to
# `on` or `remote_apply` on the primary database's configuration.
# See https://www.postgresql.org/docs/12/runtime-config-wal.html
# Any value not provided here will fall back to the default equivalent above.
# PG_PRIMARY_HOST=
# PG_PRIMARY_PORT=
# PG_PRIMARY_USER=
# PG_PRIMARY_PASSWORD=
# PG_PRIMARY_DATABASE=
# PG_PRIMARY_SCHEMA=
# PG_PRIMARY_SSL=
# PG_PRIMARY_IDLE_TIMEOUT=
# PG_PRIMARY_MAX_LIFETIME=
# PG_PRIMARY_CLOSE_TIMEOUT=
# The connection URI below can be used in place of the PG variables above;
# if it is set, the individual PG variables must be omitted.
# PG_PRIMARY_CONNECTION_URI=
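To make the split concrete, here is a purely illustrative sketch (hostnames are placeholders, and it assumes a `PG_HOST` variable exists above the visible portion of this file) where the default variables point at a read replica and the primary variables at the writer:

```
# Read replica used for serving API queries (placeholder hostname)
# PG_HOST=replica.example.internal
# Primary/writer used for LISTEN/NOTIFY messages (placeholder hostname)
# PG_PRIMARY_HOST=primary.example.internal
```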

# Limit to how many concurrent connections can be created, defaults to 10
# PG_CONNECTION_POOL_MAX=10

# Insert concurrency when processing new blocks
# If your PostgreSQL is operating on SSD and has multiple CPU cores, consider raising this value, for instance, to 8 or 16.
# STACKS_BLOCK_DATA_INSERT_CONCURRENCY=4

# If specified, controls the Stacks Blockchain API mode. The possible values are:
# * `readonly`: Runs the API endpoints without an Event Server that listens to events from a node and
# writes them to the local database. The API will only read data from the PG database
# specified above to respond to requests.
# * `writeonly`: Runs the Event Server without API endpoints. Useful when you intend to query the postgres
# database containing blockchain data directly, without the overhead of a web server.
# * `offline`: Run the API endpoints without a stacks-node or postgres connection. In this mode,
# only the given Rosetta endpoints are supported:
# https://www.rosetta-api.org/docs/node_deployment.html#offline-mode-endpoints
# If not specified or any other value is provided, the API will run in the default `read-write` mode
# (with both Event Server and API endpoints).
# STACKS_API_MODE=
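One possible arrangement, sketched here only to illustrate how the modes compose, is to run two API instances against the same database:

```
# Instance A — ingest events from the stacks-node and write them to postgres:
# STACKS_API_MODE=writeonly
# Instance B — serve HTTP API requests from the same database, with no Event Server:
# STACKS_API_MODE=readonly
```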

# To avoid recalculating mempool stats unnecessarily during a transaction influx, the API debounces this process.
# These variables control how long to wait for mempool updates to settle, and the maximum total wait.
# MEMPOOL_STATS_DEBOUNCE_INTERVAL=1000
# MEMPOOL_STATS_DEBOUNCE_MAX_INTERVAL=10000

# If specified, an http server providing profiling capability endpoints will be opened on the given port.
# This port should not be publicly exposed.
# STACKS_PROFILER_PORT=9119

STACKS_CORE_EVENT_PORT=3700
STACKS_CORE_EVENT_HOST=127.0.0.1

# Stacks core event payload body size limit. Defaults to 500MB.
# STACKS_CORE_EVENT_BODY_LIMIT=500000000

STACKS_BLOCKCHAIN_API_PORT=3999
STACKS_BLOCKCHAIN_API_HOST=127.0.0.1

STACKS_CORE_RPC_HOST=127.0.0.1
STACKS_CORE_RPC_PORT=20443

# STACKS_CORE_PROXY_HOST=127.0.0.1
# STACKS_CORE_PROXY_PORT=20443

# Stacks core RPC proxy body size limit. Defaults to 10MB.
# STACKS_CORE_PROXY_BODY_LIMIT=10000000

# Configure a path to a file containing additional stacks-node `POST /v2/transactions` URLs for the /v2 proxy to multicast.
# The file should be a newline-delimited list of URLs.
# STACKS_API_EXTRA_TX_ENDPOINTS_FILE=./config/extra-tx-post-endpoints.txt
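A hypothetical example of that file's contents (hostnames, ports, and whether the path is included are placeholders, not confirmed by this repository), one URL per line:

```
http://stacks-node-a.example.internal:20443/v2/transactions
http://stacks-node-b.example.internal:20443/v2/transactions
```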

# STACKS_FAUCET_NODE_HOST=<IP or hostname>
# STACKS_FAUCET_NODE_PORT=<port number>

# Enables the enhanced transaction fee estimator that will alter results for `POST
# /v2/fees/transaction`.
# STACKS_CORE_FEE_ESTIMATOR_ENABLED=0

# Multiplier for all fee estimations returned by Stacks core. Must be between 0.0 and 1.0.
# STACKS_CORE_FEE_ESTIMATION_MODIFIER=1.0
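As a purely illustrative example, a modifier of 0.5 would halve every estimate, e.g. a 3000 µSTX fee estimate from the node would be returned as 1500 µSTX:

```
# Illustrative only: report half of every fee estimate returned by Stacks core
# STACKS_CORE_FEE_ESTIMATION_MODIFIER=0.5
```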

# How many past tenures the fee estimator will look at to determine if there is a fee market for
# transactions.
# STACKS_CORE_FEE_PAST_TENURE_FULLNESS_WINDOW=5

# Percentage at which past tenure cost dimensions will be considered "full".
# STACKS_CORE_FEE_PAST_DIMENSION_FULLNESS_THRESHOLD=0.9

# Percentage at which current cost tenures will be considered "busy" in order to determine if we
# should check previous tenures for a fee market.
# STACKS_CORE_FEE_CURRENT_DIMENSION_FULLNESS_THRESHOLD=0.5

# Minimum number of blocks the current tenure must have in order to check for "busyness".
# STACKS_CORE_FEE_CURRENT_BLOCK_COUNT_MINIMUM=5

# A comma-separated list of STX private keys which will send faucet transactions to accounts that
# request them. Attempts will always be made from the first account; only once transaction chaining
# gets too long will the faucet start using the next one.
# FAUCET_PRIVATE_KEY=
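The expected shape is sketched below; the key values are placeholders, not real keys:

```
# FAUCET_PRIVATE_KEY=<hex-private-key-1>,<hex-private-key-2>
```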

## configure the chainID/networkID; testnet: 0x80000000, mainnet: 0x00000001
STACKS_CHAIN_ID=0x00000001

# Configure custom testnet and mainnet chainIDs for other networks such as subnets.
# Multiple values can be set using comma-separated key-value pairs.
# TODO: currently configured with the default subnet testnet ID; the mainnet values
# are placeholders that should be replaced with the actual subnet mainnet chainID
CUSTOM_CHAIN_IDS=testnet=0x55005500,mainnet=12345678,mainnet=0xdeadbeaf

# If enabled, the API will skip the startup validation request to the stacks-node /v2/info RPC endpoint
# SKIP_STACKS_CHAIN_ID_CHECK=1

# Seconds to allow API components to shut down gracefully before force-killing them, defaults to 60
# STACKS_SHUTDOWN_FORCE_KILL_TIMEOUT=60

BTC_RPC_HOST=http://127.0.0.1
BTC_RPC_PORT=18443
BTC_RPC_USER=btc
BTC_RPC_PW=btc
BTC_FAUCET_PK=29c028009a8331358adcc61bb6397377c995d327ac0343ed8e8f1d4d3ef85c27

# The contracts used to query for inbound transactions
TESTNET_SEND_MANY_CONTRACT_ID=ST3F1X4QGV2SM8XD96X45M6RTQXKA1PZJZZCQAB4B.send-many-memo
MAINNET_SEND_MANY_CONTRACT_ID=SP3FBR2AGK5H9QBDH3EEN6DF8EK8JY7RX8QJ5SVTE.send-many-memo

# Enable debug logging
# STACKS_API_LOG_LEVEL=debug

# Directory containing Stacks 1.0 BNS data extracted from https://storage.googleapis.com/blockstack-v1-migration-data/export-data.tar.gz
# BNS_IMPORT_DIR=/extracted/export-data-dir/

# Stacks blockchain node type (L1 or subnet). L1 by default.
# If STACKS_NODE_TYPE is set to subnet, BNS importer is skipped.
STACKS_NODE_TYPE=L1

# Override the default file path for the proxy cache control file
# STACKS_API_PROXY_CACHE_CONTROL_FILE=/path/to/.proxy-cache-control.json

# Enable Rosetta endpoints.
# STACKS_API_ENABLE_ROSETTA=1

# Enable FT metadata processing for Rosetta operations display. Disabled by default.
# STACKS_API_ENABLE_FT_METADATA=1

# Enable legacy API endpoints. Disabled by default.
# STACKS_API_ENABLE_LEGACY_ENDPOINTS=1

# The Rosetta API endpoints require FT metadata to display operations with the proper `symbol` and
# `decimals` values. If FT metadata is enabled, this variable controls the token metadata error
# handling mode when metadata is not found.
# The possible values are:
# * `warning`: The API will issue a warning and not display data for that token.
# * `error`: The API will throw an error. If not specified or any other value is provided, the mode
# will be set to `warning`.
# STACKS_API_TOKEN_METADATA_ERROR_MODE=warning

# Web Socket ping interval to determine client availability, in seconds.
# STACKS_API_WS_PING_INTERVAL=5

# Web Socket ping timeout, in seconds. Clients will be dropped if they do not respond with a pong
# after this time has elapsed.
# STACKS_API_WS_PING_TIMEOUT=5

# Web Socket message timeout, in seconds. Clients will be dropped if they do not acknowledge a
# message after this time has elapsed.
# STACKS_API_WS_MESSAGE_TIMEOUT=5

# Web Socket update queue timeout, in seconds. When an update is scheduled (new block, tx update,
# etc.), we will allow this number of seconds to elapse to allow all subscribed clients to receive
# new data.
# STACKS_API_WS_UPDATE_QUEUE_TIMEOUT=5

# Specify the max number of STX addresses to store in an in-memory LRU cache (CPU optimization).
# Defaults to 50,000, which should result in around 25 megabytes of additional memory usage.
# STACKS_ADDRESS_CACHE_SIZE=10000

# Specify a URL to redirect to from /doc. If this URL is not provided, the server renders local documentation
# from openapi.yaml when NODE_ENV is test or development.
# In production, /doc is not served if this env var is not provided.
# API_DOCS_URL="https://docs.hiro.so/api"

# For use while syncing. Places the API into "Initial Block Download" (IBD) mode,
# forcing it to skip redundant processing until the node is fully synced with its peers.
# Examples of processing avoided in this mode include REFRESH MATERIALIZED VIEW statements
# (extremely CPU intensive on the PG instance), mempool messages, etc.
# IBD_MODE_UNTIL_BLOCK=
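For example (the block height below is purely illustrative; choose one appropriate for your deployment):

```
# IBD_MODE_UNTIL_BLOCK=100000
```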

# Folder with events to be imported by the event-replay.
STACKS_EVENTS_DIR=./events

# If enabled, this service will connect to the specified SNP redis server and consume events from the SNP stream.
# This is an alternative to consuming events directly from a stacks-node.
# SNP_EVENT_STREAMING=true
# SNP_REDIS_URL=redis://127.0.0.1:6379
# Only specify `SNP_REDIS_STREAM_KEY_PREFIX` if `REDIS_STREAM_KEY_PREFIX` is configured on the SNP server.
# SNP_REDIS_STREAM_KEY_PREFIX=
6 changes: 5 additions & 1 deletion .eslintignore
@@ -1,10 +1,14 @@
gulpfile.js
jest.config*.js
.eslintrc.js
migrations/
coverage/
.tmp/
scripts/
stacks-blockchain/
src/schemas/*
docs/
client/
client/src/
config/
utils/src/
client*
28 changes: 19 additions & 9 deletions .eslintrc.js
@@ -1,25 +1,35 @@
module.exports = {
root: true,
extends: ['@blockstack/eslint-config'],
extends: ['@stacks/eslint-config', 'prettier'],
parser: '@typescript-eslint/parser',
plugins: ['eslint-plugin-tsdoc'],
plugins: ['@typescript-eslint', 'eslint-plugin-tsdoc', 'prettier'],
parserOptions: {
tsconfigRootDir: __dirname,
project: './tsconfig.json',
ecmaVersion: 2019,
ecmaVersion: 2020,
sourceType: 'module',
},
ignorePatterns: [
'lib/*',
'client/*'
],
ignorePatterns: ['lib/*', 'client/*', 'utils/*'],
rules: {
'prettier/prettier': 'error',

'@typescript-eslint/no-inferrable-types': 'off',
'@typescript-eslint/camelcase': 'off',
'@typescript-eslint/no-empty-function': 'off',
'@typescript-eslint/no-use-before-define': ['error', 'nofunc'],
'@typescript-eslint/no-floating-promises': ['error', {'ignoreVoid': true}],
'@typescript-eslint/no-floating-promises': ['error', { ignoreVoid: true }],
'no-warning-comments': 'warn',
'tsdoc/syntax': 'error',
}
// TODO: fix these
'@typescript-eslint/no-unsafe-assignment': 'off',
'@typescript-eslint/no-unsafe-member-access': 'off',
'@typescript-eslint/no-unsafe-call': 'off',
'@typescript-eslint/restrict-template-expressions': 'off',
'@typescript-eslint/explicit-module-boundary-types': 'off',
'@typescript-eslint/restrict-plus-operands': 'off',

// TODO: temporarily disable this until the express async handler is typed correctly
'@typescript-eslint/no-misused-promises': 'off',
'@typescript-eslint/no-unsafe-argument': 'off',
},
};
4 changes: 4 additions & 0 deletions .github/ISSUE_TEMPLATE/config.yml
@@ -0,0 +1,4 @@
contact_links:
- name: Ask the community
url: https://discord.gg/KrqnVg8D
about: Ask and discuss questions with other Stacks community developers.
2 changes: 0 additions & 2 deletions .github/ISSUE_TEMPLATE/testnet-bug.md
@@ -34,8 +34,6 @@ that demonstrates the issue.

----

If you think this is eligible for a [bug bounty](https://testnet.blockstack.org/bounties), please check the relevant boxes below:

### Critical, Launch Blocking Bugs
**Consensus critical bugs**
- [ ] Can cause a chain split
6 changes: 3 additions & 3 deletions .github/PULL_REQUEST_TEMPLATE.md
@@ -11,7 +11,7 @@ Describe the changes that where made in this pull request. When possible start w
5. Provide examples of use cases with code samples and applicable acceptance criteria

Example:
As a Blockstack developer, I would like to encrypt files using the app private key. This is needed because storing unencrypted files is unacceptable. This pull request adds the `encryptContent` function which will take a string and encrypt it using the app private key.
As a Hiro Systems developer, I would like to encrypt files using the app private key. This is needed because storing unencrypted files is unacceptable. This pull request adds the `encryptContent` function which will take a string and encrypt it using the app private key.

```
encryptContent('my data')
@@ -41,7 +41,7 @@ Workarounds for or expected timeline for deprecation
- Change in instructions inside tutorials/guides
- etc...
The best way to find these is by searching inside the docs at https://github.com/blockstack/docs
The best way to find these is by searching inside the docs at https://github.com/hirosystems/docs
-->
- [ ] Link to documentation updates:

@@ -60,4 +60,4 @@ Provide context on how tests should be performed.
- [ ] Unit test coverage for new or modified code paths
- [ ] `npm run test` passes
- [ ] Changelog is updated
- [ ] Tag 1 of @kyranjamie or @zone117x for review
- [ ] Tag 1 of @rafaelcr or @zone117x for review
21 changes: 21 additions & 0 deletions .github/dependabot.yml
@@ -0,0 +1,21 @@
version: 2
updates:

# Maintain dependencies for GitHub Actions
- package-ecosystem: "github-actions"
target-branch: "develop"
directory: "/"
schedule:
interval: "daily"
# Disable version updates for npm dependencies
# https://docs.github.com/en/code-security/supply-chain-security/keeping-your-dependencies-updated-automatically/configuration-options-for-dependency-updates#open-pull-requests-limit
open-pull-requests-limit: 0

# Maintain dependencies for npm
- package-ecosystem: "npm"
target-branch: "develop"
directory: "/"
schedule:
interval: "daily"
# Disable version updates for npm dependencies
open-pull-requests-limit: 0
20 changes: 20 additions & 0 deletions .github/stale.yml
@@ -0,0 +1,20 @@
# Number of days of inactivity before an issue becomes stale
daysUntilStale: 180
# Number of days of inactivity before a stale issue is closed
daysUntilClose: 7
# Issues with these labels will never be considered stale
exemptLabels:
- pinned
- security
# Label to use when marking an issue as stale
staleLabel: stale
# Comment to post when marking an issue as stale. Set to `false` to disable
markComment: >
This issue has been automatically marked as stale because it has not had
recent activity. It will be closed if no further activity occurs. Thank you
for your contributions.
# Comment to post when closing a stale issue. Set to `false` to disable
closeComment: >
This issue has been automatically closed. Please reopen if needed.
pulls:
daysUntilStale: 90