
RFC: Signal K Database API #2407

@dirkwa

RFC: Signal K Database API

Status: Proposal (validated with working implementation)
Version: 0.3
Date: 2026-03-05
Related: HistoryAPI (#2100), Plugin API architecture


Summary

This RFC proposes a central Database API for Signal K server that gives plugins access to a shared, server-managed relational database. The server ships with a default SQLite provider using better-sqlite3 (bundled as a server dependency) and an optional built-in node:sqlite provider for forward-looking deployments. Advanced users can replace either with an alternative provider such as PostgreSQL via the standard plugin registration pattern.

The goal is to eliminate the current pattern of every plugin shipping its own database binary, which inflates installation size, breaks on platform upgrades, and causes install failures on resource-constrained hardware.


Motivation

The Current Problem

Many Signal K plugins need to persist structured data — chart tile metadata, widget layouts, device calibration, alarm rules. Today each plugin solves this independently:

  • signalk-charts-* → ships its own better-sqlite3 (native binary, rebuilds on every Node.js upgrade)
  • @mxtommy/kip → ships DuckDB for time-series history (heavy native binary, caused crashes on WSL2 and ARM). Note: KIP's DuckDB usage is for History API purposes (Parquet-based time-series storage), not relational storage — it would not migrate to the Database API but illustrates the native binary problem.
  • Other plugins → their own SQLite instances, running in parallel

The consequences are concrete and serious on the hardware Signal K typically runs on:

  • AppStore --ignore-scripts blocks native binaries — The Signal K AppStore installs plugins with npm install --ignore-scripts for security reasons (src/modules.ts). This means better-sqlite3's install script — which downloads or compiles the native binary — never runs. Plugins that depend on better-sqlite3 simply fail to work when installed through the AppStore. This is an increasingly common source of bug reports and there is no workaround for plugin authors.
  • Multiple native binary copies loaded into the same Node.js process
  • Rebuild hell on every Node.js upgrade — plugin authors must publish new prebuilds for every platform/architecture combination
  • Install failures on Raspberry Pi, Pi Zero, and other ARM devices where native compilation often fails silently
  • Wasted disk and RAM — a Pi running three chart plugins has three independent SQLite files and three copies of better-sqlite3 in memory

The Database API solves all of these by moving SQLite to the server level. Server dependencies install normally (without --ignore-scripts), so better-sqlite3's native binary builds correctly during server installation. Plugins access it through the API — they carry no native dependencies of their own and install cleanly through the AppStore.

This is a plugin developer ergonomics problem, a user experience problem, and a resource problem — all with the same fix.

What This Is Not

This RFC addresses structured relational storage only: configuration tables, metadata, device registrations, widget layouts. It deliberately does not address time-series ingestion or historical signal data — that is the domain of the existing History API (/signalk/v2/api/history), which already provides the correct abstraction with a pluggable provider system.

The two APIs are orthogonal and complementary. Code review of KIP confirmed that its DuckDB usage is entirely for time-series history (Parquet-based signal ingestion and retention), while display configurations are stored via Signal K's delta/put mechanism — KIP has no relational storage needs that the Database API would serve.


Design

Architecture: Same Pattern as History API

The Database API follows the exact same provider registration pattern already established by the History API (PR #2100). This is intentional — it reuses an architecture the maintainers have already reviewed and accepted, and it gives plugin authors a consistent mental model.

Signal K Server
├── DatabaseApi (server core)
│   ├── _builtin provider: SQLite via better-sqlite3 (always available)
│   ├── _builtin_nodesqlite provider: SQLite via node:sqlite (when available)
│   └── optional providers: PostgreSQL, etc. (community plugins)
└── Plugins
    ├── postgsail      → app.getDatabaseApi().getPluginDb('postgsail')     ✓ tested
    ├── plugin-a       → app.getDatabaseApi().getPluginDb('plugin-a')      (any plugin with own data)
    └── charts-simple  → opens .mbtiles files directly                     ✗ not a fit

Plugins that store their own data never open database files or manage connections directly. Isolation between plugins is enforced by the server — not by convention.

Plugins that need to open external SQLite files (e.g. MBTiles) at arbitrary paths are outside the scope of this API and continue to manage their own database dependencies.

Plugin Isolation: Separate Database Files

Each plugin receives its own SQLite database file, stored in {configPath}/plugin-db/{pluginId}.db. This provides true filesystem-level isolation with no SQL rewriting required. Plugins write plain table names; there is no namespacing, prefixing, or query transformation. A plugin cannot access another plugin's data because each operates on a physically separate database file.

This approach was chosen over table-name prefixing (transparently rewriting charts to pluginid__charts) because SQL rewriting is fragile — it breaks on CTEs, subqueries, quoted identifiers, and string literals that happen to contain table names. Separate files are simple and robust.

TypeScript Interface

// packages/server-api/src/databaseapi.ts

export interface DatabaseApi {
  /**
   * Returns an isolated database handle for the given plugin.
   * Each plugin gets its own database — a plugin cannot access
   * another plugin's data.
   * Always resolves: the built-in SQLite provider is always available.
   */
  getPluginDb(pluginId: string): Promise<PluginDb>
}

export interface PluginDb {
  /**
   * Apply schema migrations in order. The provider tracks which
   * migrations have already been applied and skips them on
   * subsequent starts. Forward-only — no down-migrations.
   */
  migrate(migrations: Migration[]): Promise<void>

  /**
   * Execute a SQL query and return typed rows.
   */
  query<T = Record<string, unknown>>(
    sql: string,
    params?: unknown[]
  ): Promise<T[]>

  /**
   * Execute a SQL statement (INSERT, UPDATE, DELETE) and return
   * the number of affected rows and last inserted row ID.
   */
  run(sql: string, params?: unknown[]): Promise<RunResult>

  /**
   * Execute multiple statements atomically.
   * If the callback throws, the transaction is rolled back.
   */
  transaction<T>(fn: (db: PluginDb) => Promise<T>): Promise<T>
}

export interface RunResult {
  changes: number
  lastInsertRowid: number | bigint
}

export interface Migration {
  version: number   // monotonically increasing
  sql: string       // DDL statement(s) to apply
}
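The rollback contract of transaction() can be illustrated with a minimal in-memory stand-in. makeDb and its snapshot-based staging are illustrative only, not the provider implementation:

```javascript
// In-memory stand-in demonstrating the transaction() contract:
// staged changes are discarded when the callback throws
function makeDb() {
  let rows = []
  return {
    rows: () => rows,
    async run(_sql, params) { rows.push(params) },
    async transaction(fn) {
      const snapshot = [...rows]   // remember pre-transaction state
      try {
        return await fn(this)
      } catch (err) {
        rows = snapshot            // roll back: restore the snapshot
        throw err
      }
    }
  }
}
```

A real SQLite provider would use BEGIN/COMMIT/ROLLBACK instead of snapshots, but the observable behavior for the plugin is the same: either all statements in the callback take effect, or none do.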

Why async? The built-in SQLite providers are synchronous under the hood (both better-sqlite3 and node:sqlite are sync APIs). However, the interface is async (Promise) to allow non-SQLite providers — such as PostgreSQL — to implement the contract natively without sync wrappers. For the SQLite providers, the async wrapping adds negligible overhead since these are low-frequency operations (config reads, schema migrations, occasional writes).
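As a sketch, the sync-to-async wrapping amounts to little more than marking methods async. FakeSyncDb below mimics the shape of better-sqlite3's prepare/all/run API; SqlitePluginDb is an illustration, not the actual provider code:

```javascript
// FakeSyncDb mimics better-sqlite3's synchronous API shape
class FakeSyncDb {
  prepare(sql) {
    return {
      all: (...params) => [{ sql, params }],                   // query rows
      run: (...params) => ({ changes: 1, lastInsertRowid: 1 }) // write info
    }
  }
}

// Sketch: wrapping the synchronous handle behind the async PluginDb
// contract — each call resolves on the next microtask
class SqlitePluginDb {
  constructor(syncDb) { this.syncDb = syncDb }

  async query(sql, params = []) {
    return this.syncDb.prepare(sql).all(...params)
  }

  async run(sql, params = []) {
    const info = this.syncDb.prepare(sql).run(...params)
    return { changes: info.changes, lastInsertRowid: info.lastInsertRowid }
  }
}
```

A PostgreSQL provider would implement the same two methods with a natively asynchronous driver; consumers cannot tell the difference.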

ServerAPI Integration

// packages/server-api/src/serverapi.ts

export interface ServerAPI
  extends
    // ... existing mixins ...
    WithDatabaseApi,      // consumer side: getDatabaseApi()
    DatabaseApiRegistry { // provider side: registerDatabaseProvider()
  // ...
}

export interface WithDatabaseApi {
  /**
   * Access the Database API.
   * Optional on the interface because the app object is constructed
   * before providers are initialized (matches WithHistoryApi pattern).
   * Always available at runtime once the server has started.
   */
  getDatabaseApi?: () => DatabaseApi
}

export interface DatabaseApiRegistry {
  /**
   * Register a custom database provider (e.g. PostgreSQL).
   */
  registerDatabaseProvider(provider: DatabaseProvider): void
  unregisterDatabaseProvider(): void
}

export interface DatabaseProvider {
  getPluginDb(pluginId: string): Promise<PluginDb>
  close(): Promise<void>
}
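The routing behavior implied by the registry interface can be sketched in a few lines. Note the fallback-to-builtin-on-unregister behavior shown here is an assumption (it is one of the Open Questions below), and makeRegistry is illustrative, not the server code:

```javascript
// Illustrative registry: routes getPluginDb() to the custom provider
// when one is registered, otherwise to the built-in SQLite provider
function makeRegistry(builtinProvider) {
  let custom = null
  return {
    registerDatabaseProvider(provider) { custom = provider },
    unregisterDatabaseProvider() { custom = null },
    getPluginDb(pluginId) {
      return (custom ?? builtinProvider).getPluginDb(pluginId)
    }
  }
}
```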

Provider Registration (Plugin Side)

A plugin that provides an alternative database backend (e.g. PostgreSQL) registers exactly as a History API provider does:

// signalk-database-postgres/index.ts
module.exports = (app: ServerAPI) => {
  const plugin = {
    id: 'signalk-database-postgres',
    name: 'PostgreSQL Database Provider',

    start: async (config) => {
      const provider = new PostgresProvider(config)
      app.registerDatabaseProvider(provider)
    },

    stop: () => {
      app.unregisterDatabaseProvider()
    }
  }
  return plugin
}

Consumer Usage (Plugin Side)

A plugin that uses the Database API for its own storage. This example is based on the real-world migration of signalk-postgsail, which was successfully tested as the proof-of-concept consumer:

// signalk-postgsail/index.js
module.exports = function (app) {
  const plugin = {
    id: 'signalk-postgsail',
    name: 'PostgSail'
  }
  let db
  let interval

  plugin.start = function (options) {
    app.getDatabaseApi().getPluginDb(plugin.id)
      .then((pluginDb) => {
        db = pluginDb
        return db.migrate([{
          version: 1,
          sql: `CREATE TABLE IF NOT EXISTS buffer (
            time REAL, client_id TEXT, latitude REAL,
            longitude REAL, speedoverground REAL,
            courseovergroundtrue REAL, windspeedapparent REAL,
            anglespeedapparent REAL, status TEXT, metrics JSON
          )`
        }])
      })

    // Note: db is assigned asynchronously — guard usage sites
    interval = setInterval(() => {
      if (!db) return
      db.query('SELECT * FROM buffer ORDER BY time')
        .then((data) => { /* process rows */ })
    }, 31 * 1000)
  }

  plugin.stop = function () {
    clearInterval(interval)
    // No db.close() needed — server manages the lifecycle
  }

  return plugin
}

HTTP Endpoints

Following the History API pattern, the Database API exposes provider management endpoints:

  • GET /signalk/v2/api/database/_providers — list all registered providers with isDefault flag
  • GET /signalk/v2/api/database/_providers/_default — get the current default provider ID
  • POST /signalk/v2/api/database/_providers/_default/:id — change the default provider (requires authorization)

Default Provider: better-sqlite3

The server ships with better-sqlite3 as a bundled dependency and registers it as the _builtin provider. It is always available with zero configuration required.

Why better-sqlite3 and not node:sqlite:

node:sqlite is the ideal long-term answer — zero footprint, built into Node.js. However, as of Node.js 22–24, it remains experimental (stability level 1.1). On Node.js 22 (LTS), it requires the --experimental-sqlite flag. It only reached Release Candidate status (1.2) in Node.js 25.7. Signal K runs on boats — shipping an experimental API as the default database backend for production installations is not appropriate.

better-sqlite3 is battle-tested, ships prebuilt binaries for all major platforms (including ARM), and is the most widely used SQLite library in the Node.js ecosystem. The key insight is that the Database API already solves the core problem even with a native binary: one better-sqlite3 managed by the server replaces N copies managed by N plugins. Plugins remove their own better-sqlite3 dependency entirely. The rebuild-on-upgrade problem is reduced from "every plugin author must publish prebuilds" to "the Signal K server manages one dependency."

When node:sqlite reaches stable status, the server can switch the default provider transparently. Plugins are unaffected — they use the API, not the backend.

Optional built-in: node:sqlite

The server also ships a _builtin_nodesqlite provider that uses Node.js's built-in node:sqlite module. This provider is registered automatically when node:sqlite is available (Node.js 23+ without flags, or Node.js 22 with --experimental-sqlite). Users who prefer zero native dependencies can switch the default to _builtin_nodesqlite via the HTTP endpoint or server configuration.
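Availability detection can be as simple as a guarded require. This is a sketch only; the actual registration wiring in the server may differ:

```javascript
// Sketch: feature-detect node:sqlite before registering the
// _builtin_nodesqlite provider (throws when the module is unavailable
// or the runtime requires --experimental-sqlite and it was not set)
function nodeSqliteAvailable() {
  try {
    require('node:sqlite')
    return true
  } catch {
    return false
  }
}
```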

This gives users the choice today while the ecosystem matures, and positions the server for a seamless transition when node:sqlite stabilizes.

Migration tracking

Each plugin's database contains a _migrations table:

CREATE TABLE _migrations (
  version   INTEGER PRIMARY KEY,
  applied_at TEXT NOT NULL
)

The provider checks this table before applying migrations and skips versions that have already been applied. Migrations are forward-only — no rollback support. Plugins that need to undo a previous migration can add a new version that reverses the change.
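The skip-already-applied logic can be sketched with an in-memory set standing in for the _migrations table. applyMigrations and exec are illustrative names, not the provider's actual internals:

```javascript
// Sketch of forward-only migration application: `applied` stands in
// for the _migrations table, `exec` for running DDL on the database
function applyMigrations(applied, migrations, exec) {
  const pending = migrations
    .filter((m) => !applied.has(m.version))   // skip already-applied
    .sort((a, b) => a.version - b.version)    // apply in version order
  for (const m of pending) {
    exec(m.sql)
    applied.add(m.version)                    // record as applied
  }
  return pending.map((m) => m.version)
}
```

On a second server start with the same migration list, `pending` is empty and the function is a no-op, which is exactly the idempotence plugins rely on.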


Migration Path for Existing Plugins

The SQL that plugins already write requires no changes. Only the boilerplate around it changes.

Before (plugin owns its own SQLite)

const Database = require('better-sqlite3')
const path = require('path')

// Plugin manages its own DB file, connection, and lifecycle
const dbFile = path.join(app.getDataDirPath(), 'myplugin.db')
const db = new Database(dbFile)
db.exec(`CREATE TABLE IF NOT EXISTS buffer (
  time REAL, status TEXT, metrics JSON
)`)

// Synchronous — db handle available immediately
db.prepare('SELECT * FROM buffer ORDER BY time').all()
db.prepare('INSERT INTO buffer VALUES (?,?,?)').run(time, status, json)

// Plugin must close on stop
plugin.stop = () => { db.close() }

After (using Database API)

// No database require, no file path, no lifecycle management
let db
app.getDatabaseApi().getPluginDb(plugin.id).then((pluginDb) => {
  db = pluginDb
  return db.migrate([{
    version: 1,
    sql: `CREATE TABLE IF NOT EXISTS buffer (
      time REAL, status TEXT, metrics JSON
    )`
  }])
})

// Async — guard against db not ready yet (inside an async function)
if (!db) return
const rows = await db.query('SELECT * FROM buffer ORDER BY time')
await db.run('INSERT INTO buffer VALUES (?,?,?)', [time, status, json])

// Nothing to clean up — server manages the lifecycle
plugin.stop = () => { }

What changes:

  • Remove import Database from 'better-sqlite3'
  • Remove better-sqlite3 from plugin's package.json
  • Wrap schema creation in migrate() (gains versioned migrations for free)
  • API is async
  • Remove file path management
  • Remove db.close() from plugin.stop() — the server manages the database lifecycle

What does not change:

  • SQL strings are identical
  • Query parameters work identically
  • Schema design is unchanged

Important: async initialization requires guards. With the old synchronous new Database(), the db handle was available immediately. With the async getPluginDb(), there is a brief window where db is undefined. Plugins that use setInterval or event-driven callbacks must guard against this:

// Before (sync — db always available when intervals fire)
const db = new Database(dbFile)
setInterval(() => {
  db.prepare('SELECT ...').all()
}, 30000)

// After (async — db may not be ready yet)
app.getDatabaseApi().getPluginDb(plugin.id).then((pluginDb) => { db = pluginDb })
setInterval(() => {
  if (!db) return  // guard required
  db.query('SELECT ...').then(/* ... */)
}, 30000)
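An alternative to the null guard is to cache the getPluginDb() promise and await it at each usage site. getPluginDbStub below stands in for the real app.getDatabaseApi().getPluginDb() call:

```javascript
// Stand-in for app.getDatabaseApi().getPluginDb(id)
function getPluginDbStub(id) {
  return Promise.resolve({ query: async () => [{ plugin: id }] })
}

// Cache the promise once at startup...
const dbReady = getPluginDbStub('my-plugin')

// ...and await it wherever the handle is needed. Once resolved,
// each await completes on the next microtask — no undefined window,
// no guard needed.
async function tick() {
  const db = await dbReady
  return db.query('SELECT * FROM buffer ORDER BY time')
}
```

Either pattern works; the guard is the smaller diff for an existing plugin, the cached promise avoids silently skipped intervals during startup.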

An experienced plugin author can complete the migration in under an hour.


Performance Characteristics

For the target use case: no perceptible difference

The relational storage use case — chart metadata, plugin config, widget layouts — involves low-frequency, low-volume operations. Reading chart bounds on startup, saving a widget layout on user action. SQLite handles these in microseconds. Whether the connection is shared or dedicated is irrelevant at this scale.

Write contention

With the separate-file-per-plugin design, write contention between plugins is eliminated entirely. Each plugin has its own SQLite file with its own write lock. Concurrent writes from different plugins never block each other.

Baseline improvement

Today, multiple plugins each load their own better-sqlite3 native addon into the Node.js process. Native addons carry memory and startup cost. Replacing N independent copies with a single server-managed instance is a strict improvement. The node:sqlite provider eliminates native addon overhead entirely for users who opt in.

Hard boundary: this API is not for time-series

High-frequency ingestion — logging SOG every second, recording anchor position, storing signal history — must not use the Database API. That is the History API's domain. If a plugin attempts to use the Database API for time-series ingestion, the performance will make the boundary obvious. The API design naturally guides plugin authors toward the right tool.


What This Does Not Solve

To keep the scope honest:

  • Time-series ingestion — use the History API. Extend it with a write interface if needed (separate RFC).
  • Routes, waypoints, charts, POIs — use the Resources API. These already have a proper home.
  • Binary/blob storage — out of scope. Use the filesystem via app.getDataDirPath().
  • Cross-plugin data sharing — plugins cannot access each other's databases. Intentional.
  • Opening external SQLite files — the Database API provides a managed database for the plugin's own data. Plugins that need to open arbitrary SQLite files at specific paths (e.g. reading MBTiles chart files, which are SQLite databases containing map tiles) cannot use this API. Those plugins still need their own better-sqlite3 dependency or an alternative approach. This was confirmed during proof-of-concept testing with signalk-charts-provider-simple, which opens user-provided .mbtiles files — a fundamentally different use case from plugin-managed storage.

Open Questions

  1. Provider replacement semantics: If a PostgreSQL provider plugin is installed and later removed, should the server automatically fall back to the SQLite default, or alert the user that data may be inaccessible?

  2. Backup integration: With separate database files per plugin, the backup story is straightforward — back up {configPath}/plugin-db/. Should the server expose this path via the API for integration with backup systems like Keeper?

  3. SQL portability: Plugin authors who use SQLite-specific syntax (e.g. INTEGER PRIMARY KEY AUTOINCREMENT) will have queries that break if the provider is swapped to PostgreSQL. Should the RFC recommend a portability guide, or is this an acceptable tradeoff given that SQLite will cover 95%+ of installations?


Summary of Benefits

  • Plugin authors: remove native database dependencies, gain versioned migrations, no file path handling, no lifecycle management
  • Users: plugins install cleanly via the AppStore on every platform and after every Node.js upgrade; reduced native binary footprint
  • Maintainers: one better-sqlite3 to manage instead of N plugins each shipping their own; clear path to node:sqlite when it stabilizes
  • Keeper / backup: all plugin databases in one directory (plugin-db/), easy to include in backups

What this does NOT help with

Plugins that open external SQLite files (e.g. MBTiles chart files) at user-specified paths. These plugins need direct file-level database access that is outside the scope of the managed Database API. They continue to carry their own better-sqlite3 dependency.


Validation

This RFC has been validated with a working implementation and real-world plugin migration.

Implementation

A complete implementation exists on the feat-database-api branch of signalk-server, including:

  • packages/server-api/src/databaseapi.ts — public interfaces
  • src/api/database/sqliteprovider.ts — better-sqlite3 provider
  • src/api/database/nodesqliteprovider.ts — node:sqlite provider
  • src/api/database/index.ts — DatabaseApiHttpRegistry (provider management + HTTP routes)
  • Wiring in serverapi.ts, features.ts, api/index.ts, interfaces/plugins.ts
  • All 451 existing tests pass. Format and build succeed.

Proof-of-concept: signalk-postgsail

signalk-postgsail was migrated from direct better-sqlite3 usage to the Database API. The plugin uses SQLite as a write-ahead buffer — collecting vessel metrics (position, speed, wind, electrical, tanks) and periodically submitting batches to the PostgSail cloud API.

What was verified:

  • Plugin starts cleanly, getPluginDb() returns a working handle
  • Migration v1 applied automatically, _migrations table tracks it
  • db.run() inserts buffer rows with real vessel data
  • db.query() reads buffered rows for status display and batch submission
  • db.run() deletes submitted rows after successful cloud upload
  • Per-plugin DB file created at {configPath}/plugin-db/signalk-postgsail.db with WAL mode
  • Server manages the database lifecycle — plugin's stop() does not call close()
  • HTTP endpoints (GET /_providers, GET /_providers/_default, POST /_providers/_default/:id) all functional

Scope boundary confirmed: signalk-charts-provider-simple

Attempted migration of signalk-charts-provider-simple confirmed that the Database API correctly scopes to plugin-owned data only. This plugin opens external .mbtiles files (which are SQLite databases containing map tiles) at user-specified paths in read-only mode. It cannot use the Database API because it needs arbitrary file access, not a managed per-plugin database. This validates that the API's design correctly draws the line between managed storage and direct file I/O.


Prior Art Within Signal K

This proposal deliberately mirrors the architecture of:

  • History API (PR #2100, "History api provider system") — pluggable provider registration, consumer access via app.getHistoryApi()
  • Resources API — server-managed, provider-registered, typed consumer interface
  • Weather API — same provider/consumer split

The Database API adds a fourth member to this established family of server-managed, provider-backed APIs. Plugin authors already familiar with any of the above will find the pattern immediately recognizable.
