Database Migration examples #34

@BSpendlove

Description

DB migrations can be quite awful sometimes. Migrating data isn't our responsibility in the open source solution, but I think providing migration examples would benefit both users and us internally.

Problems encountered so far

  • Too many metrics/rows to use traditional tools such as pg_dump or TimescaleDB's parallel copy tooling
  • Uncompressed tables mean larger backups
  • Piping COPY output through gzip is quite CPU-intensive, yet shows roughly the same compression ratio as the TimescaleDB compression policy (I see up to ~85% compression on hourly exports; see the sketch after this list)
  • How do we do a live migration with zero downtime?
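
For reference, a minimal sketch of the hourly COPY-to-gzip export from the third bullet. This assumes the psycopg 3 driver; the DSN, the `metrics` table, and its `time` column are all placeholders:

```python
import gzip

import psycopg  # assumption: psycopg 3; any driver exposing COPY would do

# Stream the last hour of rows out of a hypothetical `metrics` table and
# gzip them on the fly -- the CPU-heavy pipeline described in the list above.
with psycopg.connect("dbname=metrics_db") as conn, conn.cursor() as cur:
    with gzip.open("metrics_hourly.csv.gz", "wb") as out:
        with cur.copy(
            "COPY (SELECT * FROM metrics "
            "WHERE time >= now() - interval '1 hour') "
            "TO STDOUT (FORMAT csv, HEADER true)"
        ) as copy:
            for chunk in copy:  # server sends chunks; compress as they arrive
                out.write(chunk)
```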

Idea Dumping Grounds

TimescaleDB provides a live-migration tool/container (it apparently works well for 100+ GB workloads).

Could we potentially back up metrics at intervals (e.g. every X minutes) and offload them to an external data store for long-term archiving? Whether that's physical storage or something like S3/Glacier, we'd keep a metadata table/file for when users query timestamps older than X days. I don't think there is any use case for performing aggregation/summaries on long-term data; it's really just a problem of keeping X length of data for Y number of years.
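
A rough sketch of that interval export + metadata index idea, with the caveat that everything here is hypothetical: the bucket, the `metrics` table, and the `archive_index` table are made-up names, and psycopg 3 plus boto3 are assumed:

```python
import datetime as dt
import gzip

import boto3    # assumption: S3/Glacier as the archive target
import psycopg  # assumption: psycopg 3

# Each run exports one window of metrics, offloads it to S3, and records the
# window in a metadata table so queries for timestamps older than the local
# retention period know where the data went.
BUCKET = "metrics-archive"  # hypothetical bucket name

def archive_window(conn: psycopg.Connection, minutes: int = 60) -> None:
    end = dt.datetime.now(dt.timezone.utc)
    start = end - dt.timedelta(minutes=minutes)
    key = f"metrics/{start:%Y%m%dT%H%M}.csv.gz"
    path = "/tmp/window.csv.gz"
    with conn.cursor() as cur:
        # Export one window to a compressed local file (same COPY trick as
        # above; `minutes` is an int, so inlining it into the SQL is safe).
        with gzip.open(path, "wb") as out:
            with cur.copy(
                "COPY (SELECT * FROM metrics WHERE time >= now() - "
                f"interval '{minutes} minutes') TO STDOUT (FORMAT csv, HEADER true)"
            ) as copy:
                for chunk in copy:
                    out.write(chunk)
        # Offload, then index the window so old-timestamp queries can find it.
        boto3.client("s3").upload_file(path, BUCKET, key)
        cur.execute(
            "INSERT INTO archive_index (range_start, range_end, s3_key) "
            "VALUES (%s, %s, %s)",
            (start, end, key),
        )
    conn.commit()
```

The `archive_index` table is the metadata piece mentioned above: a query planner (or just a helper in the app) could consult it to resolve which archived object covers a requested timestamp range.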
