
[REQUEST] Extractor Checkpoints #301

Closed
@zprobst

Description

Is your feature request related to a problem? Please describe.

A common pattern in pipelines is to wake up, extract data from a data lake or object store, and then turn it into a graph. However, this gets tricky when the pipeline throws an unhandled exception, is unscheduled, or otherwise terminates before making full progress. When the user restarts the pipeline, it starts over from the beginning and repeats work it has already done.

A quick Google search for "data pipeline checkpoints" will reveal that this is a common pattern with well-known solutions. While nodestream doesn't endeavor to have every bell and whistle, this one seems like an obvious win for long-running pipelines.

Describe the solution you'd like

The proposed solution has several components:

  1. Allow users to specify the interval at which checkpoints occur.
  2. Provide an interface that allows extractors to produce a checkpoint: `async def checkpoint(self)`. The extractor can return anything that is pickle-able.
  3. Provide an interface that allows extractors to resume from a checkpoint: `async def resume_checkpoint(self, checkpoint)`. If this method throws an exception, the error will be logged and `extract_records` will be called as normal.
  4. Introduce a pluggable `ObjectStore` API that can be used for more than just checkpoints. [REQUEST] Record Schema Inference and Enforcement #37 would also benefit from this. Possible implementations are null, tempfile, and S3. A rough sketch of these interfaces follows this list.
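To make the proposal concrete, here is a minimal sketch of how these pieces could fit together. Everything here is illustrative: `ObjectStore`, `NullObjectStore`, `TempfileObjectStore`, `save_checkpoint`, and the `"checkpoint"` key are hypothetical names, not nodestream's actual API; only the two `async` hooks above come from the proposal itself.

```python
import pickle
import tempfile
from abc import ABC, abstractmethod
from pathlib import Path
from typing import Any, Optional


class ObjectStore(ABC):
    """Hypothetical pluggable blob store; null and tempfile shown, S3 analogous."""

    @abstractmethod
    def get(self, key: str) -> Optional[bytes]:
        ...

    @abstractmethod
    def put(self, key: str, value: bytes) -> None:
        ...


class NullObjectStore(ObjectStore):
    """Discards everything; effectively disables checkpointing."""

    def get(self, key: str) -> Optional[bytes]:
        return None

    def put(self, key: str, value: bytes) -> None:
        pass


class TempfileObjectStore(ObjectStore):
    """Keeps blobs as files under the system temp directory."""

    def __init__(self, namespace: str = "nodestream-checkpoints"):
        self.root = Path(tempfile.gettempdir()) / namespace
        self.root.mkdir(parents=True, exist_ok=True)

    def get(self, key: str) -> Optional[bytes]:
        path = self.root / key
        return path.read_bytes() if path.exists() else None

    def put(self, key: str, value: bytes) -> None:
        (self.root / key).write_bytes(value)


class MyExtractor:
    """Illustrative extractor implementing the two proposed hooks."""

    def __init__(self) -> None:
        self.last_key: Optional[str] = None  # progress marker, e.g. last S3 key read

    async def checkpoint(self) -> Any:
        # May return anything pickle-able that captures current progress.
        return {"last_key": self.last_key}

    async def resume_checkpoint(self, checkpoint: Any) -> None:
        # If this raises, the runner logs the error and falls back to a
        # cold start via extract_records, per item 3 above.
        self.last_key = checkpoint["last_key"]


async def save_checkpoint(extractor: MyExtractor, store: ObjectStore) -> None:
    # Called by the pipeline on the user-configured interval (item 1).
    store.put("checkpoint", pickle.dumps(await extractor.checkpoint()))
```

The runner would then call something like `save_checkpoint` on the configured interval and attempt `resume_checkpoint` once at startup, falling back to a fresh `extract_records` run if no checkpoint exists or resuming fails.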

Describe alternatives you've considered

The only alternative I can think of is for each step to implement this behavior on its own. However, that duplicates the same bookkeeping in every step and is easy to get wrong.

Labels

enhancement (New feature or request)
