
Support DynamoDB streams #64

Open · samritchie opened this issue Nov 25, 2022 · 1 comment · May be fixed by #80
@samritchie (Collaborator)

From #59 (comment)

Currently, if we write to a table using the table context, it gets automatically serialised with the internal picklers.

Once this same data hits the DynamoDB Stream, we can't deserialise it again, because the TableContext serialisation code is all private to the library.

Look into supporting Stream -> Record deserialisation so that F# code can consume Dynamo changes.

Questions to resolve:

  • should this just cover the DynamoDB Streams Client API directly?
  • should Kinesis streams also be supported?
  • should Lambda trigger events also be supported?
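To make the shape of the ask concrete, below is a rough sketch of what a Stream -> Record entry point could look like. It is a sketch only: `RecordTemplate.Define` exists in the library today, but `OfAttributeValues` is hypothetical, standing in for whatever API actually gets exposed.

```fsharp
open System.Collections.Generic
open Amazon.DynamoDBv2.Model
open FSharp.AWS.DynamoDB

// The write model that TableContext serialises with its internal picklers
type ItemRecord =
    { [<HashKey>] HashKey : string
      [<RangeKey>] SortKey : int64
      Payload : string }

// Hypothetical: expose the picklers so a stream image (the same
// Dictionary<string, AttributeValue> shape the SDK uses) round-trips
// back into the record type
let ofStreamImage (image : Dictionary<string, AttributeValue>) : ItemRecord =
    let template = RecordTemplate.Define<ItemRecord>()
    template.OfAttributeValues image // hypothetical - the missing piece this issue asks for
```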
@bartelink (Member)

Given the good overall support for DDB Streams via Lambda Event Source Mappings (and, AIUI, the lack of meaningful .NET support for the client API directly), I'd like to see it cover the Lambda side.
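For reference, a minimal sketch of the Lambda side, assuming the SDK-coupled Amazon.Lambda.DynamoDBEvents package (where `NewImage` is the SDK's `AttributeValue` dictionary) and reusing the hypothetical `ofStreamImage`/`ItemRecord` from the sketch in the issue body:

```fsharp
open Amazon.Lambda.Core
open Amazon.Lambda.DynamoDBEvents

// Sketch of a handler behind a DDB Streams event source mapping;
// ofStreamImage is the hypothetical Stream -> Record deserialiser above
type StreamHandler() =
    member _.Handle (event : DynamoDBEvent, _context : ILambdaContext) =
        for record in event.Records do
            match string record.EventName with
            | "INSERT" | "MODIFY" ->
                // NewImage carries the post-write attribute map
                let item = ofStreamImage record.Dynamodb.NewImage
                printfn "saw write for %s" item.HashKey
            | _ -> () // REMOVE events have no NewImage
```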

An example case for me: the Propulsion.DynamoStore Indexer consumes the writes made by Equinox.DynamoStore.

Here, I was initially hoping to 'just' parse the stream records by reading them back into the write model, but my task doesn't require parsing all of the content, so consuming the records directly happens to work OK without it. That said, being able to succinctly express the fields I do want to map would definitely make the code far more intelligible.
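For instance (equally hypothetical, and using illustrative attribute names rather than the actual Equinox schema), the Indexer could declare only the attributes it cares about, assuming the deserialiser tolerated extra attributes in the image:

```fsharp
open System.Collections.Generic
open Amazon.DynamoDBv2.Model
open FSharp.AWS.DynamoDB

// Only the fields the consumer actually needs; any other attributes in the
// stream image would simply be ignored (hypothetical behaviour)
type IndexEntry =
    { [<HashKey>] p : string   // partition/stream name
      [<RangeKey>] i : int64 } // position within the stream

let indexEntryOfImage (image : Dictionary<string, AttributeValue>) : IndexEntry =
    let template = RecordTemplate.Define<IndexEntry>()
    template.OfAttributeValues image // hypothetical, as sketched above
```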

I'll watch this and try to relate any proposed APIs to how they could be applied to this particular task.

Speculating on the Kinesis side: I'd expect the parsing to be more valuable in that context. DDB Streams consumption, given the maximum consumer count limits, tends by its nature to be a single thing that can more easily be done as a hardwired/hardcoded one-off; with Kinesis there are far more likely to be multiple consumers with different roles, whose logic and parsing you then want to maintain over time.

@samritchie samritchie self-assigned this Feb 2, 2024
@samritchie samritchie linked a pull request Sep 4, 2024 that will close this issue