@tlonny Hi, some thoughts:
Hi @radex Thanks for your reply! We were foolish enough to try to build a multi-inbox, cross platform, offline-first IMAP-based e-mail client. Our intention is to hold e-mail metadata relevant for searching + summary presentation (not message text/attachments) client-side. Even when stored in a relatively concise/compacted format (SQLite) the storage footprint for a single, large inbox can run to >250MB. Presumably, held as an array of JS objects as per the return value of Unfortunately, I don't think turbo would help, as a user is able to add a new email inbox at any point in time and thus we are at risk of memory blow-out at any time - not just during the initial sync. Appreciate your comment about inconsistency - For our use-case, I don't think we would be significantly affected by this - but can certainly understand how other apps could be... If I tidied up this PR, gating this behavior behind a default-false flag (as per my draft), would it be something you'd consider merging? Many Thanks, Tim |
Yes, if it's not the default, then I'm happy to have that in the codebase. I suggest naming potentially unsafe features "unsafeXxxx", just to discourage users unwilling to read into it, and to make sure their use case is safe. For merging, a quick update to tests, a mention in changelog-unreleased, and a mention in the sync docs would be highly appreciated.

PS. Good luck with the app!
Hi @radex Progress update from me:
Things left to do:
If you have time, please can you provide feedback on the following:
I would consider this. While I love the fast-async project, unfortunately it's not super popular, and the complexity of Watermelon's babel config makes maintenance much more difficult. But if you found another solution, that's fine too.
I'm fine with
Would be great if this could be merged.
...It's been so long, I totally forgot about this. I ended up using
Coming from your blog post only :D How is the Replicache experience so far?
unsafePullChangesAsyncGenerator is an alternative to pullChanges (which is
now optional) that returns an AsyncGenerator of PullResults. This should
make large updates and initial syncs more tractable, as the sync can be
split up into several smaller chunks.
1. Updated TS types to be a proper discriminated union (i.e. either
pullChanges OR unsafePullChangesAsyncGenerator HAS to be present)
2. Updated Flow types to make pullChanges and
unsafePullChangesAsyncGenerator optional
3. Added an invariant check to ensure there is at least one pull
strategy provided (for JS lads)
4. Added code that "lifts" the result from pullChanges into an
AsyncGenerator - ensuring only 1 code path going forward
5. Added a loop that pulls from the generator and performs DB writes
until the generator is exhausted.
6. Added tests
7. Added babel support for async generators
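The "lift" in step 4 can be sketched roughly as follows (hypothetical names and simplified types; WatermelonDB's real PullResult shape and sync internals differ):

```typescript
// Simplified sketch of the discriminated union from step 1: exactly one
// of the two pull strategies must be provided.
type PullResult = { changes: Record<string, unknown>, timestamp: number }

type PullConfig =
  | { pullChanges: () => Promise<PullResult>, unsafePullChangesAsyncGenerator?: undefined }
  | { pullChanges?: undefined, unsafePullChangesAsyncGenerator: () => AsyncGenerator<PullResult> }

// Step 4: "lift" a one-shot pullChanges into an AsyncGenerator, so that
// downstream sync code only has to handle a single code path.
async function* liftedPullChanges(config: PullConfig): AsyncGenerator<PullResult> {
  if (config.unsafePullChangesAsyncGenerator) {
    yield* config.unsafePullChangesAsyncGenerator()
  } else if (config.pullChanges) {
    yield await config.pullChanges()
  } else {
    // Step 3's invariant check, for callers not using TS/Flow
    throw new Error('Either pullChanges or unsafePullChangesAsyncGenerator must be provided')
  }
}
```

The loop in step 5 then becomes a plain `for await (const result of liftedPullChanges(config)) { ... }` that applies DB writes per chunk, regardless of which strategy the caller supplied.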
Hi @radex - finally got some free time and figured I'd try and get this ready to merge (if it's still a welcome contribution):

@octalpixel - I think Replicache is a fine solution. It may not promise a lot (i.e. AFAIK Replicache only allows lookup by ID and scan; searching/indexing needs to be done outside of Replicache), but what it does promise, it delivers on very well.
Hi @radex,
I was wondering whether something like what's proposed in my draft PR would work for facilitating "chunked" pulls. I want to avoid a scenario where we have to hold the entire changeset in memory, as in our specific use case the amount of changes could be very large (see: #650).
All I did was add an optional (useGenerator: boolean) parameter. When true, the pullChanges method instead returns an AsyncGenerator, yielding "chunks" of pulled changes rather than the entire payload at once. I've then just naively placed all the code that persists/commits these changes into a loop that runs for each chunk...
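The idea above can be sketched roughly like this (all names are hypothetical, not WatermelonDB's actual API; persistChanges stands in for whatever commits a chunk to the database):

```typescript
// Hypothetical sketch of the chunked-pull loop: each yielded chunk is
// persisted before the next one is pulled, so the full changeset is
// never held in memory at once.
type Changes = Record<string, unknown[]>
type PullChunk = { changes: Changes, timestamp: number }

async function syncChunked(
  pullChanges: () => AsyncGenerator<PullChunk>,
  persistChanges: (chunk: PullChunk) => Promise<void>,
): Promise<number> {
  let chunksApplied = 0
  for await (const chunk of pullChanges()) {
    await persistChanges(chunk) // commit this chunk before pulling more
    chunksApplied += 1
  }
  return chunksApplied
}
```

The memory high-water mark is then bounded by the largest single chunk the server yields, rather than by the total size of the changeset.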
My peanut brain can't come up with any reasons why this shouldn't work - but was hoping you could provide some wisdom as to why/how this approach is flawed.
Many Thanks!
Tim