Example for using DAG-PB for storing multiple chunks #85
Conversation
examples/store_dag.js (Outdated)

```js
await storeProof(api, who_pair, expectedRootCid);

// Store DAG-PB node in IPFS
const dagCid = await ipfs.dag.put(dagNode, {
```
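For context, a hedged sketch of how such a DAG-PB node could be assembled from the collected chunk CIDs before the truncated `dag.put` call above. `prepareDagNode` is a hypothetical helper, `chunks` follows the `{ cid, len }` shape used later in the example, and the `dag.put` option names (`storeCodec`, `hashAlg`) assume a recent js-ipfs (older versions used `format`):

```js
import * as dagPB from '@ipld/dag-pb';
import { CID } from 'multiformats/cid';

// Hypothetical helper: build a DAG-PB node whose links point at the
// already-stored chunks ({ cid, len } entries from the example).
function prepareDagNode(chunks) {
  return dagPB.prepare({
    Links: chunks.map(({ cid, len }, i) => ({
      Name: `chunk-${i}`,
      Hash: CID.parse(cid.toString()),
      Tsize: len,
    })),
  });
}

const dagNode = prepareDagNode(chunks);
// Storing with the dag-pb codec yields the DAG root CID.
const dagCid = await ipfs.dag.put(dagNode, {
  storeCodec: 'dag-pb',
  hashAlg: 'sha2-256',
});
```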
Ideally, this should be stored in the transaction storage as well.
> Ideally, this should be stored in the transaction storage as well.
Yes, right :). Originally I was storing it with storeProof -> store(dagFile), but something was not working as I expected; I am still experimenting with this.
Probably this is because transaction storage treats all chunks as "raw", while for a DAG we need the "dag-pb" codec to be set. I commented on this in the Bulletin Chain design doc.
> Probably this is because transaction storage treats all chunks as "raw", while for a DAG we need the "dag-pb" codec to be set.
Exactly, that's why I reverted that: it generates a different CID than the rootCID of the DAG file :)
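To make the mismatch concrete: the codec is baked into a CIDv1, so the same bytes announced as raw vs. dag-pb yield two different CIDs. A minimal sketch with the multiformats library (not code from the PR, just an illustration):

```js
import { CID } from 'multiformats/cid';
import { sha256 } from 'multiformats/hashes/sha2';

const bytes = new TextEncoder().encode('same bytes, different codec');
const digest = await sha256.digest(bytes);

// The multicodec is part of the CID: 0x55 = raw, 0x70 = dag-pb.
const rawCid = CID.createV1(0x55, digest);
const dagPbCid = CID.createV1(0x70, digest);

console.log(rawCid.toString());       // bafkrei... (raw)
console.log(dagPbCid.toString());     // bafybei... (dag-pb)
console.log(rawCid.equals(dagPbCid)); // false
```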
I finished the example with chunking - manual and/or DAG support - and how that works with also storing a proof for the DAG: 193b928
examples/store_dag.js (Outdated)

```js
const chunks = [];
for (let i = 0; i < contents.length; i++) {
  const cid = await store(api, who_pair, contents[i], nonce.addn(i));
  console.log(`Stored data with CID${i + 1}:`, cid);
  chunks.push({ cid, len: contents[i].length });
}
await new Promise(resolve => setTimeout(resolve, 5000));
```
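For completeness, a sketch of the matching read path: resolve the DAG root, then fetch and concatenate the chunk bytes. This is an illustration, not code from the PR; it assumes `ipfs` is a js-ipfs instance where `block.get` returns a `Uint8Array` and the links point at raw-codec chunks:

```js
// Walk the DAG-PB links and reassemble the original file bytes.
async function readChunkedFile(ipfs, rootCid) {
  const { value: node } = await ipfs.dag.get(rootCid);
  const parts = [];
  for (const link of node.Links) {
    // Each link resolves to the raw bytes of one stored chunk.
    parts.push(await ipfs.block.get(link.Hash));
  }
  // Concatenate the chunks in link order.
  const total = parts.reduce((sum, p) => sum + p.length, 0);
  const out = new Uint8Array(total);
  let offset = 0;
  for (const p of parts) {
    out.set(p, offset);
    offset += p.length;
  }
  return out;
}
```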
dq: you said "We could store chunks on different Bulletin (para)chains in parallel (connected as one IPFS cluster)".
What do you mean by storing chunks on different Bulletin chains in parallel?
IIUC, achieving parallel storage and retrieval would require a Para-Relay-like architecture, where the Relay coordinates splitting a blob into multiple chunks and then handles storing/retrieving those chunks across one or several parachains. Is that the intended behavior, or did I misunderstand the goal?
Also, do I understand correctly that your example refers to the polkadot-bulletin-chain/pallets/transaction-storage pallet?
Ah, I just read about the Facade in the new architecture doc.
> What do you mean by storing chunks on different Bulletin chains in parallel?

All the examples currently store data on a single chain (`await store(api, chunk..) -> CID`), but with #83 we want to run multiple Bulletin chains, so we could do:

```
await store(api1, chunk1..) -> CID
await store(api2, chunk2..) -> CID
await store(api3, chunk3..) -> CID
...
await store(apiN, chunkN..) -> CID
```
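In code, the fan-out could be as simple as issuing the stores concurrently, one API connection per Bulletin chain. A sketch under assumptions: `apis` is an array of connected chain APIs, `store` is the helper from the example, and per-chain nonce handling (`nonces[i]`) is elided for brevity:

```js
// Hypothetical parallel fan-out: chunks distributed across Bulletin chains.
const cids = await Promise.all(
  contents.map((chunk, i) =>
    store(apis[i % apis.length], who_pair, chunk, nonces[i])
  )
);
cids.forEach((cid, i) => console.log(`Stored chunk ${i + 1} with CID:`, cid));
```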
@bkontur What do you think about extending the transaction storage with the capability of storing the codec, so we can do this completely in Bulletin without the need to deploy IPFS nodes? Not sure if you are aware, but IPFS doesn't store anything in a distributed way and only serves the local data.
Just for completeness, do you mean "IPFS nodes" => "IPFS DAG nodes"?
Yes, if we can store the DAG on chain, we won't need dedicated "IPFS DAG nodes" for this.
This PR contains an example for multipart / chunked content / big files.
The code takes one file, splits it into chunks, and then uploads those chunks to the Bulletin.
It collects the partial CIDs for each chunk and saves them as a custom metadata JSON file in the Bulletin.
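Based on the `chunks.push({ cid, len })` collection in the example, that metadata manifest presumably looks something like the object below (field names follow the example; the exact on-chain shape is an assumption, and the CID strings are placeholders):

```js
// Hypothetical shape of the metadata JSON saved to the Bulletin.
const metadata = {
  chunks: [
    { cid: 'bafkrei...chunk1', len: 262144 },
    { cid: 'bafkrei...chunk2', len: 262144 },
    { cid: 'bafkrei...chunk3', len: 131072 },
  ],
};
```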
Now we have two examples:
http://localhost:8080/ipfs/QmW2WQi7j6c7UgJTarActp7tDNikE4B2qXtFCfLPdsgaTQ