From 8354818ba19cdd10490495fa561c43f2a054b507 Mon Sep 17 00:00:00 2001 From: Dmitry Sinyavin Date: Fri, 16 May 2025 22:08:36 +0200 Subject: [PATCH 01/30] Add RFC-144 --- ...0144-remove-unnecessary-allocator-usage.md | 337 ++++++++++++++++++ 1 file changed, 337 insertions(+) create mode 100644 text/0144-remove-unnecessary-allocator-usage.md diff --git a/text/0144-remove-unnecessary-allocator-usage.md b/text/0144-remove-unnecessary-allocator-usage.md new file mode 100644 index 000000000..8653a379a --- /dev/null +++ b/text/0144-remove-unnecessary-allocator-usage.md @@ -0,0 +1,337 @@ +# RFC-0144: Remove the host-side runtime memory allocator + +| | | +| --------------- | ------------------------------------------------------------------------------------------- | +| **Start Date** | 2025-05-16 | +| **Description** | Update the runtime-host interface to no longer make use of a host-side allocator | +| **Authors** | Pierre Krieger, Someone Unknown | +## Summary + +Update the runtime-host interface so that it no longer uses the host-side allocator. + +## Prior Art + +The API of these new functions was heavily inspired by the API used by the C programming language. + +This RFC is mainly based on [RFC-4](https://github.com/polkadot-fellows/RFCs/pull/4) by @tomaka, which has never been adopted, and supercedes it. + +### Changes + +* The original RFC required checking if an output buffer address provided to a host function is inside the VM address space range and to stop the runtime execution if that's not the case. That requirement has been removed in this version of the RFC, as in the general case, the host doesn't have exhaustive information about the VM's memory organization. Thus, attempting to write to an out-of-bound region will result in a "normal" runtime panic. +* Function signatures introduced by [PPP#7](https://github.com/w3f/PPPs/pull/7) have been used in this RFC, as the PPP has already been [properly implemented](https://github.com/paritytech/substrate/pull/11490) and [documented](https://github.com/w3f/polkadot-spec/pull/592/files). However, it has never been officially adopted, nor have its functions been in use. +* Added new versions of `ext_misc_runtime_version` and `ext_offchain_random_seed`. +* Addressed discussions from the original RFC-4 discussion flow. + +## Motivation + +The heap allocation of the runtime is currently controlled by the host using a memory allocator on the host side. + +The API of many host functions contains buffer allocations. For example, when calling `ext_hashing_twox_256_version_1`, the host allocates a 32-byte buffer using the host allocator, and returns a pointer to this buffer to the runtime. The runtime later has to call `ext_allocator_free_version_1` on this pointer to free the buffer. + +Even though no benchmark has been done, it is pretty obvious that this design is very inefficient. To continue with the example of `ext_hashing_twox_256_version_1`, it would be more efficient to instead write the output hash to a buffer allocated by the runtime on its stack and passed by pointer to the function. Allocating a buffer on the stack in the worst-case scenario consists of simply decreasing a number; in the best-case scenario, it is free. Doing so would save many VM memory reads and writes by the allocator, and would save a function call to `ext_allocator_free_version_1`. + +Furthermore, the existence of the host-side allocator has become questionable over time. 
It is implemented in a very naive way, and for determinism and backwards compatibility reasons, it needs to be implemented exactly identically in every client implementation. Runtimes make substantial use of heap memory allocations, and each allocation needs to go through the runtime <-> host boundary twice (once for allocating and once for freeing). Moving the allocator to the runtime side would be a good idea, although it would increase the runtime size. But before the host-side allocator can be deprecated, all the host functions that use it must be updated to avoid using it. + +## Stakeholders + +No attempt was made to convince stakeholders. + +## Explanation + +### New host functions + +This section contains a list of new host functions to introduce and amendments to the existing ones. + +```wat +(func $ext_storage_read_version_2 + (param $key i64) (param $value_out i64) (param $offset i32) (result i64)) +(func $ext_default_child_storage_read_version_2 + (param $child_storage_key i64) (param $key i64) (param $value_out i64) + (param $offset i32) (result i64)) +``` + +The signature and behaviour of `ext_storage_read_version_2` and `ext_default_child_storage_read_version_2` are identical to their version 1 counterparts, but the return value has a different meaning. + +The new functions directly return the number of bytes written into the `value_out` buffer. If the entry doesn't exist, `-1` is returned. Given that the host must never write more bytes than the size of the buffer in `value_out`, and that the size of this buffer is expressed as a 32-bit number, the 64-bit value of `-1` is not ambiguous. + +```wat +(func $ext_storage_next_key_version_2 + (param $key i64) (param $out i64) (return i32)) +(func $ext_default_child_storage_next_key_version_2 + (param $child_storage_key i64) (param $key i64) (param $out i64) (return i32)) +``` + +The behaviour of these functions is identical to their version 1 counterparts. + +Instead of allocating a buffer, writing the next key to it, and returning a pointer to it, the new version of these functions accepts an `out` parameter containing [a pointer-size](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size) to the memory location where the host writes the output. + +These functions return the size, in bytes, of the next key, or `0` if there is no next key. If the size of the next key is larger than the buffer in `out`, the bytes of the key that fit the buffer are written to `out`, and any extra bytes that don't fit are discarded. + +Some notes: + +- It is never possible for the next key to be an empty buffer, because an empty key has no preceding key. For this reason, a return value of `0` can unambiguously be used to indicate the lack of the next key. +- The `ext_storage_next_key_version_2` and `ext_default_child_storage_next_key_version_2` are typically used to enumerate keys that start with a certain prefix. Since storage keys are constructed by concatenating hashes, the runtime is expected to know the size of the next key and can allocate a buffer that can fit said key. When the next key doesn't belong to the desired prefix, it might not fit the buffer, but given that the start of the key is written to the buffer anyway, this can be detected to avoid calling the function the second time with a larger buffer. 
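
To make the buffer-passing pattern concrete, here is a hypothetical runtime-side sketch (not part of this RFC) that calls `ext_storage_read_version_2` with a fixed stack buffer. Only the extern signature mirrors the declaration above; the `pack` helper and the wrapper names are assumptions, with the pointer-size layout (address in the low 32 bits, length in the high 32 bits) taken from the linked definition.

```rust
// Hypothetical runtime-side wrapper; only the extern declaration mirrors this RFC.
extern "C" {
    fn ext_storage_read_version_2(key: i64, value_out: i64, offset: i32) -> i64;
}

// Packs a buffer into a pointer-size value: address in the low 32 bits,
// length in the high 32 bits (wasm32 target assumed).
fn pack(ptr: *const u8, len: usize) -> i64 {
    ((ptr as usize as u64) | ((len as u64) << 32)) as i64
}

/// Reads a storage value into `out`, returning `None` if the entry is absent and
/// `Some(full_len)` otherwise, where `full_len` is the total size of the value in
/// storage and may exceed `out.len()` if the value was truncated.
fn storage_read(key: &[u8], out: &mut [u8], offset: u32) -> Option<usize> {
    let ret = unsafe {
        ext_storage_read_version_2(
            pack(key.as_ptr(), key.len()),
            pack(out.as_mut_ptr(), out.len()),
            offset as i32,
        )
    };
    if ret < 0 { None } else { Some(ret as usize) }
}

fn example() {
    let mut buf = [0u8; 32]; // stack buffer, no allocator round-trip
    match storage_read(b":code", &mut buf, 0) {
        None => { /* entry does not exist */ }
        Some(len) if len > buf.len() => { /* truncated; retry with a larger buffer if needed */ }
        Some(len) => { let _value = &buf[..len]; }
    }
}
```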
+ +```wat +(func $ext_hashing_keccak_256_version_2 + (param $data i64) (param $out i32)) +(func $ext_hashing_keccak_512_version_2 + (param $data i64) (param $out i32)) +(func $ext_hashing_sha2_256_version_2 + (param $data i64) (param $out i32)) +(func $ext_hashing_blake2_128_version_2 + (param $data i64) (param $out i32)) +(func $ext_hashing_blake2_256_version_2 + (param $data i64) (param $out i32)) +(func $ext_hashing_twox_64_version_2 + (param $data i64) (param $out i32)) +(func $ext_hashing_twox_128_version_2 + (param $data i64) (param $out i32)) +(func $ext_hashing_twox_256_version_2 + (param $data i64) (param $out i32)) +(func $ext_trie_blake2_256_root_version_3 + (param $data i64) (param $version i32) (param $out i32)) +(func $ext_trie_blake2_256_ordered_root_version_3 + (param $data i64) (param $version i32) (param $out i32)) +(func $ext_trie_keccak_256_root_version_3 + (param $data i64) (param $version i32) (param $out i32)) +(func $ext_trie_keccak_256_ordered_root_version_3 + (param $data i64) (param $version i32) (param $out i32)) +(func $ext_crypto_ed25519_generate_version_2 + (param $key_type_id i32) (param $seed i64) (param $out i32)) +(func $ext_crypto_sr25519_generate_version_2 + (param $key_type_id i32) (param $seed i64) (param $out i32) (return i32)) +(func $ext_crypto_ecdsa_generate_version_2 + (param $key_type_id i32) (param $seed i64) (param $out i32) (return i32)) +``` + +The behaviour of these functions is identical to their version 1 or version 2 counterparts. Instead of allocating a buffer, writing the output to it, and returning a pointer to it, the new version of these functions accepts an `out` parameter containing the memory location where the host writes the output. The output is always of a size known at compilation time. + +```wat +(func $ext_default_child_storage_root_version_3 + (param $child_storage_key i64) (param $out i32)) +(func $ext_storage_root_version_3 + (param $out i32)) +``` + +The behaviour of these functions is identical to their version 1 and version 2 counterparts. Instead of allocating a buffer, writing the output to it, and returning a pointer to it, the new versions of these functions accept an `out` parameter containing the memory location where the host writes the output. The output is always of a size known at compilation time. + +The version 1 of these functions has been taken as a base rather than the version 2, as a [PPP#6](https://github.com/w3f/PPPs/pull/6) deprecating the version 2 of these functions has previously been accepted. + +```wat +(func $ext_storage_clear_prefix_version_3 + (param $maybe_prefix i64) (param $maybe_limit i64) + (param $maybe_cursor_in_out i64) (param $backend_out i32) + (param $unique_out i32) (param $loops_out i32) (return i32)) +(func $ext_default_child_storage_clear_prefix_version_3 + (param $child_storage_key i64) (param $prefix i64) (param $maybe_limit i64) + (param $maybe_cursor_in_out i64) (param $backend_out i32) + (param $unique_out i32) (param $loops_out i32) (return i32)) +(func $ext_default_child_storage_kill_version_4 + (param $child_storage_key i64) (param $maybe_limit i64) + (param $maybe_cursor_in_out i64) (param $backend_out i32) + (param $unique_out i32) (param $loops_out i32) (return i32)) +``` + +These functions amend already implemented but still unused functions introduced by [PPP#7](https://github.com/w3f/PPPs/pull/7), hence there's no version number change. `maybe_limit` defines the limit of backend deletions, not counting keys in the current overlay. 
`maybe_cursor_in_out` may be used to pass a continuation cursor. The cursor is written into the same field if the limit was reached and not all the keys were cleared; otherwise, `None` is written. (CAVEAT: It's impossible to determine appropriate buffer size; the approach is discussible). `backend_out`, `unique_out` and `loops_out` parameters contain the memory location where the output is written (respectively, the number of items removed from the backend DB; the number of unique keys removes, including overlay; the number of iterations done). Any of the output parameters may be `-1`, in which case no output is written. The functions return `0` to indicate success, or `1` if `maybe_cursor_in_out` buffer length was not enough to write the new cursor; in the latter case, `None` is written to the buffer. + +```wat +(func $ext_crypto_ed25519_sign_version_2 + (param $key_type_id i32) (param $key i32) (param $msg i64) (param $out i32) (return i32)) +(func $ext_crypto_sr25519_sign_version_2 + (param $key_type_id i32) (param $key i32) (param $msg i64) (param $out i32) (return i32)) +func $ext_crypto_ecdsa_sign_version_2 + (param $key_type_id i32) (param $key i32) (param $msg i64) (param $out i32) (return i32)) +(func $ext_crypto_ecdsa_sign_prehashed_version_2 + (param $key_type_id i32) (param $key i32) (param $msg i64) (param $out i32) (return i64)) +``` + +The behaviour of these functions is identical to their version 1 counterparts. The new versions of these functions accept an `out` parameter containing the memory location where the host writes the signature. The signatures are always of a size known at compilation time. On success, these functions return `0`. If the public key can't be found in the keystore, these functions return `1` and do not write anything to `out`. + +Note that the return value is `0` on success and `1` on failure, while the previous version of these functions wrote `1` on success (as it represents a SCALE-encoded `Some`) and `0` on failure (as it represents a SCALE-encoded `None`). Returning `0` on success and non-zero on failure is consistent with standard practices in the C programming language and is less surprising than the opposite. + +```wat +(func $ext_crypto_secp256k1_ecdsa_recover_version_3 + (param $sig i32) (param $msg i32) (param $out i32) (return i32)) +(func $ext_crypto_secp256k1_ecdsa_recover_compressed_version_3 + (param $sig i32) (param $msg i32) (param $out i32) (return i32)) +``` + +The behaviour of these functions is identical to their version 2 counterparts. The new versions of these functions accept an `out` parameter containing the memory location where the host writes the signature. The signatures are always of a size known at compilation time. On success, these functions return `0`. On failure, these functions return a non-zero value and do not write anything to `out`. + +The non-zero value written on failure is: + +- 1: incorrect value of R or S +- 2: incorrect value of V +- 3: invalid signature + +These values are equal to the values returned on error by the version 2 (see ), but incremented by 1 to reserve 0 for success. 
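
As an illustration of this zero-for-success convention, a hypothetical runtime-side wrapper for `ext_crypto_secp256k1_ecdsa_recover_version_3` could look like the sketch below. The wrapper and the error type are assumptions; the extern signature mirrors the declaration above, and the 65-byte signature, 32-byte message hash, and 64-byte public key sizes are those of the existing version 2.

```rust
// Sketch only; the extern declaration mirrors the proposed host function.
extern "C" {
    fn ext_crypto_secp256k1_ecdsa_recover_version_3(sig: i32, msg: i32, out: i32) -> i32;
}

#[derive(Debug)]
enum RecoverError {
    BadRS,            // 1: incorrect value of R or S
    BadV,             // 2: incorrect value of V
    InvalidSignature, // 3: invalid signature
    Unknown(i32),     // any other non-zero value
}

fn secp256k1_recover(sig: &[u8; 65], msg: &[u8; 32]) -> Result<[u8; 64], RecoverError> {
    let mut out = [0u8; 64]; // stack-allocated output buffer
    let ret = unsafe {
        ext_crypto_secp256k1_ecdsa_recover_version_3(
            sig.as_ptr() as usize as i32,
            msg.as_ptr() as usize as i32,
            out.as_mut_ptr() as usize as i32,
        )
    };
    match ret {
        0 => Ok(out), // success: `out` holds the recovered public key
        1 => Err(RecoverError::BadRS),
        2 => Err(RecoverError::BadV),
        3 => Err(RecoverError::InvalidSignature),
        n => Err(RecoverError::Unknown(n)),
    }
}
```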
+ +```wat +(func $ext_crypto_ed25519_num_public_keys_version_1 + (param $key_type_id i32) (return i32)) +(func $ext_crypto_ed25519_public_key_version_2 + (param $key_type_id i32) (param $key_index i32) (param $out i32)) +(func $ext_crypto_sr25519_num_public_keys_version_1 + (param $key_type_id i32) (return i32)) +(func $ext_crypto_sr25519_public_key_version_2 + (param $key_type_id i32) (param $key_index i32) (param $out i32)) +(func $ext_crypto_ecdsa_num_public_keys_version_1 + (param $key_type_id i32) (return i32)) +(func $ext_crypto_ecdsa_public_key_version_2 + (param $key_type_id i32) (param $key_index i32) (param $out i32)) +``` + +The functions supersede the `ext_crypto_ed25519_public_key_version_1`, `ext_crypto_sr25519_public_key_version_1`, and `ext_crypto_ecdsa_public_key_version_1` host functions. + +Instead of calling `ext_crypto_ed25519_public_key_version_1` to obtain the list of all the keys at once, the runtime should instead call `ext_crypto_ed25519_num_public_keys_version_1` to get the number of public keys available, then `ext_crypto_ed25519_public_key_version_2` repeatedly. +The `ext_crypto_ed25519_public_key_version_2` function writes the public key of the given `key_index` to the memory location designated by `out`. The `key_index` must be between 0 (included) and `n` (excluded), where `n` is the value returned by `ext_crypto_ed25519_num_public_keys_version_1`. Execution must trap if `n` is out of range. + +The same explanations apply for `ext_crypto_sr25519_public_key_version_1` and `ext_crypto_ecdsa_public_key_version_1`. + +Host implementers should be aware that the list of public keys (including their ordering) must not change while the runtime is running. That is most likely done by copying the list of all available keys either at the start of the execution or the first time the list is accessed. + +```wat +(func $ext_offchain_http_request_start_version_2 + (param $method i64) (param $uri i64) (param $meta i64) (result i32)) +``` + +The behaviour of this function is identical to its version 1 counterpart. Instead of allocating a buffer, writing the request identifier in it, and returning a pointer to it, version 2 of this function simply returns the newly-assigned identifier to the HTTP request. On failure, this function returns `-1`. An identifier of `-1` is invalid and is reserved to indicate failure. + +```wat +(func $ext_offchain_http_request_write_body_version_2 + (param $method i64) (param $uri i64) (param $meta i64) (result i32)) +(func $ext_offchain_http_response_read_body_version_2 + (param $request_id i32) (param $buffer i64) (param $deadline i64) (result i64)) +``` + +The behaviour of these functions is identical to their version 1 counterpart. Instead of allocating a buffer, writing two bytes in it, and returning a pointer to it, the new version of these functions simply indicates what happened: + +- For `ext_offchain_http_request_write_body_version_2`, 0 on success. +- For `ext_offchain_http_response_read_body_version_2`, 0 or a non-zero number of bytes on success. +- -1 if the deadline was reached. +- -2 if there was an I/O error while processing the request. +- -3 if the identifier of the request is invalid. + +These values are equal to the values returned on error by version 1 (see ), but tweaked to reserve positive numbers for success. + +When it comes to `ext_offchain_http_response_read_body_version_2`, the host implementers must not read too much data at once to avoid ambiguity in the returned value. 
Given that the `buffer` size is always inferior or equal to 4 GiB, this is not a problem. + +```wat +(func $ext_offchain_http_response_wait_version_2 + (param $ids i64) (param $deadline i64) (param $out i32)) +``` + +The behaviour of this function is identical to its version 1 counterpart. Instead of allocating a buffer, writing the output to it, and returning a pointer to it, the new version of this function accepts an `out` parameter containing the memory location where the host writes the output. + +The encoding of the response code is also modified compared to its version 1 counterpart, and each response code now encodes up to 4 little-endian bytes as described below: + +- 100-999: The request has finished with the given HTTP status code. +- -1: The deadline was reached. +- -2: There was an I/O error while processing the request. +- -3: The identifier of the request is invalid. + +The buffer passed to `out` must always have a size of `4 * n` where `n` is the number of elements in the `ids`. + +```wat +(func $ext_offchain_http_response_header_name_version_1 + (param $request_id i32) (param $header_index i32) (param $out i64) (result i64)) +(func $ext_offchain_http_response_header_value_version_1 + (param $request_id i32) (param $header_index i32) (param $out i64) (result i64)) +``` + +These functions supersede the `ext_offchain_http_response_headers_version_1` host function. + +Contrary to `ext_offchain_http_response_headers_version_1`, only one header indicated by `header_index` can be read at a time. Instead of calling `ext_offchain_http_response_headers_version_1` once, the runtime should call `ext_offchain_http_response_header_name_version_1` and `ext_offchain_http_response_header_value_version_1` multiple times with an increasing `header_index`, until a value of `-1` is returned. + +These functions accept an `out` parameter containing [a pointer-size](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size) to the memory location where the header name or value should be written. + +These functions return the size, in bytes, of the header name or header value. If the request doesn't exist or is in an invalid state (as documented for `ext_offchain_http_response_headers_version_1`) or the `header_index` is out of range, a value of `-1` is returned. Given that the host must never write more bytes than the size of the buffer in `out`, and that the size of this buffer is expressed as a 32-bit number, a 64-bit value of `-1` is not ambiguous. + +If the buffer in `out` is too small to fit the entire header name or value, only the bytes that fit are written, and the rest are discarded. + +```wat +(func $ext_offchain_submit_transaction_version_2 + (param $data i64) (return i32)) +(func $ext_offchain_http_request_add_header_version_2 + (param $request_id i32) (param $name i64) (param $value i64) (result i32)) +``` + +Instead of allocating a buffer, writing `1` or `0` in it, and returning a pointer to it, the version 2 of these functions returns `0` or `1`, where `0` indicates success and `1` indicates failure. The runtime must interpret any non-`0` value as failure, but the client must always return `1` in case of failure. + +```wat +(func $ext_offchain_local_storage_read_version_1 + (param $kind i32) (param $key i64) (param $value_out i64) (param $offset i32) (result i64)) +``` + +This function supercedes the `ext_offchain_local_storage_get_version_1` host function, and uses an API and logic similar to `ext_storage_read_version_2`. 
+ +It reads the offchain local storage key indicated by `kind` and `key` starting at the byte indicated by `offset`, and writes the value to the [pointer-size](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size) indicated by `value_out`. + +The function returns the number of bytes written into the `value_out` buffer. If the entry doesn't exist, the `-1` value is returned. Given that the host must never write more bytes than the size of the buffer in `value_out`, and that the size of this buffer is expressed as a 32-bit number, a 64-bit value of `-1` is not ambiguous. + +```wat +(func $ext_offchain_network_peer_id_version_1 + (param $out i64)) +``` + +This function writes [the `PeerId` of the local node](https://spec.polkadot.network/chap-networking#id-node-identities) to the memory location indicated by `out`. A `PeerId` is always 38 bytes long. + +```wat +(func $ext_misc_runtime_version_version_2 + (param $wasm i64) (param $out i64)) +``` + +The behaviour of this function is identical to its version 1 counterpart. Instead of allocating a buffer, writing the output to it, and returning a pointer to it, the new version of this function accepts an `out` parameter containing [pointer-size](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size) to the memory location where the host writes the output. + +```wat +(func $ext_offchain_random_seed_version_2 (param $out i32)) +``` + +The behaviour of this function is identical to its version 1 counterpart. Instead of allocating a buffer, writing the output to it, and returning a pointer to it, the new version of this function accepts an `out` parameter containing the address of the memory location where the host writes the output. The size is output is always 32 bytes. + +```wat +(func $ext_misc_input_read_version_1 + (param $offset i64) (param $out i64) (return i32)) +``` + +When a runtime function is called, the host uses the allocator to allocate memory within the runtime to write some input data. The new host function provides an alternative way to access the input that doesn't use the allocator. + +The function copies some data from the input data to the runtime's memory. The `offset` parameter indicates the offset within the input data from which to start copying, and must lie inside the output buffer provided. The `out` parameter is [a pointer-size](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size) and contains the buffer where to write. + +The runtime execution stops with an error if `offset` is strictly greater than the input data size. + +The return value is the number of bytes written unless `out` has zero length, in which case the full length of input data in bytes is returned, and nothing is written into the output buffer. + +### Other changes + +In addition to the new host functions, this RFC proposes two changes to the runtime-host interface: + +- The following function signature is now also accepted for runtime entry points: `(func (result i64))`. +- Runtimes no longer need to expose a constant named `__heap_base`. + +All the host functions superseded by new host functions are now considered deprecated and should no longer be used. 
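
To illustrate the new entry point signature together with `ext_misc_input_read_version_1` described above, the following hypothetical sketch shows a runtime entry point using the newly accepted `(func (result i64))` shape. Only the host function and the entry-point shape come from this RFC; the entry-point name, the `pack` helper, and the use of `Vec` are assumptions.

```rust
extern "C" {
    fn ext_misc_input_read_version_1(offset: i64, out: i64) -> i32;
}

// Pointer-size packing: address in the low 32 bits, length in the high 32 bits.
fn pack(ptr: *const u8, len: usize) -> i64 {
    ((ptr as usize as u64) | ((len as u64) << 32)) as i64
}

// Entry point using the newly accepted `(func (result i64))` signature.
#[no_mangle]
pub extern "C" fn hypothetical_runtime_call() -> i64 {
    // A zero-length output buffer makes the host return the full input size
    // without writing anything.
    let input_len =
        unsafe { ext_misc_input_read_version_1(0, pack(core::ptr::null(), 0)) } as usize;

    // Copy the whole input into runtime-owned memory using the runtime's own allocator.
    let mut input = vec![0u8; input_len];
    unsafe { ext_misc_input_read_version_1(0, pack(input.as_mut_ptr(), input.len())) };

    // ... decode `input`, execute the call, SCALE-encode the result ...
    let output: Vec<u8> = input; // placeholder for the real, encoded result

    // Return a pointer-size to the output; the buffer is leaked deliberately,
    // since execution ends when the entry point returns.
    let ret = pack(output.as_ptr(), output.len());
    core::mem::forget(output);
    ret
}
```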
+ +The following other host functions are also considered deprecated: + +- `ext_storage_get_version_1` +- `ext_storage_changes_root_version_1` +- `ext_default_child_storage_get_version_1` +- `ext_allocator_malloc_version_1` +- `ext_allocator_free_version_1` +- `ext_offchain_network_state_version_1` + +## Unresolved Questions + +The changes in this RFC would need to be benchmarked. That involves implementing the RFC and measuring the speed difference. + +It is expected that most host functions are faster or equal in speed to their deprecated counterparts, with the following exceptions: + +- `ext_input_size_version_1`/`ext_input_read_version_1` is inherently slower than obtaining a buffer with the entire data due to the two extra function calls and the extra copying. However, given that this only happens once per runtime call, the cost is expected to be negligible. + +- The `ext_crypto_*_public_keys`, `ext_offchain_network_state`, and `ext_offchain_http_*` host functions are likely slightly slower than their deprecated counterparts, but given that they are used only in offchain workers, that is acceptable. + +- It is unclear how replacing `ext_storage_get` with `ext_storage_read` and `ext_default_child_storage_get` with `ext_default_child_storage_read` will impact performance. + +- It is unclear how the changes to `ext_storage_next_key` and `ext_default_child_storage_next_key` will impact performance. + From 64afafd2999cd98fc329e75e1e44c1a69f1a0da3 Mon Sep 17 00:00:00 2001 From: Dmitry Sinyavin Date: Fri, 16 May 2025 22:12:16 +0200 Subject: [PATCH 02/30] Actually, RFC-145 --- ...ator-usage.md => 0145-remove-unnecessary-allocator-usage.md} | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) rename text/{0144-remove-unnecessary-allocator-usage.md => 0145-remove-unnecessary-allocator-usage.md} (99%) diff --git a/text/0144-remove-unnecessary-allocator-usage.md b/text/0145-remove-unnecessary-allocator-usage.md similarity index 99% rename from text/0144-remove-unnecessary-allocator-usage.md rename to text/0145-remove-unnecessary-allocator-usage.md index 8653a379a..20e1ed5b7 100644 --- a/text/0144-remove-unnecessary-allocator-usage.md +++ b/text/0145-remove-unnecessary-allocator-usage.md @@ -1,4 +1,4 @@ -# RFC-0144: Remove the host-side runtime memory allocator +# RFC-0145: Remove the host-side runtime memory allocator | | | | --------------- | ------------------------------------------------------------------------------------------- | From f6aa092ee849d9a3555529d2a68255780223cef0 Mon Sep 17 00:00:00 2001 From: Dmitry Sinyavin Date: Fri, 16 May 2025 22:17:55 +0200 Subject: [PATCH 03/30] Minor fix --- text/0145-remove-unnecessary-allocator-usage.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/text/0145-remove-unnecessary-allocator-usage.md b/text/0145-remove-unnecessary-allocator-usage.md index 20e1ed5b7..eb0129bd3 100644 --- a/text/0145-remove-unnecessary-allocator-usage.md +++ b/text/0145-remove-unnecessary-allocator-usage.md @@ -327,7 +327,7 @@ The changes in this RFC would need to be benchmarked. That involves implementing It is expected that most host functions are faster or equal in speed to their deprecated counterparts, with the following exceptions: -- `ext_input_size_version_1`/`ext_input_read_version_1` is inherently slower than obtaining a buffer with the entire data due to the two extra function calls and the extra copying. However, given that this only happens once per runtime call, the cost is expected to be negligible. 
+- `ext_misc_input_read_version_1` is inherently slower than obtaining a buffer with the entire data due to the two extra function calls and the extra copying. However, given that this only happens once per runtime call, the cost is expected to be negligible. - The `ext_crypto_*_public_keys`, `ext_offchain_network_state`, and `ext_offchain_http_*` host functions are likely slightly slower than their deprecated counterparts, but given that they are used only in offchain workers, that is acceptable. From 1bbb85c40a78049fdb8730d2851a7cd24057ecc4 Mon Sep 17 00:00:00 2001 From: Dmitry Sinyavin Date: Sun, 25 May 2025 17:32:04 +0200 Subject: [PATCH 04/30] Implementation-related changes --- ...0145-remove-unnecessary-allocator-usage.md | 40 ++++++++++--------- 1 file changed, 22 insertions(+), 18 deletions(-) diff --git a/text/0145-remove-unnecessary-allocator-usage.md b/text/0145-remove-unnecessary-allocator-usage.md index eb0129bd3..53c76e816 100644 --- a/text/0145-remove-unnecessary-allocator-usage.md +++ b/text/0145-remove-unnecessary-allocator-usage.md @@ -19,6 +19,10 @@ This RFC is mainly based on [RFC-4](https://github.com/polkadot-fellows/RFCs/pul * The original RFC required checking if an output buffer address provided to a host function is inside the VM address space range and to stop the runtime execution if that's not the case. That requirement has been removed in this version of the RFC, as in the general case, the host doesn't have exhaustive information about the VM's memory organization. Thus, attempting to write to an out-of-bound region will result in a "normal" runtime panic. * Function signatures introduced by [PPP#7](https://github.com/w3f/PPPs/pull/7) have been used in this RFC, as the PPP has already been [properly implemented](https://github.com/paritytech/substrate/pull/11490) and [documented](https://github.com/w3f/polkadot-spec/pull/592/files). However, it has never been officially adopted, nor have its functions been in use. +* For `*_next_key` input buffer is reused for output. +* Error codes were harmonized to be always represented by negative values. +* Return values were harmonized to `i64` everywhere where they represent either a positive outcome as a positive integer or a negative outcome as a negative error code. +* `ext_offchain_network_peer_id_version_1` now returns a result code instead of silently failing if the network status is unavailable. * Added new versions of `ext_misc_runtime_version` and `ext_offchain_random_seed`. * Addressed discussions from the original RFC-4 discussion flow. @@ -56,16 +60,16 @@ The new functions directly return the number of bytes written into the `value_ou ```wat (func $ext_storage_next_key_version_2 - (param $key i64) (param $out i64) (return i32)) + (param $key_in_out i64) (return i32)) (func $ext_default_child_storage_next_key_version_2 - (param $child_storage_key i64) (param $key i64) (param $out i64) (return i32)) + (param $child_storage_key i64) (param $key_in_out i64) (return i32)) ``` The behaviour of these functions is identical to their version 1 counterparts. -Instead of allocating a buffer, writing the next key to it, and returning a pointer to it, the new version of these functions accepts an `out` parameter containing [a pointer-size](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size) to the memory location where the host writes the output. 
+Instead of allocating a buffer, writing the next key to it, and returning a pointer to it, the new version of these functions accepts an `key_in_out` parameter containing [a pointer-size](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size) to the memory location where the host first reads the input from, and then writes the output to. -These functions return the size, in bytes, of the next key, or `0` if there is no next key. If the size of the next key is larger than the buffer in `out`, the bytes of the key that fit the buffer are written to `out`, and any extra bytes that don't fit are discarded. +These functions return the size, in bytes, of the next key, or `0` if there is no next key. If the size of the next key is larger than the buffer in `key_in_out`, the bytes of the key that fit the buffer are written to `key_in_out`, and any extra bytes that don't fit are discarded. Some notes: @@ -170,22 +174,22 @@ These values are equal to the values returned on error by the version 2 (see Date: Tue, 3 Jun 2025 17:53:04 +0200 Subject: [PATCH 05/30] `clear_prefix` interface revamp --- ...0145-remove-unnecessary-allocator-usage.md | 20 +++++++++++-------- 1 file changed, 12 insertions(+), 8 deletions(-) diff --git a/text/0145-remove-unnecessary-allocator-usage.md b/text/0145-remove-unnecessary-allocator-usage.md index 53c76e816..ff091ad98 100644 --- a/text/0145-remove-unnecessary-allocator-usage.md +++ b/text/0145-remove-unnecessary-allocator-usage.md @@ -125,26 +125,30 @@ The version 1 of these functions has been taken as a base rather than the versio ```wat (func $ext_storage_clear_prefix_version_3 (param $maybe_prefix i64) (param $maybe_limit i64) - (param $maybe_cursor_in_out i64) (param $backend_out i32) - (param $unique_out i32) (param $loops_out i32) (return i32)) + (param $maybe_cursor_in i64) (param $removal_results_out i32)) (func $ext_default_child_storage_clear_prefix_version_3 (param $child_storage_key i64) (param $prefix i64) (param $maybe_limit i64) - (param $maybe_cursor_in_out i64) (param $backend_out i32) - (param $unique_out i32) (param $loops_out i32) (return i32)) + (param $maybe_cursor_in i64) (param $removal_results_out i32)) (func $ext_default_child_storage_kill_version_4 (param $child_storage_key i64) (param $maybe_limit i64) - (param $maybe_cursor_in_out i64) (param $backend_out i32) - (param $unique_out i32) (param $loops_out i32) (return i32)) + (param $maybe_cursor_in i64) (param $removal_results_out i32)) ``` -These functions amend already implemented but still unused functions introduced by [PPP#7](https://github.com/w3f/PPPs/pull/7), hence there's no version number change. `maybe_limit` defines the limit of backend deletions, not counting keys in the current overlay. `maybe_cursor_in_out` may be used to pass a continuation cursor. The cursor is written into the same field if the limit was reached and not all the keys were cleared; otherwise, `None` is written. (CAVEAT: It's impossible to determine appropriate buffer size; the approach is discussible). `backend_out`, `unique_out` and `loops_out` parameters contain the memory location where the output is written (respectively, the number of items removed from the backend DB; the number of unique keys removes, including overlay; the number of iterations done). Any of the output parameters may be `-1`, in which case no output is written. 
The functions return `0` to indicate success, or `1` if `maybe_cursor_in_out` buffer length was not enough to write the new cursor; in the latter case, `None` is written to the buffer. +These functions amend already implemented but still unused functions introduced by [PPP#7](https://github.com/w3f/PPPs/pull/7), hence there's no version number change. `maybe_limit` defines the limit of backend deletions, not counting keys in the current overlay. `maybe_cursor_in` may be used to pass a continuation cursor. After the operation is completed, a SCALE-encoded [varying data](https://spec.polkadot.network/id-cryptography-encoding#defn-varrying-data-type) are written to the provided output buffer. The varying data consists from the following fields, in order: +* [Optional](https://spec.polkadot.network/id-cryptography-encoding#defn-option-type) continuation cursor. Absence of the cursor denotes the end of the operation; +* 32-bit unsigned integer representing the number of items removed from the backend DB; +* 32-bit unsigned integer representing the number of unique keys removes, including overlay; +* 32-bit unsigned integer representing the number of iterations done. + +The size of the output buffer must be determined at the compile time. If the SCALE-encoded data do not fit into the buffer, the data are silently trucated. The caller may determine the truncation by checking the value length data contained in the SCALE-encoded data header. + ```wat (func $ext_crypto_ed25519_sign_version_2 (param $key_type_id i32) (param $key i32) (param $msg i64) (param $out i32) (return i32)) (func $ext_crypto_sr25519_sign_version_2 (param $key_type_id i32) (param $key i32) (param $msg i64) (param $out i32) (return i32)) -func $ext_crypto_ecdsa_sign_version_2 +(func $ext_crypto_ecdsa_sign_version_2 (param $key_type_id i32) (param $key i32) (param $msg i64) (param $out i32) (return i32)) (func $ext_crypto_ecdsa_sign_prehashed_version_2 (param $key_type_id i32) (param $key i32) (param $msg i64) (param $out i32) (return i64)) From b7a3e622e09c513c2980531514866b2268708734 Mon Sep 17 00:00:00 2001 From: Dmitry Sinyavin Date: Wed, 4 Jun 2025 16:32:54 +0200 Subject: [PATCH 06/30] Fix typos --- ...0145-remove-unnecessary-allocator-usage.md | 34 +++++++++---------- 1 file changed, 17 insertions(+), 17 deletions(-) diff --git a/text/0145-remove-unnecessary-allocator-usage.md b/text/0145-remove-unnecessary-allocator-usage.md index ff091ad98..d311291b4 100644 --- a/text/0145-remove-unnecessary-allocator-usage.md +++ b/text/0145-remove-unnecessary-allocator-usage.md @@ -13,7 +13,7 @@ Update the runtime-host interface so that it no longer uses the host-side alloca The API of these new functions was heavily inspired by the API used by the C programming language. -This RFC is mainly based on [RFC-4](https://github.com/polkadot-fellows/RFCs/pull/4) by @tomaka, which has never been adopted, and supercedes it. +This RFC is mainly based on [RFC-4](https://github.com/polkadot-fellows/RFCs/pull/4) by @tomaka, which has never been adopted, and supersedes it. 
### Changes @@ -60,9 +60,9 @@ The new functions directly return the number of bytes written into the `value_ou ```wat (func $ext_storage_next_key_version_2 - (param $key_in_out i64) (return i32)) + (param $key_in_out i64) (result i32)) (func $ext_default_child_storage_next_key_version_2 - (param $child_storage_key i64) (param $key_in_out i64) (return i32)) + (param $child_storage_key i64) (param $key_in_out i64) (result i32)) ``` The behaviour of these functions is identical to their version 1 counterparts. @@ -104,9 +104,9 @@ Some notes: (func $ext_crypto_ed25519_generate_version_2 (param $key_type_id i32) (param $seed i64) (param $out i32)) (func $ext_crypto_sr25519_generate_version_2 - (param $key_type_id i32) (param $seed i64) (param $out i32) (return i32)) + (param $key_type_id i32) (param $seed i64) (param $out i32) (result i32)) (func $ext_crypto_ecdsa_generate_version_2 - (param $key_type_id i32) (param $seed i64) (param $out i32) (return i32)) + (param $key_type_id i32) (param $seed i64) (param $out i32) (result i32)) ``` The behaviour of these functions is identical to their version 1 or version 2 counterparts. Instead of allocating a buffer, writing the output to it, and returning a pointer to it, the new version of these functions accepts an `out` parameter containing the memory location where the host writes the output. The output is always of a size known at compilation time. @@ -141,17 +141,17 @@ These functions amend already implemented but still unused functions introduced * 32-bit unsigned integer representing the number of unique keys removes, including overlay; * 32-bit unsigned integer representing the number of iterations done. -The size of the output buffer must be determined at the compile time. If the SCALE-encoded data do not fit into the buffer, the data are silently trucated. The caller may determine the truncation by checking the value length data contained in the SCALE-encoded data header. +The size of the output buffer must be determined at the compile time. If the SCALE-encoded data do not fit into the buffer, the data are silently truncated. The caller may determine the truncation by checking the value length data contained in the SCALE-encoded data header. ```wat (func $ext_crypto_ed25519_sign_version_2 - (param $key_type_id i32) (param $key i32) (param $msg i64) (param $out i32) (return i32)) + (param $key_type_id i32) (param $key i32) (param $msg i64) (param $out i32) (result i32)) (func $ext_crypto_sr25519_sign_version_2 - (param $key_type_id i32) (param $key i32) (param $msg i64) (param $out i32) (return i32)) + (param $key_type_id i32) (param $key i32) (param $msg i64) (param $out i32) (result i32)) (func $ext_crypto_ecdsa_sign_version_2 - (param $key_type_id i32) (param $key i32) (param $msg i64) (param $out i32) (return i32)) + (param $key_type_id i32) (param $key i32) (param $msg i64) (param $out i32) (result i32)) (func $ext_crypto_ecdsa_sign_prehashed_version_2 - (param $key_type_id i32) (param $key i32) (param $msg i64) (param $out i32) (return i64)) + (param $key_type_id i32) (param $key i32) (param $msg i64) (param $out i32) (result i64)) ``` The behaviour of these functions is identical to their version 1 counterparts. The new versions of these functions accept an `out` parameter containing the memory location where the host writes the signature. The signatures are always of a size known at compilation time. On success, these functions return `0`. 
If the public key can't be found in the keystore, these functions return `1` and do not write anything to `out`. @@ -160,9 +160,9 @@ Note that the return value is `0` on success and `1` on failure, while the previ ```wat (func $ext_crypto_secp256k1_ecdsa_recover_version_3 - (param $sig i32) (param $msg i32) (param $out i32) (return i32)) + (param $sig i32) (param $msg i32) (param $out i32) (result i32)) (func $ext_crypto_secp256k1_ecdsa_recover_compressed_version_3 - (param $sig i32) (param $msg i32) (param $out i32) (return i32)) + (param $sig i32) (param $msg i32) (param $out i32) (result i32)) ``` The behaviour of these functions is identical to their version 2 counterparts. The new versions of these functions accept an `out` parameter containing the memory location where the host writes the signature. The signatures are always of a size known at compilation time. On success, these functions return `0`. On failure, these functions return a non-zero value and do not write anything to `out`. @@ -177,15 +177,15 @@ These values are equal to the values returned on error by the version 2 (see Date: Mon, 25 Aug 2025 19:34:58 +0200 Subject: [PATCH 07/30] Final (?) rewrite --- ...0145-remove-unnecessary-allocator-usage.md | 747 +++++++++++++----- 1 file changed, 560 insertions(+), 187 deletions(-) diff --git a/text/0145-remove-unnecessary-allocator-usage.md b/text/0145-remove-unnecessary-allocator-usage.md index d311291b4..e0df74248 100644 --- a/text/0145-remove-unnecessary-allocator-usage.md +++ b/text/0145-remove-unnecessary-allocator-usage.md @@ -4,7 +4,8 @@ | --------------- | ------------------------------------------------------------------------------------------- | | **Start Date** | 2025-05-16 | | **Description** | Update the runtime-host interface to no longer make use of a host-side allocator | -| **Authors** | Pierre Krieger, Someone Unknown | +| **Authors** | Pierre Krieger, Someone Unknown + | ## Summary Update the runtime-host interface so that it no longer uses the host-side allocator. @@ -13,18 +14,16 @@ Update the runtime-host interface so that it no longer uses the host-side alloca The API of these new functions was heavily inspired by the API used by the C programming language. -This RFC is mainly based on [RFC-4](https://github.com/polkadot-fellows/RFCs/pull/4) by @tomaka, which has never been adopted, and supersedes it. +This RFC is mainly based on [RFC-4](https://github.com/polkadot-fellows/RFCs/pull/4) by @tomaka, which was never adopted, and this RFC supersedes it. -### Changes +### Changes from RFC-4 -* The original RFC required checking if an output buffer address provided to a host function is inside the VM address space range and to stop the runtime execution if that's not the case. That requirement has been removed in this version of the RFC, as in the general case, the host doesn't have exhaustive information about the VM's memory organization. Thus, attempting to write to an out-of-bound region will result in a "normal" runtime panic. +* The original RFC required checking if an output buffer address provided to a host function is inside the VM address space range and to stop the runtime execution if that's not the case. That requirement has been removed in this version of the RFC, as in the general case, the host doesn't have exhaustive information about the VM's memory organization. Thus, attempting to write to an out-of-bounds region will result in a "normal" runtime panic. 
* Function signatures introduced by [PPP#7](https://github.com/w3f/PPPs/pull/7) have been used in this RFC, as the PPP has already been [properly implemented](https://github.com/paritytech/substrate/pull/11490) and [documented](https://github.com/w3f/polkadot-spec/pull/592/files). However, it has never been officially adopted, nor have its functions been in use. -* For `*_next_key` input buffer is reused for output. -* Error codes were harmonized to be always represented by negative values. * Return values were harmonized to `i64` everywhere where they represent either a positive outcome as a positive integer or a negative outcome as a negative error code. * `ext_offchain_network_peer_id_version_1` now returns a result code instead of silently failing if the network status is unavailable. * Added new versions of `ext_misc_runtime_version` and `ext_offchain_random_seed`. -* Addressed discussions from the original RFC-4 discussion flow. +* Addressed discussions from the original RFC-4 discussion thread. ## Motivation @@ -32,7 +31,7 @@ The heap allocation of the runtime is currently controlled by the host using a m The API of many host functions contains buffer allocations. For example, when calling `ext_hashing_twox_256_version_1`, the host allocates a 32-byte buffer using the host allocator, and returns a pointer to this buffer to the runtime. The runtime later has to call `ext_allocator_free_version_1` on this pointer to free the buffer. -Even though no benchmark has been done, it is pretty obvious that this design is very inefficient. To continue with the example of `ext_hashing_twox_256_version_1`, it would be more efficient to instead write the output hash to a buffer allocated by the runtime on its stack and passed by pointer to the function. Allocating a buffer on the stack in the worst-case scenario consists of simply decreasing a number; in the best-case scenario, it is free. Doing so would save many VM memory reads and writes by the allocator, and would save a function call to `ext_allocator_free_version_1`. +Even though no benchmark has been done, it is pretty obvious that this design is very inefficient. To continue with the example of `ext_hashing_twox_256_version_1`, it would be more efficient to instead write the output hash to a buffer allocated by the runtime on its stack and passed by pointer to the function. Allocating a buffer on the stack, in the worst case, consists simply of decreasing a number; in the best case, it is free. Doing so would save many VM memory reads and writes by the allocator, and would save a function call to `ext_allocator_free_version_1`. Furthermore, the existence of the host-side allocator has become questionable over time. It is implemented in a very naive way, and for determinism and backwards compatibility reasons, it needs to be implemented exactly identically in every client implementation. Runtimes make substantial use of heap memory allocations, and each allocation needs to go through the runtime <-> host boundary twice (once for allocating and once for freeing). Moving the allocator to the runtime side would be a good idea, although it would increase the runtime size. But before the host-side allocator can be deprecated, all the host functions that use it must be updated to avoid using it. @@ -42,304 +41,678 @@ No attempt was made to convince stakeholders. ## Explanation -### New host functions +### New definitions -This section contains a list of new host functions to introduce and amendments to the existing ones. 
+#### New Definition I: Runtime Optional Positive Integer + +The Runtime optional positive integer is a signed 64-bit value. Positive values in the range of [0..2³²) represent corresponding unsigned 32-bit values. The value of `-1` represents a non-existing value (an _absent_ value). All other values are invalid. + +#### New Definition II: Runtime Optional Pointer-Size + +The runtime optional pointer-size has exactly the same definition as runtime pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) with the value of 2⁶⁴-1 representing a non-existing value (an _absent_ value). + +### Changes to host functions + +#### ext_storage_get + +The function is deprecated. Users are encouraged to use `ext_storage_read_version_2` instead. + +#### ext_storage_read + +The new version 2 is introduced, deprecating `ext_storage_read_version_1`. The new signature is ```wat (func $ext_storage_read_version_2 - (param $key i64) (param $value_out i64) (param $offset i32) (result i64)) -(func $ext_default_child_storage_read_version_2 - (param $child_storage_key i64) (param $key i64) (param $value_out i64) - (param $offset i32) (result i64)) + (param $key i64) (param $value_out i64) (param $value_offset i32) (result i64)) ``` -The signature and behaviour of `ext_storage_read_version_2` and `ext_default_child_storage_read_version_2` are identical to their version 1 counterparts, but the return value has a different meaning. +##### Arguments -The new functions directly return the number of bytes written into the `value_out` buffer. If the entry doesn't exist, `-1` is returned. Given that the host must never write more bytes than the size of the buffer in `value_out`, and that the size of this buffer is expressed as a 32-bit number, the 64-bit value of `-1` is not ambiguous. +* `key` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the storage key being read; +* `value_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the value read should be stored. If the buffer is not long enough to accommodate the value, the value is truncated to the length of the buffer; +* `value_offset` is a 32-bit offset from which the value reading should start. + +##### Result + +The result is an optional positive integer ([New Definition I](#new-def-i)), representing either the full length of the value in storage or the _absence_ of such a value in storage. + +##### Changes + +The logic of the function is unchanged since the previous version. Only the result representation has changed. + +#### ext_storage_clear_prefix + +The new version 3 is introduced, deprecating `ext_storage_clear_prefix_version_2`. The new signature is + +```wat +(func $ext_storage_clear_prefix_version_3 + (param $maybe_prefix i64) (param $maybe_limit i64) (param $maybe_cursor_in i64) + (param $maybe_cursor_out i64) (param $backend i32) (param $unique i32) (param $loops i32) + (result i32)) +``` + +##### Arguments + +* `maybe_prefix` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) containing a (possibly empty) storage prefix being cleared; +* `maybe_limit` is an optional positive integer ([New Definition I](#new-def-i)) representing either the maximum number of backend deletions which may happen, or the _absence_ of such a limit. 
The number of backend iterations may surpass this limit by no more than one;
* `maybe_cursor_in` is an optional pointer-size ([New Definition II](#new-def-ii)) representing the cursor returned by the previous (unfinished) call to this function. It should be _absent_ on the first call;
* `maybe_cursor_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the continuation cursor will optionally be written (see also the Result section);
* `backend` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to a 4-byte buffer where a 32-bit integer representing the number of items removed from the backend database will be written;
* `unique` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to a 4-byte buffer where a 32-bit integer representing the number of unique keys removed, taking into account both the backend and the overlay, will be written;
* `loops` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to a 4-byte buffer where a 32-bit integer representing the number of iterations (each requiring a storage seek/read) which were done will be written.

##### Result

The result represents the length of the continuation cursor which was written to the buffer provided in `maybe_cursor_out`. A zero value represents the absence of such a cursor and no need for continuation (the prefix has been completely cleared). If the buffer is not large enough to accommodate the cursor, the latter will be truncated, but the full length of the cursor will always be returned.

##### Changes

The new version adopts [PPP#7](https://github.com/w3f/PPPs/pull/7), hence the significant change in the function interface with respect to the previous version. The reasoning for such a change was provided in the [original proposal discussion](https://github.com/w3f/polkadot-spec/issues/588).

#### ext_storage_root

The new version 3 is introduced, deprecating `ext_storage_root_version_2`. The signature is

```wat
(func $ext_storage_root_version_3
  (param $out i64) (result i32))
```

##### Arguments

* `out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the SCALE-encoded storage root, calculated after committing all the existing operations, will be stored.

##### Result

The result is the length of the output stored in the buffer provided in `out`. If the buffer is not large enough to accommodate the data, the latter will be truncated, but the full length of the output data will always be returned.

##### Changes

The new version adopts [PPP#6](https://github.com/w3f/PPPs/pull/6), deprecating the argument that used to represent the storage version.

#### ext_storage_next_key

The new version 2 is introduced, deprecating `ext_storage_next_key_version_1`. The signature is

```wat
(func $ext_storage_next_key_version_2
  (param $key_in i64) (param $key_out i64) (result i32))
```

##### Changes

The logic of the function is unchanged since the previous version. The signature has changed to align with the new memory allocation strategy.

##### Arguments

* `key_in` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer containing a storage key;
* `key_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to an output buffer where the next key in storage, in lexicographical order, will be written.

##### Result

The result is the length of the output key, or zero if no next key was found. If the buffer provided in `key_out` is not large enough to accommodate the data, the latter will be truncated, but the full length of the output data will always be returned.

#### ext_default_child_storage_get

The function is deprecated. Users are encouraged to use `ext_default_child_storage_read_version_2` instead.

#### ext_default_child_storage_read

The new version 2 is introduced, deprecating `ext_default_child_storage_read_version_1`.
The new signature is

```wat
(func $ext_default_child_storage_read_version_2
  (param $storage_key i64) (param $key i64) (param $value_out i64) (param $value_offset i32)
  (result i64))
```

##### Arguments

* `storage_key` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the child storage key ([Definition 219](https://spec.polkadot.network/chap-host-api#defn-child-storage-type));
* `key` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the storage key being read;
* `value_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the value read should be stored. If the buffer is not long enough to accommodate the value, the value is truncated to the length of the buffer;
* `value_offset` is a 32-bit offset from which the value reading should start.

##### Result

The result is an optional positive integer ([New Definition I](#new-def-i)), representing either the full length of the value in storage or the _absence_ of such a value in storage.

##### Changes

The logic of the function is unchanged since the previous version. Only the result representation has changed.

#### ext_default_child_storage_storage_kill

The new version 4 is introduced, deprecating `ext_default_child_storage_storage_kill_version_3`.
The new signature is ```wat -(func $ext_default_child_storage_root_version_3 - (param $child_storage_key i64) (param $out i32)) -(func $ext_storage_root_version_3 - (param $out i32)) +(func $ext_default_child_storage_storage_kill_version_4 + (param $storage_key i64) (param $maybe_limit i64) (param $maybe_cursor_in i64) + (param $maybe_cursor_out i64) (param $backend i32) (param $unique i32) (param $loops i32) + (result i32)) ``` -The behaviour of these functions is identical to their version 1 and version 2 counterparts. Instead of allocating a buffer, writing the output to it, and returning a pointer to it, the new versions of these functions accept an `out` parameter containing the memory location where the host writes the output. The output is always of a size known at compilation time. +##### Arguments + +* `storage_key` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the child storage key ([Definition 219](https://spec.polkadot.network/chap-host-api#defn-child-storage-type)); +* `maybe_limit` is an optional positive integer representing either the maximum number of backend deletions which may happen, or the absence of such a limit. The number of backend iterations may surpass this limit by no more than one; +* `maybe_cursor_in` is an optional pointer-size representing the cursor returned by the previous (unfinished) call to this function. It should be _absent_ on the first call; +* `maybe_cursor_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the continuation cursor will optionally be written (see also the Result section); +* `backend` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to a 4-byte buffer where a 32-bit integer representing the number of items removed from the backend database will be written; +* `unique` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to a 4-byte buffer where a 32-bit integer representing the number of unique keys removed, taking into account both the backend and the overlay; +* `loops` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to a 4-byte buffer where a 32-bit integer representing the number of iterations (each requiring a storage seek/read) which were done will be written. + +##### Result + +The result represents the length of the continuation cursor which was written to the buffer provided in `maybe_cursor_out`. A zero value represents the absence of such a cursor and no need for continuation (the prefix has been completely cleared). If the buffer is not large enough to accommodate the cursor, the latter will be truncated, but the full length of the cursor will always be returned. -The version 1 of these functions has been taken as a base rather than the version 2, as a [PPP#6](https://github.com/w3f/PPPs/pull/6) deprecating the version 2 of these functions has previously been accepted. +##### Changes + +The new version adopts [PPP#7](https://github.com/w3f/PPPs/pull/7), hence the significant change in the function interface with respect to the previous version. The reasoning for such a change was provided in the [original proposal discussion](https://github.com/w3f/polkadot-spec/issues/588). + +#### ext_default_child_storage_clear_prefix + +The new version 3 is introduced, deprecating `ext_default_child_storage_clear_prefix_version_2`. 
The new signature is ```wat -(func $ext_storage_clear_prefix_version_3 - (param $maybe_prefix i64) (param $maybe_limit i64) - (param $maybe_cursor_in i64) (param $removal_results_out i32)) (func $ext_default_child_storage_clear_prefix_version_3 - (param $child_storage_key i64) (param $prefix i64) (param $maybe_limit i64) - (param $maybe_cursor_in i64) (param $removal_results_out i32)) -(func $ext_default_child_storage_kill_version_4 - (param $child_storage_key i64) (param $maybe_limit i64) - (param $maybe_cursor_in i64) (param $removal_results_out i32)) + (param $storage_key i64) (param $prefix i64) (param $maybe_limit i64) + (param $maybe_cursor_in i64) (param $maybe_cursor_out i64) (param $backend i32) + (param $unique i32) (param $loops i32) (result i32)) ``` -These functions amend already implemented but still unused functions introduced by [PPP#7](https://github.com/w3f/PPPs/pull/7), hence there's no version number change. `maybe_limit` defines the limit of backend deletions, not counting keys in the current overlay. `maybe_cursor_in` may be used to pass a continuation cursor. After the operation is completed, a SCALE-encoded [varying data](https://spec.polkadot.network/id-cryptography-encoding#defn-varrying-data-type) are written to the provided output buffer. The varying data consists from the following fields, in order: +##### Arguments + +* `storage_key` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the child storage key ([Definition 219](https://spec.polkadot.network/chap-host-api#defn-child-storage-type)); +* `prefix` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) containing a storage prefix being cleared; +* `maybe_limit` is an optional positive integer representing either the maximum number of backend deletions which may happen, or the absence of such a limit. The number of backend iterations may surpass this limit by no more than one; +* `maybe_cursor_in` is an optional pointer-size representing the cursor returned by the previous (unfinished) call to this function. It should be _absent_ on the first call; +* `maybe_cursor_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the continuation cursor will optionally be written (see also the Result section); +* `backend` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to a 4-byte buffer where a 32-bit integer representing the number of items removed from the backend database will be written; +* `unique` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to a 4-byte buffer where a 32-bit integer representing the number of unique keys removed, taking into account both the backend and the overlay; +* `loops` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to a 4-byte buffer where a 32-bit integer representing the number of iterations (each requiring a storage seek/read) which were done will be written. + +##### Result + +The result represents the length of the continuation cursor which was written to the buffer provided in `maybe_cursor_out`. A zero value represents the absence of such a cursor and no need for continuation (the prefix has been completely cleared). 
If the buffer is not large enough to accommodate the cursor, the latter will be truncated, but the full length of the cursor will always be returned. -* [Optional](https://spec.polkadot.network/id-cryptography-encoding#defn-option-type) continuation cursor. Absence of the cursor denotes the end of the operation; -* 32-bit unsigned integer representing the number of items removed from the backend DB; -* 32-bit unsigned integer representing the number of unique keys removes, including overlay; -* 32-bit unsigned integer representing the number of iterations done. +##### Changes + +The new version adopts [PPP#7](https://github.com/w3f/PPPs/pull/7), hence the significant change in the function interface with respect to the previous version. The reasoning for such a change was provided in the [original proposal discussion](https://github.com/w3f/polkadot-spec/issues/588). + +#### ext_default_child_storage_root + +The new version 3 is introduced, deprecating `ext_default_child_storage_root_version_2`. The signature is -The size of the output buffer must be determined at the compile time. If the SCALE-encoded data do not fit into the buffer, the data are silently truncated. The caller may determine the truncation by checking the value length data contained in the SCALE-encoded data header. - ```wat -(func $ext_crypto_ed25519_sign_version_2 - (param $key_type_id i32) (param $key i32) (param $msg i64) (param $out i32) (result i32)) -(func $ext_crypto_sr25519_sign_version_2 - (param $key_type_id i32) (param $key i32) (param $msg i64) (param $out i32) (result i32)) -(func $ext_crypto_ecdsa_sign_version_2 - (param $key_type_id i32) (param $key i32) (param $msg i64) (param $out i32) (result i32)) -(func $ext_crypto_ecdsa_sign_prehashed_version_2 - (param $key_type_id i32) (param $key i32) (param $msg i64) (param $out i32) (result i64)) +(func $ext_default_child_storage_root_version_3 + (param $storage_key i64) (param $out i64) (result i32)) ``` -The behaviour of these functions is identical to their version 1 counterparts. The new versions of these functions accept an `out` parameter containing the memory location where the host writes the signature. The signatures are always of a size known at compilation time. On success, these functions return `0`. If the public key can't be found in the keystore, these functions return `1` and do not write anything to `out`. +##### Arguments + +* `storage_key` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the child storage key ([Definition 219](https://spec.polkadot.network/chap-host-api#defn-child-storage-type)); +* `out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the SCALE-encoded storage root, calculated after committing all the existing operations, will be stored. + +##### Results + +The result is the length of the output stored in the buffer provided in `out`. If the buffer is not large enough to accommodate the data, the latter will be truncated, but the full length of the output data will always be returned. -Note that the return value is `0` on success and `1` on failure, while the previous version of these functions wrote `1` on success (as it represents a SCALE-encoded `Some`) and `0` on failure (as it represents a SCALE-encoded `None`). Returning `0` on success and non-zero on failure is consistent with standard practices in the C programming language and is less surprising than the opposite. 
+##### Changes + +The new version adopts [PPP#6](https://github.com/w3f/PPPs/pull/6) deprecating the argument that used to represent the storage version. + +#### ext_default_child_storage_next_key + +The new version 2 is introduced, deprecating `ext_default_child_storage_next_key_version_1`. The signature is ```wat -(func $ext_crypto_secp256k1_ecdsa_recover_version_3 - (param $sig i32) (param $msg i32) (param $out i32) (result i32)) -(func $ext_crypto_secp256k1_ecdsa_recover_compressed_version_3 - (param $sig i32) (param $msg i32) (param $out i32) (result i32)) +(func $ext_default_child_storage_next_key_version_2 + (param $storage_key i64) (param $key_in i64) (param $key_out i64) (result i32)) ``` -The behaviour of these functions is identical to their version 2 counterparts. The new versions of these functions accept an `out` parameter containing the memory location where the host writes the signature. The signatures are always of a size known at compilation time. On success, these functions return `0`. On failure, these functions return a non-zero value and do not write anything to `out`. +##### Arguments + +* `storage_key` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the child storage key ([Definition 219](https://spec.polkadot.network/chap-host-api#defn-child-storage-type)); +* `key_in` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer containing a storage key; +* `key_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to an output buffer where the next key in the storage in the lexicographical order will be written. + +##### Result + +The result is the length of the output key, or zero if no next key was found. If the buffer provided in `key_out` is not large enough to accommodate the data, the latter will be truncated, but the full length of the output data will always be returned. -The non-zero value written on failure is: +##### Changes -- 1: incorrect value of R or S -- 2: incorrect value of V -- 3: invalid signature +The logic of the function is unchanged since the previous version. The signature has changed to align with the new memory allocation strategy. -These values are equal to the values returned on error by the version 2 (see ), but incremented by 1 to reserve 0 for success. +#### ext_trie_{blake2|keccak}\_256_\[ordered_]root + +The following functions share the same signatures and set of changes: +* `ext_trie_blake2_256_root` +* `ext_trie_blake2_256_ordered_root` +* `ext_trie_keccak_256_root` +* `ext_trie_keccak_256_ordered_root` + +For the aforementioned functions, versions 3 were introduced, and the corresponding versions 2 were deprecated. 
The signature is:

```wat
-(func $ext_crypto_ed25519_num_public_keys_version_1
- (param $key_type_id i32) (result i32))
-(func $ext_crypto_ed25519_public_key_version_1
- (param $key_type_id i32) (param $key_index i32) (param $out i32))
-(func $ext_crypto_sr25519_num_public_keys_version_1
- (param $key_type_id i32) (result i32))
-(func $ext_crypto_sr25519_public_key_version_1
- (param $key_type_id i32) (param $key_index i32) (param $out i32))
-(func $ext_crypto_ecdsa_num_public_keys_version_1
- (param $key_type_id i32) (result i32))
-(func $ext_crypto_ecdsa_public_key_version_1
- (param $key_type_id i32) (param $key_index i32) (param $out i32))
+(func $ext_trie_{blake2|keccak}_256_[ordered_]root_version_3
+ (param $input i64) (param $version i32) (param $out i32))
```

+##### Arguments
+
+* `input` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the SCALE-encoded vector of the trie key-value pairs;
+* `version` is the state version, where `0` denotes V0 and `1` denotes V1 state version. Other state versions may be introduced in the future;
+* `out` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to a 32-byte buffer, where the calculated trie root will be stored.
+
+##### Changes

-The functions supersede the `ext_crypto_ed25519_public_key_version_1`, `ext_crypto_sr25519_public_key_version_1`, and `ext_crypto_ecdsa_public_key_version_1` host functions.
+The logic of the function is unchanged since the previous version. The signature has changed to align with the new memory allocation strategy.

-Instead of calling `ext_crypto_ed25519_public_key_version_1` to obtain the list of all the keys at once, the runtime should instead call `ext_crypto_ed25519_num_public_keys_version_1` to get the number of public keys available, then `ext_crypto_ed25519_public_key_version_1` repeatedly.
-The `ext_crypto_ed25519_public_key_version_1` function writes the public key of the given `key_index` to the memory location designated by `out`. The `key_index` must be between 0 (included) and `n` (excluded), where `n` is the value returned by `ext_crypto_ed25519_num_public_keys_version_1`. Execution must trap if `n` is out of range.
+#### ext_misc_runtime_version

-The same explanations apply for `ext_crypto_sr25519_public_key_version_1` and `ext_crypto_ecdsa_public_key_version_1`.
+The new version 2 is introduced, deprecating `ext_misc_runtime_version_version_1`. The signature is

-Host implementers should be aware that the list of public keys (including their ordering) must not change while the runtime is running. That is most likely done by copying the list of all available keys either at the start of the execution or the first time the list is accessed.

```wat
-(func $ext_offchain_http_request_start_version_2
- (param $method i64) (param $uri i64) (param $meta i64) (result i64))
+(func $ext_misc_runtime_version_version_2
+ (param $wasm i64) (param $out i64) (result i64))
```

+##### Arguments
+
+* `wasm` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the Wasm blob from which the version information should be extracted;
+* `out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the buffer where the SCALE-encoded extracted version information will be stored.
+ +##### Result + +The result is an optional positive integer ([New Definition I](#new-def-i)) representing the length of the output data. If the buffer is not large enough to accommodate the data, the latter will be truncated, but the full length of the output data will always be returned. An _absent_ value represents the absence of the version information in the Wasm blob or a failure to read one. + +##### Changes + +The logic of the function is unchanged since the previous version. The signature has changed to align with the new memory allocation strategy. + +#### ext_crypto_{ed25519|sr25519|ecdsa}_public_keys + +The following functions are deprecated: +* `ext_crypto_ed25519_public_keys_version_1` +* `ext_crypto_sr25519_public_keys_version_1` +* `ext_crypto_ecdsa_public_keys_version_1` -The behaviour of this function is identical to its version 1 counterpart. Instead of allocating a buffer, writing the request identifier in it, and returning a pointer to it, version 2 of this function simply returns the newly-assigned identifier to the HTTP request. On failure, this function returns `-1`. An identifier of `-1` is invalid and is reserved to indicate failure. +Users are encouraged to use the new `*_num_public_keys` and `*_public_key` counterparts. + +#### ext_crypto_{ed25519|sr25519|ecdsa}_num_public_keys + +New functions, all sharing the same signature and logic, are introduced: +* `ext_crypto_ed25519_num_public_keys_version_1` +* `ext_crypto_sr25519_num_public_keys_version_1` +* `ext_crypto_ecdsa_num_public_keys_version_1` + +The signature is: ```wat -(func $ext_offchain_http_request_write_body_version_2 - (param $method i64) (param $uri i64) (param $meta i64) (result i64)) -(func $ext_offchain_http_response_read_body_version_2 - (param $request_id i32) (param $buffer i64) (param $deadline i64) (result i64)) +(func $ext_crypto_{ed25519|sr25519|ecdsa}_num_public_keys + (param $id i32) (result i32)) ``` -The behaviour of these functions is identical to their version 1 counterpart. Instead of allocating a buffer, writing two bytes in it, and returning a pointer to it, the new version of these functions simply indicates what happened: +##### Arguments + +* `id` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to the key type identifier ([Definition 220](https://spec.polkadot.network/chap-host-api#defn-key-type-id)). + +##### Result -- For `ext_offchain_http_request_write_body_version_2`, 0 on success. -- For `ext_offchain_http_response_read_body_version_2`, 0 or a non-zero number of bytes on success. -- -1 if the deadline was reached. -- -2 if there was an I/O error while processing the request. -- -3 if the identifier of the request is invalid. +The result represents a (possibly zero) number of keys of the given type known to the keystore. -These values are equal to the values returned on error by version 1 (see ), but tweaked to reserve positive numbers for success. +#### ext_crypto_{ed25519|sr25519|ecdsa}_public_key -When it comes to `ext_offchain_http_response_read_body_version_2`, the host implementers must not read too much data at once to avoid ambiguity in the returned value. Given that the `buffer` size is always inferior or equal to 4 GiB, this is not a problem. 
+
+New functions, all sharing the same signature and logic, are introduced:
+* `ext_crypto_ed25519_public_key_version_1`
+* `ext_crypto_sr25519_public_key_version_1`
+* `ext_crypto_ecdsa_public_key_version_1`
+
+The signature is:

```wat
-(func $ext_offchain_http_response_wait_version_2
- (param $ids i64) (param $deadline i64) (param $out i32))
+(func $ext_crypto_{ed25519|sr25519|ecdsa}_public_key
+ (param $id i32) (param $index i32) (param $out i32))
```

-The behaviour of this function is identical to its version 1 counterpart. Instead of allocating a buffer, writing the output to it, and returning a pointer to it, the new version of this function accepts an `out` parameter containing the memory location where the host writes the output.
+##### Arguments
+
+* `id` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to the key type identifier ([Definition 220](https://spec.polkadot.network/chap-host-api#defn-key-type-id)).
+* `index` is the index of the key in the keystore. If the index is out of bounds (determined by the value returned by the respective `_num_public_keys` function) the function will panic;
+* `out` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to the output buffer of the respective size (depending on key type) where the key will be written.

-The encoding of the response code is also modified compared to its version 1 counterpart, and each response code now encodes up to 4 little-endian bytes as described below:
+#### ext_crypto_{ed25519|sr25519|ecdsa}_generate

-- 100-999: The request has finished with the given HTTP status code.
-- -1: The deadline was reached.
-- -2: There was an I/O error while processing the request.
-- -3: The identifier of the request is invalid.
+The following functions share the same signatures and set of changes:
+* `ext_crypto_ed25519_generate`
+* `ext_crypto_sr25519_generate`
+* `ext_crypto_ecdsa_generate`

-The buffer passed to `out` must always have a size of `4 * n` where `n` is the number of elements in the `ids`.
+For the aforementioned functions, versions 2 are introduced, and the corresponding versions 1 are deprecated. The signature is:

```wat
-(func $ext_offchain_http_response_header_name_version_1
- (param $request_id i32) (param $header_index i32) (param $out i64) (result i64))
-(func $ext_offchain_http_response_header_value_version_1
- (param $request_id i32) (param $header_index i32) (param $out i64) (result i64))
+(func $ext_crypto_{ed25519|sr25519|ecdsa}_generate_version_2
+ (param $id i32) (param $seed i64) (param $out i32))
```

-These functions supersede the `ext_offchain_http_response_headers_version_1` host function.
+##### Arguments
+
+* `id` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to the key type identifier ([Definition 220](https://spec.polkadot.network/chap-host-api#defn-key-type-id)). The function will panic if the identifier is invalid;
+* `seed` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the SCALE-encoded Option value ([Definition 200](https://spec.polkadot.network/id-cryptography-encoding#defn-option-type)) containing the BIP-39 seed which must be valid UTF-8.
The function will panic if the seed is not valid UTF-8; +* `out` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to the output buffer of the respective size (depending on key type) where the generated key will be written. + +##### Changes -Contrary to `ext_offchain_http_response_headers_version_1`, only one header indicated by `header_index` can be read at a time. Instead of calling `ext_offchain_http_response_headers_version_1` once, the runtime should call `ext_offchain_http_response_header_name_version_1` and `ext_offchain_http_response_header_value_version_1` multiple times with an increasing `header_index`, until a value of `-1` is returned. +The logic of the functions is unchanged since the previous version. The signature has changed to align with the new memory allocation strategy. -These functions accept an `out` parameter containing [a pointer-size](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size) to the memory location where the header name or value should be written. +#### ext_crypto_{ed25519|sr25519|ecdsa}_sign\[_prehashed] -These functions return the size, in bytes, of the header name or header value. If the request doesn't exist or is in an invalid state (as documented for `ext_offchain_http_response_headers_version_1`) or the `header_index` is out of range, a value of `-1` is returned. Given that the host must never write more bytes than the size of the buffer in `out`, and that the size of this buffer is expressed as a 32-bit number, a 64-bit value of `-1` is not ambiguous. +The following functions share the same signatures and set of changes: +* `ext_crypto_ed25519_sign` +* `ext_crypto_sr25519_sign` +* `ext_crypto_ecdsa_sign` +* `ext_crypto_ecdsa_sign_prehashed` -If the buffer in `out` is too small to fit the entire header name or value, only the bytes that fit are written, and the rest are discarded. +For the aforementioned functions, versions 2 are introduced, and the corresponding versions 1 are deprecated. The signature is: + +```wat +(func $ext_crypto_{ed25519|sr25519|ecdsa}_sign{_prehashed|}_version_2 + (param $id i32) (param $pub_key i32) (param $msg i64) (param $out i64) (result i64)) +``` + +##### Arguments + +* `id` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to the key type identifier ([Definition 220](https://spec.polkadot.network/chap-host-api#defn-key-type-id)). The function will panic if the identifier is invalid; +* `pub_key` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to the public key bytes (as returned by the respective `_public_key` function); +* `msg` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the message that is to be signed; +* `out` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to the output buffer of the respective size (depending on key type) where the signature will be written. + +##### Result + +The function returns `0` on success. On error, `-1` is returned and the output buffer should be considered uninitialized. + +##### Changes + +The logic of the functions is unchanged since the previous version. The signature has changed to align with the new memory allocation strategy. 
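The following non-normative sketch illustrates how a runtime built for wasm32 might call the generate and sign functions described above with runtime-allocated buffers. The import names and parameter types mirror the signatures in this section, and the pointer-size packing follows [Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size); the key type identifier, the buffer sizes, and the helper names are assumptions made only for this example (in particular, `out` is passed as a pointer-size here because the sign signature above types it as `i64`).

```rust
// Non-normative sketch: calling the proposed allocator-free crypto host
// functions from a wasm32 runtime. Buffer sizes are those of sr25519 keys and
// signatures; the key type identifier is purely illustrative.

#[cfg(target_arch = "wasm32")]
extern "C" {
    // Signatures as proposed in this section.
    fn ext_crypto_sr25519_generate_version_2(id: i32, seed: i64, out: i32);
    fn ext_crypto_sr25519_sign_version_2(id: i32, pub_key: i32, msg: i64, out: i64) -> i64;
}

/// Packs a read-only buffer into a runtime pointer-size value (Definition 216):
/// the lower 32 bits hold the address, the upper 32 bits hold the length.
fn ptr_size(data: &[u8]) -> i64 {
    ((data.len() as i64) << 32) | (data.as_ptr() as u32 as i64)
}

/// Same packing, for a buffer the host is expected to write into.
fn ptr_size_mut(data: &mut [u8]) -> i64 {
    ((data.len() as i64) << 32) | (data.as_mut_ptr() as u32 as i64)
}

#[cfg(target_arch = "wasm32")]
fn sign_with_fresh_key(msg: &[u8]) -> Option<[u8; 64]> {
    let key_type = *b"exam";       // illustrative 4-byte key type identifier
    let seed = [0u8];              // SCALE-encoded `None`: no seed supplied
    let mut public = [0u8; 32];    // sr25519 public key, allocated by the runtime
    let mut signature = [0u8; 64]; // sr25519 signature, allocated by the runtime

    unsafe {
        // The host writes the generated public key directly into `public`.
        ext_crypto_sr25519_generate_version_2(
            key_type.as_ptr() as i32,
            ptr_size(&seed),
            public.as_mut_ptr() as i32,
        );
        // `0` signals success; on failure the signature buffer stays uninitialized.
        let rc = ext_crypto_sr25519_sign_version_2(
            key_type.as_ptr() as i32,
            public.as_ptr() as i32,
            ptr_size(msg),
            ptr_size_mut(&mut signature),
        );
        if rc == 0 { Some(signature) } else { None }
    }
}
```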
+ +#### ext_crypto_secp256k1_ecdsa_recover\[_compressed] + +The following functions share the same signatures and set of changes: +* `ext_crypto_secp256k1_ecdsa_recover` +* `ext_crypto_secp256k1_ecdsa_recover_compressed` + +For the aforementioned functions, versions 3 are introduced, and the corresponding versions 2 are deprecated. The signature is: + +```wat +(func $ext_crypto_secp256k1_ecdsa_recover\[_compressed]_version_3 + (param $sig i32) (param $msg i32) (param $out i32) (result i64)) +``` + +##### Arguments + +* `sig` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to the buffer containing the 65-byte signature in RSV format. V must be either 0/1 or 27/28; +* `msg` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to the buffer containing the 256-bit Blake2 hash of the message; +* `out` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to the output buffer of the respective size (depending on key type) where the recovered public key will be written. + +##### Result + +The function returns `0` on success. On error, it returns a negative ECDSA verification error code, where `-1` stands for incorrect R or S, `-2` stands for invalid V, and `-3` stands for invalid signature. + +##### Changes + +The signature has changed to align with the new memory allocation strategy. The return error encoding, defined under [Definition 221](https://spec.polkadot.network/chap-host-api#defn-ecdsa-verify-error), is changed to promote the unification of host function result reporting (zero and positive values are for success, and the negative values are for failure codes). + +#### ext_hashing_{keccak|sha2|blake2|twox}_{64|128|256|512} + +The following functions share the same signatures and set of changes: +* `ext_hashing_keccak_256` +* `ext_hashing_keccak_512` +* `ext_hashing_sha2_256` +* `ext_hashing_blake2_128` +* `ext_hashing_blake2_256` +* `ext_hashing_twox_64` +* `ext_hashing_twox_128` +* `ext_hashing_twox_256` + +For the aforementioned functions, versions 2 are introduced, and the corresponding versions 1 are deprecated. The signature is: + +```wat +(func $ext_hashing_{keccak|sha2|blake2|twox}_{64|128|256|512}_version_2 + (param $data i64) (param $out i32)) +``` + +##### Arguments + +* `data` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the data to be hashed. +* `out` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to the output buffer of the respective size (depending on hash type) where the calculated hash will be written. + +##### Changes + +The logic of the functions is unchanged since the previous version. The signature has changed to align with the new memory allocation strategy. + +#### ext_offchain_submit_transaction + +The new version 2 is introduced, deprecating `ext_offchain_submit_transaction_version_1`. The signature is unchanged. ```wat (func $ext_offchain_submit_transaction_version_2 - (param $data i64) (result i32)) -(func $ext_offchain_http_request_add_header_version_2 - (param $request_id i32) (param $name i64) (param $value i64) (result i64)) + (param $data i64) (result i64)) +``` + +##### Arguments + +* `data` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the byte array storing the encoded extrinsic. 
+
+##### Result
+
+The result is `0` for success or `-1` for failure.
+
+##### Changes
+
+The logic and the signature of the function are unchanged since the previous version. The only change is the interpretation of the result value to avoid an unneeded allocation and promote the unification of host function result reporting (zero and positive values are for success, and the negative values are for failure codes).
+
+#### ext_offchain_network_state
+
+The function is deprecated. Users are encouraged to use `ext_offchain_network_peer_id_version_1` instead.
+
+#### ext_offchain_network_peer_id
+
+A new function is introduced. The signature is
+
+```wat
+(func $ext_offchain_network_peer_id_version_1
+ (param $out i32) (result i64))
```

-Instead of allocating a buffer, writing `1` or `0` in it, and returning a pointer to it, the version 2 of these functions returns `0` or `-1`, where `0` indicates success and `-1` indicates failure.
+##### Arguments
+
+* `out` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to the output buffer, 38 bytes long, where the network peer ID will be written.
+
+##### Result
+
+The result is `0` for success or `-1` for failure.
+
+#### ext_offchain_random_seed
+
+The new version 2 is introduced, deprecating `ext_offchain_random_seed_version_1`. The signature is
+
+```wat
+(func $ext_offchain_random_seed_version_2
+ (param $out i32))
```

+##### Arguments
+
+* `out` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to the output buffer, 32 bytes long, where the random seed will be written.
+
+##### Changes
+
+The logic of the function is unchanged since the previous version. The signature has changed to align with the new memory allocation strategy.
+
+#### ext_offchain_local_storage_get
+
+The function is deprecated. Users are encouraged to use `ext_offchain_local_storage_read_version_1` instead.
+
+#### ext_offchain_local_storage_read
+
+A new function is introduced. The signature is

```wat
(func $ext_offchain_local_storage_read_version_1
 (param $kind i32) (param $key i64) (param $value_out i64) (param $offset i32)
 (result i64))
```

-This function supersedes the `ext_offchain_local_storage_get_version_1` host function, and uses an API and logic similar to `ext_storage_read_version_2`.
+##### Arguments

-It reads the offchain local storage key indicated by `kind` and `key` starting at the byte indicated by `offset`, and writes the value to the [pointer-size](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size) indicated by `value_out`.
+* `kind` is an offchain storage kind, where `0` denotes the persistent storage ([Definition 222](https://spec.polkadot.network/chap-host-api#defn-offchain-persistent-storage)), and `1` denotes the local storage ([Definition 223](https://spec.polkadot.network/chap-host-api#defn-offchain-persistent-storage));
+* `key` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the storage key being read;
+* `value_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the value read should be stored.
If the buffer is not large enough to accommodate the value, the value is truncated to the length of the buffer; +* `offset` is a 32-bit offset from which the value reading should start. -The function returns the number of bytes written into the `value_out` buffer. If the entry doesn't exist, the `-1` value is returned. Given that the host must never write more bytes than the size of the buffer in `value_out`, and that the size of this buffer is expressed as a 32-bit number, a 64-bit value of `-1` is not ambiguous. +##### Result + +The result is an optional positive integer ([New Definition I](#new-def-i)), representing either the full length of the value in storage or the _absence_ of such a value in storage. + +#### ext_offchain_http_request_start + +The new version 2 is introduced, deprecating `ext_offchain_http_request_start_version_1`. The signature is unchanged. ```wat -(func $ext_offchain_network_peer_id_version_1 - (param $out i64) (result i64)) +(func $ext_offchain_http_request_start_version_2 + (param $method i64) (param $uri i64) (param $meta i64) (result i64)) ``` -This function writes [the `PeerId` of the local node](https://spec.polkadot.network/chap-networking#id-node-identities) to the memory location indicated by `out`. A `PeerId` is always 38 bytes long. This function returns `0` on success or `-1` if the network state is unavailable. +##### Arguments + +`method` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the HTTP method. Possible values are “GET” and “POST”; +`uri` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the URI; +`meta` is a future-reserved field containing additional, SCALE-encoded parameters. Currently, an empty array should be passed. + +##### Result + +On success, a positive request identifier is returned. On error, `-1` is returned. + +##### Changes + +The logic and the signature of the function are unchanged since the previous version. The only change is the interpretation of the result value to avoid an unneeded allocation and promote the unification of host function result returning (zero and positive values are for success, and the negative values are for failure codes). + +#### ext_offchain_http_request_add_header + +The new version 2 is introduced, deprecating `ext_offchain_http_request_add_header_version_1`. The signature is unchanged. ```wat -(func $ext_misc_runtime_version_version_2 - (param $wasm i64) (param $out i64) (result i64)) +(func $ext_offchain_http_request_add_header_version_2 + (param $request_id i32) (param $name i64) (param $value i64) (result i64)) +``` + +##### Arguments + +* `request_id` is an i32 integer indicating the ID of the started request, as returned by `ext_offchain_http_request_start`; +* `name` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the HTTP header name; +* `value` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the HTTP header value. + +##### Result + +The result is `0` for success or `-1` for failure. + +##### Changes + +The logic and the signature of the function are unchanged since the previous version. The only change is the interpretation of the result value to avoid an unneeded allocation and promote the unification of host function result returning (zero and positive values are for success, and the negative values are for failure codes). 
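As a non-normative illustration of the request set-up flow described above, the sketch below starts a request and attaches a header using the two new function versions. The import names and types mirror the signatures in this section; the pointer-size helper, the URI handling, and the header values are assumptions made only for this example.

```rust
// Non-normative sketch: starting an offchain HTTP request and adding a header
// with the allocator-free versions proposed above.

#[cfg(target_arch = "wasm32")]
extern "C" {
    fn ext_offchain_http_request_start_version_2(method: i64, uri: i64, meta: i64) -> i64;
    fn ext_offchain_http_request_add_header_version_2(request_id: i32, name: i64, value: i64) -> i64;
}

/// Packs a buffer into a runtime pointer-size value (Definition 216):
/// the lower 32 bits hold the address, the upper 32 bits hold the length.
fn ptr_size(data: &[u8]) -> i64 {
    ((data.len() as i64) << 32) | (data.as_ptr() as u32 as i64)
}

#[cfg(target_arch = "wasm32")]
fn start_json_post(uri: &str) -> Option<i32> {
    unsafe {
        // A positive identifier signals success; `-1` signals failure.
        let id = ext_offchain_http_request_start_version_2(
            ptr_size(b"POST"),
            ptr_size(uri.as_bytes()),
            ptr_size(&[]), // `meta` is reserved; an empty array is passed
        );
        if id < 0 {
            return None;
        }
        // `0` signals success; `-1` signals failure.
        let rc = ext_offchain_http_request_add_header_version_2(
            id as i32,
            ptr_size(b"Content-Type"),
            ptr_size(b"application/json"),
        );
        if rc == 0 { Some(id as i32) } else { None }
    }
}
```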
+
+#### ext_offchain_http_request_write_body
+
+The new version 2 is introduced, deprecating `ext_offchain_http_request_write_body_version_1`. The signature is unchanged.
+
+```wat
+(func $ext_offchain_http_request_write_body_version_2
+ (param $request_id i32) (param $chunk i64) (param $deadline i64) (result i64))
```

-The behaviour of this function is identical to its version 1 counterpart. Instead of allocating a buffer, writing the output to it, and returning a pointer to it, the new version of this function accepts an `out` parameter containing [pointer-size](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size) to the memory location where the host writes the output. If the output buffer is not large enough, the version information is truncated. Returns the length of the encoded version information, or `-1` in case of any failure.
+##### Arguments
+
+* `request_id` is an i32 integer indicating the ID of the started request, as returned by `ext_offchain_http_request_start`;
+* `chunk` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the chunk of bytes. Writing an empty chunk finalizes the request;
+* `deadline` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the SCALE-encoded Option value ([Definition 200](https://spec.polkadot.network/id-cryptography-encoding#defn-option-type)) containing the UNIX timestamp ([Definition 191](https://spec.polkadot.network/id-cryptography-encoding#defn-unix-time)). Passing `None` blocks indefinitely.
+
+##### Result
+
+On success, `0` is returned. On failure, a negative error code is returned, where `-1` denotes the deadline was reached, `-2` denotes that an I/O error occurred, and `-3` denotes that the request ID provided was invalid.
+
+##### Changes
+
+The logic and the signature of the function are unchanged since the previous version. The only change is the interpretation of the result value to avoid an unneeded allocation and promote the unification of host function result returning (zero and positive values are for success, and the negative values are for failure codes).
+
+#### ext_offchain_http_response_wait
+
+The new version 2 is introduced, deprecating `ext_offchain_http_response_wait_version_1`. The signature is:

```wat
-(func $ext_offchain_random_seed_version_2 (param $out i32))
+(func $ext_offchain_http_response_wait_version_2
+ (param $ids i64) (param $deadline i64) (param $out i64))
```

-The behaviour of this function is identical to its version 1 counterpart. Instead of allocating a buffer, writing the output to it, and returning a pointer to it, the new version of this function accepts an `out` parameter containing the address of the memory location where the host writes the output. The size is output is always 32 bytes.
+##### Arguments
+
+* `ids` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the SCALE-encoded array of started request IDs, as returned by `ext_offchain_http_request_start`;
+* `deadline` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the SCALE-encoded Option value ([Definition 200](https://spec.polkadot.network/id-cryptography-encoding#defn-option-type)) containing the UNIX timestamp ([Definition 191](https://spec.polkadot.network/id-cryptography-encoding#defn-unix-time)).
Passing `None` blocks indefinitely; +* `out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the buffer of `i32` integers where the request statuses will be stored. The number of elements of the buffer must be strictly equal to the number of elements in the `ids` array; otherwise, the function panics. + +##### Changes + +The logic of the functions is unchanged since the previous version. The signature has changed to align with the new memory allocation strategy. + +#### ext_offchain_http_response_read_body + +The new version 2 is introduced, deprecating `ext_offchain_http_response_read_body_version_1`. The signature is unchanged. ```wat -(func $ext_misc_input_read_version_1 - (param $offset i64) (param $out i64) (result i64)) +(func $ext_offchain_http_response_read_body_version_2 + (param $request_id i32) (param $buffer i64) (param $deadline i64) (result i64)) ``` -When a runtime function is called, the host uses the allocator to allocate memory within the runtime to write some input data. The new host function provides an alternative way to access the input that doesn't use the allocator. +##### Arguments -The function copies some data from the input data to the runtime's memory. The `offset` parameter indicates the offset within the input data from which to start copying, and must lie inside the output buffer provided. The `out` parameter is [a pointer-size](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size) and contains the buffer where to write. +* `request_id` is an i32 integer indicating the ID of the started request, as returned by `ext_offchain_http_request_start`; +* `buffer` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the buffer where the body is written; +* `deadline` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the SCALE-encoded Option value ([Definition 200](https://spec.polkadot.network/id-cryptography-encoding#defn-option-type)) containing the UNIX timestamp ([Definition 191](https://spec.polkadot.network/id-cryptography-encoding#defn-unix-time)). Passing `None` blocks indefinitely. -The runtime execution stops with an error if `offset` is strictly greater than the input data size. +##### Result -The return value is the number of bytes written unless `out` has zero length, in which case the full length of input data in bytes is returned, and nothing is written into the output buffer. +On success, the number of bytes written to the buffer is returned. A value of `0` means the entire response was consumed and no further calls to the function are needed for the provided request ID. On failure, a negative error code is returned, where `-1` denotes the deadline was reached, `-2` denotes that an I/O error occurred, and `-3` denotes that the request ID provided was invalid. -### Other changes +##### Changes + +The logic and the signature of the function are unchanged since the previous version. The only change is the interpretation of the result value to avoid an unneeded allocation and promote the unification of host function result returning (zero and positive values are for success, and the negative values are for failure codes). -In addition to the new host functions, this RFC proposes two changes to the runtime-host interface: +#### ext_allocator_{malloc|free} -- The following function signature is now also accepted for runtime entry points: `(func (result i64))`. 
-- Runtimes no longer need to expose a constant named `__heap_base`. +The functions are deprecated and must not be used in new code. -All the host functions superseded by new host functions are now considered deprecated and should no longer be used. +#### ext_input_read -The following other host functions are also considered deprecated: +A new function is introduced. The signature is -- `ext_storage_get_version_1` -- `ext_storage_changes_root_version_1` -- `ext_default_child_storage_get_version_1` -- `ext_allocator_malloc_version_1` -- `ext_allocator_free_version_1` -- `ext_offchain_network_state_version_1` +```wat +(func $ext_input_read_version_1 + (param $buffer i64)) +``` -## Unresolved Questions +##### Arguments -The changes in this RFC would need to be benchmarked. That involves implementing the RFC and measuring the speed difference. +* `buffer` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the buffer where the input data will be written. If the buffer is not large enough to accommodate the input data, the function will panic. -It is expected that most host functions are faster or equal in speed to their deprecated counterparts, with the following exceptions: +### Other changes -- `ext_misc_input_read_version_1` is inherently slower than obtaining a buffer with the entire data due to the two extra function calls and the extra copying. However, given that this only happens once per runtime call, the cost is expected to be negligible. +Currently, all runtime entrypoints have the following identical Wasm function signatures: -- The `ext_crypto_*_public_keys`, `ext_offchain_network_state`, and `ext_offchain_http_*` host functions are likely slightly slower than their deprecated counterparts, but given that they are used only in offchain workers, that is acceptable. +```wat +(func $runtime_entrypoint (param $data i32) (param $len i32) (result i64)) +``` -- It is unclear how replacing `ext_storage_get` with `ext_storage_read` and `ext_default_child_storage_get` with `ext_default_child_storage_read` will impact performance. +After this RFC is implemented, such entrypoints are still supported, but considered deprecated. New entrypoints must have the following signature: -- It is unclear how the changes to `ext_storage_next_key` and `ext_default_child_storage_next_key` will impact performance. +```wat +(func $runtime_entrypoint (param $len i32) (result i64)) +``` +A runtime function called through such an entrypoint gets the length of SCALE-encoded input data as its only argument. After that, the function must allocate exactly the amount of bytes it is requested, and call the `ext_input_read` host function to obtain the encoded input data. From d209a4ee2e01b466b789b7ff26b17939d30ef46e Mon Sep 17 00:00:00 2001 From: Dmitry Sinyavin Date: Thu, 9 Oct 2025 15:34:29 +0200 Subject: [PATCH 08/30] Address discussions --- ...0145-remove-unnecessary-allocator-usage.md | 539 ++++++++++++++---- 1 file changed, 417 insertions(+), 122 deletions(-) diff --git a/text/0145-remove-unnecessary-allocator-usage.md b/text/0145-remove-unnecessary-allocator-usage.md index e0df74248..e43c0c991 100644 --- a/text/0145-remove-unnecessary-allocator-usage.md +++ b/text/0145-remove-unnecessary-allocator-usage.md @@ -33,11 +33,13 @@ The API of many host functions contains buffer allocations. For example, when ca Even though no benchmark has been done, it is pretty obvious that this design is very inefficient. 
To continue with the example of `ext_hashing_twox_256_version_1`, it would be more efficient to instead write the output hash to a buffer allocated by the runtime on its stack and passed by pointer to the function. Allocating a buffer on the stack, in the worst case, consists simply of decreasing a number; in the best case, it is free. Doing so would save many VM memory reads and writes by the allocator, and would save a function call to `ext_allocator_free_version_1`. -Furthermore, the existence of the host-side allocator has become questionable over time. It is implemented in a very naive way, and for determinism and backwards compatibility reasons, it needs to be implemented exactly identically in every client implementation. Runtimes make substantial use of heap memory allocations, and each allocation needs to go through the runtime <-> host boundary twice (once for allocating and once for freeing). Moving the allocator to the runtime side would be a good idea, although it would increase the runtime size. But before the host-side allocator can be deprecated, all the host functions that use it must be updated to avoid using it. +Furthermore, the existence of the host-side allocator has become questionable over time. It is implemented in a very naive way: every allocation is rounded up to the next power of two, and once a piece of memory is allocated it can only be reused for allocations which also round up to the exactly the same size. So in theory it's possible to end up in a situation where we still technically have plenty of free memory, but our allocations will fail because all of that memory is reserved for differently sized buckets. That behavior is de-facto hardcoded into the current protocol and for determinism and backwards compatibility reasons, it needs to be implemented exactly identically in every client implementation. + +In addition to that, runtimes make substantial use of heap memory allocations, and each allocation needs to go through the runtime <-> host boundary twice (once for allocating and once for freeing). Moving the allocator to the runtime side would be a good idea, although it would increase the runtime size. But before the host-side allocator can be deprecated, all the host functions that use it must be updated to avoid using it. ## Stakeholders -No attempt was made to convince stakeholders. +Runtime developers, who will benefit from the improved performance and more deterministic behavior of the runtime code. ## Explanation @@ -49,17 +51,37 @@ The Runtime optional positive integer is a signed 64-bit value. Positive values #### New Definition II: Runtime Optional Pointer-Size -The runtime optional pointer-size has exactly the same definition as runtime pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) with the value of 2⁶⁴-1 representing a non-existing value (an _absent_ value). +The Runtime optional pointer-size has exactly the same definition as Runtime pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) with the value of 2⁶⁴-1 representing a non-existing value (an _absent_ value). ### Changes to host functions #### ext_storage_get -The function is deprecated. Users are encouraged to use `ext_storage_read_version_2` instead. 
+##### Existing prototype + +```wat +(func $ext_storage_get_version_1 + (param $key i64) (result i64)) +``` + +##### Changes + +The function is considered obsolete, as it only implements a subset of functionality of `ext_storage_read` and uses host-allocated buffers. Users are encouraged to use `ext_storage_read_version_2` instead. #### ext_storage_read -The new version 2 is introduced, deprecating `ext_storage_read_version_1`. The new signature is +##### Existing prototype + +```wat +(func $ext_storage_read_version_1 + (param $key i64) (param $value_out i64) (param $offset i32) (result i64)) +``` + +##### Changes + +The function was returning a SCALE-encoded `Option`-wrapped 32-bit integer representing the number of bytes left at supplied `offset`. It was using a host-allocated buffer to return it. It is changed to always return the full length of the value directly as a primitive value. + +##### New prototype ```wat (func $ext_storage_read_version_2 @@ -69,20 +91,27 @@ The new version 2 is introduced, deprecating `ext_storage_read_version_1`. The n ##### Arguments * `key` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the storage key being read; -* `value_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the value read should be stored. If the buffer is not long enough to accommodate the value, the value is truncated to the length of the buffer; +* `value_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the value read should be stored. The value is actually stored only if the buffer is large enough. Otherwise, the buffer contents are undefined; * `value_offset` is a 32-bit offset from which the value reading should start. ##### Result The result is an optional positive integer ([New Definition I](#new-def-i)), representing either the full length of the value in storage or the _absence_ of such a value in storage. -##### Changes +#### ext_storage_clear_prefix -The logic of the function is unchanged since the previous version. Only the result representation has changed. +##### Existing prototype -#### ext_storage_clear_prefix +```wat +(func $ext_storage_clear_prefix_version_2 + (param $prefix i64) (param $limit i64) (result i64)) +``` + +##### Changes + +The function used to accept only a prefix and a limit and return a SCALE-encoded `enum` representing the number of iterations performed, wrapped into a discriminator to differentiate if all the keys were removed. It was using a host-allocated buffer to return the value. As [discussed](https://github.com/w3f/polkadot-spec/issues/588), such implementation was suboptimal, and a better implementation was proposed in [PPP#7](https://github.com/w3f/PPPs/pull/7), but the PPP has never been adopted. The new version adopts the PPP, providing a means of returning much more exhaustive information about the work performed, and also accepts an optional input cursor and makes the limit optional as well. It always returns the full length of the continuation cursor. -The new version 3 is introduced, deprecating `ext_storage_clear_prefix_version_2`. 
The new signature is +##### New prototype ```wat (func $ext_storage_clear_prefix_version_3 @@ -96,22 +125,29 @@ The new version 3 is introduced, deprecating `ext_storage_clear_prefix_version_2 * `maybe_prefix` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) containing a (possibly empty) storage prefix being cleared; * `maybe_limit` is an optional positive integer ([New Definition I](#new-def-i)) representing either the maximum number of backend deletions which may happen, or the _absence_ of such a limit. The number of backend iterations may surpass this limit by no more than one; * `maybe_cursor_in` is an optional pointer-size ([New Definition II](#new-def-ii)) representing the cursor returned by the previous (unfinished) call to this function. It should be _absent_ on the first call; -* `maybe_cursor_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the continuation cursor will optionally be written (see also the Result section); +* `maybe_cursor_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the continuation cursor will optionally be written (see also the Result section). The value is actually stored only if the buffer is large enough. Otherwise, the buffer contents are undefined; * `backend` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to a 4-byte buffer where a 32-bit integer representing the number of items removed from the backend database will be written; * `unique` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to a 4-byte buffer where a 32-bit integer representing the number of unique keys removed, taking into account both the backend and the overlay; * `loops` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to a 4-byte buffer where a 32-bit integer representing the number of iterations (each requiring a storage seek/read) which were done will be written. ##### Result -The result represents the length of the continuation cursor which was written to the buffer provided in `maybe_cursor_out`. A zero value represents the absence of such a cursor and no need for continuation (the prefix has been completely cleared). If the buffer is not large enough to accommodate the cursor, the latter will be truncated, but the full length of the cursor will always be returned. +The result represents the length of the continuation cursor which might have been written to the buffer provided in `maybe_cursor_out`. A zero value represents the absence of such a cursor and no need for continuation (the prefix has been completely cleared). -##### Changes +#### ext_storage_root -The new version adopts [PPP#7](https://github.com/w3f/PPPs/pull/7), hence the significant change in the function interface with respect to the previous version. The reasoning for such a change was provided in the [original proposal discussion](https://github.com/w3f/polkadot-spec/issues/588). +##### Existing prototype -#### ext_storage_root +```wat +(func $ext_storage_root_version_2 + (param $version i32) (result i64)) +``` + +##### Changes + +The old version accepted the state version as an argument and returned a SCALE-encoded trie root hash through a host-allocated buffer. 
The new version adopts [PPP#6](https://github.com/w3f/PPPs/pull/6) getting rid of the argument that used to represent the state version. It accepts a pointer to a runtime-allocated buffer and fills it with the output value. The length of the encoded result is returned. -The new version 3 is introduced, deprecating `ext_storage_root_version_2`. The signature is +##### New prototype ```wat (func $ext_storage_root_version_3 @@ -120,47 +156,72 @@ The new version 3 is introduced, deprecating `ext_storage_root_version_2`. The s ##### Arguments -* `out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the SCALE-encoded storage root, calculated after committing all the existing operations, will be stored. +* `out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the SCALE-encoded storage root, calculated after committing all the existing operations, will be stored. The value is actually stored only if the buffer is large enough. Otherwise, the buffer contents are undefined. ##### Results -The result is the length of the output stored in the buffer provided in `out`. If the buffer is not large enough to accommodate the data, the latter will be truncated, but the full length of the output data will always be returned. +The result is the full length of the output that might have been stored in the buffer provided in `out`. -##### Changes +#### ext_storage_next_key -The new version adopts [PPP#6](https://github.com/w3f/PPPs/pull/6) deprecating the argument that used to represent the storage version. +##### Existing prototype -#### ext_storage_next_key +```wat +(func $ext_storage_next_key_version_1 + (param $key i64) (result i64)) +``` + +##### Changes -The new version 2 is introduced, deprecating `ext_storage_next_key_version_1`. The signature is +The old version accepted the key and returned the SCALE-encoded next key in a host-allocated buffer. The new version additionally accepts a runtime-allocated output buffer and returns full next key length. + +##### New prototype ```wat (func $ext_storage_next_key_version_2 (param $key_in i64) (param $key_out i64) (result i32)) ``` -##### Changes - -The logic of the function is unchanged since the previous version. The signature has changed to align with the new memory allocation strategy. ##### Arguments * `key_in` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer containing a storage key; -* `key_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to an output buffer where the next key in the storage in the lexicographical order will be written. +* `key_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to an output buffer where the next key in the storage in the lexicographical order will be written. The value is actually stored only if the buffer is large enough. Otherwise, the buffer contents are undefined. ##### Result -The result is the length of the output key, or zero if no next key was found. If the buffer provided in `key_out` is not large enough to accommodate the data, the latter will be truncated, but the full length of the output data will always be returned. +The result is the full length of the output key that might have been stored in `key_out`, or zero if no next key was found. 
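
For illustration, a runtime written in Rust might drive this caller-allocated-buffer interface roughly as follows. This is a non-normative sketch: the `host` binding, the `pack` helper and the initial buffer size are assumptions of the example, and the pointer-size layout follows [Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size).

```rust
// Non-normative sketch: read a storage value into a runtime-allocated buffer,
// retrying once with the exact size if the first buffer was too small.
mod host {
    extern "C" {
        // Binding assumed for this sketch; the prototype matches the one proposed above.
        pub fn ext_storage_read_version_2(key: i64, value_out: i64, offset: u32) -> i64;
    }
}

/// Packs a linear-memory pointer and a length into a 64-bit pointer-size value.
fn pack(ptr: usize, len: usize) -> i64 {
    (((len as u64) << 32) | ptr as u64) as i64
}

/// Reads the value under `key`, or returns `None` if no value exists.
fn storage_get(key: &[u8]) -> Option<Vec<u8>> {
    let mut buf = vec![0u8; 128]; // initial guess; any runtime-side allocation works
    loop {
        let full_len = unsafe {
            host::ext_storage_read_version_2(
                pack(key.as_ptr() as usize, key.len()),
                pack(buf.as_mut_ptr() as usize, buf.len()),
                0,
            )
        };
        if full_len < 0 {
            return None; // absent value, reported as a negative result
        }
        let full_len = full_len as usize;
        if full_len <= buf.len() {
            buf.truncate(full_len); // the buffer was large enough, so the value was written
            return Some(buf);
        }
        buf = vec![0u8; full_len]; // too small: retry with a buffer of the exact size
    }
}
```

The same pattern applies to `ext_storage_next_key_version_2`: the caller supplies the buffer, and a result larger than the buffer tells the caller how much space a retry would need.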
#### ext_default_child_storage_get -The function is deprecated. Users are encouraged to use `ext_default_child_storage_read_version_2` instead. +##### Existing prototype + +```wat +(func $ext_default_child_storage_get_version_1 + (param $child_storage_key i64) (param $key i64) (result i64)) +``` + +##### Changes + +The function is considered obsolete, as it only implements a subset of functionality of `ext_default_child_storage_read` and uses host-allocated buffers. Users are encouraged to use `ext_default_child_storage_read_version_2` instead. #### ext_default_child_storage_read -The new version 2 is introduced, deprecating `ext_default_child_storage_read_version_1`. The new signature is +##### Existing prototype ```wat -(func $ext_storage_read_version_2 +(func $ext_default_child_storage_read_version_1 + (param $child_storage_key i64) (param $key i64) (param $value_out i64) (param $offset i32) + (result i64)) +``` + +##### Changes + +The function was returning a SCALE-encoded `Option`-wrapped 32-bit integer representing the number of bytes left at supplied `offset`. It was using a host-allocated buffer to return it. It is changed to always return the full length of the value directly as a primitive value. + +##### New prototype + +```wat +(func $ext_default_child_storage_read_version_2 (param $storage_key i64) (param $key i64) (param $value_out i64) (param $value_offset i32) (result i64)) ``` @@ -169,20 +230,28 @@ The new version 2 is introduced, deprecating `ext_default_child_storage_read_ver * `storage_key` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the child storage key ([Definition 219](https://spec.polkadot.network/chap-host-api#defn-child-storage-type)); * `key` is the storage key being read; -* `value_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the value read should be stored. If the buffer is not long enough to accommodate the value, the value is truncated to the length of the buffer; +* `value_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the value read should be stored. The value is actually stored only if the buffer is large enough. Otherwise, the buffer contents are undefined; * `value_offset` is a 32-bit offset from which the value reading should start. ##### Result The result is an optional positive integer ([New Definition I](#new-def-i)), representing either the full length of the value in storage or the _absence_ of such a value in storage. -##### Changes +#### ext_default_child_storage_storage_kill -The logic of the function is unchanged since the previous version. Only the result representation has changed. +##### Existing prototype -#### ext_default_child_storage_storage_kill +```wat +(func $ext_default_child_storage_storage_kill_version_3 + (param $child_storage_key i64) (param $limit i64) + (result i64)) +``` + +##### Changes + +The function used to accept only a child storage key and a limit and return a SCALE-encoded `enum` representing the number of iterations performed, wrapped into a discriminator to differentiate if all the keys were removed. It was using a host-allocated buffer to return the value. As [discussed](https://github.com/w3f/polkadot-spec/issues/588), such implementation was suboptimal, and a better implementation was proposed in [PPP#7](https://github.com/w3f/PPPs/pull/7), but the PPP has never been adopted. 
The new version adopts the PPP, providing a means of returning much more exhaustive information about the work performed, and also accepts an optional input cursor and makes the limit optional as well. It always returns the full length of the continuation cursor. -The new version 4 is introduced, deprecating `ext_default_child_storage_storage_kill_version_3`. The new signature is +##### New prototype ```wat (func $ext_default_child_storage_storage_kill_version_4 @@ -196,22 +265,30 @@ The new version 4 is introduced, deprecating `ext_default_child_storage_storage_ * `storage_key` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the child storage key ([Definition 219](https://spec.polkadot.network/chap-host-api#defn-child-storage-type)); * `maybe_limit` is an optional positive integer representing either the maximum number of backend deletions which may happen, or the absence of such a limit. The number of backend iterations may surpass this limit by no more than one; * `maybe_cursor_in` is an optional pointer-size representing the cursor returned by the previous (unfinished) call to this function. It should be _absent_ on the first call; -* `maybe_cursor_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the continuation cursor will optionally be written (see also the Result section); +* `maybe_cursor_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the continuation cursor will optionally be written (see also the Result section). The value is actually stored only if the buffer is large enough. Otherwise, the buffer contents are undefined; * `backend` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to a 4-byte buffer where a 32-bit integer representing the number of items removed from the backend database will be written; * `unique` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to a 4-byte buffer where a 32-bit integer representing the number of unique keys removed, taking into account both the backend and the overlay; * `loops` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to a 4-byte buffer where a 32-bit integer representing the number of iterations (each requiring a storage seek/read) which were done will be written. ##### Result -The result represents the length of the continuation cursor which was written to the buffer provided in `maybe_cursor_out`. A zero value represents the absence of such a cursor and no need for continuation (the prefix has been completely cleared). If the buffer is not large enough to accommodate the cursor, the latter will be truncated, but the full length of the cursor will always be returned. +The result represents the length of the continuation cursor which might have been written to the buffer provided in `maybe_cursor_out`. A zero value represents the absence of such a cursor and no need for continuation (the prefix has been completely cleared). -##### Changes +#### ext_default_child_storage_clear_prefix -The new version adopts [PPP#7](https://github.com/w3f/PPPs/pull/7), hence the significant change in the function interface with respect to the previous version. 
The reasoning for such a change was provided in the [original proposal discussion](https://github.com/w3f/polkadot-spec/issues/588). +##### Existing prototype -#### ext_default_child_storage_clear_prefix +```wat +(func $ext_default_child_storage_clear_prefix_version_2 + (param $child_storage_key i64) (param $prefix i64) (param $limit i64) + (result i64)) +``` + +##### Changes + +The function used to accept (along with the child storage key) only a prefix and a limit and return a SCALE-encoded `enum` representing the number of iterations performed, wrapped into a discriminator to differentiate if all the keys were removed. It was using a host-allocated buffer to return the value. As [discussed](https://github.com/w3f/polkadot-spec/issues/588), such implementation was suboptimal, and a better implementation was proposed in [PPP#7](https://github.com/w3f/PPPs/pull/7), but the PPP has never been adopted. The new version adopts the PPP, providing a means of returning much more exhaustive information about the work performed, and also accepts an optional input cursor and makes the limit optional as well. It always returns the full length of the continuation cursor. -The new version 3 is introduced, deprecating `ext_default_child_storage_clear_prefix_version_2`. The new signature is +##### New prototype ```wat (func $ext_default_child_storage_clear_prefix_version_3 @@ -226,22 +303,29 @@ The new version 3 is introduced, deprecating `ext_default_child_storage_clear_pr * `prefix` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) containing a storage prefix being cleared; * `maybe_limit` is an optional positive integer representing either the maximum number of backend deletions which may happen, or the absence of such a limit. The number of backend iterations may surpass this limit by no more than one; * `maybe_cursor_in` is an optional pointer-size representing the cursor returned by the previous (unfinished) call to this function. It should be _absent_ on the first call; -* `maybe_cursor_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the continuation cursor will optionally be written (see also the Result section); +* `maybe_cursor_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the continuation cursor will optionally be written (see also the Result section). The value is actually stored only if the buffer is large enough. Otherwise, the buffer contents are undefined; * `backend` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to a 4-byte buffer where a 32-bit integer representing the number of items removed from the backend database will be written; * `unique` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to a 4-byte buffer where a 32-bit integer representing the number of unique keys removed, taking into account both the backend and the overlay; * `loops` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to a 4-byte buffer where a 32-bit integer representing the number of iterations (each requiring a storage seek/read) which were done will be written. ##### Result -The result represents the length of the continuation cursor which was written to the buffer provided in `maybe_cursor_out`. 
A zero value represents the absence of such a cursor and no need for continuation (the prefix has been completely cleared). If the buffer is not large enough to accommodate the cursor, the latter will be truncated, but the full length of the cursor will always be returned.
+The result represents the length of the continuation cursor which might have been written to the buffer provided in `maybe_cursor_out`. A zero value represents the absence of such a cursor and no need for continuation (the prefix has been completely cleared).
 
-##### Changes
+#### ext_default_child_storage_root
 
-The new version adopts [PPP#7](https://github.com/w3f/PPPs/pull/7), hence the significant change in the function interface with respect to the previous version. The reasoning for such a change was provided in the [original proposal discussion](https://github.com/w3f/polkadot-spec/issues/588).
+##### Existing prototype
 
-#### ext_default_child_storage_root
+```wat
+(func $ext_default_child_storage_root_version_2
+  (param $child_storage_key i64) (param $version i32) (result i64))
+```
+
+##### Changes
+
+The old version accepted (along with the child storage key) the state version as an argument and returned a SCALE-encoded trie root hash through a host-allocated buffer. The new version adopts [PPP#6](https://github.com/w3f/PPPs/pull/6) getting rid of the argument that used to represent the state version. It accepts a pointer to a runtime-allocated buffer and fills it with the output value. The length of the encoded result is returned.
 
-The new version 3 is introduced, deprecating `ext_default_child_storage_root_version_2`. The signature is
+##### New prototype
 
 ```wat
 (func $ext_default_child_storage_root_version_3
@@ -251,19 +335,26 @@ The new version 3 is introduced, deprecating `ext_default_child_storage_root_ver
 
 ##### Arguments
 
 * `storage_key` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the child storage key ([Definition 219](https://spec.polkadot.network/chap-host-api#defn-child-storage-type));
-* `out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the SCALE-encoded storage root, calculated after committing all the existing operations, will be stored.
+* `out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the SCALE-encoded storage root, calculated after committing all the existing operations, will be stored. The value is actually stored only if the buffer is large enough. Otherwise, the buffer contents are undefined.
 
 ##### Results
 
-The result is the length of the output stored in the buffer provided in `out`. If the buffer is not large enough to accommodate the data, the latter will be truncated, but the full length of the output data will always be returned.
+The result is the length of the output that might have been stored in the buffer provided in `out`.
 
-##### Changes
+#### ext_default_child_storage_next_key
 
-The new version adopts [PPP#6](https://github.com/w3f/PPPs/pull/6) deprecating the argument that used to represent the storage version.
+##### Existing prototype
 
-#### ext_default_child_storage_next_key
+```wat
+(func $ext_default_child_storage_next_key_version_1
+  (param $child_storage_key i64) (param $key i64) (result i64))
+```
+
+##### Changes
 
-The new version 2 is introduced, deprecating `ext_default_child_storage_next_key_version_1`. 
The signature is
+The old version accepted (along with the child storage key) the key and returned the SCALE-encoded next key in a host-allocated buffer. The new version additionally accepts a runtime-allocated output buffer and returns the full length of the next key.
+
+##### New prototype
 
 ```wat
 (func $ext_default_child_storage_next_key_version_2
@@ -274,17 +365,22 @@ The new version 2 is introduced, deprecating `ext_default_child_storage_next_key
 
 * `storage_key` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the child storage key ([Definition 219](https://spec.polkadot.network/chap-host-api#defn-child-storage-type));
 * `key_in` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer containing a storage key;
-* `key_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to an output buffer where the next key in the storage in the lexicographical order will be written.
+* `key_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to an output buffer where the next key in the storage in the lexicographical order will be written. The value is actually stored only if the buffer is large enough. Otherwise, the buffer contents are undefined.
 
 ##### Result
 
-The result is the length of the output key, or zero if no next key was found. If the buffer provided in `key_out` is not large enough to accommodate the data, the latter will be truncated, but the full length of the output data will always be returned.
+The result is the length of the output key that might have been written into `key_out`, or zero if no next key was found.
 
-##### Changes
+#### ext_trie_{blake2|keccak}\_256_\[ordered_]root
 
-The logic of the function is unchanged since the previous version. The signature has changed to align with the new memory allocation strategy.
+##### Existing prototypes
 
-#### ext_trie_{blake2|keccak}\_256_\[ordered_]root
+```wat
+(func $ext_trie_{blake2|keccak}_256_[ordered_]root_version_2
+  (param $data i64) (param $version i32) (result i32))
+```
+
+##### Changes
 
 The following functions share the same signatures and set of changes:
 * `ext_trie_blake2_256_root`
@@ -292,7 +388,9 @@ The following functions share the same signatures and set of changes:
 * `ext_trie_keccak_256_root`
 * `ext_trie_keccak_256_ordered_root`
 
-For the aforementioned functions, versions 3 were introduced, and the corresponding versions 2 were deprecated. The signature is:
+The functions used to return the root in a 32-byte host-allocated buffer. They now accept a runtime-allocated output buffer as an argument and do not return anything.
+
+##### New prototypes
 
 ```wat
 (func $ext_trie_{blake2|keccak}_256_[ordered_]root_version_3
@@ -305,13 +403,20 @@ For the aforementioned functions, versions 3 were introduced, and the correspond
 
 * `version` is the state version, where `0` denotes V0 and `1` denotes V1 state version. Other state versions may be introduced in the future;
 * `out` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to a 32-byte buffer, where the calculated trie root will be stored.
 
-##### Changes
+#### ext_misc_runtime_version
 
-The logic of the function is unchanged since the previous version. The signature has changed to align with the new memory allocation strategy. 
+##### Existing prototype
 
-#### ext_misc_runtime_version
+```wat
+(func $ext_misc_runtime_version_version_1
+  (param $data i64) (result i64))
+```
+
+##### Changes
 
-The new version 2 is introduced, deprecating `ext_default_child_storage_next_key_version_1`. The signature is
+The function used to return the SCALE-encoded runtime version information in a host-allocated buffer. It is changed to accept a runtime-allocated buffer as an argument and to return the length of the SCALE-encoded result.
+
+##### New prototype
 
 ```wat
 (func $ext_misc_runtime_version_version_2
@@ -320,33 +425,46 @@ The new version 2 is introduced, deprecating `ext_default_child_storage_next_key
 
 ##### Arguments
 
 * `wasm` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the Wasm blob from which the version information should be extracted;
-* `out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the buffer where the SCALE-encoded extracted version information will be stored.
+* `out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the buffer where the SCALE-encoded extracted version information will be stored. The value is actually stored only if the buffer is large enough. Otherwise, the buffer contents are undefined.
 
 ##### Result
 
-The result is an optional positive integer ([New Definition I](#new-def-i)) representing the length of the output data. If the buffer is not large enough to accommodate the data, the latter will be truncated, but the full length of the output data will always be returned. An _absent_ value represents the absence of the version information in the Wasm blob or a failure to read one.
+The result is an optional positive integer ([New Definition I](#new-def-i)) representing the length of the output data that might have been stored in `out`. An _absent_ value represents the absence of the version information in the Wasm blob or a failure to read one.
 
-##### Changes
+#### ext_crypto_{ed25519|sr25519|ecdsa}_public_keys
 
-The logic of the function is unchanged since the previous version. The signature has changed to align with the new memory allocation strategy.
+##### Existing prototypes
 
-#### ext_crypto_{ed25519|sr25519|ecdsa}_public_keys
+```wat
+(func $ext_crypto_ed25519_public_keys_version_1
+  (param $key_type_id i32) (result i64))
+(func $ext_crypto_sr25519_public_keys_version_1
+  (param $key_type_id i32) (result i64))
+(func $ext_crypto_ecdsa_public_keys_version_1
+  (param $key_type_id i32) (result i64))
+```
 
-The following functions are deprecated:
+##### Changes
+
+The following functions are considered obsolete:
 * `ext_crypto_ed25519_public_keys_version_1`
 * `ext_crypto_sr25519_public_keys_version_1`
 * `ext_crypto_ecdsa_public_keys_version_1`
 
-Users are encouraged to use the new `*_num_public_keys` and `*_public_key` counterparts.
+The functions used to return a host-allocated SCALE-encoded array of public keys of the corresponding type. As it is hard to predict the size of the buffer needed to store such an array, the new `*_num_public_keys` and `*_public_key` functions were introduced to implement an iterative approach. 
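
To illustrate the intended replacement pattern, a runtime might enumerate keys as sketched below. The `num_public_keys` and `public_key` wrappers are stand-ins for the host functions described in the two sections that follow, and the 32-byte key size assumes an sr25519 public key; both are assumptions of this example rather than part of the proposal.

```rust
// Non-normative sketch of the iterative enumeration replacing the obsolete
// `ext_crypto_*_public_keys` functions.
fn sr25519_public_keys(key_type_id: u32) -> Vec<[u8; 32]> {
    let n = num_public_keys(key_type_id); // e.g. ext_crypto_sr25519_num_public_keys_version_1
    let mut keys = Vec::with_capacity(n as usize);
    for index in 0..n {
        let mut key = [0u8; 32]; // runtime-allocated, fixed-size output buffer
        public_key(key_type_id, index, &mut key); // e.g. ext_crypto_sr25519_public_key_version_1
        keys.push(key);
    }
    keys
}

// Stand-ins for the host imports so that the sketch is self-contained.
fn num_public_keys(_key_type_id: u32) -> u32 { 0 }
fn public_key(_key_type_id: u32, _index: u32, _out: &mut [u8; 32]) {}
```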
#### ext_crypto_{ed25519|sr25519|ecdsa}_num_public_keys +##### Changes + New functions, all sharing the same signature and logic, are introduced: * `ext_crypto_ed25519_num_public_keys_version_1` * `ext_crypto_sr25519_num_public_keys_version_1` * `ext_crypto_ecdsa_num_public_keys_version_1` -The signature is: +They are intended to replace the obsolete `ext_crypto_{ed25519|sr25519|ecdsa}_public_keys` with a new iterative approach. + +##### New prototypes ```wat (func $ext_crypto_{ed25519|sr25519|ecdsa}_num_public_keys @@ -363,12 +481,16 @@ The result represents a (possibly zero) number of keys of the given type known t #### ext_crypto_{ed25519|sr25519|ecdsa}_public_key +##### Changes + New functions, all sharing the same signature and logic, are introduced: * `ext_crypto_ed25519_public_key_version_1` * `ext_crypto_sr25519_public_key_version_1` * `ext_crypto_ecdsa_public_key_version_1` -The signature is: +They are intended to replace the obsolete `ext_crypto_{ed25519|sr25519|ecdsa}_public_keys` with a new iterative approach. + +##### New prototypes ```wat (func $ext_crypto_{ed25519|sr25519|ecdsa}_public_key @@ -383,12 +505,23 @@ The signature is: #### ext_crypto_{ed25519|sr25519|ecdsa}_generate +##### Existing prototypes + +```wat +(func $ext_crypto_{ed25519|sr25519|ecdsa}_generate_version_1 + (param $key_type_id i32) (param $seed i64) (result i32)) +``` + +##### Changes + The following functions share the same signatures and set of changes: * `ext_crypto_ed25519_generate` * `ext_crypto_sr25519_generate` * `ext_crypto_ecdsa_generate` -For the aforementioned functions, versions 2 are introduced, and the corresponding versions 1 are deprecated. The signature is: +The functions used to return a host-allocated buffer containing the key of the corresponding type. They are changed to accept a runtime-allocated buffer as an argument and to return no value, as the length of keys is known and the operation cannot fail. + +##### New prototypes ```wat (func $ext_crypto_{ed25519|sr25519|ecdsa}_generate_version_2 @@ -401,11 +534,16 @@ For the aforementioned functions, versions 2 are introduced, and the correspondi * `seed` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the SCALE-encoded Option value ([Definition 200](https://spec.polkadot.network/id-cryptography-encoding#defn-option-type)) containing the BIP-39 seed which must be valid UTF-8. The function will panic if the seed is not valid UTF-8; * `out` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to the output buffer of the respective size (depending on key type) where the generated key will be written. -##### Changes +#### ext_crypto_{ed25519|sr25519|ecdsa}_sign\[_prehashed] -The logic of the functions is unchanged since the previous version. The signature has changed to align with the new memory allocation strategy. +##### Existing prototypes -#### ext_crypto_{ed25519|sr25519|ecdsa}_sign\[_prehashed] +```wat +(func $ext_crypto_{ed25519|sr25519|ecdsa}_sign{_prehashed|}_version_1 + (param $id i32) (param $pub_key i32) (param $msg i64) (result i64)) +``` + +##### Changes The following functions share the same signatures and set of changes: * `ext_crypto_ed25519_sign` @@ -413,7 +551,9 @@ The following functions share the same signatures and set of changes: * `ext_crypto_ecdsa_sign` * `ext_crypto_ecdsa_sign_prehashed` -For the aforementioned functions, versions 2 are introduced, and the corresponding versions 1 are deprecated. 
The signature is:
+The functions used to return a host-allocated SCALE-encoded value representing the result of signature application. They are changed to accept a pointer to a runtime-allocated buffer of a known size (dependent on the signature type) and to return a result code.
+
+##### New prototypes
 
 ```wat
 (func $ext_crypto_{ed25519|sr25519|ecdsa}_sign{_prehashed|}_version_2
@@ -431,17 +571,24 @@ For the aforementioned functions, versions 2 are introduced, and the correspondi
 
 The function returns `0` on success. On error, `-1` is returned and the output buffer should be considered uninitialized.
 
-##### Changes
+#### ext_crypto_secp256k1_ecdsa_recover\[_compressed]
 
-The logic of the functions is unchanged since the previous version. The signature has changed to align with the new memory allocation strategy.
+##### Existing prototypes
 
-#### ext_crypto_secp256k1_ecdsa_recover\[_compressed]
+```wat
+(func $ext_crypto_secp256k1_ecdsa_recover[_compressed]_version_2
+  (param $sig i32) (param $msg i32) (result i64))
+```
+
+##### Changes
 
 The following functions share the same signatures and set of changes:
 * `ext_crypto_secp256k1_ecdsa_recover`
 * `ext_crypto_secp256k1_ecdsa_recover_compressed`
 
-For the aforementioned functions, versions 3 are introduced, and the corresponding versions 2 are deprecated. The signature is:
+The functions used to return a host-allocated SCALE-encoded value representing the result of the key recovery. They are changed to accept a pointer to a runtime-allocated buffer of a known size and to return a result code. The return error encoding, defined under [Definition 221](https://spec.polkadot.network/chap-host-api#defn-ecdsa-verify-error), is changed to promote the unification of host function result reporting (zero and positive values are for success, and the negative values are for failure codes).
+
+##### New prototypes
 
 ```wat
 (func $ext_crypto_secp256k1_ecdsa_recover[_compressed]_version_3
@@ -458,11 +605,16 @@ For the aforementioned functions, versions 3 are introduced, and the correspondi
 
 The function returns `0` on success. On error, it returns a negative ECDSA verification error code, where `-1` stands for incorrect R or S, `-2` stands for invalid V, and `-3` stands for invalid signature.
 
-##### Changes
+#### ext_hashing_{keccak|sha2|blake2|twox}_{64|128|256|512}
 
-The signature has changed to align with the new memory allocation strategy. The return error encoding, defined under [Definition 221](https://spec.polkadot.network/chap-host-api#defn-ecdsa-verify-error), is changed to promote the unification of host function result reporting (zero and positive values are for success, and the negative values are for failure codes).
+##### Existing prototypes
 
-#### ext_hashing_{keccak|sha2|blake2|twox}_{64|128|256|512}
+```wat
+(func $ext_hashing_{keccak|sha2|blake2|twox}_{64|128|256|512}_version_1
+  (param $data i64) (result i32))
+```
+
+##### Changes
 
 The following functions share the same signatures and set of changes:
 * `ext_hashing_keccak_256`
@@ -474,7 +626,9 @@ The following functions share the same signatures and set of changes:
 * `ext_hashing_twox_128`
 * `ext_hashing_twox_256`
 
-For the aforementioned functions, versions 2 are introduced, and the corresponding versions 1 are deprecated. The signature is:
+The functions used to return a host-allocated buffer containing the hash. They are changed to accept a runtime-allocated buffer of a known size (dependent on the hash type) and to return no value, as the operation cannot fail. 
+
+##### New prototypes
 
 ```wat
 (func $ext_hashing_{keccak|sha2|blake2|twox}_{64|128|256|512}_version_2
@@ -486,13 +640,20 @@ For the aforementioned functions, versions 2 are introduced, and the correspondi
 
 * `data` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the data to be hashed.
 * `out` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to the output buffer of the respective size (depending on hash type) where the calculated hash will be written.
 
-##### Changes
+#### ext_offchain_submit_transaction
 
-The logic of the functions is unchanged since the previous version. The signature has changed to align with the new memory allocation strategy.
+##### Existing prototype
 
-#### ext_offchain_submit_transaction
+```wat
+(func $ext_offchain_submit_transaction_version_1
+  (param $data i64) (result i64))
+```
+
+##### Changes
 
-The new version 2 is introduced, deprecating `ext_offchain_submit_transaction_version_1`. The signature is unchanged.
+The old version returned a SCALE-encoded result in a host-allocated buffer. That is changed to return the result as a primitive value.
+
+##### New prototype
 
 ```wat
 (func $ext_offchain_submit_transaction_version_2
@@ -507,17 +668,26 @@ The new version 2 is introduced, deprecating `ext_offchain_submit_transaction_ve
 
 The result is `0` for success or `-1` for failure.
 
-##### Changes
+#### ext_offchain_network_state
 
-The logic and the signature of the function are unchanged since the previous version. The only change is the interpretation of the result value to avoid an unneeded allocation and promote the unification of host function result reporting (zero and positive values are for success, and the negative values are for failure codes).
+##### Existing prototype
 
-#### ext_offchain_network_state
+```wat
+(func $ext_offchain_network_state_version_1
+  (result i64))
+```
+
+##### Changes
 
-The function is deprecated. Users are encouraged to use `ext_offchain_network_peer_id_version_1` instead.
+The function is considered obsolete. It used to return a host-allocated value that was only partially used, and the used part is of a fixed size. Users are encouraged to use `ext_offchain_network_peer_id_version_1` instead.
 
 #### ext_offchain_network_peer_id
 
-A new function is introduced. The signature is
+##### Changes
+
+A new function is introduced to replace `ext_offchain_network_state`. It fills the output buffer with an opaque peer id of a known size.
+
+##### New prototype
 
 ```wat
 (func $ext_offchain_network_peer_id_version_1
@@ -534,7 +704,18 @@ A new function is introduced. The signature is
 
 The result is `0` for success or `-1` for failure.
 
 #### ext_offchain_random_seed
 
-The new version 2 is introduced, deprecating `ext_offchain_random_seed_version_1`. The signature is unchanged.
+##### Existing prototype
+
+```wat
+(func $ext_offchain_random_seed_version_1
+  (result i32))
+```
+
+##### Changes
+
+The function used to return a host-allocated buffer containing the random seed. It is changed to accept a pointer to a runtime-allocated buffer where the random seed is written and to return no value, as the operation cannot fail.
+
+##### New prototype
 
 ```wat
 (func $ext_offchain_random_seed_version_2
@@ -545,17 +726,26 @@ The new version 2 is introduced, deprecating `ext_offchain_random_seed_version_1
 
 * `out` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to the output buffer, 32 bytes long, where the random seed will be written. 
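
As a non-normative illustration of the new calling convention, the seed can be written into a stack buffer, so no allocator is involved on either side of the interface; the extern binding below is an assumption of this sketch.

```rust
// Non-normative sketch; the binding is assumed, with `out` being a 32-bit pointer
// to a 32-byte runtime-allocated buffer and no return value.
extern "C" {
    fn ext_offchain_random_seed_version_2(out: u32);
}

fn random_seed() -> [u8; 32] {
    let mut seed = [0u8; 32]; // lives on the runtime's stack
    unsafe { ext_offchain_random_seed_version_2(seed.as_mut_ptr() as usize as u32) };
    seed
}
```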
-##### Changes +#### ext_offchain_local_storage_get -The logic of the functions is unchanged since the previous version. The signature has changed to align with the new memory allocation strategy and promote the unification of host function result returning (zero and positive values are for success, and the negative values are for failure codes). +##### Existing prototype -#### ext_offchain_local_storage_get +```wat +(func $ext_offchain_local_storage_get_version_1 + (param $kind i32) (param $key i64) (result i64)) +``` -The function is deprecated. Users are encouraged to use `ext_offchain_local_storage_read_version_1` instead. +##### Changes + +The function is considered obsolete, as it only implements a subset of functionality of `ext_offchain_local_storage_read` and uses host-allocated buffers. Users are encouraged to use `ext_offchain_local_storage_read_version_1` instead. #### ext_offchain_local_storage_read -A new function is introduced. The signature is +##### Changes + +A new function is introduced to replace `ext_offchain_local_storage_get`. The name has been changed to better correspond to the family of the same-functionality functions in `ext_storage_*` group. + +##### New prototype ```wat (func $ext_offchain_local_storage_read_version_1 @@ -566,7 +756,7 @@ A new function is introduced. The signature is * `kind` is an offchain storage kind, where `0` denotes the persistent storage ([Definition 222](https://spec.polkadot.network/chap-host-api#defn-offchain-persistent-storage)), and `1` denotes the local storage ([Definition 223](https://spec.polkadot.network/chap-host-api#defn-offchain-persistent-storage)); * `key` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the storage key being read; -* `value_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the value read should be stored. If the buffer is not large enough to accommodate the value, the value is truncated to the length of the buffer; +* `value_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the value read should be stored. The value is actually stored only if the buffer is large enough. Otherwise, the buffer contents are undefined; * `offset` is a 32-bit offset from which the value reading should start. ##### Result @@ -575,7 +765,18 @@ The result is an optional positive integer ([New Definition I](#new-def-i)), rep #### ext_offchain_http_request_start -The new version 2 is introduced, deprecating `ext_offchain_http_request_start_version_1`. The signature is unchanged. +##### Existing prototype + +```wat +(func $ext_offchain_http_request_start_version_1 + (param $method i64) (param $uri i64) (param $meta i64) (result i64)) +``` + +##### Changes + +The function used to return a SCALE-encoded `Result` value in a host-allocated buffer. That is changed to return a primitive value denoting the operation result. The result interpretation has been changed to promote the unification of host function result returning (zero and positive values are for success, and the negative values are for failure codes). + +##### New prototype ```wat (func $ext_offchain_http_request_start_version_2 @@ -592,13 +793,20 @@ The new version 2 is introduced, deprecating `ext_offchain_http_request_start_ve On success, a positive request identifier is returned. On error, `-1` is returned. 
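
A non-normative sketch of the new result convention follows. The extern binding assumes that the version 2 call keeps the version 1 parameter layout (three pointer-sizes), and the `pack` helper and wrapper function are assumptions of this example.

```rust
// Non-normative sketch: non-negative results are request identifiers, negative results are errors.
extern "C" {
    fn ext_offchain_http_request_start_version_2(method: i64, uri: i64, meta: i64) -> i64;
}

/// Packs a linear-memory pointer and a length into a 64-bit pointer-size value.
fn pack(data: &[u8]) -> i64 {
    (((data.len() as u64) << 32) | data.as_ptr() as usize as u64) as i64
}

fn http_request_start(method: &str, uri: &str, meta: &[u8]) -> Result<u32, ()> {
    let rc = unsafe {
        ext_offchain_http_request_start_version_2(
            pack(method.as_bytes()),
            pack(uri.as_bytes()),
            pack(meta),
        )
    };
    if rc >= 0 { Ok(rc as u32) } else { Err(()) } // negative values signal failure
}
```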
-##### Changes +#### ext_offchain_http_request_add_header -The logic and the signature of the function are unchanged since the previous version. The only change is the interpretation of the result value to avoid an unneeded allocation and promote the unification of host function result returning (zero and positive values are for success, and the negative values are for failure codes). +##### Existing prototype -#### ext_offchain_http_request_add_header +```wat +(func $ext_offchain_http_request_add_header_version_1 + (param $request_id i32) (param $name i64) (param $value i64) (result i64)) +``` + +##### Changes -The new version 2 is introduced, deprecating `ext_offchain_http_request_add_header_version_1`. The signature is unchanged. +The function used to return a SCALE-encoded `Result` value in a host-allocated buffer. That is changed to return a primitive value denoting the operation result. The result interpretation has been changed to promote the unification of host function result returning (zero and positive values are for success, and the negative values are for failure codes). + +##### New prototype ```wat (func $ext_offchain_http_request_add_header_version_2 @@ -615,13 +823,20 @@ The new version 2 is introduced, deprecating `ext_offchain_http_request_add_head The result is `0` for success or `-1` for failure. -##### Changes +#### ext_offchain_http_request_write_body -The logic and the signature of the function are unchanged since the previous version. The only change is the interpretation of the result value to avoid an unneeded allocation and promote the unification of host function result returning (zero and positive values are for success, and the negative values are for failure codes). +##### Existing prototype -#### ext_offchain_http_request_write_body +```wat +(func $ext_offchain_http_request_write_body_version_1 + (param $request_id i32) (param $chunk i64) (param $deadline i64) (result i64)) +``` + +##### Changes -The new version 2 is introduced, deprecating `ext_offchain_http_request_write_body_version_1`. The signature is unchanged. +The function used to return a SCALE-encoded `Result` value in a host-allocated buffer. That is changed to return a primitive value denoting the operation result. The result interpretation has been changed to promote the unification of host function result returning (zero and positive values are for success, and the negative values are for failure codes). + +##### New prototype ```wat (func $ext_offchain_http_request_write_body_version_2 @@ -638,16 +853,23 @@ The new version 2 is introduced, deprecating `ext_offchain_http_request_write_bo On success, `0` is returned. On failure, a negative error code is returned, where `-1` denotes the deadline was reached, `-2` denotes that an I/O error occurred, and `-3` denotes that the request ID provided was invalid. -##### Changes +#### ext_offchain_http_response_wait + +##### Existing prototype + +```wat +(func $ext_offchain_http_response_wait_version_1 + (param $ids i64) (param $deadline i64) (result i64)) +``` -The logic and the signature of the function are unchanged since the previous version. The only change is the interpretation of the result value to avoid an unneeded allocation and promote the unification of host function result returning (zero and positive values are for success, and the negative values are for failure codes). +##### Changes -#### ext_offchain_http_request_wait +The function used to return a SCALE-encoded array of request statuses in a host-allocated buffer. 
It is changed to accept the output buffer of a known size and fill it with request statuses.
+
+##### New prototype
 
 ```wat
-(func $ext_offchain_http_request_wait_version_2
+(func $ext_offchain_http_response_wait_version_2
   (param $ids i64) (param $deadline i64) (param $out i64))
 ```
@@ -657,13 +879,79 @@ The new version 2 is introduced, deprecating `ext_offchain_http_request_wait_ver
 
 * `deadline` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the SCALE-encoded Option value ([Definition 200](https://spec.polkadot.network/id-cryptography-encoding#defn-option-type)) containing the UNIX timestamp ([Definition 191](https://spec.polkadot.network/id-cryptography-encoding#defn-unix-time)). Passing `None` blocks indefinitely;
 * `out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the buffer of `i32` integers where the request statuses will be stored. The number of elements of the buffer must be strictly equal to the number of elements in the `ids` array; otherwise, the function panics.
 
+#### ext_offchain_http_response_headers
+
+##### Existing prototype
+
+```wat
+(func $ext_offchain_http_response_headers_version_1
+  (param $request_id i32) (result i64))
+```
+
 ##### Changes
 
-The logic of the functions is unchanged since the previous version. The signature has changed to align with the new memory allocation strategy.
+The function is considered obsolete in favor of `ext_offchain_http_response_header_name` and `ext_offchain_http_response_header_value`. It used to return a host-allocated SCALE-encoded array of response header names and values. As it's hard to predict what buffer size is needed to accommodate such an array, the new functions offer an iterative approach instead.
+
+#### ext_offchain_http_response_header_name
+
+##### Changes
+
+A new function replacing part of the functionality of `ext_offchain_http_response_headers` with an iterative approach. It reads the name of the header at the given index into the provided runtime-allocated buffer.
+
+##### New prototype
+
+```wat
+(func $ext_offchain_http_response_header_name_version_1
+  (param $request_id i32) (param $header_index i32) (param $out i64) (result i64))
+```
+
+##### Arguments
+
+* `request_id` is an i32 integer indicating the ID of the started request, as returned by `ext_offchain_http_request_start`;
+* `header_index` is an i32 integer indicating the index of the header requested, starting from zero;
+* `out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the buffer where the header name will be stored. The value is actually stored only if the buffer is large enough. Otherwise, the buffer contents are undefined.
+
+##### Result
+
+The result is an optional positive integer ([New Definition I](#new-def-i)), representing either the full length of the header name or the _absence_ of a header with such an index.
+
+#### ext_offchain_http_response_header_value
+
+##### Changes
+
+A new function replacing part of the functionality of `ext_offchain_http_response_headers` with an iterative approach. It reads the value of the header at the given index into the provided runtime-allocated buffer. 
+
+##### New prototype
+
+```wat
+(func $ext_offchain_http_response_header_value_version_1
+  (param $request_id i32) (param $header_index i32) (param $out i64) (result i64))
+```
+
+##### Arguments
+
+* `request_id` is an i32 integer indicating the ID of the started request, as returned by `ext_offchain_http_request_start`;
+* `header_index` is an i32 integer indicating the index of the header requested, starting from zero;
+* `out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the buffer where the header value will be stored. The value is actually stored only if the buffer is large enough. Otherwise, the buffer contents are undefined.
+
+##### Result
+
+The result is an optional positive integer ([New Definition I](#new-def-i)), representing either the full length of the header value or the _absence_ of a header with such an index.
 
 #### ext_offchain_http_response_read_body
 
-The new version 2 is introduced, deprecating `ext_offchain_http_response_read_body_version_1`. The signature is unchanged.
+##### Existing prototype
+
+```wat
+(func $ext_offchain_http_response_read_body_version_1
+  (param $request_id i32) (param $buffer i64) (param $deadline i64) (result i64))
+```
+
+##### Changes
+
+The function has already been using a runtime-allocated buffer to return its value. However, the result of the operation was returned as a host-allocated SCALE-encoded `Result`. It is changed to return a primitive indicating either the length written or an error.
+
+##### New prototype
 
 ```wat
 (func $ext_offchain_http_response_read_body_version_2
@@ -680,17 +968,24 @@ The new version 2 is introduced, deprecating `ext_offchain_http_response_read_bo
 
 On success, the number of bytes written to the buffer is returned. A value of `0` means the entire response was consumed and no further calls to the function are needed for the provided request ID. On failure, a negative error code is returned, where `-1` denotes the deadline was reached, `-2` denotes that an I/O error occurred, and `-3` denotes that the request ID provided was invalid.
 
-##### Changes
+#### ext_allocator_{malloc|free}
 
-The logic and the signature of the function are unchanged since the previous version. The only change is the interpretation of the result value to avoid an unneeded allocation and promote the unification of host function result returning (zero and positive values are for success, and the negative values are for failure codes).
+##### Existing prototypes
 
-#### ext_allocator_{malloc|free}
+```wat
+(func $ext_allocator_malloc_version_1 (param $size i32) (result i32))
+(func $ext_allocator_free_version_1 (param $ptr i32))
+```
+
+##### Changes
 
-The functions are deprecated and must not be used in new code.
+The functions are considered obsolete and must not be used in new code.
 
 #### ext_input_read
 
-A new function is introduced. The signature is
+##### Changes
+
+A new function providing a means of passing input data from the host to the runtime. Previously, the host allocated a buffer and passed a pointer to it to the runtime. With the allocator moved to the runtime side, that is no longer possible, so the input data passing protocol has changed (see the "Other changes" section below). This function is required to support that change. 
+
+##### New prototype
 
 ```wat
 (func $ext_input_read_version_1

From 6a91680f4e7b007362948e6d1065cdcdb3a8793c Mon Sep 17 00:00:00 2001
From: Dmitry Sinyavin
Date: Mon, 13 Oct 2025 19:05:31 +0200
Subject: [PATCH 09/30] Add cached storage cursor retrieval function

---
 ...0145-remove-unnecessary-allocator-usage.md | 22 +++++++++++++++++++
 1 file changed, 22 insertions(+)

diff --git a/text/0145-remove-unnecessary-allocator-usage.md b/text/0145-remove-unnecessary-allocator-usage.md
index e43c0c991..d64d5c59d 100644
--- a/text/0145-remove-unnecessary-allocator-usage.md
+++ b/text/0145-remove-unnecessary-allocator-usage.md
@@ -431,6 +431,28 @@ The function used to return the SCALE-encoded runtime version information in a h
 
 The result is an optional positive integer ([New Definition I](#new-def-i)) representing the length of the output data that might have been stored in `out`. An _absent_ value represents the absence of the version information in the Wasm blob or a failure to read one.
 
+#### ext_misc_last_cursor
+
+##### Changes
+
+A new function is introduced to make it possible to fetch a cursor produced by `ext_storage_clear_prefix`, `ext_default_child_storage_clear_prefix`, and `ext_default_child_storage_storage_kill` even if the buffer initially provided to those functions wasn't large enough to accommodate the cursor.
+
+##### New prototype
+
+```wat
+(func $ext_misc_last_cursor_version_1
+  (param $out i64) (result i64))
+```
+##### Arguments
+
+* `out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the buffer where the last cached cursor will be stored, if one exists. The value is actually stored only if the buffer is large enough. Otherwise, the buffer contents are undefined.
+
+##### Result
+
+The result is an optional positive integer ([New Definition I](#new-def-i)) representing the length of the cursor that might have been stored in `out`. An _absent_ value represents the absence of the cached cursor.
+
+If the buffer had enough capacity and the cursor was stored successfully, the cursor cache is cleared and the same cursor cannot be retrieved again using this function. 
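
A non-normative sketch of how this cache might be used together with `ext_storage_clear_prefix_version_3` follows. The `clear_prefix_v3` and `last_cursor_v1` wrappers are stand-ins for the host calls described earlier (the sketch omits the `backend`, `unique` and `loops` counters for brevity), and the initial cursor buffer size is an assumption of the example.

```rust
// Non-normative sketch: perform one deletion step and recover the continuation cursor
// even when the first output buffer turned out to be too small.
fn clear_prefix_step(prefix: &[u8], cursor_in: Option<&[u8]>) -> Option<Vec<u8>> {
    let mut cursor_buf = vec![0u8; 64]; // first guess at the cursor size
    let cursor_len = clear_prefix_v3(prefix, None, cursor_in, &mut cursor_buf);
    if cursor_len == 0 {
        return None; // the prefix has been completely cleared, nothing to resume
    }
    if cursor_len as usize <= cursor_buf.len() {
        cursor_buf.truncate(cursor_len as usize);
        return Some(cursor_buf); // the cursor fit the buffer
    }
    // The cursor did not fit: fetch it from the cursor cache instead of repeating the call.
    let mut exact = vec![0u8; cursor_len as usize];
    let stored = last_cursor_v1(&mut exact); // ext_misc_last_cursor_version_1
    debug_assert_eq!(stored, Some(cursor_len));
    Some(exact)
}

// Stand-ins for the host imports so that the sketch is self-contained.
fn clear_prefix_v3(_prefix: &[u8], _limit: Option<u32>, _cursor: Option<&[u8]>, _out: &mut [u8]) -> u32 { 0 }
fn last_cursor_v1(_out: &mut [u8]) -> Option<u32> { None }
```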
+ #### ext_crypto_{ed25519|sr25519|ecdsa}_public_keys ##### Existing prototypes From 858d2d73481e64a2f6a28f9ed64292189f86524e Mon Sep 17 00:00:00 2001 From: Dmitry Sinyavin Date: Mon, 17 Nov 2025 11:21:24 +0100 Subject: [PATCH 10/30] Address discussions: clear prefix --- mdbook/book.toml | 11 +- mdbook/text/0001-agile-coretime.html | 720 +++++++++++++++++ mdbook/text/0005-coretime-interface.html | 345 ++++++++ .../text/0007-system-collator-selection.html | 358 +++++++++ mdbook/text/0008-parachain-bootnodes-dht.html | 315 ++++++++ mdbook/text/0010-burn-coretime-revenue.html | 256 ++++++ ...12-process-for-adding-new-collectives.html | 313 ++++++++ ...uilder-and-core-runtime-apis-for-mbms.html | 306 ++++++++ ...rove-locking-mechanism-for-parachains.html | 347 +++++++++ mdbook/text/0022-adopt-encointer-runtime.html | 268 +++++++ mdbook/text/0032-minimal-relay.html | 436 +++++++++++ .../text/0042-extrinsics-state-version.html | 304 ++++++++ .../0043-storage-proof-size-hostfunction.html | 271 +++++++ mdbook/text/0045-nft-deposits-asset-hub.html | 433 +++++++++++ ...047-assignment-of-availability-chunks.html | 478 ++++++++++++ .../text/0048-session-keys-runtime-api.html | 317 ++++++++ mdbook/text/0050-fellowship-salaries.html | 335 ++++++++ ...0056-one-transaction-per-notification.html | 292 +++++++ .../0059-nodes-capabilities-discovery.html | 311 ++++++++ mdbook/text/0078-merkleized-metadata.html | 564 ++++++++++++++ ...-general-transaction-extrinsic-format.html | 271 +++++++ text/0001-agile-coretime.html | 736 ++++++++++++++++++ text/0005-coretime-interface.html | 361 +++++++++ text/0007-system-collator-selection.html | 374 +++++++++ text/0008-parachain-bootnodes-dht.html | 331 ++++++++ text/0010-burn-coretime-revenue.html | 272 +++++++ ...12-process-for-adding-new-collectives.html | 329 ++++++++ ...uilder-and-core-runtime-apis-for-mbms.html | 322 ++++++++ ...rove-locking-mechanism-for-parachains.html | 363 +++++++++ text/0022-adopt-encointer-runtime.html | 284 +++++++ text/0032-minimal-relay.html | 452 +++++++++++ text/0042-extrinsics-state-version.html | 320 ++++++++ .../0043-storage-proof-size-hostfunction.html | 287 +++++++ text/0045-nft-deposits-asset-hub.html | 449 +++++++++++ ...047-assignment-of-availability-chunks.html | 494 ++++++++++++ text/0048-session-keys-runtime-api.html | 333 ++++++++ text/0050-fellowship-salaries.html | 351 +++++++++ ...0056-one-transaction-per-notification.html | 308 ++++++++ text/0059-nodes-capabilities-discovery.html | 327 ++++++++ text/0078-merkleized-metadata.html | 580 ++++++++++++++ ...-general-transaction-extrinsic-format.html | 287 +++++++ ...0145-remove-unnecessary-allocator-usage.md | 37 +- 42 files changed, 14829 insertions(+), 19 deletions(-) create mode 100644 mdbook/text/0001-agile-coretime.html create mode 100644 mdbook/text/0005-coretime-interface.html create mode 100644 mdbook/text/0007-system-collator-selection.html create mode 100644 mdbook/text/0008-parachain-bootnodes-dht.html create mode 100644 mdbook/text/0010-burn-coretime-revenue.html create mode 100644 mdbook/text/0012-process-for-adding-new-collectives.html create mode 100644 mdbook/text/0013-prepare-blockbuilder-and-core-runtime-apis-for-mbms.html create mode 100644 mdbook/text/0014-improve-locking-mechanism-for-parachains.html create mode 100644 mdbook/text/0022-adopt-encointer-runtime.html create mode 100644 mdbook/text/0032-minimal-relay.html create mode 100644 mdbook/text/0042-extrinsics-state-version.html create mode 100644 mdbook/text/0043-storage-proof-size-hostfunction.html 
create mode 100644 mdbook/text/0045-nft-deposits-asset-hub.html create mode 100644 mdbook/text/0047-assignment-of-availability-chunks.html create mode 100644 mdbook/text/0048-session-keys-runtime-api.html create mode 100644 mdbook/text/0050-fellowship-salaries.html create mode 100644 mdbook/text/0056-one-transaction-per-notification.html create mode 100644 mdbook/text/0059-nodes-capabilities-discovery.html create mode 100644 mdbook/text/0078-merkleized-metadata.html create mode 100644 mdbook/text/0084-general-transaction-extrinsic-format.html create mode 100644 text/0001-agile-coretime.html create mode 100644 text/0005-coretime-interface.html create mode 100644 text/0007-system-collator-selection.html create mode 100644 text/0008-parachain-bootnodes-dht.html create mode 100644 text/0010-burn-coretime-revenue.html create mode 100644 text/0012-process-for-adding-new-collectives.html create mode 100644 text/0013-prepare-blockbuilder-and-core-runtime-apis-for-mbms.html create mode 100644 text/0014-improve-locking-mechanism-for-parachains.html create mode 100644 text/0022-adopt-encointer-runtime.html create mode 100644 text/0032-minimal-relay.html create mode 100644 text/0042-extrinsics-state-version.html create mode 100644 text/0043-storage-proof-size-hostfunction.html create mode 100644 text/0045-nft-deposits-asset-hub.html create mode 100644 text/0047-assignment-of-availability-chunks.html create mode 100644 text/0048-session-keys-runtime-api.html create mode 100644 text/0050-fellowship-salaries.html create mode 100644 text/0056-one-transaction-per-notification.html create mode 100644 text/0059-nodes-capabilities-discovery.html create mode 100644 text/0078-merkleized-metadata.html create mode 100644 text/0084-general-transaction-extrinsic-format.html diff --git a/mdbook/book.toml b/mdbook/book.toml index 2eab700e4..8918476ab 100644 --- a/mdbook/book.toml +++ b/mdbook/book.toml @@ -4,7 +4,7 @@ description = "An online book of RFCs approved or proposed within the Polkadot F src = "src" [build] -create-missing = false +create-missing = true [output.html] additional-css = ["theme/polkadot.css"] @@ -17,6 +17,15 @@ no-section-label = true enable = true woff = true +[output.pdf] +print-background=false +margin-top=0.5 +margin-left=0.5 +margin-bottom=0.5 +margin-right=0.5 +paper-width=8.3 +paper-height=11.7 + [preprocessor.toc] command = "mdbook-toc" renderer = ["html"] diff --git a/mdbook/text/0001-agile-coretime.html b/mdbook/text/0001-agile-coretime.html new file mode 100644 index 000000000..15ef03271 --- /dev/null +++ b/mdbook/text/0001-agile-coretime.html @@ -0,0 +1,720 @@ + + + + + + + 0001 - Polkadot Fellowship RFCs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

RFC-1: Agile Coretime

+
+ + + +
Start Date: 30 June 2023
Description: Agile periodic-sale-based model for assigning Coretime on the Polkadot Ubiquitous Computer.
Authors: Gavin Wood
+
+

Summary

+

This proposes a periodic, sale-based method for assigning Polkadot Coretime, the analogue of "block space" within the Polkadot Network. The method takes into account the need for long-term capital expenditure planning for teams building on Polkadot, yet also provides a means to allow Polkadot to capture long-term value in the resource which it sells. It supports the possibility of building rich and dynamic secondary markets to optimize resource allocation and largely avoids the need for parameterization.

+

Motivation

+

Present System

+

The Polkadot Ubiquitous Computer, or just Polkadot UC, represents the public service provided by the Polkadot Network. It is a trust-free, WebAssembly-based, multicore, internet-native omnipresent virtual machine which is highly resilient to interference and corruption.

+

The present system of allocating the limited resources of the Polkadot Ubiquitous Computer is through a process known as parachain slot auctions. This is a parachain-centric paradigm whereby a single core is long-term allocated to a single parachain which itself implies a Substrate/Cumulus-based chain secured and connected via the Relay-chain. Slot auctions are on-chain candle auctions which proceed for several days and result in the core being assigned to the parachain for six months at a time up to 24 months in advance. Practically speaking, we only see two year periods being bid upon and leased.

+

Funds behind the bids made in the slot auctions are merely locked, they are not consumed or paid and become unlocked and returned to the bidder on expiry of the lease period. A means of sharing the deposit trustlessly known as a crowdloan is available allowing token holders to contribute to the overall deposit of a chain without any counterparty risk.

+

Problems

+

The present system is based on a model of one-core-per-parachain. This is a legacy interpretation of the Polkadot platform and is not a reflection of its present capabilities. By restricting ownership and usage to this model, more dynamic and resource-efficient means of utilizing the Polkadot Ubiquitous Computer are lost.

+

More specifically, it is impossible to lease out cores at anything less than six months, and apparently unrealistic to do so at anything less than two years. This removes the ability to dynamically manage the underlying resource, and generally experimentation, iteration and innovation suffer. It bakes into the platform an assumption of permanence for anything deployed into it and restricts the market's ability to find a more optimal allocation of the finite resource.

+

There is no ability to determine capital requirements for hosting a parachain beyond two years from the point of its initial deployment onto Polkadot. While it would be unreasonable to have perfect and indefinite cost predictions for any real-world platform, not having any clarity whatsoever beyond "market rates" two years hence can be a very off-putting prospect for teams to buy into.

+

However, quite possibly the most substantial problem is both a perceived and often real high barrier to entry of the Polkadot ecosystem. By forcing innovators to either raise seven-figure sums through investors or appeal to the wider token-holding community, Polkadot makes it difficult for a small band of innovators to deploy their technology into Polkadot. While not being actually permissioned, it is also far from the barrierless, permissionless ideal which an innovation platform such as Polkadot should be striving for.

+

Requirements

+
    +
  1. The solution SHOULD provide an acceptable value-capture mechanism for the Polkadot network.
  2. +
  3. The solution SHOULD allow parachains and other projects deployed on to the Polkadot UC to make long-term capital expenditure predictions for the cost of ongoing deployment.
  4. +
  5. The solution SHOULD minimize the barriers to entry in the ecosystem.
  6. +
  7. The solution SHOULD work well when the Polkadot UC has up to 1,000 cores.
  8. +
  9. The solution SHOULD work when the number of cores which the Polkadot UC can support changes over time.
  10. +
  11. The solution SHOULD facilitate the optimal allocation of work to cores of the Polkadot UC, including by facilitating the trade of regular core assignment at various intervals and for various spans.
  12. +
  13. The solution SHOULD avoid creating additional dependencies on functionality which the Relay-chain need not strictly provide for the delivery of the Polkadot UC.
  14. +
+

Furthermore, the design SHOULD be implementable and deployable in a timely fashion; three months from the acceptance of this RFC should not be unreasonable.

+

Stakeholders

+

Primary stakeholder sets are:

+
    +
  • Protocol researchers and developers, largely represented by the Polkadot Fellowship and Parity Technologies' Engineering division.
  • +
  • Polkadot Parachain teams both present and future, and their users.
  • +
  • Polkadot DOT token holders.
  • +
+

Socialization:

+

The essentials of this proposal were presented at Polkadot Decoded 2023 Copenhagen on the Main Stage. A small amount of socialization at the Parachain Summit preceded it, and some substantial discussion followed it. The Parity Ecosystem team is currently soliciting views from ecosystem teams who would be key stakeholders.

+

Explanation

+

Overview

+

Upon implementation of this proposal, the parachain-centric slot auctions and associated crowdloans cease. Instead, Coretime on the Polkadot UC is sold by the Polkadot System in two separate formats: Bulk Coretime and Instantaneous Coretime.

+

When a Polkadot Core is utilized, we say it is dedicated to a Task rather than a "parachain". The Task to which a Core is dedicated may change at every Relay-chain block and while one predominant type of Task is to secure a Cumulus-based blockchain (i.e. a parachain), other types of Tasks are envisioned.

+

Bulk Coretime is sold periodically on a specialised system chain known as the Coretime-chain and allocated in advance of its usage, whereas Instantaneous Coretime is sold on the Relay-chain immediately prior to usage on a block-by-block basis.

+

This proposal does not fix what should be done with revenue from sales of Coretime and leaves it for a further RFC process.

+

Owners of Bulk Coretime are tracked on the Coretime-chain and the ownership status and properties of the owned Coretime are exposed over XCM as a non-fungible asset.

+

At the request of the owner, the Coretime-chain allows a single Bulk Coretime asset, known as a Region, to be used in various ways including transferal to another owner, allocated to a particular task (e.g. a parachain) or placed in the Instantaneous Coretime Pool. Regions can also be split out, either into non-overlapping sub-spans or exactly-overlapping spans with less regularity.

+

The Coretime-Chain periodically instructs the Relay-chain to assign its cores to alternative tasks as and when Core allocations change due to new Regions coming into effect.

+

Renewal and Migration

+

There is a renewal system which allows a Bulk Coretime assignment of a single core to be renewed unchanged with a known price increase from month to month. Renewals are processed in a period prior to regular purchases, effectively giving them precedence over a fixed number of cores available.

+

Renewals are only enabled when a core's assignment does not include an Instantaneous Coretime allocation and has not been split into shorter segments.

+

Thus, renewals are designed to ensure only that committed parachains get some guarantees about price for predicting future costs. This price-capped renewal system only allows cores to be reused for their same tasks from month to month. In any other context, Bulk Coretime would need to be purchased regularly.

+

As a migration mechanism, pre-existing leases (from the legacy lease/slots/crowdloan framework) are initialized into the Coretime-chain and cores assigned to them prior to Bulk Coretime sales. In the sale where the lease expires, the system offers a renewal, as above, to allow a priority sale of Bulk Coretime and ensure that the Parachain suffers no downtime when transitioning from the legacy framework.

+

Instantaneous Coretime

+

Processing of Instantaneous Coretime happens in part on the Polkadot Relay-chain. Credit is purchased on the Coretime-chain for regular DOT tokens, and this results in a DOT-denominated Instantaneous Coretime Credit account on the Relay-chain being credited for the same amount.

+

Though the Instantaneous Coretime Credit account records a balance for an account identifier (very likely controlled by a collator), it is non-transferable and non-refundable. It can only be consumed in order to purchase some Instantaneous Coretime with immediate availability.

+

The Relay-chain reports this usage back to the Coretime-chain in order to allow it to reward the providers of the underlying Coretime, either the Polkadot System or owners of Bulk Coretime who contributed to the Instantaneous Coretime Pool.

+

Specifically the Relay-chain is expected to be responsible for:

+
    +
  • holding non-transferable, non-refundable DOT-denominated Instantaneous Coretime Credit balance information.
  • +
  • setting and adjusting the price of Instantaneous Coretime based on usage.
  • +
  • allowing collators to consume their Instantaneous Coretime Credit at the current pricing in exchange for the ability to schedule one PoV for near-immediate usage.
  • +
  • ensuring the Coretime-Chain has timely accounting information on Instantaneous Coretime Sales revenue.
  • +
+

Coretime-chain

+

The Coretime-chain is a new system parachain. It has the responsibility of providing the Relay-chain, via UMP, with the following information:

+
    +
  • The number of cores which should be made available.
  • +
  • Which tasks should be running on which cores and in what ratios.
  • +
  • Accounting information for Instantaneous Coretime Credit.
  • +
+

It also expects information from the Relay-chain via DMP:

+
    +
  • The number of cores available to be scheduled.
  • +
  • Account information on Instantaneous Coretime Sales.
  • +
+

The specific interface is properly described in RFC-5.

+

Detail

+

Parameters

+

This proposal includes a number of parameters which need not necessarily be fixed. Their usage is explained below, but their values are suggested or specified in the later section Parameter Values.

+

Reservations and Leases

+

The Coretime-chain includes some governance-set reservations of Coretime; these cover every System-chain. Additionally, governance is expected to initialize details of the pre-existing leased chains.

+

Regions

+

A Region is an assignable period of Coretime with a known regularity.

+

All Regions are associated with a unique Core Index, identifying the core whose assignment is controlled by ownership of the Region.

+

All Regions are also associated with a Core Mask, an 80-bit bitmap, to denote the regularity at which it may be scheduled on the core. If all bits are set in the Core Mask value, it is said to be Complete. 80 is selected since this results in the size of the datatype used to identify any Region of Polkadot Coretime to be a very convenient 128-bit. Additionally, if TIMESLICE (the number of Relay-chain blocks in a Timeslice) is 80, then a single bit in the Core Mask bitmap represents exactly one Core for one Relay-chain block in one Timeslice.

+

All Regions have a span. Region spans are quantized into periods of TIMESLICE blocks; TIMESLICE divides into BULK_PERIOD a whole number of times.

+

The Timeslice type is a u32 which can be multiplied by TIMESLICE to give a BlockNumber value representing the same quantity in terms of Relay-chain blocks.

+
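As a minimal illustration of that conversion (the type aliases and the constant below are assumptions based on the suggested parameter values, not normative definitions):

```rust
type Timeslice = u32;
type BlockNumber = u32;

/// Suggested value: 80 Relay-chain blocks per timeslice (8 minutes of 6-second blocks).
const TIMESLICE: BlockNumber = 80;

/// Convert a quantity of timeslices into the equivalent number of Relay-chain blocks.
fn timeslices_to_blocks(t: Timeslice) -> BlockNumber {
    t * TIMESLICE
}
```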

Regions can be tasked to a TaskId (aka ParaId) or pooled into the Instantaneous Coretime Pool. This process can be Provisional or Final. If done only provisionally or not at all then they are fresh and have an Owner which is able to manipulate them further including reassignment. Once Final, then all ownership information is discarded and they cannot be manipulated further. Renewal is not possible when only provisionally tasked/pooled.

+

Bulk Sales

+

A sale of Bulk Coretime occurs on the Coretime-chain every BULK_PERIOD blocks.

+

In every sale, a BULK_LIMIT of individual Regions are offered for sale.

+

Each Region offered for sale has a different Core Index, ensuring that they each represent an independently allocatable resource on the Polkadot UC.

+

The Regions offered for sale have the same span: they last exactly BULK_PERIOD blocks, and begin immediately following the span of the previous Sale's Regions. The Regions offered for sale also have the complete, non-interlaced, Core Mask.

+

The Sale Period ends as soon as the span of the Coretime Regions being sold begins. At this point, the next Sale Price is set according to the previous Sale Price together with the number of Regions sold, compared to the desired and maximum number of Regions to be sold. See Price Setting for additional detail on this point.

+

Following the end of the previous Sale Period, there is an Interlude Period lasting INTERLUDE_PERIOD of blocks. After this period is elapsed, regular purchasing begins with the Purchasing Period.

+

This is designed to give at least two weeks worth of time for the purchased regions to be partitioned, interlaced, traded and allocated.

+

The Interlude

+

The Interlude period is a period prior to Regular Purchasing where renewals are allowed to happen. This has the effect of ensuring existing long-term tasks/parachains have a chance to secure their Bulk Coretime for a well-known price prior to general sales.

+

Regular Purchasing

+

Any account may purchase Regions of Bulk Coretime if they have the appropriate funds in place during the Purchasing Period, which runs from INTERLUDE_PERIOD blocks after the end of the previous sale until the beginning of the Region of the Bulk Coretime which is for sale, as long as there are Regions of Bulk Coretime left for sale (i.e. no more than BULK_LIMIT have already been sold in the Bulk Coretime Sale). The Purchasing Period is thus roughly BULK_PERIOD - INTERLUDE_PERIOD blocks in length.

+

The Sale Price varies during an initial portion of the Purchasing Period called the Leadin Period and then stays stable for the remainder. This initial portion is LEADIN_PERIOD blocks in duration. During the Leadin Period the price decreases towards the Sale Price, which it lands at by the end of the Leadin Period. The actual curve by which the price starts and descends to the Sale Price is outside the scope of this RFC, though a basic suggestion is provided in the Price Setting Notes, below.

+

Renewals

+

At any time when there are remaining Regions of Bulk Coretime to be sold, including during the Interlude Period, certain Bulk Coretime assignments may be Renewed. This is similar to a purchase in that funds must be paid and it consumes one of the Regions of Bulk Coretime which would otherwise be placed for purchase. However, there are two key differences.

+

Firstly, the price paid is the lesser of the previous purchase/renewal price increased by RENEWAL_PRICE_CAP and the current (or initial, if sales have yet to begin) regular Sale Price.

+

Secondly, the purchased Region comes preassigned with exactly the same workload as before. It cannot be traded, repartitioned, interlaced or exchanged. As such, unlike a regular purchase, the Region never has an owner.

+

Renewal is only possible for either cores which have been assigned as a result of a previous renewal, which are migrating from legacy slot leases, or which fill their Bulk Coretime with an unsegmented, fully and finally assigned workload which does not include placement in the Instantaneous Coretime Pool. The renewed workload will be the same as this initial workload.

+

Manipulation

+

Regions may be manipulated in various ways by their owner:

+
    +
  1. Transferred in ownership.
  2. +
  3. Partitioned into quantized, non-overlapping segments of Bulk Coretime with the same ownership.
  4. +
  5. Interlaced into multiple Regions over the same period whose eventual assignments take turns to be scheduled.
  6. +
  7. Assigned to a single, specific task (identified by TaskId aka ParaId). This may be either provisional or final.
  8. +
  9. Pooled into the Instantaneous Coretime Pool, in return for a pro-rata amount of the revenue from the Instantaneous Coretime Sales over its period.
  10. +
+

Enactment

+

Specific functions of the Coretime-chain

+

Several functions of the Coretime-chain SHALL be exposed through dispatchables and/or a nonfungible trait implementation integrated into XCM:

+

1. transfer

+

Regions may have their ownership transferred.

+

A transfer(region: RegionId, new_owner: AccountId) dispatchable shall have the effect of altering the current owner of the Region identified by region from the signed origin to new_owner.

+

An implementation of the nonfungible trait SHOULD include equivalent functionality. RegionId SHOULD be used for the AssetInstance value.

+

2. partition

+

Regions may be split apart into two non-overlapping interior Regions of the same Core Mask which together concatenate to the original Region.

+

A partition(region: RegionId, pivot: Timeslice) dispatchable SHALL have the effect of removing the Region identified by region and adding two new Regions of the same owner and Core Mask. One new Region will begin at the same point of the old Region but end at pivot timeslices into the Region, whereas the other will begin at this point and end at the end point of the original Region.

+

Also:

+
    +
  • The owner field of region must be equal to the Signed origin.
  • +
  • pivot must equal neither the begin nor end fields of the region.
  • +
+

3. interlace

+

Regions may be decomposed into two Regions of the same span whose eventual assignments take turns on the core by virtue of having complementary Core Masks.

+

An interlace(region: RegionId, mask: CoreMask) dispatchable shall have the effect of removing the Region identified by region and creating two new Regions. The new Regions will each have the same span and owner of the original Region, but one Region will have a Core Mask equal to mask and the other will have Core Mask equal to the XOR of mask and the Core Mask of the original Region.

+
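A minimal sketch of the mask arithmetic described above (an illustrative helper only, not the pallet's actual code; the 80-bit CoreMask representation follows the Notes on Types later in this RFC):

```rust
type CoreMask = [u8; 10]; // 80-bit bitmap

/// Split `original` into `mask` and the remainder of `original`, as `interlace` does.
/// Assuming `mask` only has bits which are also set in `original` (as required below),
/// the two results are disjoint and together cover exactly the original Core Mask.
fn interlace_masks(original: CoreMask, mask: CoreMask) -> (CoreMask, CoreMask) {
    let mut remainder = [0u8; 10];
    for i in 0..10 {
        remainder[i] = original[i] ^ mask[i];
    }
    (mask, remainder)
}
```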

Also:

+
    +
  • The owner field of region must be equal to the Signed origin.
  • +
  • mask must have some bits set AND must not equal the Core Mask of the old Region AND must only have bits set which are also set in the old Region's Core Mask.
  • +
+

4. assign

+

Regions may be assigned to a core.

+

An assign(region: RegionId, target: TaskId, finality: Finality) dispatchable shall have the effect of placing an item in the workplan corresponding to the region's properties and assigned to the target task.

+

If the region's end has already passed (taking into account any advance notice requirements) then this operation is a no-op. If the region's beginning has already passed, then it is effectively altered to become the next schedulable timeslice.

+

finality may have the value of either Final or Provisional. If Final, then the operation is free, the region record is removed entirely from storage and renewal may be possible: if the Region's span is the entire BULK_PERIOD, then the Coretime-chain records in storage that the allocation happened during this period in order to facilitate the possibility for a renewal. (Renewal only becomes possible when the full Core Mask of a core is finally assigned for the full BULK_PERIOD.)

+

Also:

+
    +
  • The owner field of region must be equal to the Signed origin.
  • +
+

5. pool

+

Regions may be consumed in exchange for a pro rata portion of the Instantaneous Coretime Sales Revenue from its period and regularity.

+

A pool(region: RegionId, beneficiary: AccountId, finality: Finality) dispatchable shall have the effect of placing an item in the workplan corresponding to the region's properties and assigned to the Instantaneous Coretime Pool. The details of the region will be recorded in order to allow for a pro rata share of the Instantaneous Coretime Sales Revenue at the time of the Region relative to any other providers in the Pool.

+

If the region's end has already passed (taking into account any advance notice requirements) then this operation is a no-op. If the region's beginning has already passed, then it is effectively altered to become the next schedulable timeslice.

+

finality may have the value of either Final or Provisional. If Final, then the operation is free and the region record is removed entirely from storage.

+

Also:

+
    +
  • The owner field of region must be equal to the Signed origin.
  • +
+

6. Purchases

+

A dispatchable purchase(price_limit: Balance) shall be provided. Any account may call purchase to purchase Bulk Coretime at the maximum price of price_limit.

+

This may be called successfully only:

+
    +
  1. during the regular Purchasing Period;
  2. +
  3. when the caller is a Signed origin and their account balance is reducible by the current sale price;
  4. +
  5. when the current sale price is no greater than price_limit; and
  6. +
  7. when the number of cores already sold is less than BULK_LIMIT.
  8. +
+

If successful, the caller's account balance is reduced by the current sale price and a new Region item for the following Bulk Coretime span is issued with the owner equal to the caller's account.

+

7. Renewals

+

A dispatchable renew(core: CoreIndex) shall be provided. Any account may call renew to purchase Bulk Coretime and renew an active allocation for the given core.

+

This may be called during the Interlude Period as well as the regular Purchasing Period and has the same effect as purchase followed by assign, except that:

+
    +
  1. The price of the sale is the Renewal Price (see next).
  2. +
  3. The Region is allocated to exactly the same workload to which the given core is currently allocated for the present Region.
  4. +
+

Renewal is only valid where a Region's span is assigned to Tasks (not placed in the Instantaneous Coretime Pool) for the entire unsplit BULK_PERIOD over all of the Core Mask and with Finality. There are thus three possibilities of a renewal being allowed:

+
    +
  1. Purchased unsplit Coretime with final assignment to tasks over the full Core Mask.
  2. +
  3. Renewed Coretime.
  4. +
  5. A legacy lease which is ending.
  6. +
+

Renewal Price

+

The Renewal Price is the minimum of the current regular Sale Price (or the initial Sale Price if in the Interlude Period) and:

+
    +
  • If the workload being renewed came to be through the Purchase and Assignment of Bulk Coretime, then the price paid during that Purchase operation.
  • +
  • If the workload being renewed was previously renewed, then the price paid during this previous Renewal operation plus RENEWAL_PRICE_CAP.
  • +
  • If the workload being renewed is a migration from a legacy slot auction lease, then the nominal price for a Regular Purchase (outside of the Lead-in Period) of the Sale during which the legacy lease expires.
  • +
+
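A rough sketch of the rule above, assuming RENEWAL_PRICE_CAP is the proportional increase suggested in Parameter Values (Perbill::from_percent(2)) and Balance is u128; this is illustrative only, not the pallet's actual logic:

```rust
use sp_arithmetic::Perbill;

type Balance = u128;

/// `previous_paid` is the price paid at the purchase or renewal from which the workload
/// derives (or the nominal Sale Price for a migrating legacy lease); `sale_price` is the
/// current (or initial, during the Interlude Period) regular Sale Price.
fn renewal_price(previous_paid: Balance, cap: Perbill, sale_price: Balance) -> Balance {
    let capped = previous_paid.saturating_add(cap * previous_paid);
    capped.min(sale_price)
}
```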

8. Instantaneous Coretime Credits

+

A dispatchable purchase_credit(amount: Balance, beneficiary: RelayChainAccountId) shall be provided. Any account with at least amount spendable funds may call this. This increases the Instantaneous Coretime Credit balance on the Relay-chain of the beneficiary by the given amount.

+

This Credit is consumable on the Relay-chain as part of the Task scheduling system and its specifics are out of the scope of this proposal. When consumed, revenue is recorded and provided to the Coretime-chain for proper distribution. The API for doing this is specified in RFC-5.

+

Notes on the Instantaneous Coretime Market

+

For an efficient market to form around the provision of Bulk-purchased Cores into the pool of cores available for Instantaneous Coretime purchase, it is crucial to ensure that price changes for the purchase of Instantaneous Coretime are reflected well in the revenues of private Coretime providers during the same period.

+

In order to ensure this, then it is crucial that Instantaneous Coretime, once purchased, cannot be held indefinitely prior to eventual use since, if this were the case, a nefarious collator could purchase Coretime when cheap and utilize it some time later when expensive and deprive private Coretime providers of their revenue.

+

It must therefore be assumed that Instantaneous Coretime, once purchased, has a definite and short "shelf-life", after which it becomes unusable. This incentivizes collators to avoid purchasing Coretime unless they expect to utilize it imminently and thus helps create an efficient market-feedback mechanism whereby a higher price will actually result in material revenues for private Coretime providers who contribute to the pool of Cores available to service Instantaneous Coretime purchases.

+

Notes on Economics

+

The specific pricing mechanisms are out of scope for the present proposal. Proposals on economics should be properly described and discussed in another RFC. However, for the sake of completeness, I provide some basic illustration of how price setting could potentially work.

+

Bulk Price Progression

+

The present proposal assumes the existence of a price-setting mechanism which takes into account several parameters:

+
    +
  • OLD_PRICE: The price of the previous sale.
  • +
  • BULK_TARGET: the target number of cores to be purchased as Bulk Coretime Regions or renewed during the previous sale.
  • +
  • BULK_LIMIT: the maximum number of cores which could have been purchased/renewed during the previous sale.
  • +
  • CORES_SOLD: the actual number of cores purchased/renewed in the previous sale.
  • +
  • SELLOUT_PRICE: the price at which the most recent Bulk Coretime was purchased (not renewed) prior to selling more cores than BULK_TARGET (or immediately after, if none were purchased before). This may not have a value if no Bulk Coretime was purchased.
  • +
+

In general we would expect the price to increase the closer CORES_SOLD gets to BULK_LIMIT and to decrease the closer it gets to zero. If it is exactly equal to BULK_TARGET, then we would expect the price to remain the same.

+

In the edge case that no cores were purchased yet more cores were sold (through renewals) than the target, then we would also avoid altering the price.

+

A simple example of this would be the formula:

+
IF SELLOUT_PRICE == NULL AND CORES_SOLD > BULK_TARGET THEN
+    RETURN OLD_PRICE
+END IF
+EFFECTIVE_PRICE := IF CORES_SOLD > BULK_TARGET THEN
+    SELLOUT_PRICE
+ELSE
+    OLD_PRICE
+END IF
+NEW_PRICE := IF CORES_SOLD < BULK_TARGET THEN
+    EFFECTIVE_PRICE * MAX(CORES_SOLD, 1) / BULK_TARGET
+ELSE
+    EFFECTIVE_PRICE + EFFECTIVE_PRICE *
+        (CORES_SOLD - BULK_TARGET) / (BULK_LIMIT - BULK_TARGET)
+END IF
+
+

This exists only as a trivial example to demonstrate a basic solution exists, and should not be intended as a concrete proposal.

+

Intra-Leadin Price-decrease

+

During the Leadin Period of a sale, the effective price starts higher than the Sale Price and falls to end at the Sale Price at the end of the Leadin Period. The price can thus be defined as a simple factor above one on which the Sale Price is multiplied. A function which returns this factor would accept a factor between zero and one specifying the portion of the Leadin Period which has passed.

+

Thus we assume SALE_PRICE, then we can define PRICE as:

+
PRICE := SALE_PRICE * FACTOR((NOW - LEADIN_BEGIN) / LEADIN_PERIOD)
+
+

We can define a very simple progression where the price decreases monotonically from double the Sale Price at the beginning of the Leadin Period.

+
FACTOR(T) := 2 - T
+
+

Parameter Values

+

Parameters are either suggested or specified. If suggested, it is non-binding and the proposal should not be judged on the value since other RFCs and/or the governance mechanism of Polkadot is expected to specify/maintain it. If specified, then the proposal should be judged on the merit of the value as-is.

+
+ + + + + + + +
Name | Value | Status
BULK_PERIOD | 28 * DAYS | specified
INTERLUDE_PERIOD | 7 * DAYS | specified
LEADIN_PERIOD | 7 * DAYS | specified
TIMESLICE | 8 * MINUTES | specified
BULK_TARGET | 30 | suggested
BULK_LIMIT | 45 | suggested
RENEWAL_PRICE_CAP | Perbill::from_percent(2) | suggested
+
+

Instantaneous Price Progression

+

This proposal assumes the existence of a Relay-chain-based price-setting mechanism for the Instantaneous Coretime Market which alters from block to block, taking into account several parameters: the last price, the size of the Instantaneous Coretime Pool (in terms of cores per Relay-chain block) and the amount of Instantaneous Coretime waiting for processing (in terms of Core-blocks queued).

+

The ideal situation is to have the size of the Instantaneous Coretime Pool be equal to some factor of the Instantaneous Coretime waiting. This allows all Instantaneous Coretime sales to be processed with some limited latency while giving limited flexibility over ordering to the Relay-chain apparatus which is needed for efficient operation.

+

If we set a factor of three, and thus aim to retain a queue of Instantaneous Coretime Sales which can be processed within three Relay-chain blocks, then we would increase the price if the queue goes above three times the amount of cores available, and decrease if it goes under.

+

Let us assume the values OLD_PRICE, FACTOR, QUEUE_SIZE and POOL_SIZE. A simple definition of the NEW_PRICE would be thus:

+
NEW_PRICE := IF QUEUE_SIZE < POOL_SIZE * FACTOR THEN
+    OLD_PRICE * 0.95
+ELSE
+    OLD_PRICE / 0.95
+END IF
+
+

This exists only as a trivial example to demonstrate a basic solution exists, and should not be intended as a concrete proposal.

+

Notes on Types

+

This exists only as a short illustration of a potential technical implementation and should not be treated as anything more.

+

Regions

+

This data schema achieves a number of goals:

+
    +
  • Coretime can be individually traded at a level of a single usage of a single core.
  • +
  • Coretime Regions, of arbitrary span and up to 1/80th interlacing can be exposed as NFTs and exchanged.
  • +
  • Any Coretime Region can be contributed to the Instantaneous Coretime Pool.
  • +
  • Unlimited number of individual Coretime contributors to the Instantaneous Coretime Pool. (Effectively limited only in number of cores and interlacing level; with current values this would allow 80,000 individual payees per timeslice).
  • +
  • All keys are self-describing.
  • +
  • Workload to communicate core (re-)assignments is well-bounded and low in weight.
  • +
  • All mandatory bookkeeping workload is well-bounded in weight.
  • +
+
#![allow(unused)]
+fn main() {
+type Timeslice = u32; // 80 block amounts.
+type CoreIndex = u16;
+type CoreMask = [u8; 10]; // 80-bit bitmap.
+
+// 128-bit (16 bytes)
+struct RegionId {
+    begin: Timeslice,
+    core: CoreIndex,
+    mask: CoreMask,
+}
+// 296-bit (37 bytes)
+struct RegionRecord {
+    end: Timeslice,
+    owner: AccountId,
+}
+
+map Regions = Map<RegionId, RegionRecord>;
+
+// 40-bit (5 bytes). Could be 32-bit with a more specialised type.
+enum CoreTask {
+    Off,
+    Assigned { target: TaskId },
+    InstaPool,
+}
+// 120-bit (15 bytes). Could be 14 bytes with a specialised 32-bit `CoreTask`.
+struct ScheduleItem {
+    mask: CoreMask, // 80 bit
+    task: CoreTask, // 40 bit
+}
+
+/// The work we plan on having each core do at a particular time in the future.
+type Workplan = Map<(Timeslice, CoreIndex), BoundedVec<ScheduleItem, 80>>;
+/// The current workload of each core. This gets updated with workplan as timeslices pass.
+type Workload = Map<CoreIndex, BoundedVec<ScheduleItem, 80>>;
+
+enum Contributor {
+    System,
+    Private(AccountId),
+}
+
+struct ContributionRecord {
+    begin: Timeslice,
+    end: Timeslice,
+    core: CoreIndex,
+    mask: CoreMask,
+    payee: Contributor,
+}
+type InstaPoolContribution = Map<ContributionRecord, ()>;
+
+type SignedTotalMaskBits = i32; // signed: InstaPoolIo records both additions and removals
+type InstaPoolIo = Map<Timeslice, SignedTotalMaskBits>;
+
+type PoolSize = Value<TotalMaskBits>;
+
+/// Counter for the total CoreMask which could be dedicated to a pool. `u32` so we don't ever get
+/// an overflow.
+type TotalMaskBits = u32;
+struct InstaPoolHistoryRecord {
+    total_contributions: TotalMaskBits,
+    maybe_payout: Option<Balance>,
+}
+/// Total InstaPool rewards for each Timeslice and the number of core Mask which contributed.
+type InstaPoolHistory = Map<Timeslice, InstaPoolHistoryRecord>;
+}
+

CoreMask tracks unique "parts" of a single core. It is used with interlacing in order to give a unique identifier to each component of any possible interlacing configuration of a core, allowing for simple self-describing keys for all core ownership and allocation information. It also allows for each core's workload to be tracked and updated progressively, keeping ongoing compute costs well-bounded and low.

+

Regions are issued into the Regions map and can be transferred, partitioned and interlaced as the owner desires. Regions can only be tasked if they begin after the current scheduling deadline (if they have missed this, then the region can be auto-trimmed until it is).

+

Once tasked, they are removed from there and a record is placed in Workplan. In addition, if they are contributed to the Instantaneous Coretime Pool, then an entry is placed in InstaPoolContribution and InstaPoolIo.

+

Each timeslice, InstaPoolIo is used to update the current value of PoolSize. A new entry in InstaPoolHistory is inserted, with the total_contributions field of InstaPoolHistoryRecord being informed by the PoolSize value. Each core has its Workload mutated according to its Workplan for the upcoming timeslice.

+

When Instantaneous Coretime Market Revenues are reported for a particular timeslice from the Relay-chain, this information gets placed in the maybe_payout field of the relevant record of InstaPoolHistory.

+

Payments can be requested for any records in InstaPoolContribution whose begin is the key for a value in InstaPoolHistory whose maybe_payout is Some. In this case, the total_contributions is reduced by the ContributionRecord's mask and a pro rata amount is paid. The ContributionRecord is mutated by incrementing begin, or removed if begin becomes equal to end.

+
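A rough sketch of that payout step, building on the illustrative types from the schema above (Balance is assumed to be u128; rounding behaviour and removal of exhausted records are simplified and left to the caller):

```rust
type Balance = u128;

/// Pay out one timeslice's revenue share to a contributor. `record` is the contributor's
/// ContributionRecord and `history` is the InstaPoolHistoryRecord keyed by `record.begin`.
fn claim_one_timeslice(
    record: &mut ContributionRecord,
    history: &mut InstaPoolHistoryRecord,
) -> Option<Balance> {
    let pot = history.maybe_payout?;
    // Number of Core Mask bits this contributor provided for the timeslice.
    let contributed: u32 = record.mask.iter().map(|b| b.count_ones()).sum();
    let share = pot * contributed as Balance / history.total_contributions as Balance;
    history.total_contributions -= contributed;
    // Advance to the next timeslice; the caller removes the record once begin equals end.
    record.begin += 1;
    Some(share)
}
```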

Example:

+
#![allow(unused)]
+fn main() {
+// Simple example with a `u16` `CoreMask` and bulk sold in 100 timeslices.
+Regions:
+{ core: 0u16, begin: 100, mask: 0b1111_1111_1111_1111u16 } => { end: 200u32, owner: Alice };
+// First split @ 50
+Regions:
+{ core: 0u16, begin: 100, mask: 0b1111_1111_1111_1111u16 } => { end: 150u32, owner: Alice };
+{ core: 0u16, begin: 150, mask: 0b1111_1111_1111_1111u16 } => { end: 200u32, owner: Alice };
+// Share half of first 50 blocks
+Regions:
+{ core: 0u16, begin: 100, mask: 0b1111_1111_0000_0000u16 } => { end: 150u32, owner: Alice };
+{ core: 0u16, begin: 100, mask: 0b0000_0000_1111_1111u16 } => { end: 150u32, owner: Alice };
+{ core: 0u16, begin: 150, mask: 0b1111_1111_1111_1111u16 } => { end: 200u32, owner: Alice };
+// Sell half of them to Bob
+Regions:
+{ core: 0u16, begin: 100, mask: 0b1111_1111_0000_0000u16 } => { end: 150u32, owner: Alice };
+{ core: 0u16, begin: 100, mask: 0b0000_0000_1111_1111u16 } => { end: 150u32, owner: Bob };
+{ core: 0u16, begin: 150, mask: 0b1111_1111_1111_1111u16 } => { end: 200u32, owner: Alice };
+// Bob splits first 10 and assigns them to himself.
+Regions:
+{ core: 0u16, begin: 100, mask: 0b1111_1111_0000_0000u16 } => { end: 150u32, owner: Alice };
+{ core: 0u16, begin: 100, mask: 0b0000_0000_1111_1111u16 } => { end: 110u32, owner: Bob };
+{ core: 0u16, begin: 110, mask: 0b0000_0000_1111_1111u16 } => { end: 150u32, owner: Bob };
+{ core: 0u16, begin: 150, mask: 0b1111_1111_1111_1111u16 } => { end: 200u32, owner: Alice };
+// Bob shares first 10 3 ways and sells smaller shares to Charlie and Dave
+Regions:
+{ core: 0u16, begin: 100, mask: 0b1111_1111_0000_0000u16 } => { end: 150u32, owner: Alice };
+{ core: 0u16, begin: 100, mask: 0b0000_0000_1100_0000u16 } => { end: 110u32, owner: Charlie };
+{ core: 0u16, begin: 100, mask: 0b0000_0000_0011_0000u16 } => { end: 110u32, owner: Dave };
+{ core: 0u16, begin: 100, mask: 0b0000_0000_0000_1111u16 } => { end: 110u32, owner: Bob };
+{ core: 0u16, begin: 110, mask: 0b0000_0000_1111_1111u16 } => { end: 150u32, owner: Bob };
+{ core: 0u16, begin: 150, mask: 0b1111_1111_1111_1111u16 } => { end: 200u32, owner: Alice };
+// Bob assigns to his para B, Charlie and Dave assign to their paras C and D; Alice assigns first 50 to A
+Regions:
+{ core: 0u16, begin: 150, mask: 0b1111_1111_1111_1111u16 } => { end: 200u32, owner: Alice };
+Workplan:
+(100, 0) => vec![
+    { mask: 0b1111_1111_0000_0000u16, task: Assigned(A) },
+    { mask: 0b0000_0000_1100_0000u16, task: Assigned(C) },
+    { mask: 0b0000_0000_0011_0000u16, task: Assigned(D) },
+    { mask: 0b0000_0000_0000_1111u16, task: Assigned(B) },
+]
+(110, 0) => vec![{ mask: 0b0000_0000_1111_1111u16, task: Assigned(B) }]
+// Alice assigns her remaining 50 timeslices to the InstaPool paying herself:
+Regions: (empty)
+Workplan:
+(100, 0) => vec![
+    { mask: 0b1111_1111_0000_0000u16, task: Assigned(A) },
+    { mask: 0b0000_0000_1100_0000u16, task: Assigned(C) },
+    { mask: 0b0000_0000_0011_0000u16, task: Assigned(D) },
+    { mask: 0b0000_0000_0000_1111u16, task: Assigned(B) },
+]
+(110, 0) => vec![{ mask: 0b0000_0000_1111_1111u16, task: Assigned(B) }]
+(150, 0) => vec![{ mask: 0b1111_1111_1111_1111u16, task: InstaPool }]
+InstaPoolContribution:
+{ begin: 150, end: 200, core: 0, mask: 0b1111_1111_1111_1111u16, payee: Alice }
+InstaPoolIo:
+150 => 16
+200 => -16
+// Actual notifications to relay chain.
+// Assumes:
+// - Timeslice is 10 blocks.
+// - Timeslice 0 begins at block #1000.
+// - Relay needs 10 blocks notice of change.
+//
+Workload: 0 => vec![]
+PoolSize: 0
+
+// Block 990:
+Relay <= assign_core(core: 0u16, begin: 1000, assignment: vec![(A, 8), (C, 2), (D, 2), (B, 4)])
+Workload: 0 => vec![
+    { mask: 0b1111_1111_0000_0000u16, task: Assigned(A) },
+    { mask: 0b0000_0000_1100_0000u16, task: Assigned(C) },
+    { mask: 0b0000_0000_0011_0000u16, task: Assigned(D) },
+    { mask: 0b0000_0000_0000_1111u16, task: Assigned(B) },
+]
+PoolSize: 0
+
+// Block 1090:
+Relay <= assign_core(core: 0u16, begin: 1100, assignment: vec![(A, 8), (B, 8)])
+Workload: 0 => vec![
+    { mask: 0b1111_1111_0000_0000u16, task: Assigned(A) },
+    { mask: 0b0000_0000_1111_1111u16, task: Assigned(B) },
+]
+PoolSize: 0
+
+// Block 1490:
+Relay <= assign_core(core: 0u16, begin: 1500, assignment: vec![(Pool, 16)])
+Workload: 0 => vec![
+    { mask: 0b1111_1111_1111_1111u16, task: InstaPool },
+]
+PoolSize: 16
+InstaPoolIo:
+200 => -16
+InstaPoolHistory:
+150 => { total_contributions: 16, maybe_payout: None }
+
+// Sometime after block 1500:
+InstaPoolHistory:
+150 => { total_contributions: 16, maybe_payout: Some(P) }
+
+// Sometime after block 1990:
+InstaPoolIo: (empty)
+PoolSize: 0
+InstaPoolHistory:
+150 => { total_contributions: 16, maybe_payout: Some(P0) }
+151 => { total_contributions: 16, maybe_payout: Some(P1) }
+152 => { total_contributions: 16, maybe_payout: Some(P2) }
+...
+199 => { total_contributions: 16, maybe_payout: Some(P49) }
+
+// Sometime later still Alice calls for a payout
+InstaPoolContribution: (empty)
+InstaPoolHistory: (empty)
+// Alice gets rewarded P0 + P1 + ... P49.
+}
+

Rollout

+

Rollout of this proposal comes in several phases:

+
    +
  1. Finalise the specifics of implementation; this may be done through a design document or through a well-documented prototype implementation.
  2. +
  3. Implement the design, including all associated aspects such as unit tests, benchmarks and any support software needed.
  4. +
  5. If any new parachain is required, launch of this.
  6. +
  7. Formal audit of the implementation and any manual testing.
  8. +
  9. Announcement to the various stakeholders of the imminent changes.
  10. +
  11. Software integration and release.
  12. +
  13. Governance upgrade proposal(s).
  14. +
  15. Monitoring of the upgrade process.
  16. +
+

Performance, Ergonomics and Compatibility

+

No specific considerations.

+

Parachains already deployed into the Polkadot UC must have a clear plan of action to migrate to an agile Coretime market.

+

While this proposal does not introduce documentable features per se, adequate documentation must be provided to potential purchasers of Polkadot Coretime. This SHOULD include any alterations to the Polkadot-SDK software collection.

+

Testing, Security and Privacy

+

Regular testing through unit tests, integration tests, manual testnet tests, zombie-net tests and fuzzing SHOULD be conducted.

+

A regular security review SHOULD be conducted prior to deployment through a review by the Web3 Foundation economic research group.

+

Any final implementation MUST pass a professional external security audit.

+

The proposal introduces no new privacy concerns.

+ +

RFC-3 proposes a means of implementing the high-level allocations within the Relay-chain.

+

RFC-5 proposes the API for interacting with Relay-chain.

+

Additional work should specify the interface for the instantaneous market revenue so that the Coretime-chain can ensure Bulk Coretime placed in the instantaneous market is properly compensated.

+

Drawbacks, Alternatives and Unknowns

+

Unknowns include the economic and resource parameterisations:

+
    +
  • The initial price of Bulk Coretime.
  • +
  • The price-change algorithm between Bulk Coretime sales.
  • +
  • The price increase per Bulk Coretime period for renewals.
  • +
  • The price decrease graph in the Leadin period for Bulk Coretime sales.
  • +
  • The initial price of Instantaneous Coretime.
  • +
  • The price-change algorithm for Instantaneous Coretime sales.
  • +
  • The percentage of cores to be sold as Bulk Coretime.
  • +
  • The fate of revenue collected.
  • +
+

Prior Art and References

+

Robert Habermeier initially wrote on the subject of Polkadot's blockspace-centric model in the article Polkadot Blockspace over Blockchains. While not going into details, the article served as an early reframing piece for moving beyond one-slot-per-chain models and building out secondary market infrastructure for resource allocation.

+ +
+ + diff --git a/mdbook/text/0005-coretime-interface.html b/mdbook/text/0005-coretime-interface.html new file mode 100644 index 000000000..11e9a97f5 --- /dev/null +++ b/mdbook/text/0005-coretime-interface.html @@ -0,0 +1,345 @@ + + + + + + + 0005 - Polkadot Fellowship RFCs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

RFC-5: Coretime Interface

+
+ + + +
Start Date: 06 July 2023
Description: Interface for manipulating the usage of cores on the Polkadot Ubiquitous Computer.
Authors: Gavin Wood, Robert Habermeier
+
+

Summary

+

In the Agile Coretime model of the Polkadot Ubiquitous Computer, as proposed in RFC-1 and RFC-3, it is necessary for the allocating parachain (envisioned to be one or more pallets on a specialised Brokerage System Chain) to communicate the core assignments to the Relay-chain, which is responsible for ensuring those assignments are properly enacted.

+

This is a proposal for the interface which will exist around the Relay-chain in order to communicate this information and instructions.

+

Motivation

+

The background motivation for this interface is splitting out coretime allocation functions and secondary markets from the Relay-chain onto System parachains. A well-understood and general interface is necessary for ensuring the Relay-chain receives coretime allocation instructions from one or more System chains without introducing dependencies on the implementation details of either side.

+

Requirements

+
    +
  • The interface MUST allow the Relay-chain to be scheduled on a low-latency basis.
  • +
  • Individual cores MUST be schedulable, both in full to a single task (a ParaId or the Instantaneous Coretime Pool) or to many unique tasks in differing ratios.
  • +
  • Typical usage of the interface SHOULD NOT overload the VMP message system.
  • +
  • The interface MUST allow for the allocating chain to be notified of all accounting information relevant for making accurate rewards for contributing to the Instantaneous Coretime Pool.
  • +
  • The interface MUST allow for Instantaneous Coretime Market Credits to be communicated.
  • +
  • The interface MUST allow for the allocating chain to instruct changes to the number of cores which it is able to allocate.
  • +
  • The interface MUST allow for the allocating chain to be notified of changes to the number of cores which are able to be allocated by the allocating chain.
  • +
+

Stakeholders

+

Primary stakeholder sets are:

+
    +
  • Developers of the Relay-chain core-management logic.
  • +
  • Developers of the Brokerage System Chain and its pallets.
  • +
+

Socialization:

+

The content of this RFC was discussed in the Polkadot Fellows channel.

+

Explanation

+

The interface has two sections: The messages which the Relay-chain is able to receive from the allocating parachain (the UMP message types), and messages which the Relay-chain is able to send to the allocating parachain (the DMP message types). These messages are expected to be able to be implemented in a well-known pallet and called with the XCM Transact instruction.

+

Future work may include these messages being introduced into the XCM standard.

+

UMP Message Types

+

request_core_count

+

Prototype:

+
fn request_core_count(
+    count: u16,
+)
+
+

Requests the Relay-chain to alter the number of schedulable cores to count. Under normal operation, the Relay-chain SHOULD send a notify_core_count(count) message back.

+

request_revenue_info_at

+

Prototype:

+
fn request_revenue_at(
+    when: BlockNumber,
+)
+
+

Requests that the Relay-chain send a notify_revenue message back at or soon after Relay-chain block number when, with its until parameter equal to when.

+

The period into the past for which when is allowed to be may be limited; if so, the limit should be understood on a channel outside of this proposal. In the case that the request cannot be serviced because when is too old a block, then a notify_revenue message must still be returned, but its revenue field may be None.

+

credit_account

+

Prototype:

+
fn credit_account(
+    who: AccountId,
+    amount: Balance,
+)
+
+

Instructs the Relay-chain to add the amount of DOT to the Instantaneous Coretime Market Credit account of who.

+

It is expected that Instantaneous Coretime Market Credit on the Relay-chain is NOT transferrable and only redeemable when used to assign cores in the Instantaneous Coretime Pool.

+

assign_core

+

Prototype:

+
type PartsOf57600 = u16;
+enum CoreAssignment {
+    InstantaneousPool,
+    Task(ParaId),
+}
+fn assign_core(
+    core: CoreIndex,
+    begin: BlockNumber,
+    assignment: Vec<(CoreAssignment, PartsOf57600)>,
+    end_hint: Option<BlockNumber>,
+)
+
+

Requirements:

+
assert!(core < core_count);
+assert!(assignment.iter().map(|x| x.0).is_sorted());
+assert_eq!(assignment.iter().map(|x| x.0).unique().count(), assignment.len());
+assert_eq!(assignment.iter().map(|x| x.1).sum(), 57600);
+
+

Where:

+
    +
  • core_count is assumed to be the sole parameter in the last received notify_core_count message.
  • +
+

Instructs the Relay-chain to ensure that the core indexed as core is utilised for a number of assignments in specific ratios given by assignment starting as soon after begin as possible. Core assignments take the form of a CoreAssignment value which can either task the core to a ParaId value or indicate that the core should be used in the Instantaneous Pool. Each assignment comes with a ratio value, represented as the numerator of the fraction with a denominator of 57,600.

+

If end_hint is Some and the inner is greater than the current block number, then the Relay-chain should optimize in the expectation of receiving a new assign_core(core, ...) message at or prior to the block number of the inner value. Specific functionality should remain unchanged regardless of the end_hint value.

+

On the choice of denominator: 57,600 is a very composite number which factors into: 2 ** 8, 3 ** 2, 5 ** 2. By using it as the denominator we allow for various useful fractions to be perfectly represented including thirds, quarters, fifths, tenths, 80ths, percent and 256ths.

+
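For illustration, some commonly useful ratios expressed as PartsOf57600 values (plain arithmetic, not part of the interface):

```rust
type PartsOf57600 = u16;

const FULL: PartsOf57600 = 57_600;
const THIRD: PartsOf57600 = 57_600 / 3;         // 19_200
const QUARTER: PartsOf57600 = 57_600 / 4;       // 14_400
const FIFTH: PartsOf57600 = 57_600 / 5;         // 11_520
const TENTH: PartsOf57600 = 57_600 / 10;        //  5_760
const EIGHTIETH: PartsOf57600 = 57_600 / 80;    //    720
const ONE_PERCENT: PartsOf57600 = 57_600 / 100; //    576
const ONE_256TH: PartsOf57600 = 57_600 / 256;   //    225
```

For example, splitting a core evenly between the Instantaneous Pool and a task would use an assignment of vec![(InstantaneousPool, 28_800), (Task(para), 28_800)].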

DMP Message Types

+

notify_core_count

+

Prototype:

+
fn notify_core_count(
+    count: u16,
+)
+
+

Indicate that from this block onwards, the range of acceptable values of the core parameter of the assign_core message is [0, count). assign_core will be a no-op if provided with a value for core outside of this range.

+

notify_revenue_info

+

Prototype:

+
fn notify_revenue_info(
+    until: BlockNumber,
+    revenue: Option<Balance>,
+)
+
+

Provide the amount of revenue accumulated from Instantaneous Coretime Sales from Relay-chain block number last_until to until, not including until itself. last_until is defined as being the until argument of the last notify_revenue message sent, or zero for the first call. If revenue is None, this indicates that the information is no longer available.

+
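A hypothetical sequence (illustrative block numbers and amounts only) showing how successive notifications cover contiguous, non-overlapping ranges:

```rust
// notify_revenue_info(until: 1_000, revenue: Some(a)) // revenue for blocks [0, 1_000)
// notify_revenue_info(until: 2_000, revenue: Some(b)) // revenue for blocks [1_000, 2_000)
// notify_revenue_info(until: 3_000, revenue: None)    // data for [2_000, 3_000) no longer available
```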

This explicitly disregards the possibility of multiple parachains requesting and being notified of revenue information. The Relay-chain must be configured to ensure that only a single revenue information destination exists.

+

Realistic Limits of the Usage

+

For request_revenue_info, a successful request should be possible if when is no less than the Relay-chain block number on arrival of the message less 100,000.

+

For assign_core, a successful request should be possible if begin is no less than the Relay-chain block number on arrival of the message plus 10 and workload contains no more than 100 items.

+
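Expressed in the same assertion style used for assign_core above (illustrative only; arrival is assumed to denote the Relay-chain block number at which the message arrives):

```rust
// For request_revenue_info_at:
assert!(when >= arrival.saturating_sub(100_000));

// For assign_core:
assert!(begin >= arrival + 10);
assert!(assignment.len() <= 100);
```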

Performance, Ergonomics and Compatibility

+

No specific considerations.

+

Testing, Security and Privacy

+

Standard Polkadot testing and security auditing applies.

+

The proposal introduces no new privacy concerns.

+ +

RFC-1 proposes a means of determining allocation of Coretime using this interface.

+

RFC-3 proposes a means of implementing the high-level allocations within the Relay-chain.

+

Drawbacks, Alternatives and Unknowns

+

None at present.

+

Prior Art and References

+

None.

+ +
+ + diff --git a/mdbook/text/0007-system-collator-selection.html b/mdbook/text/0007-system-collator-selection.html new file mode 100644 index 000000000..e6ea8cf48 --- /dev/null +++ b/mdbook/text/0007-system-collator-selection.html @@ -0,0 +1,358 @@ + + + + + + + 0007 - Polkadot Fellowship RFCs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

RFC-0007: System Collator Selection

+
+ + + +
Start Date: 07 July 2023
Description: Mechanism for selecting collators of system chains.
Authors: Joe Petrowski
+
+

Summary

+

As core functionality moves from the Relay Chain into system chains, reliance on the liveness of these chains for the use of the network increases. It is not economically scalable, nor necessary from a game-theoretic perspective, to pay collators large rewards. This RFC proposes a mechanism -- part technical and part social -- for ensuring reliable collator sets that are resilient to attempts to stop any subsystem of the Polkadot protocol.

+

Motivation

+

In order to guarantee access to Polkadot's system, the collators on its system chains must propose blocks (provide liveness) and allow all transactions to eventually be included. That is, some collators may censor transactions, but there must exist one collator in the set who will include a given transaction. In fact, all collators may censor varying subsets of transactions, but as long as no transaction is in the intersection of every subset, it will eventually be included. The objective of this RFC is to propose a mechanism to select such a set on each system chain.

+

While the network as a whole uses staking (and inflationary rewards) to attract validators, +collators face different challenges in scale and have lower security assumptions than validators. +Regarding scale, there exist many system chains, and it is economically expensive to pay collators +a premium. Likewise, any staked DOT for collation is not staked for validation. Since collator +sets do not need to meet Byzantine Fault Tolerance criteria, staking as the primary mechanism for +collator selection would remove stake that is securing BFT assumptions, making the network less +secure.

+

Another problem with economic scalability relates to the increasing number of system chains, and +corresponding increase in need for collators (i.e., increase in collator slots). "Good" (highly +available, non-censoring) collators will not want to compete in elections on many chains when they +could use their resources to compete in the more profitable validator election. Such dilution +decreases the required bond on each chain, leaving them vulnerable to takeover by hostile +collator groups.

+

This RFC proposes a system whereby collation is primarily an infrastructure service, with the on-chain Treasury reimbursing costs of semi-trusted node operators, referred to as "Invulnerables". The system need not trust the individual operators, only that as a set they would be resilient to coordinated attempts to halt a single chain or to censor a particular subset of transactions.

+

In the case that users do not trust this set, this RFC also proposes that each chain always have +available collator positions that can be acquired by anyone by placing a bond.

+

Requirements

+
    +
  • System MUST have at least one valid collator for every chain.
  • +
  • System MUST allow anyone to become a collator, provided they reserve/hold enough DOT.
  • +
  • System SHOULD select a set of collators with reasonable expectation that the set will not collude +to censor any subset of transactions.
  • +
  • Collators selected by governance SHOULD have a reasonable expectation that the Treasury will +reimburse their operating costs.
  • +
+

Stakeholders

+
    +
  • Infrastructure providers (people who run validator/collator nodes)
  • +
  • Polkadot Treasury
  • +
+

Explanation

+

This protocol builds on the existing Collator Selection pallet and its notion of Invulnerables. Invulnerables are collators (identified by their AccountIds) who will be selected as part of the collator set every session. Operations relating to the management of the Invulnerables are done through privileged, governance origins. The implementation should maintain an API for adding and removing Invulnerable collators.

+

In addition to Invulnerables, there are also open slots for "Candidates". Anyone can register as a +Candidate by placing a fixed bond. However, with a fixed bond and fixed number of slots, there is +an obvious selection problem: The slots fill up without any logic to replace their occupants.

+

This RFC proposes that the collator selection protocol allow Candidates to increase (and decrease) their individual bonds, sort the Candidates according to bond, and select the top N Candidates. The selection and changeover should be coordinated by the session manager.

+

A FRAME pallet already exists for sorting ("bagging") "top N" groups, the +Bags List pallet. +This pallet's SortedListProvider should be integrated into the session manager of the Collator +Selection pallet.

+
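As a minimal sketch of the intended outcome (placeholder types; not the actual Collator Selection or Bags List implementation): Invulnerables always collate, and the remaining slots go to the highest-bonded Candidates.

```rust
type AccountId = u64; // placeholder for illustration
type Balance = u128;  // placeholder for illustration

fn select_collators(
    invulnerables: Vec<AccountId>,
    mut candidates: Vec<(AccountId, Balance)>, // (who, bond)
    candidate_slots: usize,
) -> Vec<AccountId> {
    // Highest bond first; tie-breaking is left unspecified here.
    candidates.sort_by(|a, b| b.1.cmp(&a.1));
    invulnerables
        .into_iter()
        .chain(candidates.into_iter().take(candidate_slots).map(|(who, _)| who))
        .collect()
}
```

With the set sizes suggested below, candidate_slots would be five alongside fifteen Invulnerables.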

Despite the lack of apparent economic incentives (i.e., inflation), several reasons exist why one +may want to bond funds to participate in the Candidates election, for example:

+
    +
  • They want to build credibility to be selected as Invulnerable;
  • +
  • They want to ensure availability of an application, e.g. a stablecoin issuer might run a collator +on Asset Hub to ensure transactions in its asset are included in blocks;
  • +
  • They fear censorship themselves, e.g. a voter might think their votes are being censored from +governance, so they run a collator on the governance chain to include their votes.
  • +
+

Unlike the fixed-bond mechanism that fills up its Candidates, the election mechanism ensures that +anyone can join the collator set by placing the Nth highest bond.

+

Set Size

+

In order to achieve the requirements listed under Motivation, it is reasonable to have +approximately:

+
    +
  • 20 collators per system chain,
  • +
  • of which 15 are Invulnerable, and
  • +
  • five are elected by bond.
  • +
+

Drawbacks

+

The primary drawback is a reliance on governance for continued treasury funding of infrastructure +costs for Invulnerable collators.

+

Testing, Security, and Privacy

+

The vast majority of cases can be covered by unit testing. Integration tests should ensure that the Collator Selection UpdateOrigin, which has permission to modify the Invulnerables and desired number of Candidates, can handle updates over XCM from the system's governance location.

+

Performance, Ergonomics, and Compatibility

+

This proposal has very little impact on most users of Polkadot, and should improve the performance +of system chains by reducing the number of missed blocks.

+

Performance

+

As chains have strict PoV size limits, care must be taken in the PoV impact of the session manager. +Appropriate benchmarking and tests should ensure that conservative limits are placed on the number +of Invulnerables and Candidates.

+

Ergonomics

+

The primary group affected is Candidate collators, who, after implementation of this RFC, will need +to compete in a bond-based election rather than a race to claim a Candidate spot.

+

Compatibility

+

This RFC is compatible with the existing implementation and can be handled via upgrades and +migration.

+

Prior Art and References

+

Written Discussions

+ +

Prior Feedback and Input From

+
    +
  • Kian Paimani
  • +
  • Jeff Burdges
  • +
  • Rob Habermeier
  • +
  • SR Labs Auditors
  • +
  • Current collators including Paranodes, Stake Plus, Turboflakes, Peter Mensik, SIK, and many more.
  • +
+

Unresolved Questions

+

None at this time.

+ +

There may exist in the future system chains for which this model of collator selection is not +appropriate. These chains should be evaluated on a case-by-case basis.

RFC-0008: Store parachain bootnodes in relay chain DHT

+
+ + + +
Start Date: 2023-07-14
Description: Parachain bootnodes shall register themselves in the DHT of the relay chain
Authors: Pierre Krieger
+
+

Summary

+

The full nodes of the Polkadot peer-to-peer network maintain a distributed hash table (DHT), which is currently used for full nodes discovery and validators discovery purposes.

+

This RFC proposes to extend this DHT to be used to discover full nodes of the parachains of Polkadot.

+

Motivation

+

The maintenance of bootnodes has long been an annoyance for everyone.

+

When a bootnode is newly-deployed or removed, every chain specification must be updated in order to take the update into account. This has led to various non-optimal solutions, such as pulling chain specifications from GitHub repositories. When it comes to RPC nodes, UX developers often have trouble finding up-to-date addresses of parachain RPC nodes. With the ongoing migration from RPC nodes to light clients, similar problems would happen with chain specifications as well.

+

Furthermore, there exist multiple possible variants of a certain chain specification: with the non-raw storage, with the raw storage, with just the genesis trie root hash, with or without checkpoint, etc. All of this creates confusion. Removing the need for parachain developers to be aware of and manage these different versions would be beneficial.

+

Since the PeerId and addresses of bootnodes need to be stable, extra maintenance work is required from the chain maintainers. For example, they need to be extra careful when migrating nodes within their infrastructure. In some situations, bootnodes are put behind domain names, which also requires maintenance work.

+

Because the list of bootnodes in chain specifications is so annoying to modify, the consequence is that the number of bootnodes is rather low (typically between 2 and 15). In order to better resist downtimes and DoS attacks, a better solution would be to use every node of a certain chain as potential bootnode, rather than special-casing some specific nodes.

+

While this RFC doesn't solve these problems for relay chains, it aims at solving it for parachains by storing the list of all the full nodes of a parachain on the relay chain DHT.

+

Assuming that this RFC is implemented, and that light clients are used, deploying a parachain wouldn't require more work than registering it onto the relay chain and starting the collators. There wouldn't be any need for special infrastructure nodes anymore.

+

Stakeholders

+

This RFC has been opened on my own initiative because I think that this is a good technical solution to a usability problem that many people are encountering and that they don't realize can be solved.

+

Explanation

+

The content of this RFC only applies for parachains and parachain nodes that are "Substrate-compatible". It is in no way mandatory for parachains to comply to this RFC.

+

Note that "Substrate-compatible" is very loosely defined as "implements the same mechanisms and networking protocols as Substrate". The author of this RFC believes that "Substrate-compatible" should be very precisely specified, but there is controversy on this topic.

+

While a lot of this RFC concerns the implementation of parachain nodes, it makes use of the resources of the Polkadot chain, and as such it is important to describe them in the Polkadot specification.

+

This RFC adds two mechanisms: a registration in the DHT, and a new networking protocol.

+

DHT provider registration

+

This RFC heavily relies on the functionalities of the Kademlia DHT already in use by Polkadot. +You can find a link to the specification here.

+

Full nodes of a parachain registered on Polkadot should register themselves onto the Polkadot DHT as the providers of a key corresponding to the parachain that they are serving, as described in the Content provider advertisement section of the specification. This uses the ADD_PROVIDER system of libp2p-kademlia.

+

This key is: sha256(concat(scale_compact(para_id), randomness)) where the value of randomness can be found in the randomness field when calling the BabeApi_currentEpoch function. For example, for a para_id equal to 1000, and at the time of writing of this RFC (July 14th 2023 at 09:13 UTC), it is sha256(0xa10f12872447958d50aa7b937b0106561a588e0e2628d33f81b5361b13dbcf8df708), which is equal to 0x483dd8084d50dbbbc962067f216c37b627831d9339f5a6e426a32e3076313d87.
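For illustration, a minimal sketch of this key derivation, assuming the parity-scale-codec and sha2 crates (not normative):

```rust
use parity_scale_codec::{Compact, Encode};
use sha2::{Digest, Sha256};

/// Sketch of the provider key: sha256(concat(scale_compact(para_id), randomness)).
fn provider_key(para_id: u32, epoch_randomness: &[u8; 32]) -> [u8; 32] {
    // scale_compact(1000) encodes to 0xa10f
    let mut data = Compact(para_id).encode();
    data.extend_from_slice(epoch_randomness);
    Sha256::digest(&data).into()
}
```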

+

In order to avoid downtime when the key changes, parachain full nodes should also register themselves as a secondary key that uses a value of randomness equal to the randomness field when calling BabeApi_nextEpoch.

+

Implementers should be aware that their implementation of Kademlia might already hash the key before XOR'ing it. The key is not meant to be hashed twice.

+

The compact SCALE encoding has been chosen in order to avoid problems related to the number of bytes and endianness of the para_id.

+

New networking protocol

+

A new request-response protocol should be added, whose name is /91b171bb158e2d3848fa23a9f1c25182fb8e20313b2c1eb49219da7a70ce90c3/paranode (that hexadecimal number is the genesis hash of the Polkadot chain, and should be adjusted appropriately for Kusama and others).

+

The request consists in a SCALE-compact-encoded para_id. For example, for a para_id equal to 1000, this is 0xa10f.

+

Note that because this is a request-response protocol, the request is always prefixed with its length in bytes. While the body of the request is simply the SCALE-compact-encoded para_id, the data actually sent onto the substream is both the length and body.

+

The response consists in a protobuf struct, defined as:

+
syntax = "proto2";

message Response {
    // Peer ID of the node on the parachain side.
    bytes peer_id = 1;

    // Multiaddresses of the parachain side of the node. The list and format are the same as for the `listenAddrs` field of the `identify` protocol.
    repeated bytes addrs = 2;

    // Genesis hash of the parachain. Used to determine the name of the networking protocol to connect to the parachain. Untrusted.
    bytes genesis_hash = 3;

    // So-called "fork ID" of the parachain. Used to determine the name of the networking protocol to connect to the parachain. Untrusted.
    optional string fork_id = 4;
};
+
+

The maximum size of a response is set to an arbitrary 16kiB. The responding side should make sure to conform to this limit. Given that fork_id is typically very small and that the only variable-length field is addrs, this is easily achieved by limiting the number of addresses.

+

Implementers should be aware that addrs might be very large, and are encouraged to limit the number of addrs to an implementation-defined value.

+

Drawbacks

+

The peer_id and addrs fields are in theory not strictly needed, as the PeerId and addresses could always be equal to the PeerId and addresses of the node being registered as the provider and serving the response. However, the Cumulus implementation currently uses two different networking stacks, one for the parachain and one for the relay chain, using two separate PeerIds and addresses, and as such the PeerId and addresses of the other networking stack must be indicated. Asking them to use only one networking stack wouldn't be feasible in a realistic time frame.

+

The values of the genesis_hash and fork_id fields cannot be verified by the requester and are expected to be unused at the moment. Instead, a client that desires connecting to a parachain is expected to obtain the genesis hash and fork ID of the parachain from the parachain chain specification. These fields are included in the networking protocol nonetheless in case an acceptable solution is found in the future, and in order to allow use cases such as discovering parachains in a not-strictly-trusted way.

+

Testing, Security, and Privacy

+

Because not all nodes want to be used as bootnodes, implementers are encouraged to provide a way to disable this mechanism. However, it is very much encouraged to leave this mechanism on by default for all parachain nodes.

+

This mechanism doesn't add or remove any security by itself, as it relies on existing mechanisms. +However, if the principle of chain specification bootnodes is entirely replaced with the mechanism described in this RFC (which is the objective), then it becomes important whether the mechanism in this RFC can be abused in order to make a parachain unreachable.

+

Due to the way Kademlia works, it would become the responsibility of the 20 Polkadot nodes whose sha256(peer_id) is closest to the key (described in the explanations section) to store the list of bootnodes of each parachain. +Furthermore, when a large number of providers (here, a provider is a bootnode) are registered, only the providers closest to the key are kept, up to a certain implementation-defined limit.

+

For this reason, an attacker can abuse this mechanism by randomly generating libp2p PeerIds until they find the 20 entries closest to the key representing the target parachain. They are then in control of the parachain bootnodes. +Because the key changes periodically and isn't predictable, and assuming that the Polkadot DHT is sufficiently large, it is not realistic for an attack like this to be maintained in the long term.

+

Furthermore, parachain clients are expected to cache a list of known good nodes on their disk. If the mechanism described in this RFC went down, it would only prevent new nodes from accessing the parachain, while clients that have connected before would not be affected.

+

Performance, Ergonomics, and Compatibility

+

Performance

+

The DHT mechanism generally has a low overhead, especially given that publishing providers is done only every 24 hours.

+

Doing a Kademlia iterative query then sending a provider record shouldn't take more than around 50 kiB in total of bandwidth for the parachain bootnode.

+

Assuming 1000 parachain full nodes, the 20 Polkadot full nodes corresponding to a specific parachain will each receive a sudden spike of a few megabytes of networking traffic when the key rotates. Again, this is relatively negligible. If this becomes a problem, one can add a random delay before a parachain full node registers itself to be the provider of the key corresponding to BabeApi_next_epoch.

+

Maybe the biggest uncertainty is the traffic that the 20 Polkadot full nodes will receive from light clients that desire knowing the bootnodes of a parachain. Light clients are generally encouraged to cache the peers that they use between restarts, so they should only query these 20 Polkadot full nodes at their first initialization. If this ever becomes a problem, this value of 20 is an arbitrary constant that can be increased for more redundancy.

+

Ergonomics

+

Irrelevant.

+

Compatibility

+

Irrelevant.

+

Prior Art and References

+

None.

+

Unresolved Questions

+

While it fundamentally doesn't change much to this RFC, using BabeApi_currentEpoch and BabeApi_nextEpoch might be inappropriate. I'm not familiar enough with good practices within the runtime to have an opinion here. Should it be an entirely new pallet?

+ +

It is possible that in the future a client could connect to a parachain without having to rely on a trusted parachain specification.

RFC-0010: Burn Coretime Revenue

+
+ + + +
Start Date: 19.07.2023
Description: Revenue from Coretime sales should be burned
Authors: Jonas Gehrlein
+
+

Summary

+

The Polkadot UC will generate revenue from the sale of available Coretime. The question then arises: how should we handle these revenues? Broadly, there are two reasonable paths – burning the revenue and thereby removing it from total issuance, or diverting it to the Treasury. This Request for Comment (RFC) presents arguments favoring burning as the preferred mechanism for handling revenues from Coretime sales.

+

Motivation

+

How to handle the revenue accrued from Coretime sales is an important economic question that influences the value of DOT and should be properly discussed before deciding for either of the options. Now is the best time to start this discussion.

+

Stakeholders

+

Polkadot DOT token holders.

+

Explanation

+

This RFC discusses potential benefits of burning the revenue accrued from Coretime sales instead of diverting them to Treasury. Here are the following arguments for it.

+

It's in the interest of the Polkadot community to have a consistent and predictable Treasury income, because volatility in the inflow can be damaging, especially in situations when it is insufficient. As such, this RFC operates under the presumption of a steady and sustainable Treasury income flow, which is crucial for the Polkadot community's stability. The assurance of a predictable Treasury income, as outlined in a prior discussion here, or through other equally effective measures, serves as a baseline assumption for this argument.

+

Consequently, we need not concern ourselves with this particular issue here. This naturally begs the question - why should we introduce additional volatility to the Treasury by aligning it with the variable Coretime sales? It's worth noting that Coretime revenues often exhibit an inverse relationship with periods when Treasury spending should ideally be ramped up. During periods of low Coretime utilization (indicated by lower revenue), Treasury should spend more on projects and endeavours to increase the demand for Coretime. This pattern underscores that Coretime sales, by their very nature, are an inconsistent and unpredictable source of funding for the Treasury. Given the importance of maintaining a steady and predictable inflow, it's unnecessary to rely on another volatile mechanism. Some might argue that we could have both: a steady inflow (from inflation) and some added bonus from Coretime sales, but burning the revenue would offer further benefits as described below.

+
    +
  • +

    Balancing Inflation: While DOT as a utility token inherently profits from a (reasonable) net inflation, it also benefits from a deflationary force that functions as a counterbalance to the overall inflation. Right now, the only mechanism on Polkadot that burns fees is the one for underutilized DOT in the Treasury. Finding other, more direct targets for burns makes sense, and the Coretime market is a good option.

    +
  • +
  • +

    Clear incentives: By burning the revenue accrued on Coretime sales, prices paid by buyers are clearly costs. This removes distortion from the market that might arise when the paid tokens occur on some other places within the network. In that case, some actors might have secondary motives of influencing the price of Coretime sales, because they benefit down the line. For example, actors that actively participate in the Coretime sales are likely to also benefit from a higher Treasury balance, because they might frequently request funds for their projects. While those effects might appear far-fetched, they could accumulate. Burning the revenues makes sure that the prices paid are clearly costs to the actors themselves.

    +
  • +
  • +

    Collective Value Accrual: Following the previous argument, burning the revenue also generates some externality, because it reduces the overall issuance of DOT and thereby increases the value of each remaining token. In contrast to the aforementioned argument, this benefits all token holders collectively and equally. Therefore, I'd consider this the preferable option, because burning lets all token holders participate in Polkadot's success as Coretime usage increases.

    +
  • +
RFC-0012: Process for Adding New System Collectives

+
+ + + +
Start Date: 24 July 2023
Description: A process for adding new (and removing existing) system collectives.
Authors: Joe Petrowski
+
+

Summary

+

Since the introduction of the Collectives parachain, many groups have expressed interest in forming +new -- or migrating existing groups into -- on-chain collectives. While adding a new collective is +relatively simple from a technical standpoint, the Fellowship will need to merge new pallets into +the Collectives parachain for each new collective. This RFC proposes a means for the network to +ratify a new collective, thus instructing the Fellowship to instate it in the runtime.

+

Motivation

+

Many groups have expressed interest in representing collectives on-chain. Some of these include:

+
    +
  • Parachain technical fellowship (new)
  • +
  • Fellowship(s) for media, education, and evangelism (new)
  • +
  • Polkadot Ambassador Program (existing)
  • +
  • Anti-Scam Team (existing)
  • +
+

Collectives that form part of the core Polkadot protocol should have a mandate to serve the +Polkadot network. However, as part of the Polkadot protocol, the Fellowship, in its capacity of +maintaining system runtimes, will need to include modules and configurations for each collective.

+

Once a group has developed a value proposition for the Polkadot network, it should have a clear +path to having its collective accepted on-chain as part of the protocol. Acceptance should direct +the Fellowship to include the new collective with a given initial configuration into the runtime. +However, the network, not the Fellowship, should ultimately decide which collectives are in the +interest of the network.

+

Stakeholders

+
    +
  • Polkadot stakeholders who would like to organize on-chain.
  • +
  • Technical Fellowship, in its role of maintaining system runtimes.
  • +
+

Explanation

+

The group that wishes to operate an on-chain collective should publish the following information:

+
    +
  • Charter, including the collective's mandate and how it benefits Polkadot. This would be similar +to the +Fellowship Manifesto.
  • +
  • Seeding recommendation.
  • +
  • Member types, i.e. should members be individuals or organizations.
  • +
  • Member management strategy, i.e. how do members join and get promoted, if applicable.
  • +
  • How much, if at all, members should get paid in salary.
  • +
  • Any special origins this Collective should have outside itself. For example, the Fellowship can whitelist calls for referenda via the WhitelistOrigin.
  • +
+

This information could all be in a single document or, for example, a GitHub repository.

+

After publication, members should seek feedback from the community and Technical Fellowship, and +make any revisions needed. When the collective believes the proposal is ready, they should bring a +remark with the text APPROVE_COLLECTIVE("{collective name}, {commitment}") to a Root origin +referendum. The proposer should provide instructions for generating commitment. The passing of +this referendum would be unequivocal direction to the Fellowship that this collective should be +part of the Polkadot runtime.
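Purely for illustration, the remark text could be assembled as below; the commitment value is whatever the proposer's instructions produce, and this helper is hypothetical:

```rust
/// Hypothetical helper producing the remark text described above.
fn approve_collective_remark(collective_name: &str, commitment: &str) -> String {
    format!("APPROVE_COLLECTIVE(\"{collective_name}, {commitment}\")")
}
```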

+

Note: There is no need for a REJECT referendum. Proposals that have not been approved are simply +not included in the runtime.

+

Removing Collectives

+

If someone believes that an existing collective is not acting in the interest of the network or in +accordance with its charter, they should likewise have a means to instruct the Fellowship to +remove that collective from Polkadot.

+

An on-chain remark from the Root origin with the text +REMOVE_COLLECTIVE("{collective name}, {para ID}, [{pallet indices}]") would instruct the +Fellowship to remove the collective via the listed pallet indices on paraId. Should someone want +to construct such a remark, they should have a reasonable expectation that a member of the +Fellowship would help them identify the pallet indices associated with a given collective, whether +or not the Fellowship member agrees with removal.

+

Collective removal may also come with other governance calls, for example voiding any scheduled +Treasury spends that would fund the given collective.

+

Drawbacks

+

Passing a Root origin referendum is slow. However, given the network's investment (in terms of code +maintenance and salaries) in a new collective, this is an appropriate step.

+

Testing, Security, and Privacy

+

No impacts.

+

Performance, Ergonomics, and Compatibility

+

Generally all new collectives will be in the Collectives parachain. Thus, performance impacts +should strictly be limited to this parachain and not affect others. As the majority of logic for +collectives is generalized and reusable, we expect most collectives to be instances of similar +subsets of modules. That is, new collectives should generally be compatible with UIs and other +services that provide collective-related functionality, with little modifications to support new +ones.

+

Prior Art and References

+

The launch of the Technical Fellowship, see the +initial forum post.

+

Unresolved Questions

+

None at this time.

RFC-0013: Prepare Core runtime API for MBMs

+
+ + + +
Start Date: July 24, 2023
Description: Prepare the Core Runtime API for Multi-Block-Migrations
Authors: Oliver Tale-Yazdi
+
+

Summary

+

Introduces breaking changes to the Core runtime API by letting Core::initialize_block return an enum. The version of Core is bumped from 4 to 5.

+

Motivation

+

The main feature that motivates this RFC is Multi-Block-Migrations (MBMs); these make it possible to split a migration over multiple blocks.
Further, it would be nice not to hinder the possibility of implementing a new hook, poll, that runs at the beginning of the block when there are no MBMs and has access to AllPalletsWithSystem. This hook can then be used to replace the use of on_initialize and on_finalize for non-deadline-critical logic.
In a similar fashion, it should not hinder the future addition of a System::PostInherents callback that always runs after all inherents were applied.

+

Stakeholders

+
    +
  • Substrate Maintainers: They have to implement this, including tests, audit and +maintenance burden.
  • +
  • Polkadot Runtime developers: They will have to adapt the runtime files to this breaking change.
  • +
  • Polkadot Parachain Teams: They have to adapt to the breaking changes but then eventually have +multi-block migrations available.
  • +
+

Explanation

+

Core::initialize_block

+

This runtime API function is changed from returning () to ExtrinsicInclusionMode:

+
fn initialize_block(header: &<Block as BlockT>::Header)
+  -> ExtrinsicInclusionMode;

With ExtrinsicInclusionMode defined as:

enum ExtrinsicInclusionMode {
  /// All extrinsics are allowed in this block.
  AllExtrinsics,
  /// Only inherents are allowed in this block.
  OnlyInherents,
}
+

A block author MUST respect the ExtrinsicInclusionMode that is returned by initialize_block. The runtime MUST reject blocks that have non-inherent extrinsics in them while OnlyInherents was returned.
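As a non-normative sketch of what respecting this rule could look like on the block-authoring side (the types and function below are placeholders for illustration, not the actual sc-block-builder API):

```rust
/// Mirrors the enum returned by initialize_block, as defined above.
enum ExtrinsicInclusionMode {
    AllExtrinsics,
    OnlyInherents,
}

/// Sketch: inherents are always applied; user transactions from the pool are only
/// included when the runtime returned AllExtrinsics from initialize_block.
fn select_extrinsics(
    mode: ExtrinsicInclusionMode,
    inherents: Vec<Vec<u8>>,
    pool_transactions: Vec<Vec<u8>>,
) -> Vec<Vec<u8>> {
    let mut extrinsics = inherents;
    if matches!(mode, ExtrinsicInclusionMode::AllExtrinsics) {
        extrinsics.extend(pool_transactions);
    }
    extrinsics
}
```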

+

Coming back to the motivations and how they can be implemented with this runtime API change:

+

1. Multi-Block-Migrations: The runtime is being put into lock-down mode for the duration of the migration process by returning OnlyInherents from initialize_block. This ensures that no user provided transaction can interfere with the migration process. It is absolutely necessary to ensure this, otherwise a transaction could call into un-migrated storage and violate storage invariants.

+

2. poll is possible by using apply_extrinsic as entry-point and not hindered by this approach. It would not be possible to use a pallet inherent like System::last_inherent to achieve this for two reasons: First is that pallets do not have access to AllPalletsWithSystem which is required to invoke the poll hook on all pallets. Second is that the runtime does currently not enforce an order of inherents.

+

3. System::PostInherents can be done in the same manner as poll.

+

Drawbacks

+

The previous drawback of cementing the order of inherents has been addressed and removed by redesigning the approach. No further drawbacks have been identified thus far.

+

Testing, Security, and Privacy

+

The new logic of initialize_block can be tested by checking that the block-builder will skip transactions when OnlyInherents is returned.

+

Security: n/a

+

Privacy: n/a

+

Performance, Ergonomics, and Compatibility

+

Performance

+

The performance overhead is minimal in the sense that no clutter was added after fulfilling the +requirements. The only performance difference is that initialize_block also returns an enum that needs to be passed through the WASM boundary. This should be negligible.

+

Ergonomics

+

The new interface allows for more extensible runtime logic. In the future, this will be utilized for +multi-block-migrations which should be a huge ergonomic advantage for parachain developers.

+

Compatibility

+

The advice here is OPTIONAL and outside of the RFC. To not degrade +user experience, it is recommended to ensure that an updated node can still import historic blocks.

+

Prior Art and References

+

The RFC is currently being implemented in polkadot-sdk#1781 (formerly substrate#14275). Related issues and merge +requests:

+ +

Unresolved Questions

+

Please suggest a better name for BlockExecutiveMode. We already tried: RuntimeExecutiveMode, +ExtrinsicInclusionMode. The names of the modes Normal and Minimal were also called +AllExtrinsics and OnlyInherents, so if you have naming preferences; please post them.
+=> renamed to ExtrinsicInclusionMode

+

Is post_inherents more consistent instead of last_inherent? Then we should change it.
+=> renamed to last_inherent

+ +

The long-term future here is to move the block building logic into the runtime. Currently there is a tight dance between the block author and the runtime; the author has to call into different runtime functions in quick succession and exact order. Any misstep causes the block to be invalid.
+This can be unified and simplified by moving both parts into the runtime.

RFC-0014: Improve locking mechanism for parachains

+
+ + + +
Start Date: July 25, 2023
Description: Improve locking mechanism for parachains
Authors: Bryan Chen
+
+

Summary

+

This RFC proposes a set of changes to the parachain lock mechanism. The goal is to allow a parachain manager to self-service the parachain without root track governance action.

+

This is achieved by removing existing lock conditions and only locking a parachain when:

+
    +
  • A parachain manager explicitly locks the parachain
  • +
  • OR a parachain block is produced successfully
  • +
+

Motivation

+

The manager of a parachain has permission to manage the parachain when the parachain is unlocked. Parachains are by default locked when onboarded to a slot. This requires that the parachain wasm/genesis be valid; otherwise, a root track governance action on the relaychain is required to update the parachain.

+

The current reliance on root track governance actions for managing parachains can be time-consuming and burdensome. This RFC aims to address this technical difficulty by allowing parachain managers to take self-service actions, rather than relying on general public voting.

+

The key scenarios this RFC seeks to improve are:

+
    +
  1. Rescue a parachain with invalid wasm/genesis.
  2. +
+

While we have various resources and templates to build a new parachain, it is still not a trivial task. It is very easy to make a mistake, resulting in an invalid wasm/genesis. With a lack of tools to help detect those issues1, it is very likely that the issues are only discovered after the parachain is onboarded on a slot. In this case, the parachain is locked and the parachain team has to go through a lengthy governance process to rescue the parachain.

+
    +
  1. Perform lease renewal for an existing parachain.
  2. +
+

One way to perform lease renewal for a parachain is by doing a lease swap with another parachain with a longer lease. This requires that the other parachain be operational and able to perform an XCM Transact call into the relaychain to dispatch the swap call. Combined with the overhead of setting up a new parachain, this is a time-consuming and expensive process. Ideally, the parachain manager should be able to perform the lease swap call without having a running parachain2.

+

Requirements

+
    +
  • A parachain manager SHOULD be able to rescue a parachain by updating the wasm/genesis without root track governance action.
  • +
  • A parachain manager MUST NOT be able to update the wasm/genesis if the parachain is locked.
  • +
  • A parachain SHOULD be locked when it successfully produced the first block.
  • +
  • A parachain manager MUST be able to perform lease swap without having a running parachain.
  • +
+

Stakeholders

+
    +
  • Parachain teams
  • +
  • Parachain users
  • +
+

Explanation

+

Status quo

+

A parachain can either be locked or unlocked3. With parachain locked, the parachain manager does not have any privileges. With parachain unlocked, the parachain manager can perform following actions with the paras_registrar pallet:

+
    +
  • deregister: Deregister a Para Id, freeing all data and returning any deposit.
  • +
  • swap: Initiate or confirm lease swap with another parachain.
  • +
  • add_lock: Lock the parachain.
  • +
  • schedule_code_upgrade: Schedule a parachain upgrade to update parachain wasm.
  • +
  • set_current_head: Set the parachain's current head.
  • +
+

Currently, a parachain can be locked with following conditions:

+
    +
  • From add_lock call, which can be dispatched by relaychain Root origin, the parachain, or the parachain manager.
  • +
  • When a parachain is onboarded on a slot4.
  • +
  • When a crowdloan is created.
  • +
+

Only the relaychain Root origin or the parachain itself can unlock the lock5.

+

This creates an issue: if the parachain is unable to produce a block, the parachain manager is unable to do anything and has to rely on the relaychain Root origin to manage the parachain.

+

Proposed changes

+

This RFC proposes to change the lock and unlock conditions.

+

A parachain can be locked only with following conditions:

+
    +
  • Relaychain governance MUST be able to lock any parachain.
  • +
  • A parachain MUST be able to lock its own lock.
  • +
  • A parachain manager SHOULD be able to lock the parachain.
  • +
  • A parachain SHOULD be locked when it successfully produced a block for the first time.
  • +
+

A parachain can be unlocked only with following conditions:

+
    +
  • Relaychain governance MUST be able to unlock any parachain.
  • +
  • A parachain MUST be able to unlock its own lock.
  • +
+

Note that creating a crowdloan MUST NOT lock the parachain, and onboarding a parachain SHOULD NOT lock it until a new block is successfully produced.

+

Migration

+

A one-off migration is proposed in order to apply this change retrospectively so that existing parachains can also benefit from this RFC. This migration will unlock parachains that conform to the following conditions (see the sketch after this list):

+
    +
  • Parachain is locked.
  • +
  • Parachain never produced a block. Including from expired leases.
  • +
  • Parachain manager never explicitly locked the parachain.
  • +
+
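A hedged sketch of that one-off migration; the predicates below stand in for the real registrar/paras storage lookups and are purely hypothetical:

```rust
/// Sketch only: unlock parachains that are locked, never produced a block
/// (including under expired leases), and were never explicitly locked by their manager.
fn unlock_inactive_parachains(all_para_ids: Vec<u32>) {
    for para_id in all_para_ids {
        if is_locked(para_id)
            && !has_ever_produced_block(para_id)
            && !manager_explicitly_locked(para_id)
        {
            unlock(para_id);
        }
    }
}

// Hypothetical stand-ins for on-chain state queries and the unlock operation.
fn is_locked(_para_id: u32) -> bool { unimplemented!() }
fn has_ever_produced_block(_para_id: u32) -> bool { unimplemented!() }
fn manager_explicitly_locked(_para_id: u32) -> bool { unimplemented!() }
fn unlock(_para_id: u32) { unimplemented!() }
```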

Drawbacks

+

Parachain locks are designed in such a way as to ensure the decentralization of parachains. If parachains are not locked when they should be, it could introduce centralization risk for new parachains.

+

For example, one possible scenario is that a collective may decide to launch a parachain fully decentralized. However, if the parachain is unable to produce blocks, the parachain manager will be able to replace the wasm and genesis without the consent of the collective.

+

This risk is considered tolerable, as it requires the wasm/genesis to be invalid in the first place. It is not yet practically possible to develop a parachain without any centralization risk.

+

Another case is that a parachain team may decide to use a crowdloan to help secure a slot lease. Previously, creating a crowdloan would lock a parachain. This means crowdloan participants would know exactly the genesis of the parachain for the crowdloan they are participating in. However, this actually provides little assurance to crowdloan participants. For example, if the genesis block is determined before a crowdloan is started, it is not possible to have an on-chain mechanism to enforce reward distributions for crowdloan participants. They always have to rely on the parachain team to fulfill the promise after the parachain is live.

+

Existing operational parachains will not be impacted.

+

Testing, Security, and Privacy

+

The implementation of this RFC will be tested on testnets (Rococo and Westend) first.

+

An audit may be required to ensure the implementation does not introduce unwanted side effects.

+

There is no privacy related concerns.

+

Performance

+

This RFC should not introduce any performance impact.

+

Ergonomics

+

This RFC should improve the developer experience for new and existing parachain teams.

+

Compatibility

+

This RFC is fully compatible with existing interfaces.

+

Prior Art and References

+
    +
  • Parachain Slot Extension Story: https://github.com/paritytech/polkadot/issues/4758
  • +
  • Allow parachain to renew lease without actually run another parachain: https://github.com/paritytech/polkadot/issues/6685
  • +
  • Always treat parachain that never produced block for a significant amount of time as unlocked: https://github.com/paritytech/polkadot/issues/7539
  • +
+

Unresolved Questions

+

None at this stage.

+ +

This RFC is only intended to be a short term solution. Slots will be removed in future and lock mechanism is likely going to be replaced with a more generalized parachain manage & recovery system in future. Therefore long term impacts of this RFC are not considered.

+
1 +

https://github.com/paritytech/cumulus/issues/377

+
+
2 +

https://github.com/paritytech/polkadot/issues/6685

+
+
3 +

https://github.com/paritytech/polkadot/blob/994af3de79af25544bf39644844cbe70a7b4d695/runtime/common/src/paras_registrar.rs#L51-L52C15

+
+
4 +

https://github.com/paritytech/polkadot/blob/994af3de79af25544bf39644844cbe70a7b4d695/runtime/common/src/paras_registrar.rs#L473-L475

+
+
5 +

https://github.com/paritytech/polkadot/blob/994af3de79af25544bf39644844cbe70a7b4d695/runtime/common/src/paras_registrar.rs#L333-L340

+
RFC-0022: Adopt Encointer Runtime

+
+ + + +
Start Date: Aug 22nd 2023
Description: Permanently move the Encointer runtime into the Fellowship runtimes repo.
Authors: @brenzi for Encointer Association, 8000 Zurich, Switzerland
+
+

Summary

+

Encointer is a system chain on Kusama since Jan 2022 and has been developed and maintained by the Encointer association. This RFC proposes to treat Encointer like any other system chain and include it in the fellowship repo with this PR.

+

Motivation

+

Encointer does not seek to be in control of its runtime repository. As a decentralized system, the fellowship has a more suitable structure to maintain a system chain runtime repo than the Encointer association does.

+

Also, Encointer aims to update its runtime in batches with other system chains in order to have consistency for interoperability across system chains.

+

Stakeholders

+
    +
  • Fellowship: Will continue to take upon them the review and auditing work for the Encointer runtime, but the process is streamlined with other system chains and therefore less time-consuming compared to the separate repo and CI process we currently have.
  • +
  • Kusama Network: Tokenholders can easily see the changes of all system chains in one place.
  • +
  • Encointer Association: Further decentralization of the Encointer Network necessities like devops.
  • +
  • Encointer devs: Being able to work directly in the Fellowship runtimes repo to streamline and synergize with other developers.
  • +
+

Explanation

+

Our PR has all details about our runtime and how we would move it into the fellowship repo.

+

Noteworthy: All Encointer-specific pallets will still be located in encointer's repo for the time being: https://github.com/encointer/pallets

+

It will still be the duty of the Encointer team to keep its runtime up to date and provide adequate test fixtures. Frequent dependency bumps with Polkadot releases would be beneficial for interoperability and could be streamlined with other system chains but that will not be a duty of fellowship. Whenever possible, all system chains could be upgraded jointly (including Encointer) with a batch referendum.

+

Further notes:

+
    +
  • Encointer will publish all its crates on crates.io
  • +
  • Encointer does not carry out external auditing of its runtime nor pallets. It would be beneficial but not a requirement from our side if Encointer could join the auditing process of other system chains.
  • +
+

Drawbacks

+

Unlike all other system chains, development and maintenance of the Encointer Network is mainly financed by the KSM Treasury and possibly the DOT Treasury in the future. Encointer is dedicated to maintaining its network and runtime code for as long as possible, but there is a dependency on funding which is not in the hands of the fellowship. The only risk in the context of funding, however, is that the Encointer runtime will see less frequent updates if there's less funding.

+

Testing, Security, and Privacy

+

No changes to the existing system are proposed. Only changes to how maintenance is organized.

+

Performance, Ergonomics, and Compatibility

+

No changes

+

Prior Art and References

+

Existing Encointer runtime repo

+

Unresolved Questions

+

None identified

+ +

More info on Encointer: encointer.org

RFC-0032: Minimal Relay

+
+ + + +
Start Date: 20 September 2023
Description: Proposal to minimise Relay Chain functionality.
Authors: Joe Petrowski, Gavin Wood
+
+

Summary

+

The Relay Chain contains most of the core logic for the Polkadot network. While this was necessary +prior to the launch of parachains and development of XCM, most of this logic can exist in +parachains. This is a proposal to migrate several subsystems into system parachains.

+

Motivation

+

Polkadot's scaling approach allows many distinct state machines (known generally as parachains) to +operate with common guarantees about the validity and security of their state transitions. Polkadot +provides these common guarantees by executing the state transitions on a strict subset (a backing +group) of the Relay Chain's validator set.

+

However, state transitions on the Relay Chain need to be executed by all validators. If any of +those state transitions can occur on parachains, then the resources of the complement of a single +backing group could be used to offer more cores. As in, they could be offering more coretime (a.k.a. +blockspace) to the network.

+

By minimising state transition logic on the Relay Chain by migrating it into "system chains" -- a +set of parachains that, with the Relay Chain, make up the Polkadot protocol -- the Polkadot +Ubiquitous Computer can maximise its primary offering: secure blockspace.

+

Stakeholders

+
    +
  • Parachains that interact with affected logic on the Relay Chain;
  • +
  • Core protocol and XCM format developers;
  • +
  • Tooling, block explorer, and UI developers.
  • +
+

Explanation

+

The following pallets and subsystems are good candidates to migrate from the Relay Chain:

+
    +
  • Identity
  • +
  • Balances
  • +
  • Staking +
      +
    • Staking
    • +
    • Election Provider
    • +
    • Bags List
    • +
    • NIS
    • +
    • Nomination Pools
    • +
    • Fast Unstake
    • +
    +
  • +
  • Governance +
      +
    • Treasury and Bounties
    • +
    • Conviction Voting
    • +
    • Referenda
    • +
    +
  • +
+

Note: The Auctions and Crowdloan pallets will be replaced by Coretime, its system chain and +interface described in RFC-1 and RFC-5, respectively.

+

Migrations

+

Some subsystems are simpler to move than others. For example, migrating Identity can be done by +simply preventing state changes in the Relay Chain, using the Identity-related state as the genesis +for a new chain, and launching that new chain with the genesis and logic (pallet) needed.

+

Other subsystems cannot experience any downtime like this because they are essential to the +network's functioning, like Staking and Governance. However, these can likely coexist with a +similarly-permissioned system chain for some time, much like how "Gov1" and "OpenGov" coexisted at +the latter's introduction.

+

Specific migration plans will be included in release notes of runtimes from the Polkadot Fellowship +when beginning the work of migrating a particular subsystem.

+

Interfaces

+

The Relay Chain, in many cases, will still need to interact with these subsystems, especially +Staking and Governance. These subsystems will require making some APIs available either via +dispatchable calls accessible to XCM Transact or possibly XCM Instructions in future versions.

+

For example, Staking provides a pallet-API to register points (e.g. for block production) and +offences (e.g. equivocation). With Staking in a system chain, that chain would need to allow the +Relay Chain to update validator points periodically so that it can correctly calculate rewards.

+

A pub-sub protocol may also lend itself to these types of interactions.
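As a very rough illustration of the shape such an interface could take (the trait and types below are invented for this sketch, not an existing API):

```rust
/// Illustrative stand-ins for the real account and offence types.
type AccountId = [u8; 32];

enum Offence {
    Equivocation,
}

/// Sketch of calls the Relay Chain might need to make into a Staking system chain,
/// e.g. via dispatchables reached over XCM Transact or a future pub-sub mechanism.
trait RelayToStaking {
    /// Periodically report era points earned by a validator (e.g. for block production).
    fn note_validator_points(validator: AccountId, points: u32);
    /// Report an offence (e.g. equivocation) so it can be handled on the Staking chain.
    fn report_offence(offender: AccountId, offence: Offence);
}
```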

+

Functional Architecture

+

This RFC proposes that system chains form individual components within the system's architecture and that these components are chosen as functional groups. This approach allows synchronous composability where it is most valuable, but isolates logic in a way that provides flexibility for optimal resource allocation (see Resource Allocation). For the subsystems discussed in this RFC, namely Identity, Governance, and Staking, this would mean:

+
    +
  • People Chain, for identity and personhood logic, providing functionality related to the attributes +of single actors;
  • +
  • Governance Chain, for governance and system collectives, providing functionality for pluralities +to express their voices within the system;
  • +
  • Staking Chain, for Polkadot's staking system, including elections, nominations, reward +distribution, slashing, and non-interactive staking; and
  • +
  • Asset Hub, for fungible and non-fungible assets, including DOT.
  • +
+

The Collectives chain and Asset Hub already exist, so implementation of this RFC would mean two new +chains (People and Staking), with Governance moving to the currently-known-as Collectives chain +and Asset Hub being increasingly used for DOT over the Relay Chain.

+

Note that one functional group will likely include many pallets, as we do not know how pallet +configurations and interfaces will evolve over time.

+

Resource Allocation

+

The system should minimise wasted blockspace. These three (and other) subsystems may not each +consistently require a dedicated core. However, core scheduling is far more agile than functional +grouping. While migrating functionality from one chain to another can be a multi-month endeavour, +cores can be rescheduled almost on-the-fly.

+

Migrations are also breaking changes to some use cases, for example other parachains that need to +route XCM programs to particular chains. It is thus preferable to do them a single time in migrating +off the Relay Chain, reducing the risk of needing parachain splits in the future.

+

Therefore, chain boundaries should be based on functional grouping where synchronous composability is most valuable, and efficient resource allocation should be managed by the core scheduling protocol.

+

Many of these system chains (including Asset Hub) could often share a single core in a semi-round +robin fashion (the coretime may not be uniform). When needed, for example during NPoS elections or +slashing events, the scheduler could allocate a dedicated core to the chain in need of more +throughput.

+

Deployment

+

Actual migrations should happen based on some prioritization. This RFC proposes to migrate Identity, +Staking, and Governance as the systems to work on first. A brief discussion on the factors involved +in each one:

+

Identity

+

Identity will be one of the simpler pallets to migrate into a system chain, as its logic is largely +self-contained and it does not "share" balances with other subsystems. As in, any DOT is held in +reserve as a storage deposit and cannot be simultaneously used the way locked DOT can be locked for +multiple purposes.

+

Therefore, migration can take place as follows:

+
    +
  1. The pallet can be put in a locked state, blocking most calls to the pallet and preventing updates +to identity info.
  2. +
  3. The frozen state will form the genesis of a new system parachain.
  4. +
  5. Functions will be added to the pallet that allow migrating the deposit to the parachain. The +parachain deposit is on the order of 1/100th of the Relay Chain's. Therefore, this will result in +freeing up Relay State as well as most of each user's reserved balance.
  6. +
  7. The pallet and any leftover state can be removed from the Relay Chain.
  8. +
+

User interfaces that render Identity information will need to source their data from the new system +parachain.

+

Note: In the future, it may make sense to decommission Kusama's Identity chain and do all account +identities via Polkadot's. However, the Kusama chain will serve as a dress rehearsal for Polkadot.

+

Staking

+

Migrating the staking subsystem will likely be the most complex technical undertaking, as the +Staking system cannot stop (the system MUST always have a validator set) nor run in parallel (the +system MUST have only one validator set) and the subsystem itself is made up of subsystems in the +runtime and the node. For example, if offences are reported to the Staking parachain, validator +nodes will need to submit their reports there.

+

Handling balances also introduces complications. The same balance can be used for staking and +governance. Ideally, all balances stay on Asset Hub, and only report "credits" to system chains like +Staking and Governance. However, staking mutates balances by issuing new DOT on era changes and for +rewards. Allowing DOT directly on the Staking parachain would simplify staking changes.

+

Given the complexity, it would be pragmatic to include the Balances pallet in the Staking parachain +in its first version. Any other systems that use overlapping locks, most notably governance, will +need to recognise DOT held on both Asset Hub and the Staking parachain.

+

There is more discussion about staking in a parachain in Moving Staking off the Relay +Chain.

+

Governance

+

Migrating governance into a parachain will be less complicated than staking. Most of the primitives +needed for the migration already exist. The Treasury supports spending assets on remote chains and +collectives like the Polkadot Technical Fellowship already function in a parachain. That is, XCM +already provides the ability to express system origins across chains.

+

Therefore, actually moving the governance logic into a parachain will be simple. It can run in +parallel with the Relay Chain's governance, which can be removed when the parachain has demonstrated +sufficient functionality. It's possible that the Relay Chain maintain a Root-level emergency track +for situations like parachains +halting.

+

The only complication arises from the fact that both Asset Hub and the Staking parachain will have +DOT balances; therefore, the Governance chain will need to be able to credit users' voting power +based on balances from both locations. This is not expected to be difficult to handle.

+

Kusama

+

Although Polkadot and Kusama both have system chains running, they have to date only been used for +introducing new features or bodies, for example fungible assets or the Technical Fellowship. There +has not yet been a migration of logic/state from the Relay Chain into a parachain. Given its more +realistic network conditions than testnets, Kusama is the best stage for rehearsal.

+

In the case of identity, Polkadot's system may be sufficient for the ecosystem. Therefore, Kusama +should be used to test the migration of logic and state from Relay Chain to parachain, but these +features may be (at the will of Kusama's governance) dropped from Kusama entirely after a successful +migration on Polkadot.

+

For Governance, Polkadot already has the Collectives parachain, which would become the Governance +parachain. The entire group of DOT holders is itself a collective (the legislative body), and +governance provides the means to express voice. Launching a Kusama Governance chain would be +sensible to rehearse a migration.

+

The Staking subsystem is perhaps where Kusama would provide the most value in its canary capacity. +Staking is the subsystem most constrained by PoV limits. Ensuring that elections, payouts, session +changes, offences/slashes, etc. work in a parachain on Kusama -- with its larger validator set -- +will give confidence to the chain's robustness on Polkadot.

+

Drawbacks

+

These subsystems will have reduced resources in cores than on the Relay Chain. Staking in particular +may require some optimizations to deal with constraints.

+

Testing, Security, and Privacy

+

Standard audit/review requirements apply. More powerful multi-chain integration test tools would be useful in development.

+

Performance, Ergonomics, and Compatibility

+

Describe the impact of the proposal on the exposed functionality of Polkadot.

+

Performance

+

This is an optimization. The removal of public/user transactions on the Relay Chain ensures that its +primary resources are allocated to system performance.

+

Ergonomics

+

This proposal alters very little for coretime users (e.g. parachain developers). Application +developers will need to interact with multiple chains, making ergonomic light client tools +particularly important for application development.

+

For existing parachains that interact with these subsystems, they will need to configure their +runtimes to recognize the new locations in the network.

+

Compatibility

+

Implementing this proposal will require some changes to pallet APIs and/or a pub-sub protocol. +Application developers will need to interact with multiple chains in the network.

+

Prior Art and References

+ +

Unresolved Questions

+

There remain some implementation questions, like how to use balances for both Staking and +Governance. See, for example, Moving Staking off the Relay +Chain.

+ +

Ideally the Relay Chain becomes transactionless, such that not even balances are represented there. +With Staking and Governance off the Relay Chain, this is not an unreasonable next step.

+

With Identity on Polkadot, Kusama may opt to drop its People Chain.

RFC-0042: Add System version that replaces StateVersion on RuntimeVersion

+
+ + + +
| Start Date  | 25th October 2023 |
| Description | Add System Version and remove State Version |
| Authors     | Vedhavyas Singareddi |
+
+

Summary

+

At the moment, we have the state_version field on RuntimeVersion that derives which state version is used for the Storage. We have a use case where we want the extrinsics root to be derived using StateVersion::V1. Without defining a new field under RuntimeVersion, we would like to propose adding system_version, which can be used to derive both the storage and the extrinsic state version.

+

Motivation

+

Since the extrinsic state version is always StateVersion::V0, deriving the extrinsic root requires the full extrinsic data. This is problematic when we need to verify the extrinsics root and the extrinsics are large. This problem is further explored in https://github.com/polkadot-fellows/RFCs/issues/19

+

For the Subspace project, we have enshrined rollups called Domains with optimistic verification, and Fraud proofs are used to detect malicious behavior. One of the Fraud proof variants is to derive the Domain block extrinsic root on Subspace's consensus chain. Since StateVersion::V0 requires full extrinsic data, we are forced to pass all the extrinsics through the Fraud proof. One of the main challenges here is that some extrinsics could be big enough that this variant of Fraud proof may not be included in the Consensus block due to the Block's weight restriction. If the extrinsic root is derived using StateVersion::V1, then we do not need to pass the full extrinsic data but rather, at maximum, 32 bytes of extrinsic data.

+

Stakeholders

+
  • Technical Fellowship, in its role of maintaining system runtimes.

Explanation

+

In order to use a project-specific StateVersion for extrinsic roots, we proposed an implementation that introduced a parameter to frame_system::Config, but that unfortunately did not feel correct. So we would like to propose adding this change to the RuntimeVersion object. The system version, if introduced, will be used to derive both storage and extrinsic state version. If system version is 0, then both Storage and Extrinsic State version would use V0. If system version is 1, then Storage State version would use V1 and Extrinsic State version would use V0. If system version is 2, then both Storage and Extrinsic State version would use V1.
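
As an illustration only (not an existing API; the enum and function names are placeholders), the mapping above could be expressed as:

#[derive(Debug, Clone, Copy, PartialEq)]
enum StateVersion { V0, V1 }

// Illustrative only: derive the (storage, extrinsic) state versions from the
// proposed system_version, following the rules described above.
fn state_versions(system_version: u8) -> (StateVersion, StateVersion) {
    match system_version {
        0 => (StateVersion::V0, StateVersion::V0),
        1 => (StateVersion::V1, StateVersion::V0),
        2 => (StateVersion::V1, StateVersion::V1),
        v => panic!("unsupported system_version: {v}"),
    }
}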

+

If implemented, the new RuntimeVersion definition would look something similar to

+
#![allow(unused)]
+fn main() {
+/// Runtime version (Rococo).
+#[sp_version::runtime_version]
+pub const VERSION: RuntimeVersion = RuntimeVersion {
+		spec_name: create_runtime_str!("rococo"),
+		impl_name: create_runtime_str!("parity-rococo-v2.0"),
+		authoring_version: 0,
+		spec_version: 10020,
+		impl_version: 0,
+		apis: RUNTIME_API_VERSIONS,
+		transaction_version: 22,
+		system_version: 1,
+	};
+}
+

Drawbacks

+

There should be no drawbacks as it would replace state_version with same behavior but documentation should be updated +so that chains know which system_version to use.

+

Testing, Security, and Privacy

+

AFAIK, this should not have any impact on security or privacy.

+

Performance, Ergonomics, and Compatibility

+

These changes should be compatible with existing chains if they use their current state_version value for system_version.

+

Performance

+

I do not believe there is any performance hit with this change.

+

Ergonomics

+

This does not break any exposed Apis.

+

Compatibility

+

This change should not break any compatibility.

+

Prior Art and References

+

We proposed introducing a similar change by introducing a +parameter to frame_system::Config but did not feel that +is the correct way of introducing this change.

+

Unresolved Questions

+

I do not have any specific questions about this change at the moment.

+ +

IMO, this change is pretty self-contained and there won't be any future work necessary.

diff --git a/mdbook/text/0043-storage-proof-size-hostfunction.html b/mdbook/text/0043-storage-proof-size-hostfunction.html
new file mode 100644
index 000000000..6a83e6b9e
--- /dev/null
+++ b/mdbook/text/0043-storage-proof-size-hostfunction.html
@@ -0,0 +1,271 @@

RFC-0043: Introduce storage_proof_size Host Function for Improved Parachain Block Utilization

+
+ + + +
| Start Date  | 30 October 2023 |
| Description | Host function to provide the storage proof size to runtimes. |
| Authors     | Sebastian Kunert |
+
+

Summary

+

This RFC proposes a new host function for parachains, storage_proof_size. It shall provide the size of the currently recorded storage proof to the runtime. Runtime authors can use the proof size to improve block utilization by retroactively reclaiming unused storage weight.

+

Motivation

+

The number of extrinsics that are included in a parachain block is limited by two constraints: execution time and proof size. FRAME weights cover both concepts, and block-builders use them to decide how many extrinsics to include in a block. However, these weights are calculated ahead of time by benchmarking on a machine with reference hardware. The execution-time properties of the state-trie and its storage items are unknown at benchmarking time. Therefore, we make some assumptions about the state-trie:

+
  • Trie Depth: We assume a trie depth to account for intermediary nodes.
  • Storage Item Size: We make a pessimistic assumption based on the MaxEncodedLen trait.

These pessimistic assumptions lead to an overestimation of storage weight, negatively impacting block utilization on parachains.

+

In addition, the current model does not account for multiple accesses to the same storage items. While these repetitive accesses will not increase storage-proof size, the runtime-side weight monitoring will account for them multiple times. Since the proof size is completely opaque to the runtime, we can not implement retroactive storage weight correction.

+

A solution must provide a way for the runtime to track the exact storage-proof size consumed on a per-extrinsic basis.

+

Stakeholders

+
  • Parachain Teams: They MUST include this host function in their runtime and node.
  • Light-client Implementors: They SHOULD include this host function in their runtime and node.

Explanation

+

This RFC proposes a new host function that exposes the storage-proof size to the runtime. As a result, runtimes can implement storage weight reclaiming mechanisms that improve block utilization.

+

This RFC proposes the following host function signature:

+
#![allow(unused)]
+fn main() {
+fn ext_storage_proof_size_version_1() -> u64;
+}
+

The host function MUST return an unsigned 64-bit integer value representing the current proof size. In block-execution and block-import contexts, this function MUST return the current size of the proof. To achieve this, parachain node implementors need to enable proof recording for block imports. In other contexts, this function MUST return 18446744073709551615 (u64::MAX), which represents disabled proof recording.

+

Performance, Ergonomics, and Compatibility

+

Performance

+

Parachain nodes need to enable proof recording during block import to correctly implement the proposed host function. Benchmarking conducted with balance transfers has shown a performance reduction of around 0.6% when proof recording is enabled.

+

Ergonomics

+

The host function proposed in this RFC allows parachain runtime developers to keep track of the proof size. Typical usage patterns would be to keep track of the overall proof size or the difference between subsequent calls to the host function.
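
A minimal sketch of the second usage pattern, assuming only the proposed host function (the storage_proof_size closure below stands in for the runtime-interface wrapper around ext_storage_proof_size_version_1; nothing here is an existing pallet API):

// Illustrative sketch: measure the proof size consumed by one extrinsic by
// diffing two reads of the proposed host function.
const PROOF_RECORDING_DISABLED: u64 = u64::MAX;

fn consumed_proof_size<R>(
    storage_proof_size: impl Fn() -> u64,
    apply_extrinsic: impl FnOnce() -> R,
) -> (R, Option<u64>) {
    let before = storage_proof_size();
    let result = apply_extrinsic();
    if before == PROOF_RECORDING_DISABLED {
        // Proof recording disabled (e.g. an off-chain context): nothing to reclaim.
        (result, None)
    } else {
        (result, Some(storage_proof_size().saturating_sub(before)))
    }
}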

+

Compatibility

+

Parachain teams will need to include this host function to upgrade.

+

Prior Art and References

diff --git a/mdbook/text/0045-nft-deposits-asset-hub.html b/mdbook/text/0045-nft-deposits-asset-hub.html
new file mode 100644
index 000000000..d38f813e8
--- /dev/null
+++ b/mdbook/text/0045-nft-deposits-asset-hub.html
@@ -0,0 +1,433 @@

RFC-0045: Lowering NFT Deposits on Asset Hub

+
+ + + +
| Start Date  | 2 November 2023 |
| Description | A proposal to reduce the minimum deposit required for collection creation on the Polkadot and Kusama Asset Hubs. |
| Authors     | Aurora Poppyseed, Just_Luuuu, Viki Val, Joe Petrowski |
+
+

Summary

+

This RFC proposes changing the current deposit requirements on the Polkadot and Kusama Asset Hub for +creating an NFT collection, minting an individual NFT, and lowering its corresponding metadata and +attribute deposits. The objective is to lower the barrier to entry for NFT creators, fostering a +more inclusive and vibrant ecosystem while maintaining network integrity and preventing spam.

+

Motivation

+

The current deposit of 10 DOT for collection creation (along with 0.01 DOT for item deposit and 0.2 +DOT for metadata and attribute deposits) on the Polkadot Asset Hub and 0.1 KSM on Kusama Asset Hub +presents a significant financial barrier for many NFT creators. By lowering the deposit +requirements, we aim to encourage more NFT creators to participate in the Polkadot NFT ecosystem, +thereby enriching the diversity and vibrancy of the community and its offerings.

+

The initial introduction of a 10 DOT deposit was an arbitrary starting point that does not consider +the actual storage footprint of an NFT collection. This proposal aims to adjust the deposit first to +a value based on the deposit function, which calculates a deposit based on the number of keys +introduced to storage and the size of corresponding values stored.
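
For illustration only, such a deposit function charges a fixed amount per storage item plus an amount per byte stored; the constants below are placeholders, and the real per-item and per-byte prices are defined in the Asset Hub runtimes (see the system_para_deposit values used later in this RFC):

type Balance = u128;

const DEPOSIT_PER_ITEM: Balance = 1_000; // placeholder
const DEPOSIT_PER_BYTE: Balance = 10;    // placeholder

// Illustrative shape of a per-item/per-byte deposit function.
const fn deposit(items: u32, bytes: u32) -> Balance {
    items as Balance * DEPOSIT_PER_ITEM + bytes as Balance * DEPOSIT_PER_BYTE
}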

+

Further, it suggests a direction for a future of calculating deposits variably based on adoption +and/or market conditions. There is a discussion on tradeoffs of setting deposits too high or too +low.

+

Requirements

+
  • Deposits SHOULD be derived from the deposit function, adjusted by a corresponding pricing mechanism.

Stakeholders

+
    +
  • NFT Creators: Primary beneficiaries of the proposed change, particularly those who found the +current deposit requirements prohibitive.
  • +
  • NFT Platforms: As the facilitator of artists' relations, NFT marketplaces have a vested +interest in onboarding new users and making their platforms more accessible.
  • +
  • dApp Developers: Making the blockspace more accessible will encourage developers to create and +build unique dApps in the Polkadot ecosystem.
  • +
  • Polkadot Community: Stands to benefit from an influx of artists, creators, and diverse NFT +collections, enhancing the overall ecosystem.
  • +
+

Previous discussions have been held within the Polkadot +Forum, with +artists expressing their concerns about the deposit amounts.

+

Explanation

+

This RFC proposes a revision of the deposit constants in the configuration of the NFTs pallet on the +Polkadot Asset Hub. The new deposit amounts would be determined by a standard deposit formula.

+

As of v1.1.1, the Collection Deposit is 10 DOT and the Item Deposit is 0.01 DOT (see +here).

+

Based on the storage footprint of these items, this RFC proposes changing them to:

+
#![allow(unused)]
+fn main() {
+pub const NftsCollectionDeposit: Balance = system_para_deposit(1, 130);
+pub const NftsItemDeposit: Balance = system_para_deposit(1, 164);
+}
+

This results in the following deposits (calculated using this +repository):

+

Polkadot

+
+ + + + +
| Name                 | Current Rate (DOT) | Calculated with Function (DOT) |
| collectionDeposit    | 10                 | 0.20064 |
| itemDeposit          | 0.01               | 0.20081 |
| metadataDepositBase  | 0.20129            | 0.20076 |
| attributeDepositBase | 0.2                | 0.2     |
+
+

Similarly, the prices for Kusama were calculated as:

+

Kusama:

+
+ + + + +
| Name                 | Current Rate (KSM) | Calculated with Function (KSM) |
| collectionDeposit    | 0.1                | 0.006688        |
| itemDeposit          | 0.001              | 0.000167        |
| metadataDepositBase  | 0.006709666617     | 0.0006709666617 |
| attributeDepositBase | 0.00666666666      | 0.000666666666  |
+
+

Enhanced Approach to Further Lower Barriers for Entry

+

This RFC proposes further lowering these deposits below the rate normally charged for such a storage +footprint. This is based on the economic argument that sub-rate deposits are a subsidy for growth +and adoption of a specific technology. If the NFT functionality on Polkadot gains adoption, it makes +it more attractive for future entrants, who would be willing to pay the non-subsidized rate because +of the existing community.

+

Proposed Rate Adjustments

+
#![allow(unused)]
+fn main() {
+parameter_types! {
+	pub const NftsCollectionDeposit: Balance = system_para_deposit(1, 130);
+	pub const NftsItemDeposit: Balance = system_para_deposit(1, 164) / 40;
+	pub const NftsMetadataDepositBase: Balance = system_para_deposit(1, 129) / 10;
+	pub const NftsAttributeDepositBase: Balance = system_para_deposit(1, 0) / 10;
+	pub const NftsDepositPerByte: Balance = system_para_deposit(0, 1);
+}
+}
+

This adjustment would result in the following DOT and KSM deposit values:

+
+ + + + +
| Name                 | Proposed Rate Polkadot | Proposed Rate Kusama |
| collectionDeposit    | 0.20064 DOT | 0.006688 KSM        |
| itemDeposit          | 0.005 DOT   | 0.000167 KSM        |
| metadataDepositBase  | 0.002 DOT   | 0.0006709666617 KSM |
| attributeDepositBase | 0.002 DOT   | 0.000666666666 KSM  |
+
+

Short- and Long-Term Plans

+

The plan presented above is recommended as an immediate step to make Polkadot a more attractive +place to launch NFTs, although one would note that a forty fold reduction in the Item Deposit is +just as arbitrary as the value it was replacing. As explained earlier, this is meant as a subsidy to +gain more momentum for NFTs on Polkadot.

+

In the long term, an implementation should account for what should happen to the deposit rates +assuming that the subsidy is successful and attracts a lot of deployments. Many options are +discussed in the Addendum.

+

The deposit should be calculated as a function of the number of existing collections with maximum +DOT and stablecoin values limiting the amount. With asset rates available via the Asset Conversion +pallet, the system could take the lower value required. A sigmoid curve would make sense for this +application to avoid sudden rate changes, as in:

+

$$ minDeposit + \frac{\min(DotDeposit, StableDeposit) - minDeposit}{1 + e^{a - b \cdot x}} $$

+

where the constant a moves the inflection to lower or higher x values, the constant b adjusts +the rate of the deposit increase, and the independent variable x is the number of collections or +items, depending on application.
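
As a sketch of that curve in code (illustrative only; the constants a and b, and the units of the deposits, are placeholders, not an existing implementation):

// Illustrative only: sigmoid-adjusted deposit as in the formula above. `x` is
// the number of existing collections (or items), `a` shifts the inflection
// point and `b` controls how fast the deposit rises between the two bounds.
fn adjusted_deposit(min_deposit: f64, dot_deposit: f64, stable_deposit: f64, a: f64, b: f64, x: f64) -> f64 {
    let upper = dot_deposit.min(stable_deposit);
    min_deposit + (upper - min_deposit) / (1.0 + (a - b * x).exp())
}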

+

Drawbacks

+

Modifying deposit requirements necessitates a balanced assessment of the potential drawbacks. +Highlighted below are cogent points extracted from the discourse on the Polkadot Forum +conversation, +which provide critical perspectives on the implications of such changes.

+

Adjusting NFT deposit requirements on Polkadot and Kusama Asset Hubs involves key challenges:

+
  1. State Growth and Technical Concerns: Lowering deposit requirements can lead to increased blockchain state size, potentially causing state bloat. This growth needs to be managed to prevent strain on the network's resources and maintain operational efficiency. As stated earlier, the deposit levels proposed here are intentionally low with the thesis that future participants would pay the standard rate.
  2. Network Security and Market Response: Adapting to the cryptocurrency market's volatility is crucial. The mechanism for setting deposit amounts must be responsive yet stable, avoiding undue complexity for users.
  3. Economic Impact on Previous Stakeholders: The change could have varied economic effects on previous (before the change) creators, platform operators, and investors. Balancing these interests is essential to ensure the adjustment benefits the ecosystem without negatively impacting its value dynamics. However, in the particular case of the Polkadot and Kusama Asset Hubs this does not pose a concern, since there are very few collections currently and thus previous stakeholders wouldn't be much affected. As of 9 January 2024 there are 42 collections on Polkadot Asset Hub and 191 on Kusama Asset Hub, with relatively low volume.

Testing, Security, and Privacy

+

Security concerns

+

As noted above, state bloat is a security concern. In the case of abuse, governance could adapt by +increasing deposit rates and/or using forceDestroy on collections agreed to be spam.

+

Performance, Ergonomics, and Compatibility

+

Performance

+

The primary performance consideration stems from the potential for state bloat due to increased +activity from lower deposit requirements. It's vital to monitor and manage this to avoid any +negative impact on the chain's performance. Strategies for mitigating state bloat, including +efficient data management and periodic reviews of storage requirements, will be essential.

+

Ergonomics

+

The proposed change aims to enhance the user experience for artists, traders, and utilizers of +Kusama and Polkadot Asset Hubs, making Polkadot and Kusama more accessible and user-friendly.

+

Compatibility

+

The change does not impact compatibility as a redeposit function is already implemented.

+

Unresolved Questions

+

If this RFC is accepted, there should not be any unresolved questions regarding how to adapt the +implementation of deposits for NFT collections.

+

Addendum

+

Several innovative proposals have been considered to enhance the network's adaptability and manage +deposit requirements more effectively. The RFC recommends a mixture of the function-based model and +the stablecoin model, but some tradeoffs of each are maintained here for those interested.

+

Enhanced Weak Governance Origin Model

+

The concept of a weak governance origin, controlled by a consortium like a system collective, has +been proposed. This model would allow for dynamic adjustments of NFT deposit requirements in +response to market conditions, adhering to storage deposit norms.

+
    +
  • Responsiveness: To address concerns about delayed responses, the model could incorporate +automated triggers based on predefined market indicators, ensuring timely adjustments.
  • +
  • Stability vs. Flexibility: Balancing stability with the need for flexibility is challenging. +To mitigate the issue of frequent changes in DOT-based deposits, a mechanism for gradual and +predictable adjustments could be introduced.
  • +
  • Scalability: The model's scalability is a concern, given the numerous deposits across the +system. A more centralized approach to deposit management might be needed to avoid constant, +decentralized adjustments.
  • +
+

Function-Based Pricing Model

+

Another proposal is to use a mathematical function to regulate deposit prices, initially allowing +low prices to encourage participation, followed by a gradual increase to prevent network bloat.

+
    +
  • Choice of Function: A logarithmic or sigmoid function is favored over an exponential one, as +these functions increase prices at a rate that encourages participation while preventing +prohibitive costs.
  • +
  • Adjustment of Constants: To finely tune the pricing rise, one of the function's constants +could correlate with the total number of NFTs on Asset Hub. This would align the deposit +requirements with the actual usage and growth of the network.
  • +
+

Linking Deposit to USD(x) Value

+

This approach suggests pegging the deposit value to a stable currency like the USD, introducing +predictability and stability for network users.

+
    +
  • Market Dynamics: One perspective is that fluctuations in native currency value naturally +balance user participation and pricing, deterring network spam while encouraging higher-value +collections. Conversely, there's an argument for allowing broader participation if the DOT/KSM +value increases.
  • +
  • Complexity and Risks: Implementing a USD-based pricing system could add complexity and +potential risks. The implementation needs to be carefully designed to avoid unintended +consequences, such as excessive reliance on external financial systems or currencies.
  • +
+

Each of these proposals offers unique advantages and challenges. The optimal approach may involve a +combination of these ideas, carefully adjusted to address the specific needs and dynamics of the +Polkadot and Kusama networks.

diff --git a/mdbook/text/0047-assignment-of-availability-chunks.html b/mdbook/text/0047-assignment-of-availability-chunks.html
new file mode 100644
index 000000000..479c75db8
--- /dev/null
+++ b/mdbook/text/0047-assignment-of-availability-chunks.html
@@ -0,0 +1,478 @@

RFC-0047: Assignment of availability chunks to validators

+
+ + + +
| Start Date  | 03 November 2023 |
| Description | An evenly-distributing indirection layer between availability chunks and validators. |
| Authors     | Alin Dima |
+
+

Summary

+

Propose a way of permuting the availability chunk indices assigned to validators, in the context of +recovering available data from systematic chunks, with the +purpose of fairly distributing network bandwidth usage.

+

Motivation

+

Currently, the ValidatorIndex is always identical to the ChunkIndex. Since the validator array is only shuffled once +per session, naively using the ValidatorIndex as the ChunkIndex would pose an unreasonable stress on the first N/3 +validators during an entire session, when favouring availability recovery from systematic chunks.

+

Therefore, the relay chain node needs a deterministic way of evenly distributing the first ~(N_VALIDATORS / 3) +systematic availability chunks to different validators, based on the relay chain block and core. +The main purpose is to ensure fair distribution of network bandwidth usage for availability recovery in general and in +particular for systematic chunk holders.

+

Stakeholders

+

Relay chain node core developers.

+

Explanation

+

Systematic erasure codes

+

An erasure coding algorithm is considered systematic if it preserves the original unencoded data as part of the +resulting code. +The implementation of the erasure coding algorithm used for polkadot's availability data is systematic. +Roughly speaking, the first N_VALIDATORS/3 chunks of data can be cheaply concatenated to retrieve the original data, +without running the resource-intensive and time-consuming reconstruction algorithm.

+

You can find the concatenation procedure of systematic chunks for polkadot's erasure coding algorithm +here

+

In a nutshell, it performs a column-wise concatenation with 2-byte chunks. +The output could be zero-padded at the end, so scale decoding must be aware of the expected length in bytes and ignore +trailing zeros (this assertion is already being made for regular reconstruction).

+

Availability recovery at present

+

According to the polkadot protocol spec:

+
+

A validator should request chunks by picking peers randomly and must recover at least f+1 chunks, where +n=3f+k and k in {1,2,3}.

+
+

For parity's polkadot node implementation, the process was further optimised. At this moment, it works differently based +on the estimated size of the available data:

+

(a) for small PoVs (up to 128 Kib), sequentially try requesting the unencoded data from the backing group, in a random +order. If this fails, fallback to option (b).

+

(b) for large PoVs (over 128 Kib), launch N parallel requests for the erasure coded chunks (currently, N has an upper +limit of 50), until enough chunks were recovered. Validators are tried in a random order. Then, reconstruct the +original data.

+

All options require that after reconstruction, validators then re-encode the data and re-create the erasure chunks trie +in order to check the erasure root.

+

Availability recovery from systematic chunks

+

As part of the effort of +increasing polkadot's resource efficiency, scalability and performance, +work is under way to modify the Availability Recovery protocol by leveraging systematic chunks. See +this comment for preliminary +performance results.

+

In this scheme, the relay chain node will first attempt to retrieve the ~N/3 systematic chunks from the validators that +should hold them, before falling back to recovering from regular chunks, as before.

+

A re-encoding step is still needed for verifying the erasure root, so the erasure coding overhead cannot be completely +brought down to 0.

+

Not being able to retrieve even one systematic chunk would make systematic reconstruction impossible. Therefore, backers +can be used as a backup to retrieve a couple of missing systematic chunks, before falling back to retrieving regular +chunks.

+

Chunk assignment function

+

Properties

+

The function that decides the chunk index for a validator will be parameterized by at least +(validator_index, core_index) +and have the following properties:

+
  1. deterministic
  2. relatively quick to compute and resource-efficient.
  3. when considering a fixed core_index, the function should describe a permutation of the chunk indices
  4. the validators that map to the first N/3 chunk indices should have as little overlap as possible for different cores.

In other words, we want a uniformly distributed, deterministic mapping from ValidatorIndex to ChunkIndex per core.

+

It's desirable to not embed this function in the runtime, for performance and complexity reasons. +However, this means that the function needs to be kept very simple and with minimal or no external dependencies. +Any change to this function could result in parachains being stalled and needs to be coordinated via a runtime upgrade +or governance call.

+

Proposed function

+

Pseudocode:

+
#![allow(unused)]
+fn main() {
+pub fn get_chunk_index(
+  n_validators: u32,
+  validator_index: ValidatorIndex,
+  core_index: CoreIndex
+) -> ChunkIndex {
+  let threshold = systematic_threshold(n_validators); // Roughly n_validators/3
+  let core_start_pos = core_index * threshold;
+
+  (core_start_pos + validator_index) % n_validators
+}
+}
+
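
A worked example of the mapping, using a simplified threshold of n_validators / 3 (the actual systematic_threshold in the implementation may differ slightly); it shows how the holders of the systematic chunks rotate per core:

fn get_chunk_index(n_validators: u32, validator_index: u32, core_index: u32) -> u32 {
    let threshold = n_validators / 3; // simplified systematic_threshold
    let core_start_pos = core_index * threshold;
    (core_start_pos + validator_index) % n_validators
}

fn main() {
    for core in 0..3u32 {
        // Which validators end up holding the systematic chunks (indices 0..2)?
        let holders: Vec<u32> =
            (0..9u32).filter(|v| get_chunk_index(9, *v, core) < 3).collect();
        println!("core {core}: systematic chunks held by validators {holders:?}");
    }
    // core 0: systematic chunks held by validators [0, 1, 2]
    // core 1: systematic chunks held by validators [6, 7, 8]
    // core 2: systematic chunks held by validators [3, 4, 5]
}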

Network protocol

+

The request-response /req_chunk protocol will be bumped to a new version (from v1 to v2). +For v1, the request and response payloads are:

+
#![allow(unused)]
+fn main() {
+/// Request an availability chunk.
+pub struct ChunkFetchingRequest {
+	/// Hash of candidate we want a chunk for.
+	pub candidate_hash: CandidateHash,
+	/// The index of the chunk to fetch.
+	pub index: ValidatorIndex,
+}
+
+/// Receive a requested erasure chunk.
+pub enum ChunkFetchingResponse {
+	/// The requested chunk data.
+	Chunk(ChunkResponse),
+	/// Node was not in possession of the requested chunk.
+	NoSuchChunk,
+}
+
+/// This omits the chunk's index because it is already known by
+/// the requester and by not transmitting it, we ensure the requester is going to use his index
+/// value for validating the response, thus making sure he got what he requested.
+pub struct ChunkResponse {
+	/// The erasure-encoded chunk of data belonging to the candidate block.
+	pub chunk: Vec<u8>,
+	/// Proof for this chunk's branch in the Merkle tree.
+	pub proof: Proof,
+}
+}
+

Version 2 will add an index field to ChunkResponse:

+
#![allow(unused)]
+fn main() {
+#[derive(Debug, Clone, Encode, Decode)]
+pub struct ChunkResponse {
+	/// The erasure-encoded chunk of data belonging to the candidate block.
+	pub chunk: Vec<u8>,
+	/// Proof for this chunk's branch in the Merkle tree.
+	pub proof: Proof,
+	/// Chunk index.
+	pub index: ChunkIndex
+}
+}
+

An important thing to note is that in version 1, the ValidatorIndex value is always equal to the ChunkIndex. +Until the chunk rotation feature is enabled, this will also be true for version 2. However, after the feature is +enabled, this will generally not be true.

+

The requester will send the request to validator with index V. The responder will map the V validator index to the +C chunk index and respond with the C-th chunk. This mapping can be seamless, by having each validator store their +chunk by ValidatorIndex (just as before).

+

The protocol implementation MAY check the returned ChunkIndex against the expected mapping to ensure that +it received the right chunk. +In practice, this is desirable during availability-distribution and systematic chunk recovery. However, regular +recovery may not check this index, which is particularly useful when participating in disputes that don't allow +for easy access to the validator->chunk mapping. See Appendix A for more details.

+

In any case, the requester MUST verify the chunk's proof using the provided index.

+

During availability-recovery, given that the requester may not know (if the mapping is not available) whether the +received chunk corresponds to the requested validator index, it has to keep track of received chunk indices and ignore +duplicates. Such duplicates should be considered the same as an invalid/garbage response (drop it and move on to the +next validator - we can't punish via reputation changes, because we don't know which validator misbehaved).
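
A minimal sketch of that requester-side bookkeeping (illustrative only, not the actual node implementation):

use std::collections::HashSet;

// A repeated chunk index is treated like an invalid/garbage response: drop it
// and move on to the next validator, without issuing a reputation change.
#[derive(Default)]
struct ReceivedChunks {
    indices: HashSet<u32>,
}

impl ReceivedChunks {
    /// Returns false if this chunk index was already received.
    fn accept(&mut self, chunk_index: u32) -> bool {
        self.indices.insert(chunk_index)
    }
}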

+

Upgrade path

+

Step 1: Enabling new network protocol

+

In the beginning, both /req_chunk/1 and /req_chunk/2 will be supported, until all validators and +collators have upgraded to use the new version. V1 will be considered deprecated. During this step, the mapping will +still be 1:1 (ValidatorIndex == ChunkIndex), regardless of protocol. +Once all nodes are upgraded, a new release will be cut that removes the v1 protocol. Only once all nodes have upgraded +to this version will step 2 commence.

+

Step 2: Enabling the new validator->chunk mapping

+

Considering that the Validator->Chunk mapping is critical to para consensus, the change needs to be enacted atomically +via governance, only after all validators have upgraded the node to a version that is aware of this mapping, +functionality-wise. +It needs to be explicitly stated that after the governance enactment, validators that run older client versions that +don't support this mapping will not be able to participate in parachain consensus.

+

Additionally, an error will be logged when starting a validator with an older version, after the feature was enabled.

+

On the other hand, collators will not be required to upgrade in this step (but are still required to upgrade for step 1), +as regular chunk recovery will work as before, granted that version 1 of the networking protocol has been removed. +Note that collators only perform availability-recovery in rare, adversarial scenarios, so it is fine to not optimise for +this case and let them upgrade at their own pace.

+

To support enabling this feature via the runtime, we will use the NodeFeatures bitfield of the HostConfiguration +struct (added in https://github.com/paritytech/polkadot-sdk/pull/2177). Adding and enabling a feature +with this scheme does not require a runtime upgrade, but only a referendum that issues a +Configuration::set_node_feature extrinsic. Once the feature is enabled and new configuration is live, the +validator->chunk mapping ceases to be a 1:1 mapping and systematic recovery may begin.

+

Drawbacks

+
    +
  • Getting access to the core_index that used to be occupied by a candidate in some parts of the dispute protocol is +very complicated (See appendix A). This RFC assumes that availability-recovery processes initiated during +disputes will only use regular recovery, as before. This is acceptable since disputes are rare occurrences in practice +and is something that can be optimised later, if need be. Adding the core_index to the CandidateReceipt would +mitigate this problem and will likely be needed in the future for CoreJam and/or Elastic scaling. +Related discussion about updating CandidateReceipt
  • +
  • It's a breaking change that requires all validators and collators to upgrade their node version at least once.
  • +
+

Testing, Security, and Privacy

+

Extensive testing will be conducted - both automated and manual. +This proposal doesn't affect security or privacy.

+

Performance, Ergonomics, and Compatibility

+

Performance

+

This is a necessary data availability optimisation, as reed-solomon erasure coding has proven to be a top consumer of +CPU time in polkadot as we scale up the parachain block size and number of availability cores.

+

With this optimisation, preliminary performance results show that CPU time used for reed-solomon coding/decoding can be +halved and total POV recovery time decrease by 80% for large POVs. See more +here.

+

Ergonomics

+

Not applicable.

+

Compatibility

+

This is a breaking change. See upgrade path section above. +All validators and collators need to have upgraded their node versions before the feature will be enabled via a +governance call.

+

Prior Art and References

+

See comments on the tracking issue and the +in-progress PR

+

Unresolved Questions

+

Not applicable.

+ +

This enables future optimisations for the performance of availability recovery, such as retrieving batched systematic +chunks from backers/approval-checkers.

+

Appendix A

+

This appendix details the intricacies of getting access to the core index of a candidate in parity's polkadot node.

+

Here, core_index refers to the index of the core that a candidate was occupying while it was pending availability +(from backing to inclusion).

+

Availability-recovery can currently be triggered by the following phases in the polkadot protocol:

+
  1. During the approval voting process.
  2. By other collators of the same parachain.
  3. During disputes.

Getting the right core index for a candidate can be troublesome. Here's a breakdown of how different parts of the +node implementation can get access to it:

+
  1. The approval-voting process for a candidate begins after observing that the candidate was included. Therefore, the node has easy access to the block where the candidate got included (and also the core that it occupied).
  2. The pov_recovery task of the collators starts availability recovery in response to noticing a candidate getting backed, which enables easy access to the core index the candidate started occupying.
  3. Disputes may be initiated on a number of occasions:

     3.a. is initiated by the validator as a result of finding an invalid candidate while participating in the approval-voting protocol. In this case, availability-recovery is not needed, since the validator already issued their vote.

     3.b. is initiated by the validator noticing dispute votes recorded on-chain. In this case, we can safely assume that the backing event for that candidate has been recorded and kept in memory.

     3.c. is initiated as a result of getting a dispute statement from another validator. It is possible that the dispute is happening on a fork that was not yet imported by this validator, so the subsystem may not have seen this candidate being backed.

A naive attempt of solving 3.c would be to add a new version for the disputes request-response networking protocol. +Blindly passing the core index in the network payload would not work, since there is no way of validating that +the reported core_index was indeed the one occupied by the candidate at the respective relay parent.

+

Another attempt could be to include in the message the relay block hash where the candidate was included. +This information would be used in order to query the runtime API and retrieve the core index that the candidate was +occupying. However, considering it's part of an unimported fork, the validator cannot call a runtime API on that block.

+

Adding the core_index to the CandidateReceipt would solve this problem and would enable systematic recovery for all +dispute scenarios.

diff --git a/mdbook/text/0048-session-keys-runtime-api.html b/mdbook/text/0048-session-keys-runtime-api.html
new file mode 100644
index 000000000..85a84a69e
--- /dev/null
+++ b/mdbook/text/0048-session-keys-runtime-api.html
@@ -0,0 +1,317 @@

RFC-0048: Generate ownership proof for SessionKeys

+
+ + + +
| Start Date  | 13 November 2023 |
| Description | Change SessionKeys runtime api to support generating an ownership proof for the on chain registration. |
| Authors     | Bastian Köcher |
+
+

Summary

+

This RFC proposes to change the SessionKeys::generate_session_keys runtime api interface. This runtime api is used by validator operators to generate new session keys on a node. The public session keys are then registered manually on chain by the validator operator. Before this RFC it was not possible for the on chain logic to ensure that the account setting the public session keys is also in possession of the private session keys. To solve this, the RFC proposes to pass the account id of the account doing the registration on chain to generate_session_keys. Further, this RFC proposes to change the return value of generate_session_keys to not only return the public session keys, but also a proof of ownership for the private session keys. The validator operator will then need to send the public session keys and the proof together when registering new session keys on chain.

+

Motivation

+

When submitting the new public session keys to the on chain logic there doesn't exist any verification of possession of the private session keys. +This means that users can basically register any kind of public session keys on chain. While the on chain logic ensures that there are +no duplicate keys, someone could try to prevent others from registering new session keys by setting them first. While this wouldn't bring +the "attacker" any kind of advantage, more like disadvantages (potential slashes on their account), it could prevent someone from +e.g. changing its session key in the event of a private session key leak.

+

After this RFC this kind of attack would not be possible anymore, because the on chain logic can verify that the sending account +is in ownership of the private session keys.

+

Stakeholders

+
  • Polkadot runtime implementors
  • Polkadot node implementors
  • Validator operators

Explanation

+

We are first going to explain the proof format being used:

+
#![allow(unused)]
+fn main() {
+type Proof = (Signature, Signature, ..);
+}
+

The proof is a SCALE encoded tuple containing one signature from each private session key over the account_id. The actual type of each signature depends on the +corresponding session key cryptographic algorithm. The order of the signatures in +the proof is the same as the order of the session keys in the SessionKeys type +declared in the runtime.

+

The version of the SessionKeys needs to be bumped to 1 to reflect the changes to the +signature of SessionKeys_generate_session_keys:

+
#![allow(unused)]
+fn main() {
+pub struct OpaqueGeneratedSessionKeys {
+	pub keys: Vec<u8>,
+	pub proof: Vec<u8>,
+}
+
+fn SessionKeys_generate_session_keys(account_id: Vec<u8>, seed: Option<Vec<u8>>) -> OpaqueGeneratedSessionKeys;
+}
+

The default calling convention for runtime apis is applied, meaning the parameters are passed as a pointer to a SCALE encoded array together with the length of that array. The return value is the SCALE encoded return value, packed into a u64 as (array_ptr | length << 32). So, the actual exported function signature looks like:

+
#![allow(unused)]
+fn main() {
+fn SessionKeys_generate_session_keys(array: *const u8, len: usize) -> u64;
+}
+
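
For illustration, the packed u64 return value can be built and split like this (a sketch, not an existing helper):

// Illustrative helpers for the `array_ptr | length << 32` packing above.
fn pack(ptr: u32, len: u32) -> u64 {
    ptr as u64 | (len as u64) << 32
}

fn unpack(packed: u64) -> (u32, u32) {
    // (pointer, length)
    (packed as u32, (packed >> 32) as u32)
}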

The on chain logic for setting the SessionKeys needs to be changed as well. It +already gets the proof passed as Vec<u8>. This proof needs to be decoded to +the actual Proof type as explained above. The proof and the SCALE encoded +account_id of the sender are used to verify the ownership of the SessionKeys.
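
A hedged sketch of that on chain verification, assuming a placeholder trait for per-key signature verification (not the actual pallet code):

// Illustrative only: each private session key must have signed the SCALE
// encoded account id of the sender; signatures appear in the proof in the
// same order as the keys in the SessionKeys declaration.
trait VerifyOwnership {
    type Signature;
    fn verify(&self, message: &[u8], signature: &Self::Signature) -> bool;
}

fn verify_ownership_proof<K: VerifyOwnership>(
    public_keys: &[K],
    proof: &[K::Signature],
    encoded_account_id: &[u8],
) -> bool {
    public_keys.len() == proof.len()
        && public_keys
            .iter()
            .zip(proof.iter())
            .all(|(key, sig)| key.verify(encoded_account_id, sig))
}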

+

Drawbacks

+

Validator operators need to pass their account id when rotating their session keys in a node. +This will require updating some high level docs and making users familiar with the slightly changed ergonomics.

+

Testing, Security, and Privacy

+

Testing of the new changes only requires passing an appropriate owner for the current testing context. +The changes to the proof generation and verification got audited to ensure they are correct.

+

Performance, Ergonomics, and Compatibility

+

Performance

+

The session key generation is an offchain process and thus doesn't influence the performance of the +chain. Verifying the proof is done on chain as part of the transaction logic for setting the session keys. +The verification of the proof requires one signature verification per individual session key. As setting +the session keys happens quite rarely, it should not influence the overall system performance.

+

Ergonomics

+

The interfaces have been optimized to make it as easy as possible to generate the ownership proof.

+

Compatibility

+

Introduces a new version of the SessionKeys runtime api. Thus, nodes should be updated before +a runtime is enacted that contains these changes otherwise they will fail to generate session keys. +The RPC that exists around this runtime api needs to be updated to support passing the account id +and for returning the ownership proof alongside the public session keys.

+

UIs would need to be updated to support the new RPC and the changed on chain logic.

+

Prior Art and References

+

None.

+

Unresolved Questions

+

None.

+ +

Substrate implementation of the RFC.

diff --git a/mdbook/text/0050-fellowship-salaries.html b/mdbook/text/0050-fellowship-salaries.html
new file mode 100644
index 000000000..3029d20b8
--- /dev/null
+++ b/mdbook/text/0050-fellowship-salaries.html
@@ -0,0 +1,335 @@

RFC-0050: Fellowship Salaries

+
+ + + +
| Start Date  | 15 November 2023 |
| Description | Proposal to set rank-based Fellowship salary levels. |
| Authors     | Joe Petrowski, Gavin Wood |
+
+

Summary

+

The Fellowship Manifesto states that members should receive a monthly allowance on par with gross +income in OECD countries. This RFC proposes concrete amounts.

+

Motivation

+

One motivation for the Technical Fellowship is to provide an incentive mechanism that can induct and +retain technical talent for the continued progress of the network.

+

In order for members to uphold their commitment to the network, they should receive support to +ensure that their needs are met such that they have the time to dedicate to their work on Polkadot. +Given the high expectations of Fellows, it is reasonable to consider contributions and requirements +on par with a full-time job. Providing a livable wage to those making such contributions makes it +pragmatic to work full-time on Polkadot.

+

Note: Goals of the Fellowship, expectations for each Dan, and conditions for promotion and demotion +are all explained in the Manifesto. This RFC is only to propose concrete values for allowances.

+

Stakeholders

+
  • Fellowship members
  • Polkadot Treasury

Explanation

+

This RFC proposes agreeing on salaries relative to a single level, the III Dan. As such, changes to +the amount or asset used would only be on a single value, and all others would adjust relatively. A +III Dan is someone whose contributions match the expectations of a full-time individual contributor. +The salary at this level should be reasonably close to averages in OECD countries.

+
+ + + + + + + + + +
| Dan  | Factor |
| I    | 0.125  |
| II   | 0.25   |
| III  | 1      |
| IV   | 1.5    |
| V    | 2.0    |
| VI   | 2.5    |
| VII  | 2.5    |
| VIII | 2.5    |
| IX   | 2.5    |
+
+

Note that there is a sizable increase between II Dan (Proficient) and III Dan (Fellow). By the third +Dan, it is generally expected that one is working on Polkadot as their primary focus in a full-time +capacity.

+

Salary Asset

+

Although the Manifesto (Section 8) specifies a monthly allowance in DOT, this RFC proposes the use +of USDT instead. The allowance is meant to provide members stability in meeting their day-to-day +needs and recognize contributions. Using USDT provides more stability and less speculation.

+

This RFC proposes that a III Dan earn 80,000 USDT per year. The salary at this level is commensurate +with average salaries in OECD countries (note: 77,000 USD in the U.S., with an average engineer at +100,000 USD). The other ranks would thus earn:

+
+ + + + + + + + + +
| Dan  | Annual Salary |
| I    | 10,000  |
| II   | 20,000  |
| III  | 80,000  |
| IV   | 120,000 |
| V    | 160,000 |
| VI   | 200,000 |
| VII  | 200,000 |
| VIII | 200,000 |
| IX   | 200,000 |
+
+

The salary levels for Architects (IV, V, and VI Dan) are typical of senior engineers.

+

Allowances will be managed by the Salary pallet.

+

Projections

+

Based on the current membership, the maximum yearly and monthly costs are shown below:

+
+ + + + + + + + + +
| Dan   | Salary  | Members | Yearly    | Monthly |
| I     | 10,000  | 27      | 270,000   | 22,500  |
| II    | 20,000  | 11      | 220,000   | 18,333  |
| III   | 80,000  | 8       | 640,000   | 53,333  |
| IV    | 120,000 | 3       | 360,000   | 30,000  |
| V     | 160,000 | 5       | 800,000   | 66,667  |
| VI    | 200,000 | 3       | 600,000   | 50,000  |
| > VI  | 200,000 | 0       | 0         | 0       |
| Total |         |         | 2,890,000 | 240,833 |
+
+

Note that these are the maximum amounts; members may choose to take a passive (lower) level. On the +other hand, more people will likely join the Fellowship in the coming years.

+

Updates

+

Updates to these levels, whether relative ratios, the asset used, or the amount, shall be done via +RFC.

+

Drawbacks

+

By not using DOT for payment, the protocol relies on the stability of other assets and the ability +to acquire them. However, the asset of choice can be changed in the future.

+

Testing, Security, and Privacy

+

N/A.

+

Performance, Ergonomics, and Compatibility

+

Performance

+

N/A

+

Ergonomics

+

N/A

+

Compatibility

+

N/A

+

Prior Art and References

+ +

Unresolved Questions

+

None at present.

diff --git a/mdbook/text/0056-one-transaction-per-notification.html b/mdbook/text/0056-one-transaction-per-notification.html
new file mode 100644
index 000000000..b1132aa96
--- /dev/null
+++ b/mdbook/text/0056-one-transaction-per-notification.html
@@ -0,0 +1,292 @@

RFC-0056: Enforce only one transaction per notification

+
+ + + +
| Start Date  | 2023-11-30 |
| Description | Modify the transactions notifications protocol to always send only one transaction at a time |
| Authors     | Pierre Krieger |
+
+

Summary

+

When two peers connect to each other, they open (amongst other things) a so-called "notifications protocol" substream dedicated to gossiping transactions to each other.

+

Each notification on this substream currently consists in a SCALE-encoded Vec<Transaction> where Transaction is defined in the runtime.

+

This RFC proposes to modify the format of the notification to become (Compact(1), Transaction). This maintains backwards compatibility, as this new format decodes as a Vec of length equal to 1.

+

Motivation

+

There exists three motivations behind this change:

+
  • It is technically impossible to decode a SCALE-encoded Vec<Transaction> into a list of SCALE-encoded transactions without knowing how to decode a Transaction. That's because a Vec<Transaction> consists of several Transactions one after the other in memory, without any delimiter that indicates the end of a transaction and the start of the next. Unfortunately, the format of a Transaction is runtime-specific. This means that the code that receives notifications is necessarily tied to a specific runtime, and it is not possible to write runtime-agnostic code.
  • Notifications protocols are already designed to be optimized to send many items. Currently, when it comes to transactions, each item is a Vec<Transaction> that consists of multiple sub-items of type Transaction. This two-step hierarchy is completely unnecessary, and was originally written at a time when the networking protocol of Substrate didn't have proper multiplexing.
  • It makes the implementation much more straightforward by not having to repeat code related to back-pressure. See explanations below.

Stakeholders

+

Low-level developers.

+

Explanation

+

To give an example, if you send one notification with three transactions, the bytes that are sent on the wire are:

+
concat(
+    leb128(total-size-in-bytes-of-the-rest),
+    scale(compact(3)), scale(transaction1), scale(transaction2), scale(transaction3)
+)
+
+

But you can also send three notifications of one transaction each, in which case it is:

+
concat(
+    leb128(size(scale(transaction1)) + 1), scale(compact(1)), scale(transaction1),
+    leb128(size(scale(transaction2)) + 1), scale(compact(1)), scale(transaction2),
+    leb128(size(scale(transaction3)) + 1), scale(compact(1)), scale(transaction3)
+)
+
+

Right now the sender can choose which of the two encoding to use. This RFC proposes to make the second encoding mandatory.

+

The format of the notification would become a SCALE-encoded (Compact(1), Transaction). +A SCALE-compact encoded 1 is one byte of value 4. In other words, the format of the notification would become concat(&[4], scale_encoded_transaction). +This is equivalent to forcing the Vec<Transaction> to always have a length of 1, and I expect the Substrate implementation to simply modify the sending side to add a for loop that sends one notification per item in the Vec.
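
A minimal sketch of that sending-side loop (illustrative only; send_notification is a placeholder for the networking layer):

// Each transaction is sent in its own notification, prefixed with the SCALE
// compact encoding of the length 1 (a single byte of value 4).
fn send_transactions(
    scale_encoded_transactions: &[Vec<u8>],
    mut send_notification: impl FnMut(Vec<u8>),
) {
    for transaction in scale_encoded_transactions {
        let mut notification = Vec::with_capacity(1 + transaction.len());
        notification.push(4); // SCALE compact encoding of 1
        notification.extend_from_slice(transaction);
        send_notification(notification);
    }
}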

+

As explained in the motivation section, this allows extracting scale(transaction) items without having to know how to decode them.

+

By "flattening" the two-steps hierarchy, an implementation only needs to back-pressure individual notifications rather than back-pressure notifications and transactions within notifications.

+

Drawbacks

+

This RFC chooses to maintain backwards compatibility at the cost of introducing a very small wart (the Compact(1)).

+

An alternative could be to introduce a new version of the transactions notifications protocol that sends one Transaction per notification, but this is significantly more complicated to implement and can always be done later in case the Compact(1) is bothersome.

+

Testing, Security, and Privacy

+

Irrelevant.

+

Performance, Ergonomics, and Compatibility

+

Performance

+

Irrelevant.

+

Ergonomics

+

Irrelevant.

+

Compatibility

+

The change is backwards compatible if done in two steps: modify the sender to always send one transaction per notification, then, after a while, modify the receiver to enforce the new format.

+

Prior Art and References

+

Irrelevant.

+

Unresolved Questions

+

None.

+ +

None. This is a simple isolated change.

diff --git a/mdbook/text/0059-nodes-capabilities-discovery.html b/mdbook/text/0059-nodes-capabilities-discovery.html
new file mode 100644
index 000000000..5260d49b1
--- /dev/null
+++ b/mdbook/text/0059-nodes-capabilities-discovery.html
@@ -0,0 +1,311 @@

RFC-0059: Add a discovery mechanism for nodes based on their capabilities

+
+ + + +
| Start Date  | 2023-12-18 |
| Description | Nodes having certain capabilities register themselves in the DHT to be discoverable |
| Authors     | Pierre Krieger |
+
+

Summary

+

This RFC proposes to make the mechanism of RFC #8 more generic by introducing the concept of "capabilities".

+

Implementations can implement certain "capabilities", such as serving old block headers or being a parachain bootnode.

+

The discovery mechanism of RFC #8 is extended to be able to discover nodes of specific capabilities.

+

Motivation

+

The Polkadot peer-to-peer network is made of nodes. Not all these nodes are equal. Some nodes store only the headers of recent blocks, some nodes store all the block headers and bodies since the genesis, some nodes store the storage of all blocks since the genesis, and so on.

+

It is currently not possible to know ahead of time (without connecting to it and asking) which nodes have which data available, and it is not easily possible to build a list of nodes that have a specific piece of data available.

+

If you want to download for example the header of block 500, you have to connect to a randomly-chosen node, ask it for block 500, and if it says that it doesn't have the block, disconnect and try another randomly-chosen node. +In certain situations such as downloading the storage of old blocks, nodes that have the information are relatively rare, and finding through trial and error a node that has the data can take a long time.

+

This RFC attempts to solve this problem by giving the possibility to build a list of nodes that are capable of serving specific data.

+

Stakeholders

+

Low-level client developers. +People interested in accessing the archive of the chain.

+

Explanation

+

Reading RFC #8 first might help with comprehension, as this RFC is very similar.

+

Please keep in mind while reading that everything below applies for both relay chains and parachains, except mentioned otherwise.

+

Capabilities

+

This RFC defines a list of so-called capabilities:

+
    +
  • Head of chain provider. An implementation with this capability must be able to serve to other nodes block headers, block bodies, justifications, calls proofs, and storage proofs of "recent" (see below) blocks, and, for relay chains, to serve to other nodes warp sync proofs where the starting block is a session change block and must participate in Grandpa and Beefy gossip.
  • +
  • History provider. An implementation with this capability must be able to serve to other nodes block headers and block bodies of any block since the genesis, and must be able to serve to other nodes justifications of any session change block since the genesis up until and including their currently finalized block.
  • +
  • Archive provider. This capability is a superset of History provider. In addition to the requirements of History provider, an implementation with this capability must be able to serve call proofs and storage proof requests of any block since the genesis up until and including their currently finalized block.
  • +
  • Parachain bootnode (only for relay chains). An implementation with this capability must be able to serve the network request described in RFC 8.
  • +
+

More capabilities might be added in the future.

+

In the context of the head of chain provider, the word "recent" means: any not-finalized-yet block that is equal to or an ancestor of a block that it has announced through a block announce, and any finalized block whose height is superior to its current finalized block minus 16. +This does not include blocks that have been pruned because they're not a descendant of its current finalized block. In other words, blocks that aren't a descendant of the current finalized block can be thrown away. +A gap of blocks is required due to race conditions: when a node finalizes a block, it takes some time for its peers to be made aware of this, during which they might send requests concerning older blocks. The choice of the number of blocks in this gap is arbitrary.

+

Substrate is currently by default a head of chain provider. After it has finished warp syncing, it downloads the list of old blocks, after which it becomes a history provider. +If Substrate is instead configured as an archive node, then it downloads all blocks since the genesis and builds their state, after which it becomes an archive provider, history provider, and head of chain provider. +If blocks pruning is enabled and the chain is a relay chain, then Substrate unfortunately doesn't implement any of these capabilities, not even head of chain provider. This is considered as a bug that should be fixed, see https://github.com/paritytech/polkadot-sdk/issues/2733.

+

DHT provider registration

+

This RFC heavily relies on the functionalities of the Kademlia DHT already in use by Polkadot. You can find a link to the specification here.

+

Implementations that have the history provider capability should register themselves as providers under the key sha256(concat("history", randomness)).

+

Implementations that have the archive provider capability should register themselves as providers under the key sha256(concat("archive", randomness)).

+

Implementations that have the parachain bootnode capability should register themselves as provider under the key sha256(concat(scale_compact(para_id), randomness)), as described in RFC 8.

+

"Register themselves as providers" consists in sending ADD_PROVIDER requests to nodes close to the key, as described in the Content provider advertisement section of the specification.

+

The value of randomness can be found in the randomness field when calling the BabeApi_currentEpoch function.

+

In order to avoid downtime when the key changes, nodes should also register themselves under a secondary key that uses a value of randomness equal to the randomness field returned by BabeApi_nextEpoch.

+

Implementers should be aware that their implementation of Kademlia might already hash the key before XOR'ing it. The key is not meant to be hashed twice.
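A minimal sketch of deriving the capability keys described above, assuming the `sha2` crate and a 32-byte `randomness` value obtained from `BabeApi_currentEpoch` (or from `BabeApi_nextEpoch` for the secondary key); the function names are illustrative and not part of any existing API:

```rust
use sha2::{Digest, Sha256};

// Key under which "history provider" nodes register themselves.
fn history_provider_key(randomness: &[u8; 32]) -> [u8; 32] {
    let mut hasher = Sha256::new();
    hasher.update(b"history");   // capability prefix
    hasher.update(randomness);   // epoch randomness
    hasher.finalize().into()
}

// Key under which "archive provider" nodes register themselves.
fn archive_provider_key(randomness: &[u8; 32]) -> [u8; 32] {
    let mut hasher = Sha256::new();
    hasher.update(b"archive");
    hasher.update(randomness);
    hasher.finalize().into()
}
```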

+

Implementations must not register themselves if they don't fulfill the capability yet. For example, a node configured to be an archive node but that is still building its archive state in the background must register itself only after it has finished building its archive.

+

Secondary DHTs

+

Implementations that have the history provider capability must also participate in a secondary DHT that comprises only nodes with that capability. The protocol name of that secondary DHT must be /<genesis-hash>/kad/history.

+

Similarly, implementations that have the archive provider capability must also participate in a secondary DHT that comprises only nodes with that capability and whose protocol name is /<genesis-hash>/kad/archive.

+

Just like implementations must not register themselves if they don't fulfill their capability yet, they must also not participate in the secondary DHT if they don't fulfill their capability yet.

+

Head of the chain providers

+

Implementations that have the head of the chain provider capability do not register themselves as providers, but instead are the nodes that participate in the main DHT. In other words, they are the nodes that serve requests of the /<genesis_hash>/kad protocol.

+

Any implementation that isn't a head of the chain provider (read: light clients) must not participate in the main DHT. This is already presently the case.

+

Implementations must not participate in the main DHT if they don't fulfill the capability yet. For example, a node that is still in the process of warp syncing must not participate in the main DHT. However, assuming that warp syncing doesn't last more than a few seconds, it is acceptable to ignore this requirement in order to avoid complicating implementations too much.

+

Drawbacks

+

None that I can see.

+

Testing, Security, and Privacy

+

The content of this section is basically the same as the one in RFC 8.

+

This mechanism doesn't add or remove any security by itself, as it relies on existing mechanisms.

+

Due to the way Kademlia works, it would become the responsibility of the 20 Polkadot nodes whose sha256(peer_id) is closest to the key (described in the Explanation section) to store the list of nodes that have specific capabilities.
Furthermore, when a large number of providers are registered, only the providers closest to the key are kept, up to a certain implementation-defined limit.

+

For this reason, an attacker can abuse this mechanism by randomly generating libp2p PeerIds until they control the 20 entries closest to the key representing the target capability. They would then be in control of the list of nodes with that capability. While controlling this list is not in itself directly harmful, it could be used to facilitate eclipse attacks.

+

Because the key changes periodically and isn't predictable, and assuming that the Polkadot DHT is sufficiently large, it is not realistic for an attack like this to be maintained in the long term.

+

Performance, Ergonomics, and Compatibility

+

Performance

+

The DHT mechanism generally has a low overhead, especially given that publishing providers is done only every 24 hours.

+

Doing a Kademlia iterative query and then sending a provider record shouldn't consume more than around 50 kiB of bandwidth in total for the parachain bootnode.

+

Assuming 1000 nodes with a specific capability, the 20 Polkadot full nodes corresponding to that capability will each receive a sudden spike of a few megabytes of networking traffic when the key rotates. Again, this is relatively negligible. If this becomes a problem, a random delay can be added before a node registers itself as a provider of the key corresponding to BabeApi_nextEpoch.

+

Maybe the biggest uncertainty is the traffic that the 20 Polkadot full nodes will receive from light clients that want to know the nodes with a capability. If this ever becomes a problem, the value of 20 is an arbitrary constant that can be increased for more redundancy.

+

Ergonomics

+

Irrelevant.

+

Compatibility

+

Irrelevant.

+

Prior Art and References

+

Unknown.

+

Unresolved Questions

+

While it fundamentally doesn't change much about this RFC, using BabeApi_currentEpoch and BabeApi_nextEpoch might be inappropriate. I'm not familiar enough with good practices within the runtime to have an opinion here. Should it be an entirely new pallet?

Future Directions and Related Material

This RFC would make it possible to reliably discover archive nodes, which would make it possible to reliably send archive node requests, something that isn't currently possible. This could solve the problem of finding archive RPC node providers by migrating archive-related requests to the native peer-to-peer protocol rather than JSON-RPC.

+

If we ever decide to break backwards compatibility, we could divide the "history" and "archive" capabilities in two, between nodes capable of serving older blocks and nodes capable of serving newer blocks.
We could even add to the peer-to-peer network nodes that are only capable of serving older blocks (by reading from a database) but do not participate in the head of the chain, and that just exist for historical purposes.

diff --git a/mdbook/text/0078-merkleized-metadata.html b/mdbook/text/0078-merkleized-metadata.html
new file mode 100644
index 000000000..ff4a1f731
--- /dev/null
+++ b/mdbook/text/0078-merkleized-metadata.html
@@ -0,0 +1,564 @@
RFC-0078: Merkleized Metadata

+
| | |
| --------------- | --------------------------------------------------------------------------------------------- |
| **Start Date**  | 22 February 2024                                                                                |
| **Description** | Include merkleized metadata hash in extrinsic signature for trust-less metadata verification.  |
| **Authors**     | Zondax AG, Parity Technologies                                                                  |
+

Summary

+

To interact with chains in the Polkadot ecosystem, it is required to know how transactions are encoded and how to read state. To do this, Polkadot-SDK, the framework used by most of the chains in the Polkadot ecosystem, exposes metadata about the runtime. UIs, wallets, and others can use this metadata to interact with these chains. This makes the metadata a crucial piece of the transaction encoding, as users rely on the interacting software to encode transactions in the correct format.

+

It gets even more important when the user signs the transaction on an offline wallet, as the device by its nature cannot get access to the metadata without relying on the online wallet to provide it. This means that the offline wallet needs to trust an online party, rendering the security assumptions of offline devices moot.

+

This RFC proposes a way for offline wallets to leverage metadata within the constraints of these devices. The design idea is that the metadata is chunked and these chunks are put into a merkle tree. The root hash of this merkle tree represents the metadata. The offline wallets can use the root hash to decode transactions by getting proofs for the individual chunks of the metadata. This root hash is also included in the signed data of the transaction (but not sent as part of the transaction). The runtime then includes its known metadata root hash when verifying the transaction. If the metadata root hash known by the runtime differs from the one that the offline wallet used, it very likely means that the online wallet provided some fake data and the verification of the transaction fails.

+

Users depend on offline wallets to correctly display decoded transactions before signing. With merkleized metadata, they can be assured of the transaction's legitimacy, as incorrect transactions will be rejected by the runtime.

+

Motivation

+

Polkadot's innovative design (both relay chain and parachains) gives developers the ability to upgrade their network as frequently as they need. These systems manage to keep integrations working after upgrades with the help of FRAME Metadata. This metadata, which is on the order of half a MiB for most Polkadot-SDK chains, completely describes chain interfaces and properties. Securing this metadata is key for users to be able to interact with Polkadot-SDK chains in the expected way.

+

On the other hand, offline wallets provide a secure way for blockchain users to hold their own keys (some do a better job than others). These devices seldom get upgraded, are usually built for one particular network, and have very small internal memories. Currently, in the Polkadot ecosystem there is no secure way for these offline devices to know the latest metadata of the Polkadot-SDK chain they are interacting with. This results in a plethora of similar yet slightly different offline wallets for the different Polkadot-SDK chains, as well as the difficulty of keeping them regularly updated, thus not fully leveraging Polkadot-SDK's unique forkless upgrade feature.

+

The two main reasons why this is not possible today are:

+
    +
  1. Metadata is too large for offline devices. Currently, Polkadot-SDK metadata is on average 500 KiB, which is more than most widely adopted offline devices can hold.
  2. Metadata is not authenticated. Even if there was enough space on offline devices to hold the metadata, the user would be trusting the entity providing this metadata to the hardware wallet. In the Polkadot ecosystem, this is how Polkadot Vault currently works.
+

This RFC proposes a solution to make FRAME Metadata compatible with offline signers in a secure way. As it leverages FRAME Metadata, it does not only ensure that offline devices can always keep up to date with every FRAME based chain, but also that every offline wallet will be compatible with all FRAME based chains, avoiding the need of per-chain implementations.

+

Requirements

+
    +
  1. Metadata's integrity MUST be preserved. If any compromise were to happen, extrinsics sent with compromised metadata SHOULD fail.
  2. Metadata information that could be used in signable extrinsic decoding MAY be included in the digest, yet its inclusion MUST be indicated in signed extensions.
  3. Digest MUST be deterministic with respect to metadata.
  4. Digest MUST be cryptographically strong against pre-image attacks, both first (finding an input that results in a given digest) and second (finding an input that results in the same digest as some other given input).
  5. Extra-metadata information necessary for extrinsic decoding and constant within a runtime version MUST be included in the digest.
  6. It SHOULD be possible to quickly withdraw the offline signing mechanism without access to cold signing devices.
  7. Digest format SHOULD be versioned.
  8. Work necessary for proving metadata authenticity MAY be omitted at the discretion of the signer device design (to support automation tools).
+

Reduce metadata size

+

Metadata should be stripped of parts that are not necessary to parse a signable extrinsic, and then separated into a finite set of self-descriptive chunks. Thus, a subset of the chunks necessary for decoding and rendering a signable extrinsic could be sent, possibly in small portions (ultimately, one at a time), to cold devices together with the proof.

+
    +
  1. A single chunk with its proof payload SHOULD fit within a few kB;
  2. The chunk handling mechanism SHOULD support chunks being sent in any order without memory utilization overhead;
  3. Unused enum variants MUST be stripped (this has a great impact on transmitted metadata size; examples: the era enum, the enum with all calls for call batching).
+

Stakeholders

+
    +
  • Runtime implementors
  • +
  • UI/wallet implementors
  • +
  • Offline wallet implementors
  • +
+

The idea for this RFC was brought up by runtime implementors and was extensively discussed with offline wallet implementors. It was designed in such a way that it can work easily with the existing offline wallet solutions in the Polkadot ecosystem.

+

Explanation

+

The FRAME metadata provides a wide range of information about a FRAME based runtime. It contains information about the pallets, the calls per pallet, the storage entries per pallet, runtime APIs, and type information about most of the types that are used in the runtime. For decoding extrinsics on an offline wallet, what is mainly required is type information. Most of the other information in the FRAME metadata is actually not required for decoding extrinsics and thus it can be removed. Therefore, the following is a proposal on a custom representation of the metadata and how this custom metadata is chunked, ensuring that only the needed chunks required for decoding a particular extrinsic are sent to the offline wallet. The necessary information to transform the FRAME metadata type information into the type information presented in this RFC will be provided. However, not every single detail on how to convert from FRAME metadata into the RFC type information is described.

+

First, the MetadataDigest is introduced. After that, ExtrinsicMetadata is covered, and finally the actual format of the type information. Then the pruning of unrelated type information is covered, and how to generate the TypeRefs. In the last step, the merkle tree calculation is explained.

+

Metadata digest

+

The metadata digest is the compact representation of the metadata. The hash of this digest is the metadata hash. Below the type declaration of the Hash type and the MetadataDigest itself can be found:

+
type Hash = [u8; 32];

enum MetadataDigest {
    #[index = 1]
    V1 {
        type_information_tree_root: Hash,
        extrinsic_metadata_hash: Hash,
        spec_version: u32,
        spec_name: String,
        base58_prefix: u16,
        decimals: u8,
        token_symbol: String,
    },
}
+

The Hash is 32 bytes long and blake3 is used for calculating it. The hash of the MetadataDigest is calculated by blake3(SCALE(MetadataDigest)). Therefore, MetadataDigest is at first SCALE encoded, and then those bytes are hashed.

+

The MetadataDigest itself is represented as an enum. This is done to make it future proof, because a SCALE encoded enum is prefixed by the index of the variant. This index represents the version of the digest. As seen above, there is no index zero and it starts directly with one. Version one of the digest contains the following elements:

+
    +
  • type_information_tree_root: The root of the merkleized type information tree.
  • +
  • extrinsic_metadata_hash: The hash of the extrinsic metadata.
  • +
  • spec_version: The spec_version of the runtime as found in the RuntimeVersion when generating the metadata. While this information can also be found in the metadata, it is hidden in a big blob of data. To avoid transferring this big blob of data, we directly add this information here.
  • +
  • spec_name: Similar to spec_version, but being the spec_name found in the RuntimeVersion.
  • +
  • base58_prefix: The SS58 address-format prefix used for address encoding.
  • +
  • decimals: The number of decimals for the token.
  • +
  • token_symbol: The symbol of the token.
  • +
+
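As an illustration of the hashing and versioning rules above, here is a small, self-contained sketch assuming the `parity-scale-codec` and `blake3` crates; the field values are placeholders and not part of the RFC:

```rust
use parity_scale_codec::Encode;

#[derive(Encode)]
enum MetadataDigest {
    #[codec(index = 1)]
    V1 {
        type_information_tree_root: [u8; 32],
        extrinsic_metadata_hash: [u8; 32],
        spec_version: u32,
        spec_name: String,
        base58_prefix: u16,
        decimals: u8,
        token_symbol: String,
    },
}

fn main() {
    // Placeholder values for illustration only.
    let digest = MetadataDigest::V1 {
        type_information_tree_root: [0u8; 32],
        extrinsic_metadata_hash: [0u8; 32],
        spec_version: 1_000_000,
        spec_name: "polkadot".into(),
        base58_prefix: 0,
        decimals: 10,
        token_symbol: "DOT".into(),
    };

    let encoded = digest.encode();
    // The first byte of the SCALE encoding is the variant index, i.e. the digest version.
    assert_eq!(encoded[0], 1);

    // The metadata hash is the blake3 hash of the SCALE-encoded digest.
    let metadata_hash: [u8; 32] = *blake3::hash(&encoded).as_bytes();
    println!("metadata hash: {:?}", metadata_hash);
}
```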

Extrinsic metadata

+

For decoding an extrinsic, more information on what types are being used is required. The actual format of the extrinsic is the format as described in the Polkadot specification. The metadata for an extrinsic is as follows:

+
struct ExtrinsicMetadata {
    version: u8,
    address_ty: TypeRef,
    call_ty: TypeRef,
    signature_ty: TypeRef,
    signed_extensions: Vec<SignedExtensionMetadata>,
}

struct SignedExtensionMetadata {
    identifier: String,
    included_in_extrinsic: TypeRef,
    included_in_signed_data: TypeRef,
}
+

To begin with, TypeRef is a unique identifier for a type as found in the type information. Using this TypeRef, it is possible to look up the type in the type information tree. More details on this process can be found in the section Generating TypeRef.

+

The actual ExtrinsicMetadata contains the following information:

+
    +
  • version: The version of the extrinsic format. As of writing this, the latest version is 4.
  • +
  • address_ty: The address type used by the chain.
  • +
  • call_ty: The call type used by the chain. The call in FRAME based runtimes represents the type of transaction being executed on chain. It references the actual function to execute and the parameters of this function.
  • +
  • signature_ty: The signature type used by the chain.
  • +
  • signed_extensions: FRAME based runtimes can extend the base extrinsic with extra information. This extra information that is put into an extrinsic is called "signed extensions". These extensions offer the runtime developer the possibility to include data directly into the extrinsic, like nonce, tip, amongst others. This means that this data is sent alongside the extrinsic to the runtime. The other possibility these extensions offer is to include extra information only in the signed data that is signed by the sender. This means that this data needs to be known by both sides, the signing side and the verification side. An example of this kind of data is the genesis hash that ensures that extrinsics are unique per chain. Another example is the metadata hash itself that will also be included in the signed data. The offline wallets need to know which signed extensions are present in the chain and this is communicated to them using this field.
  • +
+

The SignedExtensionMetadata provides information about a signed extension:

+
    +
  • identifier: The identifier of the signed extension. An identifier is required to be unique in the Polkadot ecosystem, as otherwise extrinsics may be built incorrectly.
  • +
  • included_in_extrinsic: The type that will be included in the extrinsic by this signed extension.
  • +
  • included_in_signed_data: The type that will be included in the signed data by this signed extension.
  • +
+

Type Information

+

As SCALE is not self-descriptive like JSON, a decoder always needs to know the format of a type to decode it properly. This is where the type information comes into play. The format of the extrinsic is fixed as described above, and ExtrinsicMetadata provides information on which type information is required for which part of the extrinsic. So, offline wallets only need access to the actual type information. It is a requirement that the type information can be chunked into logical pieces to reduce the amount of data that is sent to the offline wallets for decoding the extrinsics. So, the type information is structured in the following way:

+
struct Type {
    path: Vec<String>,
    type_def: TypeDef,
    type_id: Compact<u32>,
}

enum TypeDef {
    Composite(Vec<Field>),
    Enumeration(EnumerationVariant),
    Sequence(TypeRef),
    Array(Array),
    Tuple(Vec<TypeRef>),
    BitSequence(BitSequence),
}

struct Field {
    name: Option<String>,
    ty: TypeRef,
    type_name: Option<String>,
}

struct Array {
    len: u32,
    type_param: TypeRef,
}

struct BitSequence {
    num_bytes: u8,
    least_significant_bit_first: bool,
}

struct EnumerationVariant {
    name: String,
    fields: Vec<Field>,
    index: Compact<u32>,
}

enum TypeRef {
    Bool,
    Char,
    Str,
    U8,
    U16,
    U32,
    U64,
    U128,
    U256,
    I8,
    I16,
    I32,
    I64,
    I128,
    I256,
    CompactU8,
    CompactU16,
    CompactU32,
    CompactU64,
    CompactU128,
    CompactU256,
    Void,
    PerId(Compact<u32>),
}
+

The Type declares the structure of a type. The type has the following fields:

+
    +
  • path: A path declares the position of a type locally to the place where it is defined. The path is not globally unique, this means that there can be multiple types with the same path.
  • +
  • type_def: The high-level type definition, e.g. the type is a composition of fields where each field has a type, the type is a composition of different types as tuple etc.
  • +
  • type_id: The unique identifier of this type.
  • +
+

Every Type is composed of multiple different types. Each of these "sub types" can reference either a full Type again or reference one of the primitive types. This is where TypeRef becomes relevant as the type referencing information. To reference a Type in the type information, a unique identifier is used. As primitive types can be represented using a single byte, they are not put as separate types into the type information. Instead the primitive types are directly part of TypeRef to not require the overhead of referencing them in an extra Type. The special primitive type Void represents a type that encodes to nothing and can be decoded from nothing. As FRAME doesn't support Compact as primitive type it requires a more involved implementation to convert a FRAME type to a Compact primitive type. SCALE only supports u8, u16, u32, u64 and u128 as Compact which maps onto the primitive type declaration in the RFC. One special case is a Compact that wraps an empty Tuple which is expressed as primitive type Void.

+

The TypeDef variants have the following meaning:

+
    +
  • Composite: A struct like type that is composed of multiple different fields. Each Field can have its own type. The order of the fields is significant. A Composite with no fields is expressed as primitive type Void.
  • +
  • Enumeration: Stores an EnumerationVariant. An EnumerationVariant is a struct that is described by a name, an index, and a vector of Fields, each of which can have its own type. Typically Enumerations have more than just one variant, and in those cases the Enumeration will appear multiple times in the type information, each time with a different variant. Enumerations can become quite large, yet usually only one variant is required to decode a type, so this design brings optimizations and helps reduce the size of the proof. An Enumeration with no variants is expressed as the primitive type Void.
  • +
  • Sequence: A vector like type wrapping the given type.
  • +
  • BitSequence: A vector storing bits. num_bytes represents the size in bytes of the internal storage. If least_significant_bit_first is true the least significant bit is first, otherwise the most significant bit is first.
  • +
  • Array: A fixed-length array of a specific type.
  • +
  • Tuple: A composition of multiple types. A Tuple that is composed of no types is expressed as primitive type Void.
  • +
+

Using the type information together with the SCALE specification provides enough information on how to decode types.

+

Prune unrelated Types

+

The FRAME metadata contains not only the type information for decoding extrinsics, but also type information about storage types. The scope of the RFC is only about decoding transactions on offline wallets. Thus, a lot of type information can be pruned. To know which type information is required to decode all possible extrinsics, ExtrinsicMetadata has been defined. The extrinsic metadata contains all the types that define the layout of an extrinsic. Therefore, all the types that are accessible from the types declared in the extrinsic metadata can be collected. To collect all accessible types, it is necessary to recursively iterate over all types, starting from the types in ExtrinsicMetadata. Note that some types are accessible, but they don't appear in the final type information and thus can be pruned as well. These are for example inner types of Compact or the types referenced by BitSequence. The result of collecting these accessible types is a list of all the types that are required to decode each possible extrinsic.
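A minimal sketch of the reachability collection described above (not the authoritative algorithm); `referenced_type_ids` is a hypothetical callback returning the type ids a given type refers to directly:

```rust
use std::collections::BTreeSet;

/// Collect every type id reachable from the given roots (e.g. the types
/// referenced by `ExtrinsicMetadata`), following references recursively.
fn collect_accessible_types(
    roots: &[u32],
    referenced_type_ids: impl Fn(u32) -> Vec<u32>,
) -> BTreeSet<u32> {
    let mut seen = BTreeSet::new();
    let mut stack: Vec<u32> = roots.to_vec();
    while let Some(id) = stack.pop() {
        // Only descend into a type the first time we see it.
        if seen.insert(id) {
            stack.extend(referenced_type_ids(id));
        }
    }
    seen
}
```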

+

Generating TypeRef

+

Each TypeRef basically references one of the following types:

+
    +
  • One of the primitive types. All primitive types can be represented by 1 byte and thus, they are directly part of the TypeRef itself to remove an extra level of indirection.
  • +
  • A Type using its unique identifier.
  • +
+

In FRAME metadata a primitive type is represented like any other type. So, the first step is to remove all the primitive-only types from the list of types that were generated in the previous section. The resulting list of types is sorted using the id provided by FRAME metadata. In the last step the TypeRefs are created. Each reference to a primitive type is replaced by one of the corresponding TypeRef primitive type variants, and every other reference is replaced by the type's unique identifier. The unique identifier of a type is the index of the type in our sorted list. For Enumerations all variants have the same unique identifier, even though they are represented as multiple type information entries. All variants need to have the same unique identifier because the reference doesn't know which variant will appear in the actual encoded data.

+
let mut pruned_types = get_pruned_types();

// Remove the primitive-only types; they are represented directly in `TypeRef`.
for ty in pruned_types {
    if ty.is_primitive_type() {
        pruned_types.remove(ty);
    }
}

pruned_types.sort(|left, right|
    if left.frame_metadata_id() == right.frame_metadata_id() {
        left.variant_index() < right.variant_index()
    } else {
        left.frame_metadata_id() < right.frame_metadata_id()
    }
);

fn generate_type_ref(ty, ty_list) -> TypeRef {
    if ty.is_primitive_type() {
        return TypeRef::primitive_from_ty(ty);
    }

    TypeRef::from_id(
        // Determine the id by using the position of the type in the
        // list of unique frame metadata ids.
        ty_list.position_by_frame_metadata_id(ty.frame_metadata_id())
    )
}

fn replace_all_sub_types_with_type_refs(ty, ty_list) -> Type {
    for sub_ty in ty.sub_types() {
        replace_all_sub_types_with_type_refs(sub_ty, ty_list);
        sub_ty = generate_type_ref(sub_ty, ty_list)
    }

    ty
}

let mut final_ty_list = Vec::new();
for ty in pruned_types {
    final_ty_list.push(replace_all_sub_types_with_type_refs(ty, ty_list))
}
+

Building the Merkle Tree Root

+

A complete binary merkle tree with blake3 as the hashing function is proposed. For building the merkle tree root, the initial data has to be hashed as a first step. This initial data is referred to as the leaves of the merkle tree. The leaves need to be sorted to make the tree root deterministic. The type information is sorted using the unique identifiers, and for Enumerations, the variants are sorted using their index. After sorting and hashing all leaves, two leaves at a time are combined into one hash. The combination of these two hashes is referred to as a node.

+
// `leaves` is a double-ended queue (e.g. a VecDeque) seeded with the sorted leaf hashes.
let mut nodes = leaves;
while nodes.len() > 1 {
    let right = nodes.pop_back();
    let left = nodes.pop_back();
    nodes.push_front(blake3::hash(scale::encode((left, right))));
}

let merkle_tree_root = if nodes.is_empty() { [0u8; 32] } else { nodes.back() };
+

The merkle_tree_root in the end is the last node left in the list of nodes. If there are no nodes left in the list, it means that the initial data set was empty. In this case, the all-zeros hash is used to represent the empty tree.

+

Building a tree with 5 leaves (numbered 0 to 4):

+
nodes: 0 1 2 3 4

nodes: [3, 4] 0 1 2

nodes: [1, 2] [3, 4] 0

nodes: [[3, 4], 0] [1, 2]

nodes: [[[3, 4], 0], [1, 2]]
+
+

The resulting tree visualized:

+
     [root]
     /    \
    *      *
   / \    / \
  *   0  1   2
 / \
3   4
+
+

Building a tree with 6 leaves (numbered 0 to 5):

+
nodes: 0 1 2 3 4 5

nodes: [4, 5] 0 1 2 3

nodes: [2, 3] [4, 5] 0 1

nodes: [0, 1] [2, 3] [4, 5]

nodes: [[2, 3], [4, 5]] [0, 1]

nodes: [[[2, 3], [4, 5]], [0, 1]]
+
+

The resulting tree visualized:

+
       [root]
      /      \
     *        *
   /   \     / \
  *     *   0   1
 / \   / \
2   3 4   5
+
+

Inclusion in an Extrinsic

+

To ensure that the offline wallet used the correct metadata to show the extrinsic to the user the metadata hash needs to be included in the extrinsic. The metadata hash is generated by hashing the SCALE encoded MetadataDigest:

+
blake3::hash(SCALE::encode(MetadataDigest::V1 { .. }))
+

For the runtime the metadata hash is generated at compile time. Wallets will have to generate the hash using the FRAME metadata.

+

The signing side should control whether it wants to add the metadata hash or omit it. To accomplish this, one extra byte is added to the extrinsic itself. If this byte is 0 the metadata hash is not required, and if the byte is 1 the metadata hash is added using V1 of the MetadataDigest. This leaves room for future versions of the MetadataDigest format. When the metadata hash should be included, it is only added to the data that is signed. This brings the advantage of not requiring the inclusion of 32 bytes in the extrinsic itself, because the runtime knows the metadata hash as well and can add it to the signed data if required. This is similar to the genesis hash, although that one is not added conditionally to the signed data.
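The following is an illustrative sketch of the mode-byte behaviour described above (names and the helper are assumptions, not the actual runtime code): the single byte travels in the extrinsic, while the 32-byte hash is only appended to the data that gets signed.

```rust
/// Build the payload to sign. `call_and_extension_data` stands in for the
/// already-encoded call plus the other signed-extension data.
fn signed_payload(
    call_and_extension_data: &[u8],
    mode: u8,                // 0 = no metadata hash, 1 = MetadataDigest::V1
    metadata_hash: [u8; 32], // known independently by both wallet and runtime
) -> Vec<u8> {
    let mut payload = call_and_extension_data.to_vec();
    payload.push(mode);
    if mode == 1 {
        // Only the signed data grows by 32 bytes; the transmitted extrinsic
        // carries just the single mode byte.
        payload.extend_from_slice(&metadata_hash);
    }
    payload
}
```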

+

Drawbacks

+

The chunking may not be the optimal case for every kind of offline wallet.

+

Testing, Security, and Privacy

+

All implementations are required to strictly follow the RFC to generate the metadata hash. This includes which hash function to use and how to construct the metadata types tree. So, all implementations are following the same security criteria. As the chains will calculate the metadata hash at compile time, the build process needs to be trusted. However, this is already a solved problem in the Polkadot ecosystem by using reproducible builds. So, anyone can rebuild a chain runtime to ensure that a proposal is actually containing the changes as advertised.

+

Implementations can also be tested easily against each other by taking some metadata and ensuring that they all come to the same metadata hash.

+

Privacy of users should also not be impacted. This assumes that wallets will generate the metadata hash locally and don't leak any information to third party services about which chunks a user will send to their offline wallet. Besides that, there is no leak of private information as getting the raw metadata from the chain is an operation that is done by almost everyone.

+

Performance, Ergonomics, and Compatibility

+

Performance

+

There should be no measurable impact on performance to Polkadot or any other chain using this feature. The metadata root hash is calculated at compile time and at runtime it is optionally used when checking the signature of a transaction. This means that at runtime no performance heavy operations are done.

+

Ergonomics & Compatibility

+

The proposal alters the way a transaction is built, signed, and verified. So, this imposes some required changes on any developer who wants to construct transactions for Polkadot or any chain using this feature. As the developer can pass 0 to disable the verification of the metadata root hash, the feature can easily be ignored.

+

Prior Art and References

+

RFC 46 produced by the Alzymologist team is a previous work reference that goes in this direction as well.

+

On other ecosystems, there are other solutions to the problem of trusted signing. Cosmos for example has a standardized way of transforming a transaction into some textual representation and this textual representation is included in the signed data. Basically achieving the same as what the RFC proposes, but it requires that for every transaction applied in a block, every node in the network always has to generate this textual representation to ensure the transaction signature is valid.

+

Unresolved Questions

+

None.

Future Directions and Related Material
    +
  • Does it work with all kinds of offline wallets?
  • +
  • Generic types currently appear multiple times in the metadata, once per instantiation. It may be useful to have each generic type appear only once in the metadata and declare the generic parameters at their instantiation.
  • +
  • The metadata doesn't contain any kind of semantic information. This means that the offline wallet, for example, doesn't know what a balance is. The current solution for this problem is to match on the type name, but this isn't a sustainable solution.
  • +
  • MetadataDigest only provides one token and decimal. However, a lot of chains support multiple tokens for paying fees, etc. This is probably more a question of having semantic information, as mentioned above.
  • +
diff --git a/mdbook/text/0084-general-transaction-extrinsic-format.html b/mdbook/text/0084-general-transaction-extrinsic-format.html
new file mode 100644
index 000000000..4d5f1ba5f
--- /dev/null
+++ b/mdbook/text/0084-general-transaction-extrinsic-format.html
@@ -0,0 +1,271 @@

RFC-0084: General transactions in extrinsic format

+
| | |
| --------------- | --------------------------------------------------------------- |
| **Start Date**  | 12 March 2024                                                    |
| **Description** | Support more extrinsic types by updating the extrinsic format    |
| **Authors**     | George Pisaltu                                                   |
+

Summary

+

This RFC proposes a change to the extrinsic format to incorporate a new transaction type, the "general" transaction.

+

Motivation

+

"General" transactions, a new type of transaction that this RFC aims to support, are transactions which obey the runtime's extensions and have according extension data yet do not have hard-coded signatures. They are first described in Extrinsic Horizon and supported in 3685. They enable users to authorize origins in new, more flexible ways (e.g. ZK proofs, mutations over pre-authenticated origins). As of now, all transactions are limited to the account signing model for origin authorization and any additional origin changes happen in extrinsic logic, which cannot leverage the validation process of extensions.

+

An example of a use case for such an extension would be sponsoring the transaction fee for some other user. A new extension would be put in place to verify that a part of the initial payload was signed by the author under whom the extrinsic should run and to change the origin accordingly, while the payment for the whole transaction would be handled by a sponsor's account. A POC for this can be found in 3712.

+

The new "general" transaction type would coexist with both current transaction types for a while and, therefore, the current number of supported transaction types, capped at 2, is insufficient. A new extrinsic type must be introduced alongside the current signed and unsigned types. Currently, an encoded extrinsic's first byte indicate the type of extrinsic using the most significant bit - 0 for unsigned, 1 for signed - and the 7 following bits indicate the extrinsic format version, which has been equal to 4 for a long time.

+

By taking one bit from the extrinsic format version encoding, we can support 2 additional extrinsic types while also having a minimal impact on our capability to extend and change the extrinsic format in the future.

+

Stakeholders

+
    +
  • Runtime users
  • +
  • Runtime devs
  • +
  • Wallet devs
  • +
+

Explanation

+

An extrinsic is currently encoded as one byte to identify the extrinsic type and version. This RFC aims to change the interpretation of this byte regarding the reserved bits for the extrinsic type and version. In the following explanation, bits represented using T make up the extrinsic type and bits represented using V make up the extrinsic version.

+

Currently, the bit allocation within the leading encoded byte is 0bTVVV_VVVV. In practice in the Polkadot ecosystem, the leading byte would be 0bT000_0100 as the version has been equal to 4 for a long time.

+

This RFC proposes for the bit allocation to change to 0bTTVV_VVVV. As a result, the extrinsic format version will be bumped to 5 and the extrinsic type bit representation would change as follows:

+
| bits | type     |
| ---- | -------- |
| 00   | unsigned |
| 10   | signed   |
| 01   | reserved |
| 11   | reserved |
+
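A hedged sketch of how the leading byte could be packed and unpacked under the proposed 0bTTVV_VVVV layout (constants and function names are illustrative only, not part of any existing API):

```rust
const VERSION_MASK: u8 = 0b0011_1111; // low 6 bits: extrinsic format version
const TYPE_SHIFT: u8 = 6;             // high 2 bits: extrinsic type

fn encode_leading_byte(extrinsic_type: u8, version: u8) -> u8 {
    debug_assert!(extrinsic_type < 4 && version <= VERSION_MASK);
    (extrinsic_type << TYPE_SHIFT) | (version & VERSION_MASK)
}

fn decode_leading_byte(byte: u8) -> (u8, u8) {
    // Returns (extrinsic type, extrinsic format version).
    (byte >> TYPE_SHIFT, byte & VERSION_MASK)
}
```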

Drawbacks

+

This change would reduce the maximum possible extrinsic format version from the current 127 to 63. In order to bypass this new, lower limit, the extrinsic format would have to change again.

+

Testing, Security, and Privacy

+

There is no impact on testing, security or privacy.

+

Performance, Ergonomics, and Compatibility

+

This change would allow Polkadot to support new types of transactions, with the specific "general" transaction type in mind at the time of writing this proposal.

+

Performance

+

There is no performance impact.

+

Ergonomics

+

The impact to developers and end-users is minimal as it would just be a bitmask update on their part for parsing the extrinsic type along with the version.

+

Compatibility

+

This change breaks backwards compatibility because any transaction that is neither signed nor unsigned, but a new transaction type, would be interpreted as having a future extrinsic format version.

+

Prior Art and References

+

The design was originally proposed in the TransactionExtension PR, which is also the motivation behind this effort.

+

Unresolved Questions

+

None.

Future Directions and Related Material

Following this change, the "general" transaction type will be introduced as part of the Extrinsic Horizon effort, which will shape future work.

diff --git a/text/0001-agile-coretime.html b/text/0001-agile-coretime.html
new file mode 100644
index 000000000..ad3d589e6
--- /dev/null
+++ b/text/0001-agile-coretime.html
@@ -0,0 +1,736 @@

RFC-1: Agile Coretime

+
| | |
| --------------- | ---------------------------------------------------------------------------------------------- |
| **Start Date**  | 30 June 2023                                                                                     |
| **Description** | Agile periodic-sale-based model for assigning Coretime on the Polkadot Ubiquitous Computer.     |
| **Authors**     | Gavin Wood                                                                                       |
+

Summary

+

This proposes a periodic, sale-based method for assigning Polkadot Coretime, the analogue of "block space" within the Polkadot Network. The method takes into account the need for long-term capital expenditure planning for teams building on Polkadot, yet also provides a means to allow Polkadot to capture long-term value in the resource which it sells. It supports the possibility of building rich and dynamic secondary markets to optimize resource allocation and largely avoids the need for parameterization.

+

Motivation

+

Present System

+

The Polkadot Ubiquitous Computer, or just Polkadot UC, represents the public service provided by the Polkadot Network. It is a trust-free, WebAssembly-based, multicore, internet-native omnipresent virtual machine which is highly resilient to interference and corruption.

+

The present system of allocating the limited resources of the Polkadot Ubiquitous Computer is through a process known as parachain slot auctions. This is a parachain-centric paradigm whereby a single core is long-term allocated to a single parachain which itself implies a Substrate/Cumulus-based chain secured and connected via the Relay-chain. Slot auctions are on-chain candle auctions which proceed for several days and result in the core being assigned to the parachain for six months at a time up to 24 months in advance. Practically speaking, we only see two year periods being bid upon and leased.

+

Funds behind the bids made in the slot auctions are merely locked, they are not consumed or paid and become unlocked and returned to the bidder on expiry of the lease period. A means of sharing the deposit trustlessly known as a crowdloan is available allowing token holders to contribute to the overall deposit of a chain without any counterparty risk.

+

Problems

+

The present system is based on a model of one-core-per-parachain. This is a legacy interpretation of the Polkadot platform and is not a reflection of its present capabilities. By restricting ownership and usage to this model, more dynamic and resource-efficient means of utilizing the Polkadot Ubiquitous Computer are lost.

+

More specifically, it is impossible to lease out cores at anything less than six months, and apparently unrealistic to do so at anything less than two years. This removes the ability to dynamically manage the underlying resource, and generally experimentation, iteration and innovation suffer. It bakes into the platform an assumption of permanence for anything deployed into it and restricts the market's ability to find a more optimal allocation of the finite resource.

+

There is no ability to determine capital requirements for hosting a parachain beyond two years from the point of its initial deployment onto Polkadot. While it would be unreasonable to have perfect and indefinite cost predictions for any real-world platform, not having any clarity whatsoever beyond "market rates" two years hence can be a very off-putting prospect for teams to buy into.

+

However, quite possibly the most substantial problem is both a perceived and often real high barrier to entry of the Polkadot ecosystem. By forcing innovators to either raise seven-figure sums through investors or appeal to the wider token-holding community, Polkadot makes it difficult for a small band of innovators to deploy their technology into Polkadot. While not being actually permissioned, it is also far from the barrierless, permissionless ideal which an innovation platform such as Polkadot should be striving for.

+

Requirements

+
    +
  1. The solution SHOULD provide an acceptable value-capture mechanism for the Polkadot network.
  2. The solution SHOULD allow parachains and other projects deployed onto the Polkadot UC to make long-term capital expenditure predictions for the cost of ongoing deployment.
  3. The solution SHOULD minimize the barriers to entry in the ecosystem.
  4. The solution SHOULD work well when the Polkadot UC has up to 1,000 cores.
  5. The solution SHOULD work when the number of cores which the Polkadot UC can support changes over time.
  6. The solution SHOULD facilitate the optimal allocation of work to cores of the Polkadot UC, including by facilitating the trade of regular core assignment at various intervals and for various spans.
  7. The solution SHOULD avoid creating additional dependencies on functionality which the Relay-chain need not strictly provide for the delivery of the Polkadot UC.
+

Furthermore, the design SHOULD be implementable and deployable in a timely fashion; three months from the acceptance of this RFC should not be unreasonable.

+

Stakeholders

+

Primary stakeholder sets are:

+
    +
  • Protocol researchers and developers, largely represented by the Polkadot Fellowship and Parity Technologies' Engineering division.
  • +
  • Polkadot Parachain teams both present and future, and their users.
  • +
  • Polkadot DOT token holders.
  • +
+

Socialization:

+

The essentials of this proposal were presented at Polkadot Decoded 2023 Copenhagen on the Main Stage. A small amount of socialization at the Parachain Summit preceded it and some substantial discussion followed it. The Parity Ecosystem team is currently soliciting views from ecosystem teams who would be key stakeholders.

+

Explanation

+

Overview

+

Upon implementation of this proposal, the parachain-centric slot auctions and associated crowdloans cease. Instead, Coretime on the Polkadot UC is sold by the Polkadot System in two separate formats: Bulk Coretime and Instantaneous Coretime.

+

When a Polkadot Core is utilized, we say it is dedicated to a Task rather than a "parachain". The Task to which a Core is dedicated may change at every Relay-chain block and while one predominant type of Task is to secure a Cumulus-based blockchain (i.e. a parachain), other types of Tasks are envisioned.

+

Bulk Coretime is sold periodically on a specialised system chain known as the Coretime-chain and allocated in advance of its usage, whereas Instantaneous Coretime is sold on the Relay-chain immediately prior to usage on a block-by-block basis.

+

This proposal does not fix what should be done with revenue from sales of Coretime and leaves it for a further RFC process.

+

Owners of Bulk Coretime are tracked on the Coretime-chain and the ownership status and properties of the owned Coretime are exposed over XCM as a non-fungible asset.

+

At the request of the owner, the Coretime-chain allows a single Bulk Coretime asset, known as a Region, to be used in various ways including transferal to another owner, allocated to a particular task (e.g. a parachain) or placed in the Instantaneous Coretime Pool. Regions can also be split out, either into non-overlapping sub-spans or exactly-overlapping spans with less regularity.

+

The Coretime-Chain periodically instructs the Relay-chain to assign its cores to alternative tasks as and when Core allocations change due to new Regions coming into effect.

+

Renewal and Migration

+

There is a renewal system which allows a Bulk Coretime assignment of a single core to be renewed unchanged with a known price increase from month to month. Renewals are processed in a period prior to regular purchases, effectively giving them precedence over a fixed number of cores available.

+

Renewals are only enabled when a core's assignment does not include an Instantaneous Coretime allocation and has not been split into shorter segments.

+

Thus, renewals are designed to ensure only that committed parachains get some guarantees about price for predicting future costs. This price-capped renewal system only allows cores to be reused for their same tasks from month to month. In any other context, Bulk Coretime would need to be purchased regularly.

+

As a migration mechanism, pre-existing leases (from the legacy lease/slots/crowdloan framework) are initialized into the Coretime-chain and cores assigned to them prior to Bulk Coretime sales. In the sale where the lease expires, the system offers a renewal, as above, to allow a priority sale of Bulk Coretime and ensure that the Parachain suffers no downtime when transitioning from the legacy framework.

+

Instantaneous Coretime

+

Processing of Instantaneous Coretime happens in part on the Polkadot Relay-chain. Credit is purchased on the Coretime-chain for regular DOT tokens, and this results in a DOT-denominated Instantaneous Coretime Credit account on the Relay-chain being credited for the same amount.

+

Though the Instantaneous Coretime Credit account records a balance for an account identifier (very likely controlled by a collator), it is non-transferable and non-refundable. It can only be consumed in order to purchase some Instantaneous Coretime with immediate availability.

+

The Relay-chain reports this usage back to the Coretime-chain in order to allow it to reward the providers of the underlying Coretime, either the Polkadot System or owners of Bulk Coretime who contributed to the Instantaneous Coretime Pool.

+

Specifically the Relay-chain is expected to be responsible for:

+
    +
  • holding non-transferable, non-refundable DOT-denominated Instantaneous Coretime Credit balance information.
  • +
  • setting and adjusting the price of Instantaneous Coretime based on usage.
  • +
  • allowing collators to consume their Instantaneous Coretime Credit at the current pricing in exchange for the ability to schedule one PoV for near-immediate usage.
  • +
  • ensuring the Coretime-Chain has timely accounting information on Instantaneous Coretime Sales revenue.
  • +
+

Coretime-chain

+

The Coretime-chain is a new system parachain. It has the responsibility of providing the Relay-chain via UMP with information of:

+
    +
  • The number of cores which should be made available.
  • +
  • Which tasks should be running on which cores and in what ratios.
  • +
  • Accounting information for Instantaneous Coretime Credit.
  • +
+

It also expects information from the Relay-chain via DMP:

+
    +
  • The number of cores available to be scheduled.
  • +
  • Account information on Instantaneous Coretime Sales.
  • +
+

The specific interface is properly described in RFC-5.

+

Detail

+

Parameters

+

This proposal includes a number of parameters which need not necessarily be fixed. Their usage is explained below, but their values are suggested or specified in the later section Parameter Values.

+

Reservations and Leases

+

The Coretime-chain includes some governance-set reservations of Coretime; these cover every System-chain. Additionally, governance is expected to initialize details of the pre-existing leased chains.

+

Regions

+

A Region is an assignable period of Coretime with a known regularity.

+

All Regions are associated with a unique Core Index, to identify which core the assignment of which ownership of the Region controls.

+

All Regions are also associated with a Core Mask, an 80-bit bitmap, to denote the regularity at which it may be scheduled on the core. If all bits are set in the Core Mask value, it is said to be Complete. 80 is selected since this results in the size of the datatype used to identify any Region of Polkadot Coretime being a very convenient 128 bits. Additionally, if TIMESLICE (the number of Relay-chain blocks in a Timeslice) is 80, then a single bit in the Core Mask bitmap represents exactly one Core for one Relay-chain block in one Timeslice.
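To make the 128-bit arithmetic explicit, here is an illustrative layout (field and type names are assumptions, not the authoritative Coretime-chain types): a 32-bit timeslice, a 16-bit core index and an 80-bit mask add up to exactly 128 bits.

```rust
type Timeslice = u32;     // 32 bits
type CoreIndex = u16;     // 16 bits
type CoreMask = [u8; 10]; // 80 bits, one bit per Relay-chain block in a Timeslice

// 32 + 16 + 80 = 128 bits in total.
struct RegionId {
    begin: Timeslice,
    core: CoreIndex,
    mask: CoreMask,
}
```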

+

All Regions have a span. Region spans are quantized into periods of TIMESLICE blocks; BULK_PERIOD is a whole multiple of TIMESLICE.

+

The Timeslice type is a u32 which can be multiplied by TIMESLICE to give a BlockNumber value representing the same quantity in terms of Relay-chain blocks.

+

Regions can be tasked to a TaskId (aka ParaId) or pooled into the Instantaneous Coretime Pool. This process can be Provisional or Final. If done only provisionally or not at all then they are fresh and have an Owner which is able to manipulate them further including reassignment. Once Final, then all ownership information is discarded and they cannot be manipulated further. Renewal is not possible when only provisionally tasked/pooled.

+

Bulk Sales

+

A sale of Bulk Coretime occurs on the Coretime-chain every BULK_PERIOD blocks.

+

In every sale, a BULK_LIMIT of individual Regions are offered for sale.

+

Each Region offered for sale has a different Core Index, ensuring that they each represent an independently allocatable resource on the Polkadot UC.

+

The Regions offered for sale have the same span: they last exactly BULK_PERIOD blocks, and begin immediately following the span of the previous Sale's Regions. The Regions offered for sale also have the complete, non-interlaced, Core Mask.

+

The Sale Period ends immediately as soon as span of the Coretime Regions that are being sold begins. At this point, the next Sale Price is set according to the previous Sale Price together with the number of Regions sold compared to the desired and maximum amount of Regions to be sold. See Price Setting for additional detail on this point.

+

Following the end of the previous Sale Period, there is an Interlude Period lasting INTERLUDE_PERIOD of blocks. After this period is elapsed, regular purchasing begins with the Purchasing Period.

+

This is designed to give at least two weeks worth of time for the purchased regions to be partitioned, interlaced, traded and allocated.

+

The Interlude

+

The Interlude period is a period prior to Regular Purchasing where renewals are allowed to happen. This has the effect of ensuring existing long-term tasks/parachains have a chance to secure their Bulk Coretime for a well-known price prior to general sales.

+

Regular Purchasing

+

Any account may purchase Regions of Bulk Coretime if they have the appropriate funds in place during the Purchasing Period, which is from INTERLUDE_PERIOD blocks after the end of the previous sale until the beginning of the Region of the Bulk Coretime which is for sale as long as there are Regions of Bulk Coretime left for sale (i.e. no more than BULK_LIMIT have already been sold in the Bulk Coretime Sale). The Purchasing Period is thus roughly BULK_PERIOD - INTERLUDE_PERIOD blocks in length.

+

The Sale Price varies during an initial portion of the Purchasing Period called the Leadin Period and then stays stable for the remainder. This initial portion is LEADIN_PERIOD blocks in duration. During the Leadin Period the price decreases towards the Sale Price, which it lands at by the end of the Leadin Period. The actual curve by which the price starts and descends to the Sale Price is outside the scope of this RFC, though a basic suggestion is provided in the Price Setting Notes, below.
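The RFC deliberately leaves the exact curve open; purely as an illustration, a linear descent over the Leadin Period could look like the following sketch (all names are assumptions, not the proposed mechanism itself):

```rust
/// Price during the Leadin Period, descending linearly from `start_price`
/// to `sale_price`, after which it stays at `sale_price`.
/// Assumes `start_price >= sale_price`.
fn leadin_price(start_price: u128, sale_price: u128, blocks_elapsed: u32, leadin_period: u32) -> u128 {
    if blocks_elapsed >= leadin_period {
        return sale_price;
    }
    let remaining = (leadin_period - blocks_elapsed) as u128;
    sale_price + (start_price - sale_price) * remaining / leadin_period as u128
}
```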

+

Renewals

+

At any time when there are remaining Regions of Bulk Coretime to be sold, including during the Interlude Period, certain Bulk Coretime assignments may be Renewed. This is similar to a purchase in that funds must be paid and it consumes one of the Regions of Bulk Coretime which would otherwise be placed for purchase. However, there are two key differences.

+

Firstly, the price paid is the lesser of two values: the previous purchase/renewal price increased by RENEWAL_PRICE_CAP, and the current (or initial, if sales have yet to begin) regular Sale Price.
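As a sketch of this rule (assuming, purely for illustration, that RENEWAL_PRICE_CAP is expressed as a percentage increase; names are hypothetical):

```rust
fn renewal_price(previous_price: u128, renewal_price_cap_percent: u128, sale_price: u128) -> u128 {
    // Previous price bumped by the cap, but never more than the regular Sale Price.
    let capped = previous_price + previous_price * renewal_price_cap_percent / 100;
    capped.min(sale_price)
}
```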

+

Secondly, the purchased Region comes preassigned with exactly the same workload as before. It cannot be traded, repartitioned, interlaced or exchanged. As such unlike regular purchasing the Region never has an owner.

+

Renewal is only possible for either cores which have been assigned as a result of a previous renewal, which are migrating from legacy slot leases, or which fill their Bulk Coretime with an unsegmented, fully and finally assigned workload which does not include placement in the Instantaneous Coretime Pool. The renewed workload will be the same as this initial workload.

+

Manipulation

+

Regions may be manipulated in various ways by their owner:

+
    +
  1. Transferred in ownership.
  2. Partitioned into quantized, non-overlapping segments of Bulk Coretime with the same ownership.
  3. Interlaced into multiple Regions over the same period whose eventual assignments take turns to be scheduled.
  4. Assigned to a single, specific task (identified by TaskId aka ParaId). This may be either provisional or final.
  5. Pooled into the Instantaneous Coretime Pool, in return for a pro-rata amount of the revenue from the Instantaneous Coretime Sales over its period.
+

Enactment

+

Specific functions of the Coretime-chain

+

Several functions of the Coretime-chain SHALL be exposed through dispatchables and/or a nonfungible trait implementation integrated into XCM:

+

1. transfer

+

Regions may have their ownership transferred.

+

A transfer(region: RegionId, new_owner: AccountId) dispatchable shall have the effect of altering the current owner of the Region identified by region from the signed origin to new_owner.

+

An implementation of the nonfungible trait SHOULD include equivalent functionality. RegionId SHOULD be used for the AssetInstance value.

+

2. partition

+

Regions may be split apart into two non-overlapping interior Regions of the same Core Mask which together concatenate to the original Region.

+

A partition(region: RegionId, pivot: Timeslice) dispatchable SHALL have the effect of removing the Region identified by region and adding two new Regions of the same owner and Core Mask. One new Region will begin at the same point of the old Region but end at pivot timeslices into the Region, whereas the other will begin at this point and end at the end point of the original Region.

+

Also:

+
  • The owner field of region must be equal to the Signed origin.
  • pivot must equal neither the begin nor end fields of the region.
+

3. interlace

+

Regions may be decomposed into two Regions of the same span whose eventual assignments take turns on the core by virtue of having complementary Core Masks.

+

An interlace(region: RegionId, mask: CoreMask) dispatchable shall have the effect of removing the Region identified by region and creating two new Regions. The new Regions will each have the same span and owner of the original Region, but one Region will have a Core Mask equal to mask and the other will have Core Mask equal to the XOR of mask and the Core Mask of the original Region.

+

Also:

+
  • The owner field of region must be equal to the Signed origin.
  • mask must have some bits set AND must not equal the Core Mask of the old Region AND must only have bits set which are also set in the old Region's Core Mask.
+

4. assign

+

Regions may be assigned to a core.

+

An assign(region: RegionId, target: TaskId, finality: Finality) dispatchable shall have the effect of placing an item in the workplan corresponding to the region's properties and assigned to the target task.

+

If the region's end has already passed (taking into account any advance notice requirements) then this operation is a no-op. If the region's beginning has already passed, then it is effectively altered to become the next schedulable timeslice.

+

finality may have the value of either Final or Provisional. If Final, then the operation is free, the region record is removed entirely from storage and renewal may be possible: if the Region's span is the entire BULK_PERIOD, then the Coretime-chain records in storage that the allocation happened during this period in order to facilitate the possibility for a renewal. (Renewal only becomes possible when the full Core Mask of a core is finally assigned for the full BULK_PERIOD.)

+

Also:

+
  • The owner field of region must be equal to the Signed origin.
+

5. pool

+

Regions may be consumed in exchange for a pro rata portion of the Instantaneous Coretime Sales Revenue from its period and regularity.

+

A pool(region: RegionId, beneficiary: AccountId, finality: Finality) dispatchable shall have the effect of placing an item in the workplan corresponding to the region's properties and assigned to the Instantaneous Coretime Pool. The details of the region will be recorded in order to allow for a pro rata share of the Instantaneous Coretime Sales Revenue at the time of the Region relative to any other providers in the Pool.

+

If the region's end has already passed (taking into account any advance notice requirements) then this operation is a no-op. If the region's beginning has already passed, then it is effectively altered to become the next schedulable timeslice.

+

finality may have the value of either Final or Provisional. If Final, then the operation is free and the region record is removed entirely from storage.

+

Also:

+
  • The owner field of region must be equal to the Signed origin.
+

6. Purchases

+

A dispatchable purchase(price_limit: Balance) shall be provided. Any account may call purchase to purchase Bulk Coretime at the maximum price of price_limit.

+

This may be called successfully only:

+
  1. during the regular Purchasing Period;
  2. when the caller is a Signed origin and their account balance is reducible by the current sale price;
  3. when the current sale price is no greater than price_limit; and
  4. when the number of cores already sold is less than BULK_LIMIT.
+

If successful, the caller's account balance is reduced by the current sale price and a new Region item for the following Bulk Coretime span is issued with the owner equal to the caller's account.

+

7. Renewals

+

A dispatchable renew(core: CoreIndex) shall be provided. Any account may call renew to purchase Bulk Coretime and renew an active allocation for the given core.

+

This may be called during the Interlude Period as well as the regular Purchasing Period and has the same effect as purchase followed by assign, except that:

+
  1. The price of the sale is the Renewal Price (see next).
  2. The Region is allocated exactly as the given core is currently allocated for the present Region.
+

Renewal is only valid where a Region's span is assigned to Tasks (not placed in the Instantaneous Coretime Pool) for the entire unsplit BULK_PERIOD over all of the Core Mask and with Finality. There are thus three possibilities of a renewal being allowed:

+
  1. Purchased unsplit Coretime with final assignment to tasks over the full Core Mask.
  2. Renewed Coretime.
  3. A legacy lease which is ending.
+

Renewal Price

+

The Renewal Price is the minimum of the current regular Sale Price (or the initial Sale Price if in the Interlude Period) and:

+
  • If the workload being renewed came to be through the Purchase and Assignment of Bulk Coretime, then the price paid during that Purchase operation.
  • If the workload being renewed was previously renewed, then the price paid during this previous Renewal operation plus RENEWAL_PRICE_CAP.
  • If the workload being renewed is a migration from a legacy slot auction lease, then the nominal price for a Regular Purchase (outside of the Lead-in Period) of the Sale during which the legacy lease expires.
+

8. Instantaneous Coretime Credits

+

A dispatchable purchase_credit(amount: Balance, beneficiary: RelayChainAccountId) shall be provided. Any account with at least amount spendable funds may call this. This increases the Instantaneous Coretime Credit balance on the Relay-chain of the beneficiary by the given amount.

+

This Credit is consumable on the Relay-chain as part of the Task scheduling system and its specifics are out of the scope of this proposal. When consumed, revenue is recorded and provided to the Coretime-chain for proper distribution. The API for doing this is specified in RFC-5.

+

Notes on the Instantaneous Coretime Market

+

For an efficient market to form around the provision of Bulk-purchased Cores into the pool of cores available for Instantaneous Coretime purchase, it is crucial to ensure that price changes for the purchase of Instantaneous Coretime are reflected well in the revenues of private Coretime providers during the same period.

+

In order to ensure this, then it is crucial that Instantaneous Coretime, once purchased, cannot be held indefinitely prior to eventual use since, if this were the case, a nefarious collator could purchase Coretime when cheap and utilize it some time later when expensive and deprive private Coretime providers of their revenue.

+

It must therefore be assumed that Instantaneous Coretime, once purchased, has a definite and short "shelf-life", after which it becomes unusable. This incentivizes collators to avoid purchasing Coretime unless they expect to utilize it imminently and thus helps create an efficient market-feedback mechanism whereby a higher price will actually result in material revenues for private Coretime providers who contribute to the pool of Cores available to service Instantaneous Coretime purchases.

+

Notes on Economics

+

The specific pricing mechanisms are out of scope for the present proposal. Proposals on economics should be properly described and discussed in another RFC. However, for the sake of completeness, I provide some basic illustration of how price setting could potentially work.

+

Bulk Price Progression

+

The present proposal assumes the existence of a price-setting mechanism which takes into account several parameters:

+
  • OLD_PRICE: The price of the previous sale.
  • BULK_TARGET: the target number of cores to be purchased as Bulk Coretime Regions or renewed during the previous sale.
  • BULK_LIMIT: the maximum number of cores which could have been purchased/renewed during the previous sale.
  • CORES_SOLD: the actual number of cores purchased/renewed in the previous sale.
  • SELLOUT_PRICE: the price at which the most recent Bulk Coretime was purchased (not renewed) prior to selling more cores than BULK_TARGET (or immediately after, if none were purchased before). This may not have a value if no Bulk Coretime was purchased.
+

In general we would expect the price to increase the closer CORES_SOLD gets to BULK_LIMIT and to decrease the closer it gets to zero. If it is exactly equal to BULK_TARGET, then we would expect the price to remain the same.

+

In the edge case that no cores were purchased yet more cores were sold (through renewals) than the target, then we would also avoid altering the price.

+

A simple example of this would be the formula:

+
IF SELLOUT_PRICE == NULL AND CORES_SOLD > BULK_TARGET THEN
+    RETURN OLD_PRICE
+END IF
+EFFECTIVE_PRICE := IF CORES_SOLD > BULK_TARGET THEN
+    SELLOUT_PRICE
+ELSE
+    OLD_PRICE
+END IF
+NEW_PRICE := IF CORES_SOLD < BULK_TARGET THEN
+    EFFECTIVE_PRICE * MAX(CORES_SOLD, 1) / BULK_TARGET
+ELSE
+    EFFECTIVE_PRICE + EFFECTIVE_PRICE *
+        (CORES_SOLD - BULK_TARGET) / (BULK_LIMIT - BULK_TARGET)
+END IF
+
+

This exists only as a trivial example to demonstrate a basic solution exists, and should not be intended as a concrete proposal.
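For concreteness, the same adjustment could be sketched in Rust as below. This is a minimal sketch of the pseudocode above, assuming plain integer arithmetic and that BULK_LIMIT > BULK_TARGET; the function and parameter names are illustrative only, not part of the proposal.

```rust
/// Trivial example of a between-sales Bulk price adjustment, mirroring the
/// pseudocode above. Not a concrete proposal.
fn new_bulk_price(
    old_price: u128,
    sellout_price: Option<u128>,
    cores_sold: u32,
    bulk_target: u32,
    bulk_limit: u32,
) -> u128 {
    // Edge case: oversold purely through renewals, so no sellout price exists.
    let effective_price = if cores_sold > bulk_target {
        match sellout_price {
            Some(p) => p,
            None => return old_price,
        }
    } else {
        old_price
    };
    if cores_sold < bulk_target {
        // Undersold: scale the price down in proportion to cores sold (never by zero).
        effective_price * cores_sold.max(1) as u128 / bulk_target as u128
    } else {
        // At or above target: scale the price up as sales approach BULK_LIMIT.
        effective_price
            + effective_price * (cores_sold - bulk_target) as u128
                / (bulk_limit - bulk_target) as u128
    }
}
```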

+

Intra-Leadin Price-decrease

+

During the Leadin Period of a sale, the effective price starts higher than the Sale Price and falls to end at the Sale Price at the end of the Leadin Period. The price can thus be defined as a simple factor above one on which the Sale Price is multiplied. A function which returns this factor would accept a factor between zero and one specifying the portion of the Leadin Period which has passed.

+

Thus we assume SALE_PRICE, then we can define PRICE as:

+
PRICE := SALE_PRICE * FACTOR((NOW - LEADIN_BEGIN) / LEADIN_PERIOD)
+
+

We can define a very simple progression where the price decreases monotonically from double the Sale Price at the beginning of the Leadin Period.

+
FACTOR(T) := 2 - T
+
+
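A sketch of this particular progression, using floating-point arithmetic purely for brevity (a runtime implementation would use a fixed-point type instead); the names are illustrative assumptions:

```rust
/// Illustrative Leadin price: starts at 2x the Sale Price and falls linearly
/// to the Sale Price over LEADIN_PERIOD blocks.
fn leadin_price(sale_price: u128, now: u32, leadin_begin: u32, leadin_period: u32) -> u128 {
    let elapsed = now.saturating_sub(leadin_begin).min(leadin_period);
    let t = elapsed as f64 / leadin_period as f64; // portion of the Leadin Period passed
    let factor = 2.0 - t; // FACTOR(T) := 2 - T
    (sale_price as f64 * factor) as u128
}
```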

Parameter Values

+

Parameters are either suggested or specified. If suggested, it is non-binding and the proposal should not be judged on the value since other RFCs and/or the governance mechanism of Polkadot is expected to specify/maintain it. If specified, then the proposal should be judged on the merit of the value as-is.

+
| Name              | Value                    |           |
| ----------------- | ------------------------ | --------- |
| BULK_PERIOD       | 28 * DAYS                | specified |
| INTERLUDE_PERIOD  | 7 * DAYS                 | specified |
| LEADIN_PERIOD     | 7 * DAYS                 | specified |
| TIMESLICE         | 8 * MINUTES              | specified |
| BULK_TARGET       | 30                       | suggested |
| BULK_LIMIT        | 45                       | suggested |
| RENEWAL_PRICE_CAP | Perbill::from_percent(2) | suggested |
+
+

Instantaneous Price Progression

+

This proposal assumes the existence of a Relay-chain-based price-setting mechanism for the Instantaneous Coretime Market which alters from block to block, taking into account several parameters: the last price, the size of the Instantaneous Coretime Pool (in terms of cores per Relay-chain block) and the amount of Instantaneous Coretime waiting for processing (in terms of Core-blocks queued).

+

The ideal situation is to have the size of the Instantaneous Coretime Pool be equal to some factor of the Instantaneous Coretime waiting. This allows all Instantaneous Coretime sales to be processed with some limited latency while giving limited flexibility over ordering to the Relay-chain apparatus which is needed for efficient operation.

+

If we set a factor of three, and thus aim to retain a queue of Instantaneous Coretime Sales which can be processed within three Relay-chain blocks, then we would increase the price if the queue goes above three times the amount of cores available, and decrease if it goes under.

+

Let us assume the values OLD_PRICE, FACTOR, QUEUE_SIZE and POOL_SIZE. A simple definition of the NEW_PRICE would be thus:

+
NEW_PRICE := IF QUEUE_SIZE < POOL_SIZE * FACTOR THEN
+    OLD_PRICE * 0.95
+ELSE
+    OLD_PRICE / 0.95
+END IF
+
+

This exists only as a trivial example to demonstrate a basic solution exists, and should not be intended as a concrete proposal.

+

Notes on Types

+

This exists only as a short illustration of a potential technical implementation and should not be treated as anything more.

+

Regions

+

This data schema achieves a number of goals:

+
  • Coretime can be individually traded at a level of a single usage of a single core.
  • Coretime Regions, of arbitrary span and up to 1/80th interlacing, can be exposed as NFTs and exchanged.
  • Any Coretime Region can be contributed to the Instantaneous Coretime Pool.
  • Unlimited number of individual Coretime contributors to the Instantaneous Coretime Pool. (Effectively limited only in number of cores and interlacing level; with current values this would allow 80,000 individual payees per timeslice.)
  • All keys are self-describing.
  • Workload to communicate core (re-)assignments is well-bounded and low in weight.
  • All mandatory bookkeeping workload is well-bounded in weight.
+
+type Timeslice = u32; // 80 block amounts.
+type CoreIndex = u16;
+type CoreMask = [u8; 10]; // 80-bit bitmap.
+
+// 128-bit (16 bytes)
+struct RegionId {
+    begin: Timeslice,
+    core: CoreIndex,
+    mask: CoreMask,
+}
+// 296-bit (37 bytes)
+struct RegionRecord {
+    end: Timeslice,
+    owner: AccountId,
+}
+
+map Regions = Map<RegionId, RegionRecord>;
+
+// 40-bit (5 bytes). Could be 32-bit with a more specialised type.
+enum CoreTask {
+    Off,
+    Assigned { target: TaskId },
+    InstaPool,
+}
+// 120-bit (15 bytes). Could be 14 bytes with a specialised 32-bit `CoreTask`.
+struct ScheduleItem {
+    mask: CoreMask, // 80 bit
+    task: CoreTask, // 40 bit
+}
+
+/// The work we plan on having each core do at a particular time in the future.
+type Workplan = Map<(Timeslice, CoreIndex), BoundedVec<ScheduleItem, 80>>;
+/// The current workload of each core. This gets updated with workplan as timeslices pass.
+type Workload = Map<CoreIndex, BoundedVec<ScheduleItem, 80>>;
+
+enum Contributor {
+    System,
+    Private(AccountId),
+}
+
+struct ContributionRecord {
+    begin: Timeslice,
+    end: Timeslice,
+    core: CoreIndex,
+    mask: CoreMask,
+    payee: Contributor,
+}
+type InstaPoolContribution = Map<ContributionRecord, ()>;
+
+type SignedTotalMaskBits = i32; // Signed, so pool-size deltas (e.g. -16) can be recorded.
+type InstaPoolIo = Map<Timeslice, SignedTotalMaskBits>;
+
+type PoolSize = Value<TotalMaskBits>;
+
+/// Counter for the total CoreMask which could be dedicated to a pool. `u32` so we don't ever get
+/// an overflow.
+type TotalMaskBits = u32;
+struct InstaPoolHistoryRecord {
+    total_contributions: TotalMaskBits,
+    maybe_payout: Option<Balance>,
+}
+/// Total InstaPool rewards for each Timeslice and the number of core Mask which contributed.
+type InstaPoolHistory = Map<Timeslice, InstaPoolHistoryRecord>;
+

CoreMask tracks unique "parts" of a single core. It is used with interlacing in order to give a unique identifier to each component of any possible interlacing configuration of a core, allowing for simple self-describing keys for all core ownership and allocation information. It also allows for each core's workload to be tracked and updated progressively, keeping ongoing compute costs well-bounded and low.
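As a concrete illustration of how a CoreMask is split under interlacing, here is a minimal sketch enforcing the rules of the interlace dispatchable described above; the helper name and Option-based signature are assumptions of this sketch, not part of the schema.

```rust
type CoreMask = [u8; 10]; // 80-bit bitmap, as in the schema above.

/// Split `original` into `mask` and its complement within `original`.
/// Returns None if the split would violate the interlace rules.
fn interlace_masks(original: CoreMask, mask: CoreMask) -> Option<(CoreMask, CoreMask)> {
    let some_bits_set = mask.iter().any(|b| *b != 0);
    let subset_of_original = mask.iter().zip(&original).all(|(m, o)| m & !o == 0);
    if !some_bits_set || mask == original || !subset_of_original {
        return None;
    }
    let mut complement = [0u8; 10];
    for i in 0..10 {
        // The second Region's mask is the XOR of `mask` with the original mask.
        complement[i] = original[i] ^ mask[i];
    }
    Some((mask, complement))
}
```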

+

Regions are issued into the Regions map and can be transferred, partitioned and interlaced as the owner desires. Regions can only be tasked if they begin after the current scheduling deadline (if they have missed this, then the region can be auto-trimmed until it is).

+

Once tasked, they are removed from the Regions map and a record is placed in Workplan. In addition, if they are contributed to the Instantaneous Coretime Pool, then an entry is placed in InstaPoolContribution and InstaPoolIo.

+

Each timeslice, InstaPoolIo is used to update the current value of PoolSize. A new entry in InstaPoolHistory is inserted, with the total_contributions field of InstaPoolHistoryRecord being informed by the PoolSize value. Each core has its Workload mutated according to its Workplan for the upcoming timeslice.

+

When Instantaneous Coretime Market Revenues are reported for a particular timeslice from the Relay-chain, this information gets placed in the maybe_payout field of the relevant record of InstaPoolHistory.

+

Payment can be requested for any records in InstaPoolContribution whose begin is the key for a value in InstaPoolHistory whose maybe_payout is Some. In this case, the total_contributions is reduced by the ContributionRecord's mask and a pro rata amount paid. The ContributionRecord is mutated by incrementing begin, or removed if begin becomes equal to end.
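A minimal sketch of the pro-rata calculation for a single timeslice, assuming simple integer arithmetic (a real implementation would need to handle rounding and dust explicitly); the function name is illustrative only:

```rust
/// Share of one timeslice's Instantaneous Coretime revenue owed to a
/// contribution covering `contributed_bits` of the `total_contributions`
/// mask bits recorded for that timeslice.
fn pro_rata_payout(timeslice_payout: u128, contributed_bits: u32, total_contributions: u32) -> u128 {
    timeslice_payout * contributed_bits as u128 / total_contributions as u128
}
```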

+

Example:

+
+// Simple example with a `u16` `CoreMask` and bulk sold in 100 timeslices.
+Regions:
+{ core: 0u16, begin: 100, mask: 0b1111_1111_1111_1111u16 } => { end: 200u32, owner: Alice };
+// First split @ 50
+Regions:
+{ core: 0u16, begin: 100, mask: 0b1111_1111_1111_1111u16 } => { end: 150u32, owner: Alice };
+{ core: 0u16, begin: 150, mask: 0b1111_1111_1111_1111u16 } => { end: 200u32, owner: Alice };
+// Share half of first 50 blocks
+Regions:
+{ core: 0u16, begin: 100, mask: 0b1111_1111_0000_0000u16 } => { end: 150u32, owner: Alice };
+{ core: 0u16, begin: 100, mask: 0b0000_0000_1111_1111u16 } => { end: 150u32, owner: Alice };
+{ core: 0u16, begin: 150, mask: 0b1111_1111_1111_1111u16 } => { end: 200u32, owner: Alice };
+// Sell half of them to Bob
+Regions:
+{ core: 0u16, begin: 100, mask: 0b1111_1111_0000_0000u16 } => { end: 150u32, owner: Alice };
+{ core: 0u16, begin: 100, mask: 0b0000_0000_1111_1111u16 } => { end: 150u32, owner: Bob };
+{ core: 0u16, begin: 150, mask: 0b1111_1111_1111_1111u16 } => { end: 200u32, owner: Alice };
+// Bob splits first 10 and assigns them to himself.
+Regions:
+{ core: 0u16, begin: 100, mask: 0b1111_1111_0000_0000u16 } => { end: 150u32, owner: Alice };
+{ core: 0u16, begin: 100, mask: 0b0000_0000_1111_1111u16 } => { end: 110u32, owner: Bob };
+{ core: 0u16, begin: 110, mask: 0b0000_0000_1111_1111u16 } => { end: 150u32, owner: Bob };
+{ core: 0u16, begin: 150, mask: 0b1111_1111_1111_1111u16 } => { end: 200u32, owner: Alice };
+// Bob shares first 10 3 ways and sells smaller shares to Charlie and Dave
+Regions:
+{ core: 0u16, begin: 100, mask: 0b1111_1111_0000_0000u16 } => { end: 150u32, owner: Alice };
+{ core: 0u16, begin: 100, mask: 0b0000_0000_1100_0000u16 } => { end: 110u32, owner: Charlie };
+{ core: 0u16, begin: 100, mask: 0b0000_0000_0011_0000u16 } => { end: 110u32, owner: Dave };
+{ core: 0u16, begin: 100, mask: 0b0000_0000_0000_1111u16 } => { end: 110u32, owner: Bob };
+{ core: 0u16, begin: 110, mask: 0b0000_0000_1111_1111u16 } => { end: 150u32, owner: Bob };
+{ core: 0u16, begin: 150, mask: 0b1111_1111_1111_1111u16 } => { end: 200u32, owner: Alice };
+// Bob assigns to his para B, Charlie and Dave assign to their paras C and D; Alice assigns first 50 to A
+Regions:
+{ core: 0u16, begin: 150, mask: 0b1111_1111_1111_1111u16 } => { end: 200u32, owner: Alice };
+Workplan:
+(100, 0) => vec![
+    { mask: 0b1111_1111_0000_0000u16, task: Assigned(A) },
+    { mask: 0b0000_0000_1100_0000u16, task: Assigned(C) },
+    { mask: 0b0000_0000_0011_0000u16, task: Assigned(D) },
+    { mask: 0b0000_0000_0000_1111u16, task: Assigned(B) },
+]
+(110, 0) => vec![{ mask: 0b0000_0000_1111_1111u16, task: Assigned(B) }]
+// Alice assigns her remaining 50 timeslices to the InstaPool paying herself:
+Regions: (empty)
+Workplan:
+(100, 0) => vec![
+    { mask: 0b1111_1111_0000_0000u16, task: Assigned(A) },
+    { mask: 0b0000_0000_1100_0000u16, task: Assigned(C) },
+    { mask: 0b0000_0000_0011_0000u16, task: Assigned(D) },
+    { mask: 0b0000_0000_0000_1111u16, task: Assigned(B) },
+]
+(110, 0) => vec![{ mask: 0b0000_0000_1111_1111u16, task: Assigned(B) }]
+(150, 0) => vec![{ mask: 0b1111_1111_1111_1111u16, task: InstaPool }]
+InstaPoolContribution:
+{ begin: 150, end: 200, core: 0, mask: 0b1111_1111_1111_1111u16, payee: Alice }
+InstaPoolIo:
+150 => 16
+200 => -16
+// Actual notifications to relay chain.
+// Assumes:
+// - Timeslice is 10 blocks.
+// - Timeslice 0 begins at block #1000.
+// - Relay needs 10 blocks notice of change.
+//
+Workload: 0 => vec![]
+PoolSize: 0
+
+// Block 990:
+Relay <= assign_core(core: 0u16, begin: 1000, assignment: vec![(A, 8), (C, 2), (D, 2), (B, 4)])
+Workload: 0 => vec![
+    { mask: 0b1111_1111_0000_0000u16, task: Assigned(A) },
+    { mask: 0b0000_0000_1100_0000u16, task: Assigned(C) },
+    { mask: 0b0000_0000_0011_0000u16, task: Assigned(D) },
+    { mask: 0b0000_0000_0000_1111u16, task: Assigned(B) },
+]
+PoolSize: 0
+
+// Block 1090:
+Relay <= assign_core(core: 0u16, begin: 1100, assignment: vec![(A, 8), (B, 8)])
+Workload: 0 => vec![
+    { mask: 0b1111_1111_0000_0000u16, task: Assigned(A) },
+    { mask: 0b0000_0000_1111_1111u16, task: Assigned(B) },
+]
+PoolSize: 0
+
+// Block 1490:
+Relay <= assign_core(core: 0u16, begin: 1500, assignment: vec![(Pool, 16)])
+Workload: 0 => vec![
+    { mask: 0b1111_1111_1111_1111u16, task: InstaPool },
+]
+PoolSize: 16
+InstaPoolIo:
+200 => -16
+InstaPoolHistory:
+150 => { total_contributions: 16, maybe_payout: None }
+
+// Sometime after block 1500:
+InstaPoolHistory:
+150 => { total_contributions: 16, maybe_payout: Some(P) }
+
+// Sometime after block 1990:
+InstaPoolIo: (empty)
+PoolSize: 0
+InstaPoolHistory:
+150 => { total_contributions: 16, maybe_payout: Some(P0) }
+151 => { total_contributions: 16, maybe_payout: Some(P1) }
+152 => { total_contributions: 16, maybe_payout: Some(P2) }
+...
+199 => { total_contributions: 16, maybe_payout: Some(P49) }
+
+// Sometime later still Alice calls for a payout
+InstaPoolContribution: (empty)
+InstaPoolHistory: (empty)
+// Alice gets rewarded P0 + P1 + ... P49.
+

Rollout

+

Rollout of this proposal comes in several phases:

+
  1. Finalise the specifics of implementation; this may be done through a design document or through a well-documented prototype implementation.
  2. Implement the design, including all associated aspects such as unit tests, benchmarks and any support software needed.
  3. If any new parachain is required, launch of this.
  4. Formal audit of the implementation and any manual testing.
  5. Announcement to the various stakeholders of the imminent changes.
  6. Software integration and release.
  7. Governance upgrade proposal(s).
  8. Monitoring of the upgrade process.
+

Performance, Ergonomics and Compatibility

+

No specific considerations.

+

Parachains already deployed into the Polkadot UC must have a clear plan of action to migrate to an agile Coretime market.

+

While this proposal does not introduce documentable features per se, adequate documentation must be provided to potential purchasers of Polkadot Coretime. This SHOULD include any alterations to the Polkadot-SDK software collection.

+

Testing, Security and Privacy

+

Regular testing through unit tests, integration tests, manual testnet tests, zombie-net tests and fuzzing SHOULD be conducted.

+

A regular security review SHOULD be conducted prior to deployment through a review by the Web3 Foundation economic research group.

+

Any final implementation MUST pass a professional external security audit.

+

The proposal introduces no new privacy concerns.

+ +

RFC-3 proposes a means of implementing the high-level allocations within the Relay-chain.

+

RFC-5 proposes the API for interacting with Relay-chain.

+

Additional work should specify the interface for the instantaneous market revenue so that the Coretime-chain can ensure Bulk Coretime placed in the instantaneous market is properly compensated.

+

Drawbacks, Alternatives and Unknowns

+

Unknowns include the economic and resource parameterisations:

+
  • The initial price of Bulk Coretime.
  • The price-change algorithm between Bulk Coretime sales.
  • The price increase per Bulk Coretime period for renewals.
  • The price decrease graph in the Leadin period for Bulk Coretime sales.
  • The initial price of Instantaneous Coretime.
  • The price-change algorithm for Instantaneous Coretime sales.
  • The percentage of cores to be sold as Bulk Coretime.
  • The fate of revenue collected.
+

Prior Art and References

+

Robert Habermeier initially wrote on the subject of a blockspace-centric Polkadot in the article Polkadot Blockspace over Blockchains. While not going into details, the article served as an early reframing piece for moving beyond one-slot-per-chain models and building out secondary market infrastructure for resource allocation.

diff --git a/text/0005-coretime-interface.html b/text/0005-coretime-interface.html
new file mode 100644
index 000000000..1d49ef9fb
--- /dev/null
+++ b/text/0005-coretime-interface.html
@@ -0,0 +1,361 @@

RFC-5: Coretime Interface

+
| | |
| --------------- | ----------------------------------------------------------------------------------- |
| **Start Date**  | 06 July 2023                                                                          |
| **Description** | Interface for manipulating the usage of cores on the Polkadot Ubiquitous Computer.   |
| **Authors**     | Gavin Wood, Robert Habermeier                                                         |
+
+

Summary

+

In the Agile Coretime model of the Polkadot Ubiquitous Computer, as proposed in RFC-1 and RFC-3, it is necessary for the allocating parachain (envisioned to be one or more pallets on a specialised Brokerage System Chain) to communicate the core assignments to the Relay-chain, which is responsible for ensuring those assignments are properly enacted.

+

This is a proposal for the interface which will exist around the Relay-chain in order to communicate this information and instructions.

+

Motivation

+

The background motivation for this interface is splitting out coretime allocation functions and secondary markets from the Relay-chain onto System parachains. A well-understood and general interface is necessary for ensuring the Relay-chain receives coretime allocation instructions from one or more System chains without introducing dependencies on the implementation details of either side.

+

Requirements

+
  • The interface MUST allow the Relay-chain to be scheduled on a low-latency basis.
  • Individual cores MUST be schedulable, both in full to a single task (a ParaId or the Instantaneous Coretime Pool) or to many unique tasks in differing ratios.
  • Typical usage of the interface SHOULD NOT overload the VMP message system.
  • The interface MUST allow for the allocating chain to be notified of all accounting information relevant for making accurate rewards for contributing to the Instantaneous Coretime Pool.
  • The interface MUST allow for Instantaneous Coretime Market Credits to be communicated.
  • The interface MUST allow for the allocating chain to instruct changes to the number of cores which it is able to allocate.
  • The interface MUST allow for the allocating chain to be notified of changes to the number of cores which are able to be allocated by the allocating chain.
+

Stakeholders

+

Primary stakeholder sets are:

+
  • Developers of the Relay-chain core-management logic.
  • Developers of the Brokerage System Chain and its pallets.
+

Socialization:

+

The content of this RFC was discussed in the Polkadot Fellows channel.

+

Explanation

+

The interface has two sections: The messages which the Relay-chain is able to receive from the allocating parachain (the UMP message types), and messages which the Relay-chain is able to send to the allocating parachain (the DMP message types). These messages are expected to be able to be implemented in a well-known pallet and called with the XCM Transact instruction.

+

Future work may include these messages being introduced into the XCM standard.

+

UMP Message Types

+

request_core_count

+

Prototype:

+
fn request_core_count(
+    count: u16,
+)
+
+

Requests the Relay-chain to alter the number of schedulable cores to count. Under normal operation, the Relay-chain SHOULD send a notify_core_count(count) message back.

+

request_revenue_info_at

+

Prototype:

+
fn request_revenue_at(
+    when: BlockNumber,
+)
+
+

Requests that the Relay-chain send a notify_revenue message back at or soon after Relay-chain block number when whose until parameter is equal to when.

+

The period in to the past which when is allowed to be may be limited; if so the limit should be understood on a channel outside of this proposal. In the case that the request cannot be serviced because when is too old a block then a notify_revenue message must still be returned, but its revenue field may be None.

+

credit_account

+

Prototype:

+
fn credit_account(
+    who: AccountId,
+    amount: Balance,
+)
+
+

Instructs the Relay-chain to add the amount of DOT to the Instantaneous Coretime Market Credit account of who.

+

It is expected that Instantaneous Coretime Market Credit on the Relay-chain is NOT transferrable and only redeemable when used to assign cores in the Instantaneous Coretime Pool.

+

assign_core

+

Prototype:

+
type PartsOf57600 = u16;
+enum CoreAssignment {
+    InstantaneousPool,
+    Task(ParaId),
+}
+fn assign_core(
+    core: CoreIndex,
+    begin: BlockNumber,
+    assignment: Vec<(CoreAssignment, PartsOf57600)>,
+    end_hint: Option<BlockNumber>,
+)
+
+

Requirements:

+
assert!(core < core_count);
+assert!(assignment.iter().map(|x| x.0).is_sorted());
+assert_eq!(assignment.iter().map(|x| x.0).unique().count(), assignment.len());
+assert_eq!(assignment.iter().map(|x| x.1).sum(), 57600);
+
+

Where:

+
  • core_count is assumed to be the sole parameter in the last received notify_core_count message.
+

Instructs the Relay-chain to ensure that the core indexed as core is utilised for a number of assignments in specific ratios given by assignment starting as soon after begin as possible. Core assignments take the form of a CoreAssignment value which can either task the core to a ParaId value or indicate that the core should be used in the Instantaneous Pool. Each assignment comes with a ratio value, represented as the numerator of the fraction with a denominator of 57,600.

+

If end_hint is Some and the inner is greater than the current block number, then the Relay-chain should optimize in the expectation of receiving a new assign_core(core, ...) message at or prior to the block number of the inner value. Specific functionality should remain unchanged regardless of the end_hint value.

+

On the choice of denominator: 57,600 is a very composite number which factors into: 2 ** 8, 3 ** 2, 5 ** 2. By using it as the denominator we allow for various useful fractions to be perfectly represented including thirds, quarters, fifths, tenths, 80ths, percent and 256ths.
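For example, an 80-bit CoreMask maps cleanly onto this denominator, since 57,600 / 80 = 720. The sketch below shows the conversion an allocating chain might perform; this mapping is an illustration of the arithmetic, not something mandated by this interface, and the names are assumptions.

```rust
/// Each of the 80 CoreMask bits corresponds to 57,600 / 80 = 720 parts.
const PARTS_PER_MASK_BIT: u16 = 57_600 / 80;

/// Convert an 80-bit CoreMask into the PartsOf57600 ratio used by `assign_core`.
fn mask_to_parts(mask: [u8; 10]) -> u16 {
    let bits: u32 = mask.iter().map(|b| b.count_ones()).sum();
    bits as u16 * PARTS_PER_MASK_BIT
}
```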

+

DMP Message Types

+

notify_core_count

+

Prototype:

+
fn notify_core_count(
+    count: u16,
+)
+
+

Indicate that from this block onwards, the range of acceptable values of the core parameter of assign_core message is [0, count). assign_core will be a no-op if provided with a value for core outside of this range.

+

notify_revenue_info

+

Prototype:

+
fn notify_revenue_info(
+    until: BlockNumber,
+    revenue: Option<Balance>,
+)
+
+

Provide the amount of revenue accumulated from Instantaneous Coretime Sales from Relay-chain block number last_until to until, not including until itself. last_until is defined as being the until argument of the last notify_revenue message sent, or zero for the first call. If revenue is None, this indicates that the information is no longer available.

+

This explicitly disregards the possibility of multiple parachains requesting and being notified of revenue information. The Relay-chain must be configured to ensure that only a single revenue information destination exists.

+

Realistic Limits of the Usage

+

For request_revenue_info, a successful request should be possible if when is no less than the Relay-chain block number on arrival of the message less 100,000.

+

For assign_core, a successful request should be possible if begin is no less than the Relay-chain block number on arrival of the message plus 10 and workload contains no more than 100 items.

+

Performance, Ergonomics and Compatibility

+

No specific considerations.

+

Testing, Security and Privacy

+

Standard Polkadot testing and security auditing applies.

+

The proposal introduces no new privacy concerns.

+ +

RFC-1 proposes a means of determining allocation of Coretime using this interface.

+

RFC-3 proposes a means of implementing the high-level allocations within the Relay-chain.

+

Drawbacks, Alternatives and Unknowns

+

None at present.

+

Prior Art and References

+

None.

diff --git a/text/0007-system-collator-selection.html b/text/0007-system-collator-selection.html
new file mode 100644
index 000000000..9ca1343ae
--- /dev/null
+++ b/text/0007-system-collator-selection.html
@@ -0,0 +1,374 @@

RFC-0007: System Collator Selection

+
| | |
| --------------- | ---------------------------------------------------- |
| **Start Date**  | 07 July 2023                                          |
| **Description** | Mechanism for selecting collators of system chains.   |
| **Authors**     | Joe Petrowski                                          |
+
+

Summary

+

As core functionality moves from the Relay Chain into system chains, so increases the reliance on the liveness of these chains for the use of the network. It is not economically scalable, nor necessary from a game-theoretic perspective, to pay collators large rewards. This RFC proposes a mechanism -- part technical and part social -- for ensuring reliable collator sets that are resilient to attempts to stop any subsystem of the Polkadot protocol.

+

Motivation

+

In order to guarantee access to Polkadot's system, the collators on its system chains must propose +blocks (provide liveness) and allow all transactions to eventually be included. That is, some +collators may censor transactions, but there must exist one collator in the set who will include a +given transaction. In fact, all collators may censor varying subsets of transactions, but as long +as no transaction is in the intersection of every subset, it will eventually be included. The +objective of this RFC is to propose a mechanism to select such a set on each system chain.

+

While the network as a whole uses staking (and inflationary rewards) to attract validators, +collators face different challenges in scale and have lower security assumptions than validators. +Regarding scale, there exist many system chains, and it is economically expensive to pay collators +a premium. Likewise, any staked DOT for collation is not staked for validation. Since collator +sets do not need to meet Byzantine Fault Tolerance criteria, staking as the primary mechanism for +collator selection would remove stake that is securing BFT assumptions, making the network less +secure.

+

Another problem with economic scalability relates to the increasing number of system chains, and +corresponding increase in need for collators (i.e., increase in collator slots). "Good" (highly +available, non-censoring) collators will not want to compete in elections on many chains when they +could use their resources to compete in the more profitable validator election. Such dilution +decreases the required bond on each chain, leaving them vulnerable to takeover by hostile +collator groups.

+

This RFC proposes a system whereby collation is primarily an infrastructure service, with the +on-chain Treasury reimbursing costs of semi-trusted node operators, referred to as "Invulnerables". +The system need not trust the individual operators, only that as a set they would be resilient to +coordinated attempts to stop a single chain from halting or to censor a particular subset of +transactions.

+

In the case that users do not trust this set, this RFC also proposes that each chain always have +available collator positions that can be acquired by anyone by placing a bond.

+

Requirements

+
  • System MUST have at least one valid collator for every chain.
  • System MUST allow anyone to become a collator, provided they reserve/hold enough DOT.
  • System SHOULD select a set of collators with reasonable expectation that the set will not collude to censor any subset of transactions.
  • Collators selected by governance SHOULD have a reasonable expectation that the Treasury will reimburse their operating costs.
+

Stakeholders

+
  • Infrastructure providers (people who run validator/collator nodes)
  • Polkadot Treasury
+

Explanation

+

This protocol builds on the existing Collator Selection pallet and its notion of Invulnerables. Invulnerables are collators (identified by their AccountIds) who will be selected as part of the collator set every session. Operations relating to the management of the Invulnerables are done through privileged, governance origins. The implementation should maintain an API for adding and removing Invulnerable collators.

+

In addition to Invulnerables, there are also open slots for "Candidates". Anyone can register as a +Candidate by placing a fixed bond. However, with a fixed bond and fixed number of slots, there is +an obvious selection problem: The slots fill up without any logic to replace their occupants.

+

This RFC proposes that the collator selection protocol allow Candidates to increase (and decrease) +their individual bonds, sort the Candidates according to bond, and select the top N Candidates. +The selection and changeover should be coordinated by the session manager.

+

A FRAME pallet already exists for sorting ("bagging") "top N" groups, the Bags List pallet. This pallet's SortedListProvider should be integrated into the session manager of the Collator Selection pallet.
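In other words, the session manager would take the Invulnerables plus the best-bonded Candidates. A minimal sketch of that selection with illustrative types follows; a real implementation would read an already-sorted list from the Bags List pallet rather than sorting in memory, and the names here are assumptions of this sketch.

```rust
type AccountId = u64; // stand-in for the real AccountId type

struct Candidate {
    who: AccountId,
    bond: u128,
}

/// Select the collator set: all Invulnerables, plus the `n` highest-bonded Candidates.
fn select_collators(invulnerables: Vec<AccountId>, mut candidates: Vec<Candidate>, n: usize) -> Vec<AccountId> {
    // Sort Candidates by bond, highest first, and keep the top `n`.
    candidates.sort_by(|a, b| b.bond.cmp(&a.bond));
    invulnerables
        .into_iter()
        .chain(candidates.into_iter().take(n).map(|c| c.who))
        .collect()
}
```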

+

Despite the lack of apparent economic incentives (i.e., inflation), several reasons exist why one +may want to bond funds to participate in the Candidates election, for example:

+
  • They want to build credibility to be selected as Invulnerable;
  • They want to ensure availability of an application, e.g. a stablecoin issuer might run a collator on Asset Hub to ensure transactions in its asset are included in blocks;
  • They fear censorship themselves, e.g. a voter might think their votes are being censored from governance, so they run a collator on the governance chain to include their votes.
+

Unlike the fixed-bond mechanism that fills up its Candidates, the election mechanism ensures that +anyone can join the collator set by placing the Nth highest bond.

+

Set Size

+

In order to achieve the requirements listed under Motivation, it is reasonable to have +approximately:

+
  • 20 collators per system chain,
  • of which 15 are Invulnerable, and
  • five are elected by bond.
+

Drawbacks

+

The primary drawback is a reliance on governance for continued treasury funding of infrastructure +costs for Invulnerable collators.

+

Testing, Security, and Privacy

+

The vast majority of cases can be covered by unit testing. Integration tests should ensure that the Collator Selection UpdateOrigin, which has permission to modify the Invulnerables and desired number of Candidates, can handle updates over XCM from the system's governance location.

+

Performance, Ergonomics, and Compatibility

+

This proposal has very little impact on most users of Polkadot, and should improve the performance +of system chains by reducing the number of missed blocks.

+

Performance

+

As chains have strict PoV size limits, care must be taken in the PoV impact of the session manager. +Appropriate benchmarking and tests should ensure that conservative limits are placed on the number +of Invulnerables and Candidates.

+

Ergonomics

+

The primary group affected is Candidate collators, who, after implementation of this RFC, will need +to compete in a bond-based election rather than a race to claim a Candidate spot.

+

Compatibility

+

This RFC is compatible with the existing implementation and can be handled via upgrades and +migration.

+

Prior Art and References

+

Written Discussions

+ +

Prior Feedback and Input From

+
  • Kian Paimani
  • Jeff Burdges
  • Rob Habermeier
  • SR Labs Auditors
  • Current collators including Paranodes, Stake Plus, Turboflakes, Peter Mensik, SIK, and many more.
+

Unresolved Questions

+

None at this time.

+ +

There may exist in the future system chains for which this model of collator selection is not +appropriate. These chains should be evaluated on a case-by-case basis.

diff --git a/text/0008-parachain-bootnodes-dht.html b/text/0008-parachain-bootnodes-dht.html
new file mode 100644
index 000000000..b5f8f237b
--- /dev/null
+++ b/text/0008-parachain-bootnodes-dht.html
@@ -0,0 +1,331 @@

RFC-0008: Store parachain bootnodes in relay chain DHT

+
| | |
| --------------- | ------------------------------------------------------------------------------ |
| **Start Date**  | 2023-07-14                                                                       |
| **Description** | Parachain bootnodes shall register themselves in the DHT of the relay chain     |
| **Authors**     | Pierre Krieger                                                                   |
+
+

Summary

+

The full nodes of the Polkadot peer-to-peer network maintain a distributed hash table (DHT), which is currently used for full nodes discovery and validators discovery purposes.

+

This RFC proposes to extend this DHT to be used to discover full nodes of the parachains of Polkadot.

+

Motivation

+

The maintenance of bootnodes has long been an annoyance for everyone.

+

When a bootnode is newly-deployed or removed, every chain specification must be updated in order to take the update into account. This has led to various non-optimal solutions, such as pulling chain specifications from GitHub repositories. When it comes to RPC nodes, UX developers often have trouble finding up-to-date addresses of parachain RPC nodes. With the ongoing migration from RPC nodes to light clients, similar problems would happen with chain specifications as well.

+

Furthermore, there exist multiple possible variants of a certain chain specification: with the non-raw storage, with the raw storage, with just the genesis trie root hash, with or without checkpoint, etc. All of this creates confusion. Removing the need for parachain developers to be aware of and manage these different versions would be beneficial.

+

Since the PeerId and addresses of bootnodes need to be stable, extra maintenance work is required from the chain maintainers. For example, they need to be extra careful when migrating nodes within their infrastructure. In some situations, bootnodes are put behind domain names, which also requires maintenance work.

+

Because the list of bootnodes in chain specifications is so annoying to modify, the consequence is that the number of bootnodes is rather low (typically between 2 and 15). In order to better resist downtimes and DoS attacks, a better solution would be to use every node of a certain chain as potential bootnode, rather than special-casing some specific nodes.

+

While this RFC doesn't solve these problems for relay chains, it aims at solving it for parachains by storing the list of all the full nodes of a parachain on the relay chain DHT.

+

Assuming that this RFC is implemented, and that light clients are used, deploying a parachain wouldn't require more work than registering it onto the relay chain and starting the collators. There wouldn't be any need for special infrastructure nodes anymore.

+

Stakeholders

+

This RFC has been opened on my own initiative because I think that this is a good technical solution to a usability problem that many people are encountering and that they don't realize can be solved.

+

Explanation

+

The content of this RFC only applies for parachains and parachain nodes that are "Substrate-compatible". It is in no way mandatory for parachains to comply to this RFC.

+

Note that "Substrate-compatible" is very loosely defined as "implements the same mechanisms and networking protocols as Substrate". The author of this RFC believes that "Substrate-compatible" should be very precisely specified, but there is controversy on this topic.

+

While a lot of this RFC concerns the implementation of parachain nodes, it makes use of the resources of the Polkadot chain, and as such it is important to describe them in the Polkadot specification.

+

This RFC adds two mechanisms: a registration in the DHT, and a new networking protocol.

+

DHT provider registration

+

This RFC heavily relies on the functionalities of the Kademlia DHT already in use by Polkadot. +You can find a link to the specification here.

+

Full nodes of a parachain registered on Polkadot should register themselves onto the Polkadot DHT as the providers of a key corresponding to the parachain that they are serving, as described in the Content provider advertisement section of the specification. This uses the ADD_PROVIDER system of libp2p-kademlia.

+

This key is: sha256(concat(scale_compact(para_id), randomness)) where the value of randomness can be found in the randomness field when calling the BabeApi_currentEpoch function. For example, for a para_id equal to 1000, and at the time of writing of this RFC (July 14th 2023 at 09:13 UTC), it is sha(0xa10f12872447958d50aa7b937b0106561a588e0e2628d33f81b5361b13dbcf8df708), which is equal to 0x483dd8084d50dbbbc962067f216c37b627831d9339f5a6e426a32e3076313d87.
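A sketch of this key derivation in Rust, assuming the parity-scale-codec and sha2 crates (illustrative only; any implementation producing the same bytes is equivalent):

```rust
use parity_scale_codec::{Compact, Encode};
use sha2::{Digest, Sha256};

/// Key under which a parachain's full nodes register themselves as providers:
/// sha256(concat(scale_compact(para_id), randomness)).
fn provider_key(para_id: u32, epoch_randomness: &[u8; 32]) -> [u8; 32] {
    let mut preimage = Compact(para_id).encode(); // e.g. 0xa10f for para_id 1000
    preimage.extend_from_slice(epoch_randomness);
    Sha256::digest(&preimage).into()
}
```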

+

In order to avoid downtime when the key changes, parachain full nodes should also register themselves as a secondary key that uses a value of randomness equal to the randomness field when calling BabeApi_nextEpoch.

+

Implementers should be aware that their implementation of Kademlia might already hash the key before XOR'ing it. The key is not meant to be hashed twice.

+

The compact SCALE encoding has been chosen in order to avoid problems related to the number of bytes and endianness of the para_id.

+

New networking protocol

+

A new request-response protocol should be added, whose name is /91b171bb158e2d3848fa23a9f1c25182fb8e20313b2c1eb49219da7a70ce90c3/paranode (that hexadecimal number is the genesis hash of the Polkadot chain, and should be adjusted appropriately for Kusama and others).

+

The request consists in a SCALE-compact-encoded para_id. For example, for a para_id equal to 1000, this is 0xa10f.

+

Note that because this is a request-response protocol, the request is always prefixed with its length in bytes. While the body of the request is simply the SCALE-compact-encoded para_id, the data actually sent onto the substream is both the length and body.

+

The response consists in a protobuf struct, defined as:

+
syntax = "proto2";
+
+message Response {
+    // Peer ID of the node on the parachain side.
+    bytes peer_id = 1;
+
+    // Multiaddresses of the parachain side of the node. The list and format are the same as for the `listenAddrs` field of the `identify` protocol.
+    repeated bytes addrs = 2;
+
+    // Genesis hash of the parachain. Used to determine the name of the networking protocol to connect to the parachain. Untrusted.
+    bytes genesis_hash = 3;
+
+    // So-called "fork ID" of the parachain. Used to determine the name of the networking protocol to connect to the parachain. Untrusted.
+    optional string fork_id = 4;
+};
+
+

The maximum size of a response is set to an arbitrary 16kiB. The responding side should make sure to conform to this limit. Given that fork_id is typically very small and that the only variable-length field is addrs, this is easily achieved by limiting the number of addresses.

+

Implementers should be aware that addrs might be very large, and are encouraged to limit the number of addrs to an implementation-defined value.

+

Drawbacks

+

The peer_id and addrs fields are in theory not strictly needed, as the PeerId and addresses could always be equal to the PeerId and addresses of the node being registered as the provider and serving the response. However, the Cumulus implementation currently uses two different networking stacks, one for the parachain and one for the relay chain, using two separate PeerIds and addresses, and as such the PeerId and addresses of the other networking stack must be indicated. Asking them to use only one networking stack wouldn't be feasible in a realistic time frame.

+

The values of the genesis_hash and fork_id fields cannot be verified by the requester and are expected to be unused at the moment. Instead, a client that desires connecting to a parachain is expected to obtain the genesis hash and fork ID of the parachain from the parachain chain specification. These fields are included in the networking protocol nonetheless in case an acceptable solution is found in the future, and in order to allow use cases such as discovering parachains in a not-strictly-trusted way.

+

Testing, Security, and Privacy

+

Because not all nodes want to be used as bootnodes, implementers are encouraged to provide a way to disable this mechanism. However, it is very much encouraged to leave this mechanism on by default for all parachain nodes.

+

This mechanism doesn't add or remove any security by itself, as it relies on existing mechanisms. +However, if the principle of chain specification bootnodes is entirely replaced with the mechanism described in this RFC (which is the objective), then it becomes important whether the mechanism in this RFC can be abused in order to make a parachain unreachable.

+

Due to the way Kademlia works, it would become the responsibility of the 20 Polkadot nodes whose sha256(peer_id) is closest to the key (described in the explanations section) to store the list of bootnodes of each parachain. +Furthermore, when a large number of providers (here, a provider is a bootnode) are registered, only the providers closest to the key are kept, up to a certain implementation-defined limit.

+

For this reason, an attacker can abuse this mechanism by randomly generating libp2p PeerIds until they find the 20 entries closest to the key representing the target parachain. They are then in control of the parachain bootnodes. +Because the key changes periodically and isn't predictable, and assuming that the Polkadot DHT is sufficiently large, it is not realistic for an attack like this to be maintained in the long term.

+

Furthermore, parachain clients are expected to cache a list of known good nodes on their disk. If the mechanism described in this RFC went down, it would only prevent new nodes from accessing the parachain, while clients that have connected before would not be affected.

+

Performance, Ergonomics, and Compatibility

+

Performance

+

The DHT mechanism generally has a low overhead, especially given that publishing providers is done only every 24 hours.

+

Doing a Kademlia iterative query then sending a provider record shouldn't take more than around 50 kiB in total of bandwidth for the parachain bootnode.

+

Assuming 1000 parachain full nodes, the 20 Polkadot full nodes corresponding to a specific parachain will each receive a sudden spike of a few megabytes of networking traffic when the key rotates. Again, this is relatively negligible. If this becomes a problem, one can add a random delay before a parachain full node registers itself to be the provider of the key corresponding to BabeApi_next_epoch.

+

Maybe the biggest uncertainty is the traffic that the 20 Polkadot full nodes will receive from light clients that desire knowing the bootnodes of a parachain. Light clients are generally encouraged to cache the peers that they use between restarts, so they should only query these 20 Polkadot full nodes at their first initialization. If this ever becomes a problem, this value of 20 is an arbitrary constant that can be increased for more redundancy.

+

Ergonomics

+

Irrelevant.

+

Compatibility

+

Irrelevant.

+

Prior Art and References

+

None.

+

Unresolved Questions

+

While it fundamentally doesn't change much to this RFC, using BabeApi_currentEpoch and BabeApi_nextEpoch might be inappropriate. I'm not familiar enough with good practices within the runtime to have an opinion here. Should it be an entirely new pallet?

+ +

It is possible that in the future a client could connect to a parachain without having to rely on a trusted parachain specification.

diff --git a/text/0010-burn-coretime-revenue.html b/text/0010-burn-coretime-revenue.html
new file mode 100644
index 000000000..1f2cea890
--- /dev/null
+++ b/text/0010-burn-coretime-revenue.html
@@ -0,0 +1,272 @@

RFC-0010: Burn Coretime Revenue

+
| | |
| --------------- | --------------------------------------------- |
| **Start Date**  | 19.07.2023                                     |
| **Description** | Revenue from Coretime sales should be burned   |
| **Authors**     | Jonas Gehrlein                                 |
+
+

Summary

+

The Polkadot UC will generate revenue from the sale of available Coretime. The question then arises: how should we handle these revenues? Broadly, there are two reasonable paths – burning the revenue and thereby removing it from total issuance, or diverting it to the Treasury. This Request for Comment (RFC) presents arguments favoring burning as the preferred mechanism for handling revenues from Coretime sales.

+

Motivation

+

How to handle the revenue accrued from Coretime sales is an important economic question that influences the value of DOT and should be properly discussed before deciding for either of the options. Now is the best time to start this discussion.

+

Stakeholders

+

Polkadot DOT token holders.

+

Explanation

+

This RFC discusses potential benefits of burning the revenue accrued from Coretime sales instead of diverting them to Treasury. Here are the following arguments for it.

+

It's in the interest of the Polkadot community to have a consistent and predictable Treasury income, because volatility in the inflow can be damaging, especially in situations when it is insufficient. As such, this RFC operates under the presumption of a steady and sustainable Treasury income flow, which is crucial for the Polkadot community's stability. The assurance of a predictable Treasury income, as outlined in a prior discussion here, or through other equally effective measures, serves as a baseline assumption for this argument.

+

Consequently, we need not concern ourselves with this particular issue here. This naturally begs the question - why should we introduce additional volatility to the Treasury by aligning it with the variable Coretime sales? It's worth noting that Coretime revenues often exhibit an inverse relationship with periods when Treasury spending should ideally be ramped up. During periods of low Coretime utilization (indicated by lower revenue), Treasury should spend more on projects and endeavours to increase the demand for Coretime. This pattern underscores that Coretime sales, by their very nature, are an inconsistent and unpredictable source of funding for the Treasury. Given the importance of maintaining a steady and predictable inflow, it's unnecessary to rely on another volatile mechanism. Some might argue that we could have both: a steady inflow (from inflation) and some added bonus from Coretime sales, but burning the revenue would offer further benefits as described below.

+
    +
  • +

    Balancing Inflation: While DOT as a utility token inherently profits from a (reasonable) net inflation, it also benefits from a deflationary force that functions as a counterbalance to the overall inflation. Right now, the only mechanism on Polkadot that burns fees is the one for underutilized DOT in the Treasury. Finding other, more direct target for burns makes sense and the Coretime market is a good option.

    +
  • +
  • +

    Clear incentives: By burning the revenue accrued on Coretime sales, prices paid by buyers are clearly costs. This removes distortion from the market that might arise when the paid tokens occur on some other places within the network. In that case, some actors might have secondary motives of influencing the price of Coretime sales, because they benefit down the line. For example, actors that actively participate in the Coretime sales are likely to also benefit from a higher Treasury balance, because they might frequently request funds for their projects. While those effects might appear far-fetched, they could accumulate. Burning the revenues makes sure that the prices paid are clearly costs to the actors themselves.

    +
  • +
  • +

    Collective Value Accrual: Following the previous argument, burning the revenue also generates an externality, because it reduces the overall issuance of DOT and thereby increases the value of each remaining token. In contrast to the aforementioned argument, this benefits all token holders collectively and equally. Therefore, I'd consider this the preferable option, because burning lets all token holders participate in Polkadot's success as Coretime usage increases.

    +
  • +
+ +
+ + +
+
+ + + +
+ + + + + + + + + + + + + + + + + + + + +
diff --git a/text/0012-process-for-adding-new-collectives.html b/text/0012-process-for-adding-new-collectives.html
new file mode 100644
index 000000000..51f60e25b
--- /dev/null
+++ b/text/0012-process-for-adding-new-collectives.html
@@ -0,0 +1,329 @@

RFC-0012: Process for Adding New System Collectives

+
+ + + +
Start Date: 24 July 2023
Description: A process for adding new (and removing existing) system collectives.
Authors: Joe Petrowski
+
+

Summary

+

Since the introduction of the Collectives parachain, many groups have expressed interest in forming +new -- or migrating existing groups into -- on-chain collectives. While adding a new collective is +relatively simple from a technical standpoint, the Fellowship will need to merge new pallets into +the Collectives parachain for each new collective. This RFC proposes a means for the network to +ratify a new collective, thus instructing the Fellowship to instate it in the runtime.

+

Motivation

+

Many groups have expressed interest in representing collectives on-chain. Some of these include:

+
    +
  • Parachain technical fellowship (new)
  • +
  • Fellowship(s) for media, education, and evangelism (new)
  • +
  • Polkadot Ambassador Program (existing)
  • +
  • Anti-Scam Team (existing)
  • +
+

Collectives that form part of the core Polkadot protocol should have a mandate to serve the +Polkadot network. However, as part of the Polkadot protocol, the Fellowship, in its capacity of +maintaining system runtimes, will need to include modules and configurations for each collective.

+

Once a group has developed a value proposition for the Polkadot network, it should have a clear +path to having its collective accepted on-chain as part of the protocol. Acceptance should direct +the Fellowship to include the new collective with a given initial configuration into the runtime. +However, the network, not the Fellowship, should ultimately decide which collectives are in the +interest of the network.

+

Stakeholders

+
    +
  • Polkadot stakeholders who would like to organize on-chain.
  • +
  • Technical Fellowship, in its role of maintaining system runtimes.
  • +
+

Explanation

+

The group that wishes to operate an on-chain collective should publish the following information:

+
    +
  • Charter, including the collective's mandate and how it benefits Polkadot. This would be similar +to the +Fellowship Manifesto.
  • +
  • Seeding recommendation.
  • +
  • Member types, i.e. should members be individuals or organizations.
  • +
  • Member management strategy, i.e. how do members join and get promoted, if applicable.
  • +
  • How much, if at all, members should get paid in salary.
  • +
  • Any special origins this Collective should have outside itself. For example, the Fellowship can whitelist calls for referenda via the WhitelistOrigin.
  • +
+

This information could all be in a single document or, for example, a GitHub repository.

+

After publication, members should seek feedback from the community and Technical Fellowship, and +make any revisions needed. When the collective believes the proposal is ready, they should bring a +remark with the text APPROVE_COLLECTIVE("{collective name}, {commitment}") to a Root origin +referendum. The proposer should provide instructions for generating commitment. The passing of +this referendum would be unequivocal direction to the Fellowship that this collective should be +part of the Polkadot runtime.

+

Note: There is no need for a REJECT referendum. Proposals that have not been approved are simply +not included in the runtime.

+

Removing Collectives

+

If someone believes that an existing collective is not acting in the interest of the network or in +accordance with its charter, they should likewise have a means to instruct the Fellowship to +remove that collective from Polkadot.

+

An on-chain remark from the Root origin with the text +REMOVE_COLLECTIVE("{collective name}, {para ID}, [{pallet indices}]") would instruct the +Fellowship to remove the collective via the listed pallet indices on paraId. Should someone want +to construct such a remark, they should have a reasonable expectation that a member of the +Fellowship would help them identify the pallet indices associated with a given collective, whether +or not the Fellowship member agrees with removal.

+

Collective removal may also come with other governance calls, for example voiding any scheduled +Treasury spends that would fund the given collective.

+

Drawbacks

+

Passing a Root origin referendum is slow. However, given the network's investment (in terms of code +maintenance and salaries) in a new collective, this is an appropriate step.

+

Testing, Security, and Privacy

+

No impacts.

+

Performance, Ergonomics, and Compatibility

+

Generally all new collectives will be in the Collectives parachain. Thus, performance impacts should strictly be limited to this parachain and not affect others. As the majority of logic for collectives is generalized and reusable, we expect most collectives to be instances of similar subsets of modules. That is, new collectives should generally be compatible with UIs and other services that provide collective-related functionality, with few modifications needed to support new ones.

+

Prior Art and References

+

The launch of the Technical Fellowship, see the +initial forum post.

+

Unresolved Questions

+

None at this time.

+ +
+ + +
+
+ + + +
+ + + + + + + + + + + + + + + + + + + + +
diff --git a/text/0013-prepare-blockbuilder-and-core-runtime-apis-for-mbms.html b/text/0013-prepare-blockbuilder-and-core-runtime-apis-for-mbms.html
new file mode 100644
index 000000000..7af178b42
--- /dev/null
+++ b/text/0013-prepare-blockbuilder-and-core-runtime-apis-for-mbms.html
@@ -0,0 +1,322 @@

RFC-0013: Prepare Core runtime API for MBMs

+
+ + + +
Start Date: July 24, 2023
Description: Prepare the Core Runtime API for Multi-Block-Migrations
Authors: Oliver Tale-Yazdi
+
+

Summary

+

Introduces breaking changes to the Core runtime API by letting Core::initialize_block return an enum. The version of Core is bumped from 4 to 5.

+

Motivation

+

The main feature that motivates this RFC is Multi-Block-Migrations (MBMs); these make it possible to split a migration over multiple blocks.
Further, it would be nice not to hinder the possibility of implementing a new hook, poll, that runs at the beginning of the block when there are no MBMs and has access to AllPalletsWithSystem. This hook could then be used to replace the use of on_initialize and on_finalize for logic that is not deadline-critical.
In a similar fashion, it should not hinder the future addition of a System::PostInherents callback that always runs after all inherents were applied.

+

Stakeholders

+
    +
  • Substrate Maintainers: They have to implement this, including tests, audit and +maintenance burden.
  • +
  • Polkadot Runtime developers: They will have to adapt the runtime files to this breaking change.
  • +
  • Polkadot Parachain Teams: They have to adapt to the breaking changes but then eventually have +multi-block migrations available.
  • +
+

Explanation

+

Core::initialize_block

+

This runtime API function is changed from returning () to ExtrinsicInclusionMode:

+
fn initialize_block(header: &<Block as BlockT>::Header)
++  -> ExtrinsicInclusionMode;
+
+

ExtrinsicInclusionMode is defined as:

+
enum ExtrinsicInclusionMode {
  /// All extrinsics are allowed in this block.
  AllExtrinsics,
  /// Only inherents are allowed in this block.
  OnlyInherents,
}
+

A block author MUST respect the ExtrinsicInclusionMode that is returned by initialize_block. The runtime MUST reject blocks that have non-inherent extrinsics in them while OnlyInherents was returned.

+
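A minimal sketch of what this means for a block builder (hypothetical helper, not the actual block-builder API): user transactions are only taken from the pool when the runtime allows them.

```rust
// Illustrative only: extrinsics are opaque byte blobs here, and the real block builder
// applies extrinsics one by one via the runtime rather than collecting them up front.
#[derive(PartialEq)]
enum ExtrinsicInclusionMode {
    AllExtrinsics,
    OnlyInherents,
}

fn select_extrinsics(
    mode: ExtrinsicInclusionMode,
    inherents: Vec<Vec<u8>>,
    pool_transactions: Vec<Vec<u8>>,
) -> Vec<Vec<u8>> {
    let mut block = inherents; // inherents are always included
    if mode == ExtrinsicInclusionMode::AllExtrinsics {
        block.extend(pool_transactions); // user transactions only when the runtime allows them
    }
    block
}
```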

Coming back to the motivations and how they can be implemented with this runtime API change:

+

1. Multi-Block-Migrations: The runtime is being put into lock-down mode for the duration of the migration process by returning OnlyInherents from initialize_block. This ensures that no user provided transaction can interfere with the migration process. It is absolutely necessary to ensure this, otherwise a transaction could call into un-migrated storage and violate storage invariants.

+

2. poll is possible by using apply_extrinsic as entry-point and not hindered by this approach. It would not be possible to use a pallet inherent like System::last_inherent to achieve this for two reasons: First is that pallets do not have access to AllPalletsWithSystem which is required to invoke the poll hook on all pallets. Second is that the runtime does currently not enforce an order of inherents.

+

3. System::PostInherents can be done in the same manner as poll.

+

Drawbacks

+

The previous drawback of cementing the order of inherents has been addressed and removed by redesigning the approach. No further drawbacks have been identified thus far.

+

Testing, Security, and Privacy

+

The new logic of initialize_block can be tested by checking that the block-builder will skip transactions when OnlyInherents is returned.

+

Security: n/a

+

Privacy: n/a

+

Performance, Ergonomics, and Compatibility

+

Performance

+

The performance overhead is minimal in the sense that no clutter was added after fulfilling the +requirements. The only performance difference is that initialize_block also returns an enum that needs to be passed through the WASM boundary. This should be negligible.

+

Ergonomics

+

The new interface allows for more extensible runtime logic. In the future, this will be utilized for +multi-block-migrations which should be a huge ergonomic advantage for parachain developers.

+

Compatibility

+

The advice here is OPTIONAL and outside of the RFC. To not degrade +user experience, it is recommended to ensure that an updated node can still import historic blocks.

+

Prior Art and References

+

The RFC is currently being implemented in polkadot-sdk#1781 (formerly substrate#14275). Related issues and merge +requests:

+ +

Unresolved Questions

+

Please suggest a better name for BlockExecutiveMode. We already tried: RuntimeExecutiveMode, +ExtrinsicInclusionMode. The names of the modes Normal and Minimal were also called +AllExtrinsics and OnlyInherents, so if you have naming preferences; please post them.
+=> renamed to ExtrinsicInclusionMode

+

Is post_inherents more consistent instead of last_inherent? Then we should change it.
+=> renamed to last_inherent

+ +

The long-term future here is to move the block building logic into the runtime. Currently there is a tight dance between the block author and the runtime; the author has to call into different runtime functions in quick succession and exact order. Any misstep causes the block to be invalid.
+This can be unified and simplified by moving both parts into the runtime.

+ +
+ + +
+
+ + + +
+ + + + + + + + + + + + + + + + + + + + +
diff --git a/text/0014-improve-locking-mechanism-for-parachains.html b/text/0014-improve-locking-mechanism-for-parachains.html
new file mode 100644
index 000000000..e17404040
--- /dev/null
+++ b/text/0014-improve-locking-mechanism-for-parachains.html
@@ -0,0 +1,363 @@

RFC-0014: Improve locking mechanism for parachains

+
+ + + +
Start Date: July 25, 2023
Description: Improve locking mechanism for parachains
Authors: Bryan Chen
+
+

Summary

+

This RFC proposes a set of changes to the parachain lock mechanism. The goal is to allow a parachain manager to self-service the parachain without root track governance action.

+

This is achieved by removing the existing lock conditions and only locking a parachain when:

+
    +
  • A parachain manager explicitly locks the parachain
  • +
  • OR a parachain block is produced successfully
  • +
+

Motivation

+

The manager of a parachain has permission to manage the parachain when the parachain is unlocked. Parachains are by default locked when onboarded to a slot. This requires that the parachain wasm/genesis be valid; otherwise, a root track governance action on the relaychain is required to update the parachain.

+

The current reliance on root track governance actions for managing parachains can be time-consuming and burdensome. This RFC aims to address this technical difficulty by allowing parachain managers to take self-service actions, rather than relying on general public voting.

+

The key scenarios this RFC seeks to improve are:

+
    +
  1. Rescue a parachain with invalid wasm/genesis.
  2. +
+

While we have various resources and templates to build a new parachain, it is still not a trivial task. It is very easy to make a mistake resulting in an invalid wasm/genesis. With a lack of tools to help detect those issues1, it is very likely that the issues are only discovered after the parachain is onboarded on a slot. In this case, the parachain is locked and the parachain team has to go through a lengthy governance process to rescue the parachain.

+
    +
  1. Perform lease renewal for an existing parachain.
  2. +
+

One way to perform lease renewal for a parachain is by doing a lease swap with another parachain that has a longer lease. This requires that the other parachain be operational and able to perform an XCM Transact call into the relaychain to dispatch the swap call. Combined with the overhead of setting up a new parachain, this is a time-consuming and expensive process. Ideally, the parachain manager should be able to perform the lease swap call without having a running parachain2.

+

Requirements

+
    +
  • A parachain manager SHOULD be able to rescue a parachain by updating the wasm/genesis without root track governance action.
  • +
  • A parachain manager MUST NOT be able to update the wasm/genesis if the parachain is locked.
  • +
  • A parachain SHOULD be locked when it successfully produced the first block.
  • +
  • A parachain manager MUST be able to perform lease swap without having a running parachain.
  • +
+

Stakeholders

+
    +
  • Parachain teams
  • +
  • Parachain users
  • +
+

Explanation

+

Status quo

+

A parachain can either be locked or unlocked3. With parachain locked, the parachain manager does not have any privileges. With parachain unlocked, the parachain manager can perform following actions with the paras_registrar pallet:

+
    +
  • deregister: Deregister a Para Id, freeing all data and returning any deposit.
  • +
  • swap: Initiate or confirm lease swap with another parachain.
  • +
  • add_lock: Lock the parachain.
  • +
  • schedule_code_upgrade: Schedule a parachain upgrade to update parachain wasm.
  • +
  • set_current_head: Set the parachain's current head.
  • +
+

Currently, a parachain can be locked under the following conditions:

+
    +
  • From add_lock call, which can be dispatched by relaychain Root origin, the parachain, or the parachain manager.
  • +
  • When a parachain is onboarded on a slot4.
  • +
  • When a crowdloan is created.
  • +
+

Only the relaychain Root origin or the parachain itself can unlock the lock5.

+

This creates an issue: if the parachain is unable to produce a block, the parachain manager is unable to do anything and has to rely on the relaychain Root origin to manage the parachain.

+

Proposed changes

+

This RFC proposes to change the lock and unlock conditions.

+

A parachain can be locked only under the following conditions:

+
    +
  • Relaychain governance MUST be able to lock any parachain.
  • +
  • A parachain MUST be able to lock its own lock.
  • +
  • A parachain manager SHOULD be able to lock the parachain.
  • +
  • A parachain SHOULD be locked when it successfully produced a block for the first time.
  • +
+

A parachain can be unlocked only under the following conditions:

+
    +
  • Relaychain governance MUST be able to unlock any parachain.
  • +
  • A parachain MUST be able to unlock its own lock.
  • +
+

Note that creating a crowdloan MUST NOT lock the parachain, and onboarding a parachain SHOULD NOT lock it until a new block is successfully produced.

+
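A minimal sketch of the "lock on first successful block" rule implied by these conditions (hypothetical storage and names; the real logic would live in the relaychain's registrar/paras pallets):

```rust
use std::collections::HashSet;

type ParaId = u32;

#[derive(Default)]
struct Registrar {
    produced_first_block: HashSet<ParaId>,
    locked: HashSet<ParaId>,
}

impl Registrar {
    /// Called whenever a block of `para` is successfully included.
    fn on_new_head(&mut self, para: ParaId) {
        // Lock only on the first successful block; onboarding and crowdloan
        // creation no longer lock the parachain under this RFC.
        if self.produced_first_block.insert(para) {
            self.locked.insert(para);
        }
    }
}
```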

Migration

+

A one-off migration is proposed in order to apply this change retrospectively so that existing parachains can also benefit from this RFC. This migration will unlock parachains that meet all of the following conditions:

+
    +
  • Parachain is locked.
  • +
  • The parachain never produced a block, including under expired leases.
  • +
  • The parachain manager never explicitly locked the parachain.
  • +
+

Drawbacks

+

Parachain locks are designed in such a way as to ensure the decentralization of parachains. If parachains are not locked when they should be, this could introduce a centralization risk for new parachains.

+

For example, one possible scenario is that a collective may decide to launch a parachain fully decentralized. However, if the parachain is unable to produce a block, the parachain manager will be able to replace the wasm and genesis without the consent of the collective.

+

This risk is considered tolerable, as it requires the wasm/genesis to be invalid in the first place. It is not yet practically possible to develop a parachain without any centralization risk.

+

Another case is that a parachain team may decide to use a crowdloan to help secure a slot lease. Previously, creating a crowdloan would lock a parachain. This meant crowdloan participants would know exactly the genesis of the parachain for the crowdloan they were participating in. However, this actually provides little assurance to crowdloan participants. For example, if the genesis block is determined before a crowdloan is started, it is not possible to have an onchain mechanism to enforce reward distributions for crowdloan participants. They always have to rely on the parachain team to fulfill the promise after the parachain is live.

+

Existing operational parachains will not be impacted.

+

Testing, Security, and Privacy

+

The implementation of this RFC will be tested on testnets (Rococo and Westend) first.

+

An audit may be required to ensure the implementation does not introduce unwanted side effects.

+

There are no privacy-related concerns.

+

Performance

+

This RFC should not introduce any performance impact.

+

Ergonomics

+

This RFC should improve the developer experience for new and existing parachain teams.

+

Compatibility

+

This RFC is fully compatible with existing interfaces.

+

Prior Art and References

+
    +
  • Parachain Slot Extension Story: https://github.com/paritytech/polkadot/issues/4758
  • +
  • Allow parachain to renew lease without actually run another parachain: https://github.com/paritytech/polkadot/issues/6685
  • +
  • Always treat parachain that never produced block for a significant amount of time as unlocked: https://github.com/paritytech/polkadot/issues/7539
  • +
+

Unresolved Questions

+

None at this stage.

+ +

This RFC is only intended to be a short-term solution. Slots will be removed in the future, and the lock mechanism is likely to be replaced with a more generalized parachain management & recovery system. Therefore, the long-term impacts of this RFC are not considered.

+
1 +

https://github.com/paritytech/cumulus/issues/377

+
+
2 +

https://github.com/paritytech/polkadot/issues/6685

+
+
3 +

https://github.com/paritytech/polkadot/blob/994af3de79af25544bf39644844cbe70a7b4d695/runtime/common/src/paras_registrar.rs#L51-L52C15

+
+
4 +

https://github.com/paritytech/polkadot/blob/994af3de79af25544bf39644844cbe70a7b4d695/runtime/common/src/paras_registrar.rs#L473-L475

+
+
5 +

https://github.com/paritytech/polkadot/blob/994af3de79af25544bf39644844cbe70a7b4d695/runtime/common/src/paras_registrar.rs#L333-L340

+
+ +
+ + +
+
+ + + +
+ + + + + + + + + + + + + + + + + + + + +
diff --git a/text/0022-adopt-encointer-runtime.html b/text/0022-adopt-encointer-runtime.html
new file mode 100644
index 000000000..a5b9febc4
--- /dev/null
+++ b/text/0022-adopt-encointer-runtime.html
@@ -0,0 +1,284 @@

RFC-0022: Adopt Encointer Runtime

+
+ + + +
Start Date: Aug 22nd 2023
Description: Permanently move the Encointer runtime into the Fellowship runtimes repo.
Authors: @brenzi for Encointer Association, 8000 Zurich, Switzerland
+
+

Summary

+

Encointer has been a system chain on Kusama since January 2022 and has been developed and maintained by the Encointer association. This RFC proposes to treat Encointer like any other system chain and include it in the fellowship repo with this PR.

+

Motivation

+

Encointer does not seek to be in control of its runtime repository. As a decentralized system, the fellowship has a more suitable structure to maintain a system chain runtime repo than the Encointer association does.

+

Also, Encointer aims to update its runtime in batches with other system chains in order to have consistency for interoperability across system chains.

+

Stakeholders

+
    +
  • Fellowship: Will continue to take on the review and auditing work for the Encointer runtime, but the process will be streamlined with other system chains and therefore less time-consuming compared to the separate repo and CI process we currently have.
  • +
  • Kusama Network: Tokenholders can easily see the changes of all system chains in one place.
  • +
  • Encointer Association: Further decentralizes Encointer Network necessities like devops.
  • +
  • Encointer devs: Being able to work directly in the Fellowship runtimes repo to streamline and synergize with other developers.
  • +
+

Explanation

+

Our PR has all details about our runtime and how we would move it into the fellowship repo.

+

Noteworthy: All Encointer-specific pallets will still be located in encointer's repo for the time being: https://github.com/encointer/pallets

+

It will still be the duty of the Encointer team to keep its runtime up to date and provide adequate test fixtures. Frequent dependency bumps with Polkadot releases would be beneficial for interoperability and could be streamlined with other system chains but that will not be a duty of fellowship. Whenever possible, all system chains could be upgraded jointly (including Encointer) with a batch referendum.

+

Further notes:

+
    +
  • Encointer will publish all its crates on crates.io
  • +
  • Encointer does not carry out external auditing of its runtime nor pallets. It would be beneficial but not a requirement from our side if Encointer could join the auditing process of other system chains.
  • +
+

Drawbacks

+

Unlike all other system chains, development and maintenance of the Encointer Network is mainly financed by the KSM Treasury and possibly the DOT Treasury in the future. Encointer is dedicated to maintaining its network and runtime code for as long as possible, but there is a dependency on funding which is not in the hands of the fellowship. The only risk in the context of funding, however, is that the Encointer runtime will see less frequent updates if there is less funding.

+

Testing, Security, and Privacy

+

No changes to the existing system are proposed. Only changes to how maintenance is organized.

+

Performance, Ergonomics, and Compatibility

+

No changes

+

Prior Art and References

+

Existing Encointer runtime repo

+

Unresolved Questions

+

None identified

+ +

More info on Encointer: encointer.org

+ +
+ + +
+
+ + + +
+ + + + + + + + + + + + + + + + + + + + +
diff --git a/text/0032-minimal-relay.html b/text/0032-minimal-relay.html
new file mode 100644
index 000000000..9aae130f3
--- /dev/null
+++ b/text/0032-minimal-relay.html
@@ -0,0 +1,452 @@

RFC-0032: Minimal Relay

+
+ + + +
Start Date: 20 September 2023
Description: Proposal to minimise Relay Chain functionality.
Authors: Joe Petrowski, Gavin Wood
+
+

Summary

+

The Relay Chain contains most of the core logic for the Polkadot network. While this was necessary +prior to the launch of parachains and development of XCM, most of this logic can exist in +parachains. This is a proposal to migrate several subsystems into system parachains.

+

Motivation

+

Polkadot's scaling approach allows many distinct state machines (known generally as parachains) to +operate with common guarantees about the validity and security of their state transitions. Polkadot +provides these common guarantees by executing the state transitions on a strict subset (a backing +group) of the Relay Chain's validator set.

+

However, state transitions on the Relay Chain need to be executed by all validators. If any of +those state transitions can occur on parachains, then the resources of the complement of a single +backing group could be used to offer more cores. As in, they could be offering more coretime (a.k.a. +blockspace) to the network.

+

By minimising state transition logic on the Relay Chain by migrating it into "system chains" -- a +set of parachains that, with the Relay Chain, make up the Polkadot protocol -- the Polkadot +Ubiquitous Computer can maximise its primary offering: secure blockspace.

+

Stakeholders

+
    +
  • Parachains that interact with affected logic on the Relay Chain;
  • +
  • Core protocol and XCM format developers;
  • +
  • Tooling, block explorer, and UI developers.
  • +
+

Explanation

+

The following pallets and subsystems are good candidates to migrate from the Relay Chain:

+
    +
  • Identity
  • +
  • Balances
  • +
  • Staking +
      +
    • Staking
    • +
    • Election Provider
    • +
    • Bags List
    • +
    • NIS
    • +
    • Nomination Pools
    • +
    • Fast Unstake
    • +
    +
  • +
  • Governance +
      +
    • Treasury and Bounties
    • +
    • Conviction Voting
    • +
    • Referenda
    • +
    +
  • +
+

Note: The Auctions and Crowdloan pallets will be replaced by Coretime, its system chain and +interface described in RFC-1 and RFC-5, respectively.

+

Migrations

+

Some subsystems are simpler to move than others. For example, migrating Identity can be done by +simply preventing state changes in the Relay Chain, using the Identity-related state as the genesis +for a new chain, and launching that new chain with the genesis and logic (pallet) needed.

+

Other subsystems cannot experience any downtime like this because they are essential to the +network's functioning, like Staking and Governance. However, these can likely coexist with a +similarly-permissioned system chain for some time, much like how "Gov1" and "OpenGov" coexisted at +the latter's introduction.

+

Specific migration plans will be included in release notes of runtimes from the Polkadot Fellowship +when beginning the work of migrating a particular subsystem.

+

Interfaces

+

The Relay Chain, in many cases, will still need to interact with these subsystems, especially +Staking and Governance. These subsystems will require making some APIs available either via +dispatchable calls accessible to XCM Transact or possibly XCM Instructions in future versions.

+

For example, Staking provides a pallet-API to register points (e.g. for block production) and +offences (e.g. equivocation). With Staking in a system chain, that chain would need to allow the +Relay Chain to update validator points periodically so that it can correctly calculate rewards.

+
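As a purely illustrative sketch (none of these names are an existing API, and the actual calls would be dispatched via XCM Transact or a pub-sub mechanism), the interface the Staking chain would need to expose to the Relay Chain could look roughly like this:

```rust
// Hypothetical interface with simplified types.
type ValidatorId = [u8; 32];
type Perbill = u32; // parts-per-billion slash fraction, simplified

trait StakingChainApi {
    /// Credit era points to a validator, e.g. for authoring a Relay Chain block.
    fn note_era_points(validator: ValidatorId, points: u32);
    /// Report an offence such as equivocation so slashing can be applied.
    fn report_offence(offender: ValidatorId, slash_fraction: Perbill);
}
```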

A pub-sub protocol may also lend itself to these types of interactions.

+

Functional Architecture

+

This RFC proposes that system chains form individual components within the system's architecture and +that these components are chosen as functional groups. This approach allows synchronous +composibility where it is most valuable, but isolates logic in such a way that provides flexibility +for optimal resource allocation (see Resource Allocation). For the +subsystems discussed in this RFC, namely Identity, Governance, and Staking, this would mean:

+
    +
  • People Chain, for identity and personhood logic, providing functionality related to the attributes +of single actors;
  • +
  • Governance Chain, for governance and system collectives, providing functionality for pluralities +to express their voices within the system;
  • +
  • Staking Chain, for Polkadot's staking system, including elections, nominations, reward +distribution, slashing, and non-interactive staking; and
  • +
  • Asset Hub, for fungible and non-fungible assets, including DOT.
  • +
+

The Collectives chain and Asset Hub already exist, so implementation of this RFC would mean two new +chains (People and Staking), with Governance moving to the currently-known-as Collectives chain +and Asset Hub being increasingly used for DOT over the Relay Chain.

+

Note that one functional group will likely include many pallets, as we do not know how pallet +configurations and interfaces will evolve over time.

+

Resource Allocation

+

The system should minimise wasted blockspace. These three (and other) subsystems may not each +consistently require a dedicated core. However, core scheduling is far more agile than functional +grouping. While migrating functionality from one chain to another can be a multi-month endeavour, +cores can be rescheduled almost on-the-fly.

+

Migrations are also breaking changes to some use cases, for example other parachains that need to +route XCM programs to particular chains. It is thus preferable to do them a single time in migrating +off the Relay Chain, reducing the risk of needing parachain splits in the future.

+

Therefore, chain boundaries should be based on functional grouping where synchronous composibility +is most valuable; and efficient resource allocation should be managed by the core scheduling +protocol.

+

Many of these system chains (including Asset Hub) could often share a single core in a semi-round +robin fashion (the coretime may not be uniform). When needed, for example during NPoS elections or +slashing events, the scheduler could allocate a dedicated core to the chain in need of more +throughput.

+

Deployment

+

Actual migrations should happen based on some prioritization. This RFC proposes to migrate Identity, +Staking, and Governance as the systems to work on first. A brief discussion on the factors involved +in each one:

+

Identity

+

Identity will be one of the simpler pallets to migrate into a system chain, as its logic is largely +self-contained and it does not "share" balances with other subsystems. As in, any DOT is held in +reserve as a storage deposit and cannot be simultaneously used the way locked DOT can be locked for +multiple purposes.

+

Therefore, migration can take place as follows:

+
    +
  1. The pallet can be put in a locked state, blocking most calls to the pallet and preventing updates +to identity info.
  2. +
  3. The frozen state will form the genesis of a new system parachain.
  4. +
  5. Functions will be added to the pallet that allow migrating the deposit to the parachain. The +parachain deposit is on the order of 1/100th of the Relay Chain's. Therefore, this will result in +freeing up Relay State as well as most of each user's reserved balance.
  6. +
  7. The pallet and any leftover state can be removed from the Relay Chain.
  8. +
+

User interfaces that render Identity information will need to source their data from the new system +parachain.

+

Note: In the future, it may make sense to decommission Kusama's Identity chain and do all account +identities via Polkadot's. However, the Kusama chain will serve as a dress rehearsal for Polkadot.

+

Staking

+

Migrating the staking subsystem will likely be the most complex technical undertaking, as the +Staking system cannot stop (the system MUST always have a validator set) nor run in parallel (the +system MUST have only one validator set) and the subsystem itself is made up of subsystems in the +runtime and the node. For example, if offences are reported to the Staking parachain, validator +nodes will need to submit their reports there.

+

Handling balances also introduces complications. The same balance can be used for staking and +governance. Ideally, all balances stay on Asset Hub, and only report "credits" to system chains like +Staking and Governance. However, staking mutates balances by issuing new DOT on era changes and for +rewards. Allowing DOT directly on the Staking parachain would simplify staking changes.

+

Given the complexity, it would be pragmatic to include the Balances pallet in the Staking parachain +in its first version. Any other systems that use overlapping locks, most notably governance, will +need to recognise DOT held on both Asset Hub and the Staking parachain.

+

There is more discussion about staking in a parachain in Moving Staking off the Relay +Chain.

+

Governance

+

Migrating governance into a parachain will be less complicated than staking. Most of the primitives +needed for the migration already exist. The Treasury supports spending assets on remote chains and +collectives like the Polkadot Technical Fellowship already function in a parachain. That is, XCM +already provides the ability to express system origins across chains.

+

Therefore, actually moving the governance logic into a parachain will be simple. It can run in +parallel with the Relay Chain's governance, which can be removed when the parachain has demonstrated +sufficient functionality. It's possible that the Relay Chain maintain a Root-level emergency track +for situations like parachains +halting.

+

The only complication arises from the fact that both Asset Hub and the Staking parachain will have +DOT balances; therefore, the Governance chain will need to be able to credit users' voting power +based on balances from both locations. This is not expected to be difficult to handle.

+

Kusama

+

Although Polkadot and Kusama both have system chains running, they have to date only been used for +introducing new features or bodies, for example fungible assets or the Technical Fellowship. There +has not yet been a migration of logic/state from the Relay Chain into a parachain. Given its more +realistic network conditions than testnets, Kusama is the best stage for rehearsal.

+

In the case of identity, Polkadot's system may be sufficient for the ecosystem. Therefore, Kusama +should be used to test the migration of logic and state from Relay Chain to parachain, but these +features may be (at the will of Kusama's governance) dropped from Kusama entirely after a successful +migration on Polkadot.

+

For Governance, Polkadot already has the Collectives parachain, which would become the Governance +parachain. The entire group of DOT holders is itself a collective (the legislative body), and +governance provides the means to express voice. Launching a Kusama Governance chain would be +sensible to rehearse a migration.

+

The Staking subsystem is perhaps where Kusama would provide the most value in its canary capacity. +Staking is the subsystem most constrained by PoV limits. Ensuring that elections, payouts, session +changes, offences/slashes, etc. work in a parachain on Kusama -- with its larger validator set -- +will give confidence to the chain's robustness on Polkadot.

+

Drawbacks

+

These subsystems will have fewer resources available on a core than they do on the Relay Chain. Staking in particular may require some optimizations to deal with these constraints.

+

Testing, Security, and Privacy

+

Standard audit/review requirements apply. More powerful multi-chain integration test tools would be useful in development.

+

Performance, Ergonomics, and Compatibility

+

Describe the impact of the proposal on the exposed functionality of Polkadot.

+

Performance

+

This is an optimization. The removal of public/user transactions on the Relay Chain ensures that its +primary resources are allocated to system performance.

+

Ergonomics

+

This proposal alters very little for coretime users (e.g. parachain developers). Application +developers will need to interact with multiple chains, making ergonomic light client tools +particularly important for application development.

+

For existing parachains that interact with these subsystems, they will need to configure their +runtimes to recognize the new locations in the network.

+

Compatibility

+

Implementing this proposal will require some changes to pallet APIs and/or a pub-sub protocol. +Application developers will need to interact with multiple chains in the network.

+

Prior Art and References

+ +

Unresolved Questions

+

There remain some implementation questions, like how to use balances for both Staking and +Governance. See, for example, Moving Staking off the Relay +Chain.

+ +

Ideally the Relay Chain becomes transactionless, such that not even balances are represented there. +With Staking and Governance off the Relay Chain, this is not an unreasonable next step.

+

With Identity on Polkadot, Kusama may opt to drop its People Chain.

+ +
+ + +
+
+ + + +
+ + + + + + + + + + + + + + + + + + + + +
diff --git a/text/0042-extrinsics-state-version.html b/text/0042-extrinsics-state-version.html
new file mode 100644
index 000000000..f4f8cc4ac
--- /dev/null
+++ b/text/0042-extrinsics-state-version.html
@@ -0,0 +1,320 @@

RFC-0042: Add System version that replaces StateVersion on RuntimeVersion

+
+ + + +
Start Date: 25th October 2023
Description: Add System Version and remove State Version
Authors: Vedhavyas Singareddi
+
+

Summary

+

At the moment, we have the state_version field on RuntimeVersion, which determines the state version used for storage. We have a use case where we want the extrinsics root to be derived using StateVersion::V1. Rather than defining another separate field under RuntimeVersion, we would like to propose adding system_version, which can be used to derive both the storage and extrinsic state versions.

+

Motivation

+

Since the extrinsic state version is always StateVersion::V0, deriving the extrinsics root requires the full extrinsic data. This is problematic when we need to verify the extrinsics root and the extrinsics are large. This problem is further explored in https://github.com/polkadot-fellows/RFCs/issues/19

+

For the Subspace project, we have an enshrined rollup called a Domain, with optimistic verification, and fraud proofs are used to detect malicious behavior. One of the fraud proof variants derives a Domain block's extrinsics root on Subspace's consensus chain. Since StateVersion::V0 requires the full extrinsic data, we are forced to pass all the extrinsics through the fraud proof. One of the main challenges here is that some extrinsics could be big enough that this variant of fraud proof may not be included in the consensus block due to the block's weight restriction. If the extrinsics root is derived using StateVersion::V1, then we do not need to pass the full extrinsic data but rather, at maximum, 32 bytes per extrinsic.

+
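The intuition behind that size bound, as a simplified sketch (the 32-byte threshold and the hasher below are assumptions for illustration; consult the trie specification for the exact rules): under StateVersion::V1, large trie values are referenced by their hash, so a proof about the extrinsics root only needs each large extrinsic's 32-byte hash rather than its full body.

```rust
use sp_core::hashing::blake2_256;

// Simplified comparison of how an extrinsic contributes to the extrinsics-root trie.
fn leaf_value(extrinsic: &[u8], v1: bool) -> Vec<u8> {
    if v1 && extrinsic.len() > 32 {
        // StateVersion::V1: values above the threshold are represented by their hash.
        blake2_256(extrinsic).to_vec()
    } else {
        // StateVersion::V0: the full value is always embedded in the node.
        extrinsic.to_vec()
    }
}
```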

Stakeholders

+
    +
  • Technical Fellowship, in its role of maintaining system runtimes.
  • +
+

Explanation

+

In order to use a project-specific StateVersion for extrinsic roots, we proposed an implementation that introduced a parameter to frame_system::Config, but that unfortunately did not feel correct. So we would like to propose adding this change to the RuntimeVersion object. The system version, if introduced, will be used to derive both the storage and extrinsic state versions. If the system version is 0, then both the storage and extrinsic state versions would use V0. If the system version is 1, then the storage state version would use V1 and the extrinsic state version would use V0. If the system version is 2, then both the storage and extrinsic state versions would use V1.

+
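A minimal sketch of that mapping (illustrative only; the actual StateVersion type lives in sp-runtime and the exact plumbing may differ):

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
enum StateVersion {
    V0,
    V1,
}

/// Derive (storage state version, extrinsics state version) from `system_version`.
fn state_versions(system_version: u8) -> (StateVersion, StateVersion) {
    match system_version {
        0 => (StateVersion::V0, StateVersion::V0),
        1 => (StateVersion::V1, StateVersion::V0),
        _ => (StateVersion::V1, StateVersion::V1), // 2 (and above) use V1 for both
    }
}
```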

If implemented, the new RuntimeVersion definition would look similar to the following:

+
/// Runtime version (Rococo).
#[sp_version::runtime_version]
pub const VERSION: RuntimeVersion = RuntimeVersion {
	spec_name: create_runtime_str!("rococo"),
	impl_name: create_runtime_str!("parity-rococo-v2.0"),
	authoring_version: 0,
	spec_version: 10020,
	impl_version: 0,
	apis: RUNTIME_API_VERSIONS,
	transaction_version: 22,
	system_version: 1,
};
+

Drawbacks

+

There should be no drawbacks, as this replaces state_version with the same behavior, but documentation should be updated so that chains know which system_version to use.

+

Testing, Security, and Privacy

+

As far as I know, this should not have any impact on security or privacy.

+

Performance, Ergonomics, and Compatibility

+

These changes should be compatible for existing chains if they use their state_version value for system_version.

+

Performance

+

I do not believe there is any performance hit with this change.

+

Ergonomics

+

This does not break any exposed APIs.

+

Compatibility

+

This change should not break any compatibility.

+

Prior Art and References

+

We proposed introducing a similar change by introducing a +parameter to frame_system::Config but did not feel that +is the correct way of introducing this change.

+

Unresolved Questions

+

I do not have any specific questions about this change at the moment.

+ +

IMO, this change is pretty self-contained and there won't be any future work necessary.

+ +
+ + +
+
+ + + +
+ + + + + + + + + + + + + + + + + + + + +
diff --git a/text/0043-storage-proof-size-hostfunction.html b/text/0043-storage-proof-size-hostfunction.html
new file mode 100644
index 000000000..ae2eb89f9
--- /dev/null
+++ b/text/0043-storage-proof-size-hostfunction.html
@@ -0,0 +1,287 @@

RFC-0043: Introduce storage_proof_size Host Function for Improved Parachain Block Utilization

+
+ + + +
Start Date: 30 October 2023
Description: Host function to provide the storage proof size to runtimes.
Authors: Sebastian Kunert
+
+

Summary

+

This RFC proposes a new host function for parachains, storage_proof_size. It shall provide the size of the currently recorded storage proof to the runtime. Runtime authors can use the proof size to improve block utilization by retroactively reclaiming unused storage weight.

+

Motivation

+

The number of extrinsics that are included in a parachain block is limited by two constraints: execution time and proof size. FRAME weights cover both concepts, and block-builders use them to decide how many extrinsics to include in a block. However, these weights are calculated ahead of time by benchmarking on a machine with reference hardware. The execution-time properties of the state-trie and its storage items are unknown at benchmarking time. Therefore, we make some assumptions about the state-trie:

+
    +
  • Trie Depth: We assume a trie depth to account for intermediary nodes.
  • +
  • Storage Item Size: We make a pessimistic assumption based on the MaxEncodedLen trait.
  • +
+

These pessimistic assumptions lead to an overestimation of storage weight, negatively impacting block utilization on parachains.

+

In addition, the current model does not account for multiple accesses to the same storage items. While these repetitive accesses will not increase storage-proof size, the runtime-side weight monitoring will account for them multiple times. Since the proof size is completely opaque to the runtime, we can not implement retroactive storage weight correction.

+

A solution must provide a way for the runtime to track the exact storage-proof size consumed on a per-extrinsic basis.

+

Stakeholders

+
    +
  • Parachain Teams: They MUST include this host function in their runtime and node.
  • +
  • Light-client Implementors: They SHOULD include this host function in their runtime and node.
  • +
+

Explanation

+

This RFC proposes a new host function that exposes the storage-proof size to the runtime. As a result, runtimes can implement storage weight reclaiming mechanisms that improve block utilization.

+

This RFC proposes the following host function signature:

+
fn ext_storage_proof_size_version_1() -> u64;
+

The host function MUST return an unsigned 64-bit integer value representing the current proof size. In block-execution and block-import contexts, this function MUST return the current size of the proof. To achieve this, parachain node implementors need to enable proof recording for block imports. In other contexts, this function MUST return 18446744073709551615 (u64::MAX), which represents disabled proof recording.

+
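A hedged sketch of the weight-reclaiming idea (assuming some runtime-side binding, here passed in as measure, for the proposed host function; the surrounding weight accounting is omitted):

```rust
const PROOF_RECORDING_DISABLED: u64 = u64::MAX;

/// Returns how much of the benchmarked proof-size weight can be handed back
/// after actually executing an extrinsic.
fn reclaimable_proof_size(
    benchmarked_proof_size: u64,
    measure: impl Fn() -> u64,      // stand-in for the `storage_proof_size` host function
    apply_extrinsic: impl FnOnce(), // stand-in for executing the extrinsic
) -> u64 {
    let before = measure();
    apply_extrinsic();
    let after = measure();
    if before == PROOF_RECORDING_DISABLED {
        return 0; // proof recording disabled (e.g. off-chain context): nothing to reclaim
    }
    let consumed = after.saturating_sub(before);
    benchmarked_proof_size.saturating_sub(consumed)
}
```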

Performance, Ergonomics, and Compatibility

+

Performance

+

Parachain nodes need to enable proof recording during block import to correctly implement the proposed host function. Benchmarking conducted with balance transfers has shown a performance reduction of around 0.6% when proof recording is enabled.

+

Ergonomics

+

The host function proposed in this RFC allows parachain runtime developers to keep track of the proof size. Typical usage patterns would be to keep track of the overall proof size or the difference between subsequent calls to the host function.

+

Compatibility

+

Parachain teams will need to include this host function to upgrade.

+

Prior Art and References

+ + +
+ + +
+
+ + + +
+ + + + + + + + + + + + + + + + + + + + +
diff --git a/text/0045-nft-deposits-asset-hub.html b/text/0045-nft-deposits-asset-hub.html
new file mode 100644
index 000000000..a84ea7588
--- /dev/null
+++ b/text/0045-nft-deposits-asset-hub.html
@@ -0,0 +1,449 @@

RFC-0045: Lowering NFT Deposits on Asset Hub

+
+ + + +
Start Date: 2 November 2023
Description: A proposal to reduce the minimum deposit required for collection creation on the Polkadot and Kusama Asset Hubs.
Authors: Aurora Poppyseed, Just_Luuuu, Viki Val, Joe Petrowski
+
+

Summary

+

This RFC proposes changing the current deposit requirements on the Polkadot and Kusama Asset Hub for +creating an NFT collection, minting an individual NFT, and lowering its corresponding metadata and +attribute deposits. The objective is to lower the barrier to entry for NFT creators, fostering a +more inclusive and vibrant ecosystem while maintaining network integrity and preventing spam.

+

Motivation

+

The current deposit of 10 DOT for collection creation (along with 0.01 DOT for item deposit and 0.2 +DOT for metadata and attribute deposits) on the Polkadot Asset Hub and 0.1 KSM on Kusama Asset Hub +presents a significant financial barrier for many NFT creators. By lowering the deposit +requirements, we aim to encourage more NFT creators to participate in the Polkadot NFT ecosystem, +thereby enriching the diversity and vibrancy of the community and its offerings.

+

The initial introduction of a 10 DOT deposit was an arbitrary starting point that does not consider +the actual storage footprint of an NFT collection. This proposal aims to adjust the deposit first to +a value based on the deposit function, which calculates a deposit based on the number of keys +introduced to storage and the size of corresponding values stored.

+

Further, it suggests a direction for a future of calculating deposits variably based on adoption +and/or market conditions. There is a discussion on tradeoffs of setting deposits too high or too +low.

+

Requirements

+
    +
  • Deposits SHOULD be derived from the deposit function, adjusted by a corresponding pricing mechanism.
  • +
+

Stakeholders

+
    +
  • NFT Creators: Primary beneficiaries of the proposed change, particularly those who found the +current deposit requirements prohibitive.
  • +
  • NFT Platforms: As the facilitator of artists' relations, NFT marketplaces have a vested +interest in onboarding new users and making their platforms more accessible.
  • +
  • dApp Developers: Making the blockspace more accessible will encourage developers to create and +build unique dApps in the Polkadot ecosystem.
  • +
  • Polkadot Community: Stands to benefit from an influx of artists, creators, and diverse NFT +collections, enhancing the overall ecosystem.
  • +
+

Previous discussions have been held within the Polkadot +Forum, with +artists expressing their concerns about the deposit amounts.

+

Explanation

+

This RFC proposes a revision of the deposit constants in the configuration of the NFTs pallet on the +Polkadot Asset Hub. The new deposit amounts would be determined by a standard deposit formula.

+

As of v1.1.1, the Collection Deposit is 10 DOT and the Item Deposit is 0.01 DOT (see +here).

+

Based on the storage footprint of these items, this RFC proposes changing them to:

+
pub const NftsCollectionDeposit: Balance = system_para_deposit(1, 130);
pub const NftsItemDeposit: Balance = system_para_deposit(1, 164);
+

This results in the following deposits (calculated using this repository):

+

Polkadot

+
+ + + + +
Name                 | Current Rate (DOT) | Calculated with Function (DOT)
---------------------|--------------------|-------------------------------
collectionDeposit    | 10                 | 0.20064
itemDeposit          | 0.01               | 0.20081
metadataDepositBase  | 0.20129            | 0.20076
attributeDepositBase | 0.2                | 0.2
+
+

Similarly, the prices for Kusama were calculated as:

+

Kusama:

+
+ + + + +
Name                 | Current Rate (KSM) | Calculated with Function (KSM)
---------------------|--------------------|-------------------------------
collectionDeposit    | 0.1                | 0.006688
itemDeposit          | 0.001              | 0.000167
metadataDepositBase  | 0.006709666617     | 0.0006709666617
attributeDepositBase | 0.00666666666      | 0.000666666666
+
+
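As a sanity check, the two Polkadot rows for collectionDeposit and itemDeposit above can be solved as a linear system in the per-item and per-byte components of system_para_deposit(items, bytes): the difference of 0.20081 - 0.20064 DOT over 164 - 130 = 34 bytes gives roughly 0.000005 DOT per byte, and therefore about 0.2 DOT per storage item. These implied rates are only an observation derived from the table, not normative values.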

Enhanced Approach to Further Lower Barriers for Entry

+

This RFC proposes further lowering these deposits below the rate normally charged for such a storage footprint. This is based on the economic argument that sub-rate deposits are a subsidy for growth and adoption of a specific technology. If the NFT functionality on Polkadot gains adoption, it becomes more attractive for future entrants, who would be willing to pay the non-subsidized rate because of the existing community.

+

Proposed Rate Adjustments

+
parameter_types! {
	pub const NftsCollectionDeposit: Balance = system_para_deposit(1, 130);
	pub const NftsItemDeposit: Balance = system_para_deposit(1, 164) / 40;
	pub const NftsMetadataDepositBase: Balance = system_para_deposit(1, 129) / 10;
	pub const NftsAttributeDepositBase: Balance = system_para_deposit(1, 0) / 10;
	pub const NftsDepositPerByte: Balance = system_para_deposit(0, 1);
}
+

This adjustment would result in the following DOT and KSM deposit values:

+
+ + + + +
Name                 | Proposed Rate Polkadot | Proposed Rate Kusama
---------------------|------------------------|----------------------
collectionDeposit    | 0.20064 DOT            | 0.006688 KSM
itemDeposit          | 0.005 DOT              | 0.000167 KSM
metadataDepositBase  | 0.002 DOT              | 0.0006709666617 KSM
attributeDepositBase | 0.002 DOT              | 0.000666666666 KSM
+
+

Short- and Long-Term Plans

+

The plan presented above is recommended as an immediate step to make Polkadot a more attractive place to launch NFTs, although one should note that a forty-fold reduction in the Item Deposit is just as arbitrary as the value it replaces. As explained earlier, this is meant as a subsidy to gain more momentum for NFTs on Polkadot.

+

In the long term, an implementation should account for what should happen to the deposit rates +assuming that the subsidy is successful and attracts a lot of deployments. Many options are +discussed in the Addendum.

+

The deposit should be calculated as a function of the number of existing collections with maximum +DOT and stablecoin values limiting the amount. With asset rates available via the Asset Conversion +pallet, the system could take the lower value required. A sigmoid curve would make sense for this +application to avoid sudden rate changes, as in:

+

$$ minDeposit + \frac{\min(DotDeposit, StableDeposit) - minDeposit}{1 + e^{a - b x}} $$

+

where the constant a moves the inflection to lower or higher x values, the constant b adjusts +the rate of the deposit increase, and the independent variable x is the number of collections or +items, depending on application.

+
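A small sketch of that curve in code (parameter values are arbitrary placeholders chosen only to illustrate the shape, not proposed constants):

```rust
// Sigmoid-shaped deposit: starts near `min_deposit` and saturates at the lower of the
// DOT- and stablecoin-denominated caps as `x` (number of collections or items) grows.
fn dynamic_deposit(min_deposit: f64, dot_cap: f64, stable_cap: f64, a: f64, b: f64, x: f64) -> f64 {
    let cap = dot_cap.min(stable_cap);
    min_deposit + (cap - min_deposit) / (1.0 + (a - b * x).exp())
}

fn main() {
    // Placeholder parameters: inflection around x = a / b = 1000, gentle slope.
    for x in [0.0, 500.0, 1000.0, 2000.0] {
        println!("{x}: {:.4} DOT", dynamic_deposit(0.2, 10.0, 5.0, 5.0, 0.005, x));
    }
}
```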

Drawbacks

+

Modifying deposit requirements necessitates a balanced assessment of the potential drawbacks. +Highlighted below are cogent points extracted from the discourse on the Polkadot Forum +conversation, +which provide critical perspectives on the implications of such changes.

+

Adjusting NFT deposit requirements on Polkadot and Kusama Asset Hubs involves key challenges:

+
    +
  1. +

    State Growth and Technical Concerns: Lowering deposit requirements can lead to increased +blockchain state size, potentially causing state bloat. This growth needs to be managed to +prevent strain on the network's resources and maintain operational efficiency. As stated earlier, +the deposit levels proposed here are intentionally low with the thesis that future participants +would pay the standard rate.

    +
  2. +
  3. +

    Network Security and Market Response: Adapting to the cryptocurrency market's volatility is +crucial. The mechanism for setting deposit amounts must be responsive yet stable, avoiding undue +complexity for users.

    +
  4. +
  5. +

    Economic Impact on Previous Stakeholders: The change could have varied economic effects on +previous (before the change) creators, platform operators, and investors. Balancing these +interests is essential to ensure the adjustment benefits the ecosystem without negatively +impacting its value dynamics. However in the particular case of Polkadot and Kusama Asset Hub +this does not pose a concern since there are very few collections currently and thus previous +stakeholders wouldn't be much affected. As of date 9th January 2024 there are 42 collections on +Polkadot Asset Hub and 191 on Kusama Asset Hub with a relatively low volume.

    +
  6. +
+

Testing, Security, and Privacy

+

Security concerns

+

As noted above, state bloat is a security concern. In the case of abuse, governance could adapt by +increasing deposit rates and/or using forceDestroy on collections agreed to be spam.

+

Performance, Ergonomics, and Compatibility

+

Performance

+

The primary performance consideration stems from the potential for state bloat due to increased +activity from lower deposit requirements. It's vital to monitor and manage this to avoid any +negative impact on the chain's performance. Strategies for mitigating state bloat, including +efficient data management and periodic reviews of storage requirements, will be essential.

+

Ergonomics

+

The proposed change aims to enhance the user experience for artists, traders, and utilizers of +Kusama and Polkadot Asset Hubs, making Polkadot and Kusama more accessible and user-friendly.

+

Compatibility

+

The change does not impact compatibility as a redeposit function is already implemented.

+

Unresolved Questions

+

If this RFC is accepted, there should not be any unresolved questions regarding how to adapt the +implementation of deposits for NFT collections.

+

Addendum

+

Several innovative proposals have been considered to enhance the network's adaptability and manage +deposit requirements more effectively. The RFC recommends a mixture of the function-based model and +the stablecoin model, but some tradeoffs of each are maintained here for those interested.

+

Enhanced Weak Governance Origin Model

+

The concept of a weak governance origin, controlled by a consortium like a system collective, has +been proposed. This model would allow for dynamic adjustments of NFT deposit requirements in +response to market conditions, adhering to storage deposit norms.

+
    +
  • Responsiveness: To address concerns about delayed responses, the model could incorporate +automated triggers based on predefined market indicators, ensuring timely adjustments.
  • Stability vs. Flexibility: Balancing stability with the need for flexibility is challenging. +To mitigate the issue of frequent changes in DOT-based deposits, a mechanism for gradual and +predictable adjustments could be introduced.
  • Scalability: The model's scalability is a concern, given the numerous deposits across the +system. A more centralized approach to deposit management might be needed to avoid constant, +decentralized adjustments.
+

Function-Based Pricing Model

+

Another proposal is to use a mathematical function to regulate deposit prices, initially allowing +low prices to encourage participation, followed by a gradual increase to prevent network bloat.

+
    +
  • Choice of Function: A logarithmic or sigmoid function is favored over an exponential one, as +these functions increase prices at a rate that encourages participation while preventing +prohibitive costs.
  • Adjustment of Constants: To finely tune the pricing rise, one of the function's constants +could correlate with the total number of NFTs on Asset Hub. This would align the deposit +requirements with the actual usage and growth of the network.
+

Linking Deposit to USD(x) Value

+

This approach suggests pegging the deposit value to a stable currency like the USD, introducing +predictability and stability for network users.

+
    +
  • Market Dynamics: One perspective is that fluctuations in native currency value naturally +balance user participation and pricing, deterring network spam while encouraging higher-value +collections. Conversely, there's an argument for allowing broader participation if the DOT/KSM +value increases.
  • Complexity and Risks: Implementing a USD-based pricing system could add complexity and +potential risks. The implementation needs to be carefully designed to avoid unintended +consequences, such as excessive reliance on external financial systems or currencies.
+

Each of these proposals offers unique advantages and challenges. The optimal approach may involve a +combination of these ideas, carefully adjusted to address the specific needs and dynamics of the +Polkadot and Kusama networks.

diff --git a/text/0047-assignment-of-availability-chunks.html b/text/0047-assignment-of-availability-chunks.html
new file mode 100644
index 000000000..64aeba5b6
--- /dev/null
+++ b/text/0047-assignment-of-availability-chunks.html
@@ -0,0 +1,494 @@

RFC-0047: Assignment of availability chunks to validators

+
+ + + +
Start Date: 03 November 2023
Description: An evenly-distributing indirection layer between availability chunks and validators.
Authors: Alin Dima
+
+

Summary

+

Propose a way of permuting the availability chunk indices assigned to validators, in the context of +recovering available data from systematic chunks, with the +purpose of fairly distributing network bandwidth usage.

+

Motivation

+

Currently, the ValidatorIndex is always identical to the ChunkIndex. Since the validator array is only shuffled once +per session, naively using the ValidatorIndex as the ChunkIndex would pose an unreasonable stress on the first N/3 +validators during an entire session, when favouring availability recovery from systematic chunks.

+

Therefore, the relay chain node needs a deterministic way of evenly distributing the first ~(N_VALIDATORS / 3) +systematic availability chunks to different validators, based on the relay chain block and core. +The main purpose is to ensure fair distribution of network bandwidth usage for availability recovery in general and in +particular for systematic chunk holders.

+

Stakeholders

+

Relay chain node core developers.

+

Explanation

+

Systematic erasure codes

+

An erasure coding algorithm is considered systematic if it preserves the original unencoded data as part of the +resulting code. +The implementation of the erasure coding algorithm used for polkadot's availability data is systematic. +Roughly speaking, the first N_VALIDATORS/3 chunks of data can be cheaply concatenated to retrieve the original data, +without running the resource-intensive and time-consuming reconstruction algorithm.

+

You can find the concatenation procedure of systematic chunks for polkadot's erasure coding algorithm +here

+

In a nutshell, it performs a column-wise concatenation with 2-byte chunks. +The output could be zero-padded at the end, so scale decoding must be aware of the expected length in bytes and ignore +trailing zeros (this assertion is already being made for regular reconstruction).
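As a purely conceptual sketch (not the actual implementation, which lives in the erasure coding crate), the column-wise concatenation could look roughly like this, with each systematic chunk contributing 2 bytes per column and trailing zero-padding removed by truncating to the known length:

```rust
/// Conceptual sketch only: rebuild the original data by walking the systematic chunks
/// "column-wise", taking 2 bytes from each chunk in order, then truncating to the
/// expected length to drop any zero-padding added by the encoder.
fn concat_systematic_chunks(chunks: &[Vec<u8>], expected_len: usize) -> Vec<u8> {
    let mut out = Vec::with_capacity(expected_len);
    let chunk_len = chunks.first().map(|c| c.len()).unwrap_or(0);
    let mut offset = 0;
    while offset < chunk_len && out.len() < expected_len {
        for chunk in chunks {
            out.extend_from_slice(&chunk[offset..(offset + 2).min(chunk_len)]);
        }
        offset += 2;
    }
    out.truncate(expected_len);
    out
}
```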

+

Availability recovery at present

+

According to the polkadot protocol spec:

+
+

A validator should request chunks by picking peers randomly and must recover at least f+1 chunks, where +n=3f+k and k in {1,2,3}.

+
+

For parity's polkadot node implementation, the process was further optimised. At this moment, it works differently based +on the estimated size of the available data:

+

(a) for small PoVs (up to 128 Kib), sequentially try requesting the unencoded data from the backing group, in a random +order. If this fails, fallback to option (b).

+

(b) for large PoVs (over 128 Kib), launch N parallel requests for the erasure coded chunks (currently, N has an upper +limit of 50), until enough chunks were recovered. Validators are tried in a random order. Then, reconstruct the +original data.

+

All options require that after reconstruction, validators then re-encode the data and re-create the erasure chunks trie +in order to check the erasure root.

+

Availability recovery from systematic chunks

+

As part of the effort of +increasing polkadot's resource efficiency, scalability and performance, +work is under way to modify the Availability Recovery protocol by leveraging systematic chunks. See +this comment for preliminary +performance results.

+

In this scheme, the relay chain node will first attempt to retrieve the ~N/3 systematic chunks from the validators that +should hold them, before falling back to recovering from regular chunks, as before.

+

A re-encoding step is still needed for verifying the erasure root, so the erasure coding overhead cannot be completely +brought down to 0.

+

Not being able to retrieve even one systematic chunk would make systematic reconstruction impossible. Therefore, backers +can be used as a backup to retrieve a couple of missing systematic chunks, before falling back to retrieving regular +chunks.

+

Chunk assignment function

+

Properties

+

The function that decides the chunk index for a validator will be parameterized by at least +(validator_index, core_index) +and have the following properties:

+
  1. deterministic
  2. relatively quick to compute and resource-efficient
  3. when considering a fixed core_index, the function should describe a permutation of the chunk indices
  4. the validators that map to the first N/3 chunk indices should have as little overlap as possible for different cores

In other words, we want a uniformly distributed, deterministic mapping from ValidatorIndex to ChunkIndex per core.

+

It's desirable to not embed this function in the runtime, for performance and complexity reasons. +However, this means that the function needs to be kept very simple and with minimal or no external dependencies. +Any change to this function could result in parachains being stalled and needs to be coordinated via a runtime upgrade +or governance call.

+

Proposed function

+

Pseudocode:

+
pub fn get_chunk_index(
  n_validators: u32,
  validator_index: ValidatorIndex,
  core_index: CoreIndex
) -> ChunkIndex {
  let threshold = systematic_threshold(n_validators); // Roughly n_validators / 3
  let core_start_pos = core_index * threshold;

  (core_start_pos + validator_index) % n_validators
}
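For illustration only (assuming, hypothetically, that `systematic_threshold(6)` evaluates to 2), the mapping for a fixed core is a cyclic shift of the validator indices, so different cores start their systematic range at different validators:

```rust
// Toy illustration of the proposed mapping with n_validators = 6 and an assumed threshold of 2.
fn get_chunk_index(n_validators: u32, validator_index: u32, core_index: u32) -> u32 {
    let threshold = 2; // placeholder for systematic_threshold(n_validators)
    (core_index * threshold + validator_index) % n_validators
}

fn main() {
    for core in 0..3u32 {
        let mapping: Vec<u32> = (0..6).map(|v| get_chunk_index(6, v, core)).collect();
        println!("core {core}: {mapping:?}"); // core 0: [0, 1, 2, 3, 4, 5], core 1: [2, 3, 4, 5, 0, 1], ...
    }
}
```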

Network protocol

+

The request-response /req_chunk protocol will be bumped to a new version (from v1 to v2). +For v1, the request and response payloads are:

+
#![allow(unused)]
+fn main() {
+/// Request an availability chunk.
+pub struct ChunkFetchingRequest {
+	/// Hash of candidate we want a chunk for.
+	pub candidate_hash: CandidateHash,
+	/// The index of the chunk to fetch.
+	pub index: ValidatorIndex,
+}
+
+/// Receive a requested erasure chunk.
+pub enum ChunkFetchingResponse {
+	/// The requested chunk data.
+	Chunk(ChunkResponse),
+	/// Node was not in possession of the requested chunk.
+	NoSuchChunk,
+}
+
+/// This omits the chunk's index because it is already known by
+/// the requester and by not transmitting it, we ensure the requester is going to use his index
+/// value for validating the response, thus making sure he got what he requested.
+pub struct ChunkResponse {
+	/// The erasure-encoded chunk of data belonging to the candidate block.
+	pub chunk: Vec<u8>,
+	/// Proof for this chunk's branch in the Merkle tree.
+	pub proof: Proof,
+}
+}
+

Version 2 will add an index field to ChunkResponse:

+
#![allow(unused)]
+fn main() {
+#[derive(Debug, Clone, Encode, Decode)]
+pub struct ChunkResponse {
+	/// The erasure-encoded chunk of data belonging to the candidate block.
+	pub chunk: Vec<u8>,
+	/// Proof for this chunk's branch in the Merkle tree.
+	pub proof: Proof,
+	/// Chunk index.
+	pub index: ChunkIndex
+}
+}
+

An important thing to note is that in version 1, the ValidatorIndex value is always equal to the ChunkIndex. +Until the chunk rotation feature is enabled, this will also be true for version 2. However, after the feature is +enabled, this will generally not be true.

+

The requester will send the request to validator with index V. The responder will map the V validator index to the +C chunk index and respond with the C-th chunk. This mapping can be seamless, by having each validator store their +chunk by ValidatorIndex (just as before).

+

The protocol implementation MAY check the returned ChunkIndex against the expected mapping to ensure that +it received the right chunk. +In practice, this is desirable during availability-distribution and systematic chunk recovery. However, regular +recovery may not check this index, which is particularly useful when participating in disputes that don't allow +for easy access to the validator->chunk mapping. See Appendix A for more details.

+

In any case, the requester MUST verify the chunk's proof using the provided index.

+

During availability-recovery, given that the requester may not know (if the mapping is not available) whether the +received chunk corresponds to the requested validator index, it has to keep track of received chunk indices and ignore +duplicates. Such duplicates should be considered the same as an invalid/garbage response (drop it and move on to the +next validator - we can't punish via reputation changes, because we don't know which validator misbehaved).
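A minimal sketch of that bookkeeping (hypothetical helper, not part of the protocol) could look like this:

```rust
use std::collections::HashSet;

/// Hypothetical helper for tracking chunk indices received during recovery.
/// Returns false for a duplicate, which should then be treated like a garbage response.
struct ReceivedChunks {
    indices: HashSet<u32>,
}

impl ReceivedChunks {
    fn new() -> Self {
        Self { indices: HashSet::new() }
    }

    fn accept(&mut self, chunk_index: u32) -> bool {
        self.indices.insert(chunk_index)
    }
}
```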

+

Upgrade path

+

Step 1: Enabling new network protocol

+

In the beginning, both /req_chunk/1 and /req_chunk/2 will be supported, until all validators and +collators have upgraded to use the new version. V1 will be considered deprecated. During this step, the mapping will +still be 1:1 (ValidatorIndex == ChunkIndex), regardless of protocol. +Once all nodes are upgraded, a new release will be cut that removes the v1 protocol. Only once all nodes have upgraded +to this version will step 2 commence.

+

Step 2: Enabling the new validator->chunk mapping

+

Considering that the Validator->Chunk mapping is critical to para consensus, the change needs to be enacted atomically +via governance, only after all validators have upgraded the node to a version that is aware of this mapping, +functionality-wise. +It needs to be explicitly stated that after the governance enactment, validators that run older client versions that +don't support this mapping will not be able to participate in parachain consensus.

+

Additionally, an error will be logged when starting a validator with an older version, after the feature was enabled.

+

On the other hand, collators will not be required to upgrade in this step (but are still required to upgrade for step 1), as regular chunk recovery will work as before, given that version 1 of the networking protocol has been removed. Note that collators only perform availability-recovery in rare, adversarial scenarios, so it is fine to not optimise for this case and let them upgrade at their own pace.

+

To support enabling this feature via the runtime, we will use the NodeFeatures bitfield of the HostConfiguration +struct (added in https://github.com/paritytech/polkadot-sdk/pull/2177). Adding and enabling a feature +with this scheme does not require a runtime upgrade, but only a referendum that issues a +Configuration::set_node_feature extrinsic. Once the feature is enabled and new configuration is live, the +validator->chunk mapping ceases to be a 1:1 mapping and systematic recovery may begin.

+

Drawbacks

+
    +
  • Getting access to the core_index that used to be occupied by a candidate in some parts of the dispute protocol is +very complicated (See appendix A). This RFC assumes that availability-recovery processes initiated during +disputes will only use regular recovery, as before. This is acceptable since disputes are rare occurrences in practice +and is something that can be optimised later, if need be. Adding the core_index to the CandidateReceipt would +mitigate this problem and will likely be needed in the future for CoreJam and/or Elastic scaling. +Related discussion about updating CandidateReceipt
  • It's a breaking change that requires all validators and collators to upgrade their node version at least once.
+

Testing, Security, and Privacy

+

Extensive testing will be conducted - both automated and manual. +This proposal doesn't affect security or privacy.

+

Performance, Ergonomics, and Compatibility

+

Performance

+

This is a necessary data availability optimisation, as reed-solomon erasure coding has proven to be a top consumer of +CPU time in polkadot as we scale up the parachain block size and number of availability cores.

+

With this optimisation, preliminary performance results show that CPU time used for reed-solomon coding/decoding can be +halved and total POV recovery time decrease by 80% for large POVs. See more +here.

+

Ergonomics

+

Not applicable.

+

Compatibility

+

This is a breaking change. See upgrade path section above. +All validators and collators need to have upgraded their node versions before the feature will be enabled via a +governance call.

+

Prior Art and References

+

See comments on the tracking issue and the +in-progress PR

+

Unresolved Questions

+

Not applicable.

+ +

This enables future optimisations for the performance of availability recovery, such as retrieving batched systematic +chunks from backers/approval-checkers.

+

Appendix A

+

This appendix details the intricacies of getting access to the core index of a candidate in parity's polkadot node.

+

Here, core_index refers to the index of the core that a candidate was occupying while it was pending availability +(from backing to inclusion).

+

Availability-recovery can currently be triggered by the following phases in the polkadot protocol:

+
  1. During the approval voting process.
  2. By other collators of the same parachain.
  3. During disputes.

Getting the right core index for a candidate can be troublesome. Here's a breakdown of how different parts of the +node implementation can get access to it:

+
  1. The approval-voting process for a candidate begins after observing that the candidate was included. Therefore, the node has easy access to the block where the candidate got included (and also the core that it occupied).

  2. The pov_recovery task of the collators starts availability recovery in response to noticing a candidate getting backed, which enables easy access to the core index the candidate started occupying.

  3. Disputes may be initiated on a number of occasions:

     3.a. is initiated by the validator as a result of finding an invalid candidate while participating in the approval-voting protocol. In this case, availability-recovery is not needed, since the validator already issued their vote.

     3.b. is initiated by the validator noticing dispute votes recorded on-chain. In this case, we can safely assume that the backing event for that candidate has been recorded and kept in memory.

     3.c. is initiated as a result of getting a dispute statement from another validator. It is possible that the dispute is happening on a fork that was not yet imported by this validator, so the subsystem may not have seen this candidate being backed.

A naive attempt of solving 3.c would be to add a new version for the disputes request-response networking protocol. +Blindly passing the core index in the network payload would not work, since there is no way of validating that +the reported core_index was indeed the one occupied by the candidate at the respective relay parent.

+

Another attempt could be to include in the message the relay block hash where the candidate was included. +This information would be used in order to query the runtime API and retrieve the core index that the candidate was +occupying. However, considering it's part of an unimported fork, the validator cannot call a runtime API on that block.

+

Adding the core_index to the CandidateReceipt would solve this problem and would enable systematic recovery for all +dispute scenarios.

diff --git a/text/0048-session-keys-runtime-api.html b/text/0048-session-keys-runtime-api.html
new file mode 100644
index 000000000..2c1778125
--- /dev/null
+++ b/text/0048-session-keys-runtime-api.html
@@ -0,0 +1,333 @@

RFC-0048: Generate ownership proof for SessionKeys

+
+ + + +
Start Date: 13 November 2023
Description: Change SessionKeys runtime api to support generating an ownership proof for the on chain registration.
Authors: Bastian Köcher
+
+

Summary

+

This RFC proposes to change the SessionKeys::generate_session_keys runtime api interface. This runtime api is used by validator operators to generate new session keys on a node. The public session keys are then registered manually on chain by the validator operator. Before this RFC it was not possible for the on chain logic to ensure that the account setting the public session keys is also in possession of the private session keys. To solve this, the RFC proposes to pass the account id of the account doing the registration on chain to generate_session_keys. Further, this RFC proposes to change the return value of the generate_session_keys function so that it returns not only the public session keys, but also the proof of ownership for the private session keys. The validator operator will then need to send the public session keys and the proof together when registering new session keys on chain.

+

Motivation

+

When submitting the new public session keys to the on chain logic, there is no verification of possession of the private session keys. This means that users can register essentially any public session keys on chain. While the on chain logic ensures that there are no duplicate keys, someone could try to prevent others from registering new session keys by setting them first. While this wouldn't bring the "attacker" any advantage (rather disadvantages, such as potential slashes on their account), it could prevent someone from e.g. changing their session keys in the event of a private session key leak.

+

After this RFC, this kind of attack would no longer be possible, because the on chain logic can verify that the sending account is in possession of the private session keys.

+

Stakeholders

+
  • Polkadot runtime implementors
  • Polkadot node implementors
  • Validator operators

Explanation

+

We are first going to explain the proof format being used:

+
type Proof = (Signature, Signature, ..);

The proof is a SCALE encoded tuple of signatures, one from each private session key signing the account_id. The actual type of each signature depends on the corresponding session key's cryptographic algorithm. The order of the signatures in the proof is the same as the order of the session keys in the SessionKeys type declared in the runtime.

+

The version of the SessionKeys runtime api needs to be bumped to 1 to reflect the changes to the signature of SessionKeys_generate_session_keys:

+
pub struct OpaqueGeneratedSessionKeys {
	pub keys: Vec<u8>,
	pub proof: Vec<u8>,
}

fn SessionKeys_generate_session_keys(account_id: Vec<u8>, seed: Option<Vec<u8>>) -> OpaqueGeneratedSessionKeys;

The default calling convention for runtime apis is applied, meaning the parameters are passed as a pointer to a SCALE encoded array together with the length of that array. The return value is a u64 that packs the pointer to the SCALE encoded return value and its length (array_ptr | length << 32). So, the actual exported function signature looks like:

+
fn SessionKeys_generate_session_keys(array: *const u8, len: usize) -> u64;
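For clarity, here is a small sketch of how such a packed u64 return value can be constructed and taken apart (helper names are made up for illustration):

```rust
// Packs a pointer and a length into the u64 return value (array_ptr | length << 32)
// and splits it again. Names are illustrative only.
fn pack(array_ptr: u32, length: u32) -> u64 {
    (array_ptr as u64) | ((length as u64) << 32)
}

fn unpack(packed: u64) -> (u32, u32) {
    ((packed & 0xffff_ffff) as u32, (packed >> 32) as u32)
}

fn main() {
    let packed = pack(0x0010_0000, 96);
    assert_eq!(unpack(packed), (0x0010_0000, 96));
}
```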

The on chain logic for setting the SessionKeys needs to be changed as well. It +already gets the proof passed as Vec<u8>. This proof needs to be decoded to +the actual Proof type as explained above. The proof and the SCALE encoded +account_id of the sender are used to verify the ownership of the SessionKeys.
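A rough sketch of that check, with stand-in types (the real implementation uses the concrete cryptographic types declared in SessionKeys and the decoded Proof tuple):

```rust
/// Stand-in for a public session key; real runtimes use sr25519, ed25519, etc.
trait SessionPublicKey {
    fn verify(&self, message: &[u8], signature: &[u8]) -> bool;
}

/// The proof is valid when every public session key, in declaration order, has a matching
/// signature over the SCALE encoded account id of the sender.
fn verify_ownership(
    scale_encoded_account_id: &[u8],
    public_keys: &[&dyn SessionPublicKey],
    signatures: &[Vec<u8>],
) -> bool {
    public_keys.len() == signatures.len()
        && public_keys
            .iter()
            .zip(signatures)
            .all(|(key, sig)| key.verify(scale_encoded_account_id, sig))
}
```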

+

Drawbacks

+

Validator operators need to pass their account id when rotating their session keys on a node. This will require updating some high level docs and making users familiar with the slightly changed ergonomics.

+

Testing, Security, and Privacy

+

Testing of the new changes only requires passing an appropriate owner for the current testing context. +The changes to the proof generation and verification got audited to ensure they are correct.

+

Performance, Ergonomics, and Compatibility

+

Performance

+

The session key generation is an offchain process and thus doesn't influence the performance of the chain. Verifying the proof is done on chain as part of the transaction logic for setting the session keys. Verifying the proof requires one signature verification per individual session key. As setting the session keys happens quite rarely, it should not influence the overall system performance.

+

Ergonomics

+

The interfaces have been optimized to make it as easy as possible to generate the ownership proof.

+

Compatibility

+

Introduces a new version of the SessionKeys runtime api. Thus, nodes should be updated before +a runtime is enacted that contains these changes otherwise they will fail to generate session keys. +The RPC that exists around this runtime api needs to be updated to support passing the account id +and for returning the ownership proof alongside the public session keys.

+

UIs would need to be updated to support the new RPC and the changed on chain logic.

+

Prior Art and References

+

None.

+

Unresolved Questions

+

None.

+ +

Substrate implementation of the RFC.

diff --git a/text/0050-fellowship-salaries.html b/text/0050-fellowship-salaries.html
new file mode 100644
index 000000000..1b4419977
--- /dev/null
+++ b/text/0050-fellowship-salaries.html
@@ -0,0 +1,351 @@

RFC-0050: Fellowship Salaries

+
+ + + +
Start Date: 15 November 2023
Description: Proposal to set rank-based Fellowship salary levels.
Authors: Joe Petrowski, Gavin Wood
+
+

Summary

+

The Fellowship Manifesto states that members should receive a monthly allowance on par with gross +income in OECD countries. This RFC proposes concrete amounts.

+

Motivation

+

One motivation for the Technical Fellowship is to provide an incentive mechanism that can induct and +retain technical talent for the continued progress of the network.

+

In order for members to uphold their commitment to the network, they should receive support to +ensure that their needs are met such that they have the time to dedicate to their work on Polkadot. +Given the high expectations of Fellows, it is reasonable to consider contributions and requirements +on par with a full-time job. Providing a livable wage to those making such contributions makes it +pragmatic to work full-time on Polkadot.

+

Note: Goals of the Fellowship, expectations for each Dan, and conditions for promotion and demotion +are all explained in the Manifesto. This RFC is only to propose concrete values for allowances.

+

Stakeholders

+
  • Fellowship members
  • Polkadot Treasury

Explanation

+

This RFC proposes agreeing on salaries relative to a single level, the III Dan. As such, changes to +the amount or asset used would only be on a single value, and all others would adjust relatively. A +III Dan is someone whose contributions match the expectations of a full-time individual contributor. +The salary at this level should be reasonably close to averages in OECD countries.

+
+ + + + + + + + + +
Dan  | Factor
---- | ------
I    | 0.125
II   | 0.25
III  | 1
IV   | 1.5
V    | 2.0
VI   | 2.5
VII  | 2.5
VIII | 2.5
IX   | 2.5
+

Note that there is a sizable increase between II Dan (Proficient) and III Dan (Fellow). By the third +Dan, it is generally expected that one is working on Polkadot as their primary focus in a full-time +capacity.

+

Salary Asset

+

Although the Manifesto (Section 8) specifies a monthly allowance in DOT, this RFC proposes the use +of USDT instead. The allowance is meant to provide members stability in meeting their day-to-day +needs and recognize contributions. Using USDT provides more stability and less speculation.

+

This RFC proposes that a III Dan earn 80,000 USDT per year. The salary at this level is commensurate +with average salaries in OECD countries (note: 77,000 USD in the U.S., with an average engineer at +100,000 USD). The other ranks would thus earn:

+
+ + + + + + + + + +
Dan  | Annual Salary
---- | -------------
I    | 10,000
II   | 20,000
III  | 80,000
IV   | 120,000
V    | 160,000
VI   | 200,000
VII  | 200,000
VIII | 200,000
IX   | 200,000
+

The salary levels for Architects (IV, V, and VI Dan) are typical of senior engineers.

+

Allowances will be managed by the Salary pallet.
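Spelled out, the per-rank amounts follow directly from the factor table applied to the III Dan base; a minimal sketch of that arithmetic (amounts in USDT per year):

```rust
fn main() {
    let base = 80_000.0_f64; // III Dan annual salary in USDT
    let factors = [("I", 0.125), ("II", 0.25), ("III", 1.0), ("IV", 1.5), ("V", 2.0), ("VI", 2.5)];
    for (dan, factor) in factors {
        // e.g. I -> 10,000, IV -> 120,000, VI -> 200,000
        println!("{dan:>4} Dan: {:>8.0} USDT / year", base * factor);
    }
}
```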

+

Projections

+

Based on the current membership, the maximum yearly and monthly costs are shown below:

+
+ + + + + + + + + +
Dan   | Salary  | Members | Yearly    | Monthly
----- | ------- | ------- | --------- | -------
I     | 10,000  | 27      | 270,000   | 22,500
II    | 20,000  | 11      | 220,000   | 18,333
III   | 80,000  | 8       | 640,000   | 53,333
IV    | 120,000 | 3       | 360,000   | 30,000
V     | 160,000 | 5       | 800,000   | 66,667
VI    | 200,000 | 3       | 600,000   | 50,000
> VI  | 200,000 | 0       | 0         | 0
Total |         |         | 2,890,000 | 240,833
+
+

Note that these are the maximum amounts; members may choose to take a passive (lower) level. On the +other hand, more people will likely join the Fellowship in the coming years.

+

Updates

+

Updates to these levels, whether relative ratios, the asset used, or the amount, shall be done via +RFC.

+

Drawbacks

+

By not using DOT for payment, the protocol relies on the stability of other assets and the ability +to acquire them. However, the asset of choice can be changed in the future.

+

Testing, Security, and Privacy

+

N/A.

+

Performance, Ergonomics, and Compatibility

+

Performance

+

N/A

+

Ergonomics

+

N/A

+

Compatibility

+

N/A

+

Prior Art and References

+ +

Unresolved Questions

+

None at present.

diff --git a/text/0056-one-transaction-per-notification.html b/text/0056-one-transaction-per-notification.html
new file mode 100644
index 000000000..7215d137f
--- /dev/null
+++ b/text/0056-one-transaction-per-notification.html
@@ -0,0 +1,308 @@

RFC-0056: Enforce only one transaction per notification

+
+ + + +
Start Date: 2023-11-30
Description: Modify the transactions notifications protocol to always send only one transaction at a time
Authors: Pierre Krieger
+
+

Summary

+

When two peers connect to each other, they open (amongst other things) a so-called "notifications protocol" substream dedicated to gossiping transactions to each other.

+

Each notification on this substream currently consists of a SCALE-encoded Vec<Transaction> where Transaction is defined in the runtime.

+

This RFC proposes to modify the format of the notification to become (Compact(1), Transaction). This maintains backwards compatibility, as this new format decodes as a Vec of length equal to 1.

+

Motivation

+

There exists three motivations behind this change:

+
  • It is technically impossible to decode a SCALE-encoded Vec<Transaction> into a list of SCALE-encoded transactions without knowing how to decode a Transaction. That's because a Vec<Transaction> consists of several Transactions one after the other in memory, without any delimiter that indicates the end of a transaction and the start of the next. Unfortunately, the format of a Transaction is runtime-specific. This means that the code that receives notifications is necessarily tied to a specific runtime, and it is not possible to write runtime-agnostic code.

  • Notifications protocols are already designed to be optimized to send many items. Currently, when it comes to transactions, each item is a Vec<Transaction> that consists of multiple sub-items of type Transaction. This two-step hierarchy is completely unnecessary, and was originally written at a time when the networking protocol of Substrate didn't have proper multiplexing.

  • It makes the implementation much more straightforward by not having to repeat code related to back-pressure. See explanations below.

Stakeholders

+

Low-level developers.

+

Explanation

+

To give an example, if you send one notification with three transactions, the bytes that are sent on the wire are:

+
concat(
    leb128(total-size-in-bytes-of-the-rest),
    scale(compact(3)), scale(transaction1), scale(transaction2), scale(transaction3)
)
+

But you can also send three notifications of one transaction each, in which case it is:

+
concat(
    leb128(size(scale(transaction1)) + 1), scale(compact(1)), scale(transaction1),
    leb128(size(scale(transaction2)) + 1), scale(compact(1)), scale(transaction2),
    leb128(size(scale(transaction3)) + 1), scale(compact(1)), scale(transaction3)
)
+

Right now the sender can choose which of the two encodings to use. This RFC proposes to make the second encoding mandatory.

+

The format of the notification would become a SCALE-encoded (Compact(1), Transaction). +A SCALE-compact encoded 1 is one byte of value 4. In other words, the format of the notification would become concat(&[4], scale_encoded_transaction). +This is equivalent to forcing the Vec<Transaction> to always have a length of 1, and I expect the Substrate implementation to simply modify the sending side to add a for loop that sends one notification per item in the Vec.
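As a sketch of the sending side (assuming the parity-scale-codec crate; the transaction is treated here as opaque, already SCALE-encoded bytes):

```rust
use parity_scale_codec::{Compact, Encode};

/// Builds the proposed notification format: Compact(1) followed by the already
/// SCALE-encoded transaction, i.e. concat(&[4], scale_encoded_transaction).
fn encode_notification(scale_encoded_transaction: &[u8]) -> Vec<u8> {
    let mut out = Compact(1u32).encode();
    out.extend_from_slice(scale_encoded_transaction);
    out
}

fn main() {
    let notification = encode_notification(&[0xde, 0xad, 0xbe, 0xef]);
    assert_eq!(notification[0], 4); // a SCALE-compact encoded 1 is a single byte of value 4
}
```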

+

As explained in the motivation section, this allows extracting scale(transaction) items without having to know how to decode them.

+

By "flattening" the two-steps hierarchy, an implementation only needs to back-pressure individual notifications rather than back-pressure notifications and transactions within notifications.

+

Drawbacks

+

This RFC chooses to maintain backwards compatibility at the cost of introducing a very small wart (the Compact(1)).

+

An alternative could be to introduce a new version of the transactions notifications protocol that sends one Transaction per notification, but this is significantly more complicated to implement and can always be done later in case the Compact(1) is bothersome.

+

Testing, Security, and Privacy

+

Irrelevant.

+

Performance, Ergonomics, and Compatibility

+

Performance

+

Irrelevant.

+

Ergonomics

+

Irrelevant.

+

Compatibility

+

The change is backwards compatible if done in two steps: modify the sender to always send one transaction per notification, then, after a while, modify the receiver to enforce the new format.

+

Prior Art and References

+

Irrelevant.

+

Unresolved Questions

+

None.

+ +

None. This is a simple isolated change.

diff --git a/text/0059-nodes-capabilities-discovery.html b/text/0059-nodes-capabilities-discovery.html
new file mode 100644
index 000000000..00d7ef126
--- /dev/null
+++ b/text/0059-nodes-capabilities-discovery.html
@@ -0,0 +1,327 @@

RFC-0059: Add a discovery mechanism for nodes based on their capabilities

+
+ + + +
Start Date: 2023-12-18
Description: Nodes having certain capabilities register themselves in the DHT to be discoverable
Authors: Pierre Krieger
+
+

Summary

+

This RFC proposes to make the mechanism of RFC #8 more generic by introducing the concept of "capabilities".

+

Implementations can implement certain "capabilities", such as serving old block headers or being a parachain bootnode.

+

The discovery mechanism of RFC #8 is extended to be able to discover nodes of specific capabilities.

+

Motivation

+

The Polkadot peer-to-peer network is made of nodes. Not all these nodes are equal. Some nodes store only the headers of recent blocks, some nodes store all the block headers and bodies since the genesis, some nodes store the storage of all blocks since the genesis, and so on.

+

It is currently not possible to know ahead of time (without connecting to it and asking) which nodes have which data available, and it is not easily possible to build a list of nodes that have a specific piece of data available.

+

If you want to download for example the header of block 500, you have to connect to a randomly-chosen node, ask it for block 500, and if it says that it doesn't have the block, disconnect and try another randomly-chosen node. +In certain situations such as downloading the storage of old blocks, nodes that have the information are relatively rare, and finding through trial and error a node that has the data can take a long time.

+

This RFC attempts to solve this problem by giving the possibility to build a list of nodes that are capable of serving specific data.

+

Stakeholders

+

Low-level client developers. +People interested in accessing the archive of the chain.

+

Explanation

+

Reading RFC #8 first might help with comprehension, as this RFC is very similar.

+

Please keep in mind while reading that everything below applies for both relay chains and parachains, except mentioned otherwise.

+

Capabilities

+

This RFC defines a list of so-called capabilities:

+
    +
  • Head of chain provider. An implementation with this capability must be able to serve to other nodes block headers, block bodies, justifications, calls proofs, and storage proofs of "recent" (see below) blocks, and, for relay chains, to serve to other nodes warp sync proofs where the starting block is a session change block and must participate in Grandpa and Beefy gossip.
  • History provider. An implementation with this capability must be able to serve to other nodes block headers and block bodies of any block since the genesis, and must be able to serve to other nodes justifications of any session change block since the genesis up until and including their currently finalized block.
  • Archive provider. This capability is a superset of History provider. In addition to the requirements of History provider, an implementation with this capability must be able to serve call proofs and storage proof requests of any block since the genesis up until and including their currently finalized block.
  • Parachain bootnode (only for relay chains). An implementation with this capability must be able to serve the network request described in RFC 8.
+

More capabilities might be added in the future.

+

In the context of the head of chain provider, the word "recent" means: any not-finalized-yet block that is equal to or an ancestor of a block that it has announced through a block announce, and any finalized block whose height is superior to its current finalized block minus 16. +This does not include blocks that have been pruned because they're not a descendant of its current finalized block. In other words, blocks that aren't a descendant of the current finalized block can be thrown away. +A gap of blocks is required due to race conditions: when a node finalizes a block, it takes some time for its peers to be made aware of this, during which they might send requests concerning older blocks. The choice of the number of blocks in this gap is arbitrary.

+

Substrate is currently by default a head of chain provider. After it has finished warp syncing, it downloads the list of old blocks, after which it becomes a history provider. If Substrate is instead configured as an archive node, then it downloads all blocks since the genesis and builds their state, after which it becomes an archive provider, history provider, and head of chain provider. If block pruning is enabled and the chain is a relay chain, then Substrate unfortunately doesn't implement any of these capabilities, not even head of chain provider. This is considered a bug that should be fixed, see https://github.com/paritytech/polkadot-sdk/issues/2733.

+

DHT provider registration

+

This RFC heavily relies on the functionalities of the Kademlia DHT already in use by Polkadot. You can find a link to the specification here.

+

Implementations that have the history provider capability should register themselves as providers under the key sha256(concat("history", randomness)).

+

Implementations that have the archive provider capability should register themselves as providers under the key sha256(concat("archive", randomness)).

+

Implementations that have the parachain bootnode capability should register themselves as provider under the key sha256(concat(scale_compact(para_id), randomness)), as described in RFC 8.

+

"Register themselves as providers" consists in sending ADD_PROVIDER requests to nodes close to the key, as described in the Content provider advertisement section of the specification.

+

The value of randomness can be found in the randomness field when calling the BabeApi_currentEpoch function.

+

In order to avoid downtimes when the key changes, nodes should also register themselves as a secondary key that uses a value of randomness equal to the randomness field when calling BabeApi_nextEpoch.
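A sketch of the key derivation for the history and archive capabilities (assuming the sha2 crate; the parachain bootnode key additionally uses the SCALE-compact-encoded para_id as described in RFC 8):

```rust
use sha2::{Digest, Sha256};

/// Derives a capability provider key, e.g. sha256(concat("history", randomness)).
/// `randomness` is the value from the `randomness` field of BabeApi_currentEpoch
/// (or BabeApi_nextEpoch for the secondary registration).
fn capability_provider_key(capability: &[u8], randomness: &[u8; 32]) -> Vec<u8> {
    let mut hasher = Sha256::new();
    hasher.update(capability); // e.g. b"history" or b"archive"
    hasher.update(randomness);
    hasher.finalize().to_vec()
}
```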

+

Implementers should be aware that their implementation of Kademlia might already hash the key before XOR'ing it. The key is not meant to be hashed twice.

+

Implementations must not register themselves if they don't fulfill the capability yet. For example, a node configured to be an archive node but that is still building its archive state in the background must register itself only after it has finished building its archive.

+

Secondary DHTs

+

Implementations that have the history provider capability must also participate in a secondary DHT that comprises only nodes with that capability. The protocol name of that secondary DHT must be /<genesis-hash>/kad/history.

+

Similarly, implementations that have the archive provider capability must also participate in a secondary DHT that comprises only nodes with that capability and whose protocol name is /<genesis-hash>/kad/archive.

+

Just like implementations must not register themselves if they don't fulfill their capability yet, they must also not participate in the secondary DHT if they don't fulfill their capability yet.

+

Head of the chain providers

+

Implementations that have the head of the chain provider capability do not register themselves as providers, but instead are the nodes that participate in the main DHT. In other words, they are the nodes that serve requests of the /<genesis_hash>/kad protocol.

+

Any implementation that isn't a head of the chain provider (read: light clients) must not participate in the main DHT. This is already presently the case.

+

Implementations must not participate in the main DHT if they don't fulfill the capability yet. For example, a node that is still in the process of warp syncing must not participate in the main DHT. However, assuming that warp syncing doesn't last more than a few seconds, it is acceptable to ignore this requirement in order to avoid complicating implementations too much.

+

Drawbacks

+

None that I can see.

+

Testing, Security, and Privacy

+

The content of this section is basically the same as the one in RFC 8.

+

This mechanism doesn't add or remove any security by itself, as it relies on existing mechanisms.

+

Due to the way Kademlia works, it would become the responsibility of the 20 Polkadot nodes whose sha256(peer_id) is closest to the key (described in the explanations section) to store the list of nodes that have specific capabilities. +Furthermore, when a large number of providers are registered, only the providers closest to the key are kept, up to a certain implementation-defined limit.

+

For this reason, an attacker can abuse this mechanism by randomly generating libp2p PeerIds until they find the 20 entries closest to the key representing the target capability. They are then in control of the list of nodes with that capability. While doing this can in no way be actually harmful, it could lead to eclipse attacks.

+

Because the key changes periodically and isn't predictable, and assuming that the Polkadot DHT is sufficiently large, it is not realistic for an attack like this to be maintained in the long term.

+

Performance, Ergonomics, and Compatibility

+

Performance

+

The DHT mechanism generally has a low overhead, especially given that publishing providers is done only every 24 hours.

+

Doing a Kademlia iterative query then sending a provider record shouldn't take more than around 50 kiB in total of bandwidth for the parachain bootnode.

+

Assuming 1000 nodes with a specific capability, the 20 Polkadot full nodes corresponding to that capability will each receive a sudden spike of a few megabytes of networking traffic when the key rotates. Again, this is relatively negligible. If this becomes a problem, one can add a random delay before a node registers itself to be the provider of the key corresponding to BabeApi_next_epoch.

+

Maybe the biggest uncertainty is the traffic that the 20 Polkadot full nodes will receive from light clients that want to know the nodes with a capability. If this ever becomes a problem, the value of 20 is an arbitrary constant that can be increased for more redundancy.

+

Ergonomics

+

Irrelevant.

+

Compatibility

+

Irrelevant.

+

Prior Art and References

+

Unknown.

+

Unresolved Questions

+

While it fundamentally doesn't change much to this RFC, using BabeApi_currentEpoch and BabeApi_nextEpoch might be inappropriate. I'm not familiar enough with good practices within the runtime to have an opinion here. Should it be an entirely new pallet?

+ +

This RFC would make it possible to reliably discover archive nodes, which would make it possible to reliably send archive node requests, something that isn't currently possible. This could solve the problem of finding archive RPC node providers by migrating archive-related request to using the native peer-to-peer protocol rather than JSON-RPC.

+

If we ever decide to break backwards compatibility, we could divide the "history" and "archive" capabilities in two, between nodes capable of serving older blocks and nodes capable of serving newer blocks. +We could even add to the peer-to-peer network nodes that are only capable of serving older blocks (by reading from a database) but do not participate in the head of the chain, and that just exist for historical purposes.

diff --git a/text/0078-merkleized-metadata.html b/text/0078-merkleized-metadata.html
new file mode 100644
index 000000000..0fde7bfce
--- /dev/null
+++ b/text/0078-merkleized-metadata.html
@@ -0,0 +1,580 @@

RFC-0078: Merkleized Metadata

+
+ + + +
Start Date: 22 February 2024
Description: Include merkleized metadata hash in extrinsic signature for trust-less metadata verification.
Authors: Zondax AG, Parity Technologies
+
+

Summary

+

To interact with chains in the Polkadot ecosystem it is required to know how transactions are encoded and how to read state. For doing this, Polkadot-SDK, the framework used by most of the chains in the Polkadot ecosystem, exposes metadata about the runtime to the outside. UIs, wallets, and others can use this metadata to interact with these chains. This makes the metadata a crucial piece of the transaction encoding as users are relying on the interacting software to encode the transactions in the correct format.

+

It gets even more important when the user signs the transaction in an offline wallet, as the device by its nature cannot get access to the metadata without relying on the online wallet to provide it. This means that the offline wallet needs to trust an online party, rendering the security assumptions of offline devices moot.

+

This RFC proposes a way for offline wallets to leverage metadata, within the constraints of these. The design idea is that the metadata is chunked and these chunks are put into a merkle tree. The root hash of this merkle tree represents the metadata. The offline wallets can use the root hash to decode transactions by getting proofs for the individual chunks of the metadata. This root hash is also included in the signed data of the transaction (but not sent as part of the transaction). The runtime is then including its known metadata root hash when verifying the transaction. If the metadata root hash known by the runtime differs from the one that the offline wallet used, it very likely means that the online wallet provided some fake data and the verification of the transaction fails.

+

Users depend on offline wallets to correctly display decoded transactions before signing. With merkleized metadata, they can be assured of the transaction's legitimacy, as incorrect transactions will be rejected by the runtime.

+

Motivation

+

Polkadot's innovative design (both relay chain and parachains) gives developers the ability to upgrade their networks as frequently as they need. These systems manage to keep integrations working after upgrades with the help of FRAME Metadata. This Metadata, which is in the order of half a MiB for most Polkadot-SDK chains, completely describes chain interfaces and properties. Securing this metadata is key for users to be able to interact with the Polkadot-SDK chain in the expected way.

+

On the other hand, offline wallets provide a secure way for blockchain users to hold their own keys (some do a better job than others). These devices seldom get upgraded, usually account for one particular network and have very small internal memories. Currently in the Polkadot ecosystem there is no secure way of having these offline devices know the latest Metadata of the Polkadot-SDK chain they are interacting with. This results in a plethora of similar yet slightly different offline wallets for all the different Polkadot-SDK chains, as well as the difficulty of keeping these regularly updated, thus not fully leveraging Polkadot-SDK's unique forkless upgrade feature.

+

The two main reasons why this is not possible today are:

+
  1. Metadata is too large for offline devices. Currently, Polkadot-SDK metadata is on average 500 KiB, which is more than what most widely adopted offline devices can hold.
  2. Metadata is not authenticated. Even if there was enough space on offline devices to hold the metadata, the user would be trusting the entity providing this metadata to the hardware wallet. In the Polkadot ecosystem, this is how Polkadot Vault currently works.

This RFC proposes a solution to make FRAME Metadata compatible with offline signers in a secure way. As it leverages FRAME Metadata, it does not only ensure that offline devices can always keep up to date with every FRAME based chain, but also that every offline wallet will be compatible with all FRAME based chains, avoiding the need of per-chain implementations.

+

Requirements

+
  1. Metadata's integrity MUST be preserved. If any compromise were to happen, extrinsics sent with compromised metadata SHOULD fail.
  2. Metadata information that could be used in signable extrinsic decoding MAY be included in digest, yet its inclusion MUST be indicated in signed extensions.
  3. Digest MUST be deterministic with respect to metadata.
  4. Digest MUST be cryptographically strong against pre-image, both first (finding an input that results in given digest) and second (finding an input that results in same digest as some other input given).
  5. Extra-metadata information necessary for extrinsic decoding and constant within runtime version MUST be included in digest.
  6. It SHOULD be possible to quickly withdraw offline signing mechanism without access to cold signing devices.
  7. Digest format SHOULD be versioned.
  8. Work necessary for proving metadata authenticity MAY be omitted at discretion of signer device design (to support automation tools).
+

Reduce metadata size

+

Metadata should be stripped from parts that are not necessary to parse a signable extrinsic, then it should be separated into a finite set of self-descriptive chunks. Thus, a subset of chunks necessary for signable extrinsic decoding and rendering could be sent, possibly in small portions (ultimately, one at a time), to cold devices together with the proof.

+
  1. Single chunk with proof payload size SHOULD fit within a few kB;
  2. Chunks handling mechanism SHOULD support chunks being sent in any order without memory utilization overhead;
  3. Unused enum variants MUST be stripped (this has great impact on transmitted metadata size; examples: era enum, enum with all calls for call batching).

Stakeholders

+
  • Runtime implementors
  • UI/wallet implementors
  • Offline wallet implementors

The idea for this RFC was brought up by runtime implementors and was extensively discussed with offline wallet implementors. It was designed in such a way that it can work easily with the existing offline wallet solutions in the Polkadot ecosystem.

+

Explanation

+

The FRAME metadata provides a wide range of information about a FRAME based runtime. It contains information about the pallets, the calls per pallet, the storage entries per pallet, runtime APIs, and type information about most of the types that are used in the runtime. For decoding extrinsics on an offline wallet, what is mainly required is type information. Most of the other information in the FRAME metadata is actually not required for decoding extrinsics and thus it can be removed. Therefore, the following is a proposal on a custom representation of the metadata and how this custom metadata is chunked, ensuring that only the needed chunks required for decoding a particular extrinsic are sent to the offline wallet. The necessary information to transform the FRAME metadata type information into the type information presented in this RFC will be provided. However, not every single detail on how to convert from FRAME metadata into the RFC type information is described.

+

First, the MetadataDigest is introduced. After that, ExtrinsicMetadata is covered and finally the actual format of the type information. Then pruning of unrelated type information is covered and how to generate the TypeRefs. In the latest step, merkle tree calculation is explained.

+

Metadata digest

+

The metadata digest is the compact representation of the metadata. The hash of this digest is the metadata hash. Below the type declaration of the Hash type and the MetadataDigest itself can be found:

+
type Hash = [u8; 32];

enum MetadataDigest {
    #[index = 1]
    V1 {
        type_information_tree_root: Hash,
        extrinsic_metadata_hash: Hash,
        spec_version: u32,
        spec_name: String,
        base58_prefix: u16,
        decimals: u8,
        token_symbol: String,
    },
}

The Hash is 32 bytes long and blake3 is used for calculating it. The hash of the MetadataDigest is calculated by blake3(SCALE(MetadataDigest)). Therefore, MetadataDigest is at first SCALE encoded, and then those bytes are hashed.
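A minimal sketch of that hashing step (assuming MetadataDigest, as declared above, derives Encode from parity-scale-codec, and using the blake3 crate):

```rust
use parity_scale_codec::Encode;

/// Sketch: SCALE encode the digest first, then hash the resulting bytes with blake3.
/// Assumes `MetadataDigest` derives `Encode`.
fn metadata_hash(digest: &MetadataDigest) -> [u8; 32] {
    *blake3::hash(&digest.encode()).as_bytes()
}
```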

+

The MetadataDigest itself is represented as an enum. This is done to make it future proof, because a SCALE encoded enum is prefixed by the index of the variant. This index represents the version of the digest. As seen above, there is no index zero and it starts directly with one. Version one of the digest contains the following elements:

+
  • type_information_tree_root: The root of the merkleized type information tree.
  • extrinsic_metadata_hash: The hash of the extrinsic metadata.
  • spec_version: The spec_version of the runtime as found in the RuntimeVersion when generating the metadata. While this information can also be found in the metadata, it is hidden in a big blob of data. To avoid transferring this big blob of data, we directly add this information here.
  • spec_name: Similar to spec_version, but being the spec_name found in the RuntimeVersion.
  • ss58_prefix: The SS58 prefix used for address encoding.
  • decimals: The number of decimals for the token.
  • token_symbol: The symbol of the token.

Extrinsic metadata

+

For decoding an extrinsic, more information on what types are being used is required. The actual format of the extrinsic is the format as described in the Polkadot specification. The metadata for an extrinsic is as follows:

+
#![allow(unused)]
+fn main() {
+struct ExtrinsicMetadata {
+    version: u8,
+    address_ty: TypeRef,
+    call_ty: TypeRef,
+    signature_ty: TypeRef,
+    signed_extensions: Vec<SignedExtensionMetadata>,
+}
+
+struct SignedExtensionMetadata {
+    identifier: String,
+    included_in_extrinsic: TypeRef,
+    included_in_signed_data: TypeRef,
+}
+}
+

To begin with, TypeRef. This is a unique identifier for a type as found in the type information. Using this TypeRef, it is possible to look up the type in the type information tree. More details on this process can be found in the section Generating TypeRef.

+

The actual ExtrinsicMetadata contains the following information:

+
  • version: The version of the extrinsic format. As of writing this, the latest version is 4.
  • address_ty: The address type used by the chain.
  • call_ty: The call type used by the chain. The call in FRAME based runtimes represents the type of transaction being executed on chain. It references the actual function to execute and the parameters of this function.
  • signature_ty: The signature type used by the chain.
  • signed_extensions: FRAME based runtimes can extend the base extrinsic with extra information. This extra information that is put into an extrinsic is called "signed extensions". These extensions offer the runtime developer the possibility to include data directly into the extrinsic, like nonce, tip, amongst others. This means that this data is sent alongside the extrinsic to the runtime. The other possibility these extensions offer is to include extra information only in the signed data that is signed by the sender. This means that this data needs to be known by both sides, the signing side and the verification side. An example for this kind of data is the genesis hash that ensures that extrinsics are unique per chain. Another example is the metadata hash itself that will also be included in the signed data. The offline wallets need to know which signed extensions are present in the chain and this is communicated to them using this field.

The SignedExtensionMetadata provides information about a signed extension:

+
    +
  • identifier: The identifier of the signed extension. The identifier is required to be unique within the Polkadot ecosystem, as otherwise extrinsics may be built incorrectly.
  • +
  • included_in_extrinsic: The type that will be included in the extrinsic by this signed extension.
  • +
  • included_in_signed_data: The type that will be included in the signed data by this signed extension.
  • +
+

Type Information

+

As SCALE is not self-descriptive like JSON, a decoder always needs to know the format of a type to decode it properly. This is where the type information comes into play. The format of the extrinsic is fixed as described above, and ExtrinsicMetadata provides information on which type information is required for which part of the extrinsic. So, offline wallets only need access to the actual type information. It is a requirement that the type information can be chunked into logical pieces to reduce the amount of data that is sent to the offline wallets for decoding the extrinsics. So, the type information is structured in the following way:

+
#![allow(unused)]
+fn main() {
+struct Type {
+    path: Vec<String>,
+    type_def: TypeDef,
+    type_id: Compact<u32>,
+}
+
+enum TypeDef {
+    Composite(Vec<Field>),
+    Enumeration(EnumerationVariant),
+    Sequence(TypeRef),
+    Array(Array),
+    Tuple(Vec<TypeRef>),
+    BitSequence(BitSequence),
+}
+
+struct Field {
+    name: Option<String>,
+    ty: TypeRef,
+    type_name: Option<String>,
+}
+
+struct Array {
+    len: u32,
+    type_param: TypeRef,
+}
+
+struct BitSequence {
+    num_bytes: u8,
+    least_significant_bit_first: bool,
+}
+
+struct EnumerationVariant {
+    name: String,
+    fields: Vec<Field>,
+    index: Compact<u32>,
+}
+
+enum TypeRef {
+    Bool,
+    Char,
+    Str,
+    U8,
+    U16,
+    U32,
+    U64,
+    U128,
+    U256,
+    I8,
+    I16,
+    I32,
+    I64,
+    I128,
+    I256,
+    CompactU8,
+    CompactU16,
+    CompactU32,
+    CompactU64,
+    CompactU128,
+    CompactU256,
+    Void,
+    PerId(Compact<u32>),
+}
+}
+

The Type declares the structure of a type. The type has the following fields:

+
    +
  • path: A path declares the position of a type locally to the place where it is defined. The path is not globally unique; this means that there can be multiple types with the same path.
  • +
  • type_def: The high-level type definition, e.g. the type is a composite of named fields where each field has its own type, or a tuple composed of several different types, etc.
  • +
  • type_id: The unique identifier of this type.
  • +
+

Every Type is composed of multiple different types. Each of these "sub types" can either reference a full Type again or reference one of the primitive types. This is where TypeRef becomes relevant as the type-referencing information. To reference a Type in the type information, a unique identifier is used. As primitive types can be represented using a single byte, they are not put as separate types into the type information. Instead, the primitive types are directly part of TypeRef, to avoid the overhead of referencing them through an extra Type. The special primitive type Void represents a type that encodes to nothing and can be decoded from nothing. As FRAME doesn't support Compact as a primitive type, a more involved implementation is required to convert a FRAME type into a Compact primitive type. SCALE only supports u8, u16, u32, u64 and u128 as Compact, which maps onto the primitive type declaration in this RFC. One special case is a Compact that wraps an empty Tuple, which is expressed as the primitive type Void.
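A hedged sketch of this Compact mapping is shown below. `FrameType` and its variants are hypothetical stand-ins for the FRAME metadata representation, and only the TypeRef variants needed for the example are reproduced; this is not the normative conversion.

#![allow(unused)]
fn main() {
// Illustrative only; not the normative conversion.
enum TypeRef { CompactU8, CompactU16, CompactU32, CompactU64, CompactU128, Void }
enum FrameType { U8, U16, U32, U64, U128, EmptyTuple, Other }

fn compact_type_ref(inner: FrameType) -> TypeRef {
    match inner {
        // SCALE only defines Compact for the unsigned integers up to u128.
        FrameType::U8 => TypeRef::CompactU8,
        FrameType::U16 => TypeRef::CompactU16,
        FrameType::U32 => TypeRef::CompactU32,
        FrameType::U64 => TypeRef::CompactU64,
        FrameType::U128 => TypeRef::CompactU128,
        // Compact wrapping an empty Tuple encodes to nothing -> Void.
        FrameType::EmptyTuple => TypeRef::Void,
        FrameType::Other => panic!("unsupported inner type for Compact"),
    }
}
}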

+

The TypeDef variants have the following meaning:

+
    +
  • Composite: A struct-like type that is composed of multiple different fields. Each Field can have its own type. The order of the fields is significant. A Composite with no fields is expressed as the primitive type Void.
  • +
  • Enumeration: Stores an EnumerationVariant. An EnumerationVariant is a struct that is described by a name, an index and a vector of Fields, each of which can have its own type. Typically Enumerations have more than just one variant, and in those cases Enumeration will appear multiple times in the type information, each time with a different variant. Enumerations can become quite large, yet usually only one variant is required to decode a type; therefore this design brings optimizations and helps reduce the size of the proof (see the short example after this list). An Enumeration with no variants is expressed as the primitive type Void.
  • +
  • Sequence: A vector-like type wrapping the given type.
  • +
  • BitSequence: A vector storing bits. num_bytes represents the size in bytes of the internal storage. If least_significant_bit_first is true, the least significant bit is first; otherwise, the most significant bit is first.
  • +
  • Array: A fixed-length array of a specific type.
  • +
  • Tuple: A composition of multiple types. A Tuple that is composed of no types is expressed as primitive type Void.
  • +
+
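To illustrate how an Enumeration is split per variant, a small, entirely hypothetical example follows; the names, ids and the field representation are invented for illustration only.

#![allow(unused)]
fn main() {
// Hypothetical FRAME enum:  enum Example { A, B(u32) }
// It is represented as one Enumeration entry per variant, all sharing the
// same type id, so a proof only needs to carry the variant that occurs.
struct Variant { name: &'static str, field_count: usize, index: u32 }
struct Entry { type_id: u32, variant: Variant }

let example_entries = [
    Entry { type_id: 42, variant: Variant { name: "A", field_count: 0, index: 0 } },
    Entry { type_id: 42, variant: Variant { name: "B", field_count: 1, index: 1 } },
];
}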

Using the type information together with the SCALE specification provides enough information on how to decode types.

+

Prune unrelated Types

+

The FRAME metadata contains not only the type information for decoding extrinsics; it also contains type information about storage types. The scope of this RFC is only decoding transactions on offline wallets, so a lot of type information can be pruned. To know which type information is required to decode all possible extrinsics, ExtrinsicMetadata has been defined. The extrinsic metadata contains all the types that define the layout of an extrinsic. Therefore, all the types that are accessible from the types declared in the extrinsic metadata can be collected. Collecting all accessible types requires recursively iterating over all types, starting from the types in ExtrinsicMetadata. Note that some types are accessible but don't appear in the final type information and thus can be pruned as well; examples are the inner types of Compact or the types referenced by BitSequence. The result of collecting these accessible types is a list of all the types that are required to decode every possible extrinsic.
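A minimal sketch of this collection step is given below. `FrameMetadata`, `extrinsic_type_ids` and `sub_type_ids` are hypothetical helpers standing in for however a real implementation walks the FRAME type registry; only the reachability walk itself is shown.

#![allow(unused)]
fn main() {
use std::collections::BTreeSet;

// Hypothetical stand-in for the FRAME metadata type registry.
struct FrameMetadata;
impl FrameMetadata {
    fn extrinsic_type_ids(&self) -> Vec<u32> { unimplemented!() }
    fn sub_type_ids(&self, _id: u32) -> Vec<u32> { unimplemented!() }
}

fn collect_accessible_types(metadata: &FrameMetadata) -> BTreeSet<u32> {
    let mut seen = BTreeSet::new();
    // Start from the types referenced by ExtrinsicMetadata (address, call,
    // signature and signed-extension types) and walk everything reachable.
    let mut stack = metadata.extrinsic_type_ids();
    while let Some(id) = stack.pop() {
        if seen.insert(id) {
            stack.extend(metadata.sub_type_ids(id));
        }
    }
    // Types that never appear in the final type information (e.g. the inner
    // type of a Compact) can still be dropped from `seen` afterwards.
    seen
}
}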

+

Generating TypeRef

+

Each TypeRef basically references one of the following types:

+
    +
  • One of the primitive types. All primitive types can be represented by 1 byte and thus, they are directly part of the TypeRef itself to remove an extra level of indirection.
  • +
  • A Type using its unique identifier.
  • +
+

In FRAME metadata, a primitive type is represented like any other type. So, the first step is to remove all the primitive-only types from the list of types generated in the previous section. The resulting list of types is sorted using the id provided by the FRAME metadata. In the last step, the TypeRefs are created. Each reference to a primitive type is replaced by the corresponding TypeRef primitive variant, and every other reference is replaced by the type's unique identifier. The unique identifier of a type is the index of the type in our sorted list. For Enumerations, all variants share the same unique identifier, even though they are represented as multiple type-information entries. All variants need to have the same unique identifier because the reference doesn't know which variant will appear in the actual encoded data.

+
#![allow(unused)]
+fn main() {
+let pruned_types = get_pruned_types();
+
+for ty in pruned_types {
+    if ty.is_primitive_type() {
+        pruned_types.remove(ty);
+    }
+}
+
+pruned_types.sort(|(left, right)|
+    if left.frame_metadata_id() == right.frame_metadata_id() {
+        left.variant_index() < right.variant_index()
+    } else {
+        left.frame_metadata_id() < right.frame_metadata_id()
+    }
+);
+
+fn generate_type_ref(ty, ty_list) -> TypeRef {
+    if ty.is_primitive_type() {
+        return TypeRef::primitive_from_ty(ty);
+    }
+
+    TypeRef::from_id(
+        // Determine the id by using the position of the type in the
+        // list of unique frame metadata ids.
+        ty_list.position_by_frame_metadata_id(ty.frame_metadata_id())
+    )
+}
+
+fn replace_all_sub_types_with_type_refs(ty, ty_list) -> Type {
+    for sub_ty in ty.sub_types() {
+        replace_all_sub_types_with_type_refs(sub_ty, ty_list);
+        sub_ty = generate_type_ref(sub_ty, ty_list)
+    }
+
+    ty
+}
+
+let final_ty_list = Vec::new();
+for ty in pruned_types {
+    final_ty_list.push(replace_all_sub_types_with_type_refs(ty, ty_list))
+}
+}
+

Building the Merkle Tree Root

+

A complete binary merkle tree with blake3 as the hashing function is proposed. For building the merkle tree root, the initial data has to be hashed as a first step. This initial data is referred to as the leaves of the merkle tree. The leaves need to be sorted to make the tree root deterministic: the type information is sorted by its unique identifiers and, for Enumeration entries, the variants are sorted by their index. After sorting and hashing all leaves, two leaves are combined into one hash. The combination of these two hashes is referred to as a node (a sketch of the leaf construction follows the loop below).

+
#![allow(unused)]
+fn main() {
+let nodes = leaves;
+while nodes.len() > 1 {
+    let right = nodes.pop_back();
+    let left = nodes.pop_back();
+    nodes.push_front(blake3::hash(scale::encode((left, right))));
+}
+
+let merkle_tree_root = if nodes.is_empty() { [0u8; 32] } else { nodes.back() };
+}
+
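For completeness, a sketch of how the sorted leaves feeding the loop above might be produced, written in the same pseudocode style as the block above; `scale::encode` and `blake3::hash` are placeholders, and `sorted_types` stands for the pruned and sorted list of type-information entries.

#![allow(unused)]
fn main() {
// Pseudocode: hash every sorted entry to obtain the leaves.
let leaves = sorted_types
    .iter()
    .map(|ty| blake3::hash(scale::encode(ty)))
    .collect();
}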

The merkle_tree_root is, in the end, the last node left in the list of nodes. If there are no nodes left in the list, it means that the initial data set was empty. In this case, the all-zeros hash is used to represent the empty tree.

+

Building a tree with 5 leaves (numbered 0 to 4):

+
nodes: 0 1 2 3 4
+
+nodes: [3, 4] 0 1 2
+
+nodes: [1, 2] [3, 4] 0
+
+nodes: [[3, 4], 0] [1, 2]
+
+nodes: [[[3, 4], 0], [1, 2]]
+
+

The resulting tree visualized:

+
     [root]
+     /    \
+    *      *
+   / \    / \
+  *   0  1   2
+ / \
+3   4
+
+

Building a tree with 6 leaves (numbered 0 to 5):

+
nodes: 0 1 2 3 4 5
+
+nodes: [4, 5] 0 1 2 3
+
+nodes: [2, 3] [4, 5] 0 1
+
+nodes: [0, 1] [2, 3] [4, 5]
+
+nodes: [[2, 3], [4, 5]] [0, 1]
+
+nodes: [[[2, 3], [4, 5]], [0, 1]]
+
+

The resulting tree visualized:

+
       [root]
+      /      \
+     *        *
+   /   \     / \
+  *     *   0   1
+ / \   / \
+2   3 4   5
+
+

Inclusion in an Extrinsic

+

To ensure that the offline wallet used the correct metadata to show the extrinsic to the user, the metadata hash needs to be included in the extrinsic. The metadata hash is generated by hashing the SCALE-encoded MetadataDigest:

+
#![allow(unused)]
+fn main() {
+blake3::hash(SCALE::encode(MetadataDigest::V1 { .. }))
+}
+

For the runtime the metadata hash is generated at compile time. Wallets will have to generate the hash using the FRAME metadata.

+

The signing side should control whether the metadata hash is added or omitted. To accomplish this, one extra byte is added to the extrinsic itself. If this byte is 0, the metadata hash is not required; if the byte is 1, the metadata hash is added using V1 of the MetadataDigest. This leaves room for future versions of the MetadataDigest format. When the metadata hash is to be included, it is only added to the data that is signed. This has the advantage of not requiring 32 extra bytes in the extrinsic itself, because the runtime also knows the metadata hash and can add it to the signed data if required. This is similar to how the genesis hash is handled, although that one is not added conditionally to the signed data.
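The following is a minimal, non-normative sketch of that idea; the names are illustrative, and the real encoding is defined by the runtime's signed-extension implementation.

#![allow(unused)]
fn main() {
type Hash = [u8; 32];

// The single mode byte that travels inside the extrinsic itself.
enum Mode {
    Disabled = 0,  // metadata hash not checked
    EnabledV1 = 1, // metadata hash included in the signed data, digest V1
}

// What both the wallet and the runtime append to the payload that is signed.
fn signed_data_suffix(mode: &Mode, metadata_hash: Hash) -> Option<Hash> {
    match mode {
        Mode::Disabled => None,
        // Both sides know the hash, so only the signed data has to carry it;
        // the 32 bytes never enter the extrinsic itself.
        Mode::EnabledV1 => Some(metadata_hash),
    }
}
}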

+

Drawbacks

+

The chunking may not be optimal for every kind of offline wallet.

+

Testing, Security, and Privacy

+

All implementations are required to strictly follow this RFC to generate the metadata hash. This includes which hash function to use and how to construct the metadata type tree. So, all implementations follow the same security criteria. As chains will calculate the metadata hash at compile time, the build process needs to be trusted. However, this is already a solved problem in the Polkadot ecosystem thanks to reproducible builds: anyone can rebuild a chain runtime to ensure that a proposal actually contains the changes as advertised.

+

Implementations can also be tested easily against each other by taking some metadata and ensuring that they all come to the same metadata hash.

+

Privacy of users should also not be impacted. This assumes that wallets will generate the metadata hash locally and don't leak any information to third party services about which chunks a user will send to their offline wallet. Besides that, there is no leak of private information as getting the raw metadata from the chain is an operation that is done by almost everyone.

+

Performance, Ergonomics, and Compatibility

+

Performance

+

There should be no measurable impact on the performance of Polkadot or any other chain using this feature. The metadata root hash is calculated at compile time and, at runtime, it is optionally used when checking the signature of a transaction. This means that no performance-heavy operations are done at runtime.

+

Ergonomics & Compatibility

+

The proposal alters the way a transaction is built, signed, and verified, so it imposes some required changes on any developer who wants to construct transactions for Polkadot or any chain using this feature. As a developer can pass 0 to disable verification of the metadata root hash, the feature can easily be ignored.

+

Prior Art and References

+

RFC 46, produced by the Alzymologist team, is previous work that goes in this direction as well.

+

In other ecosystems, there are other solutions to the problem of trusted signing. Cosmos, for example, has a standardized way of transforming a transaction into a textual representation, and this textual representation is included in the signed data. This basically achieves the same as what this RFC proposes, but it requires that, for every transaction applied in a block, every node in the network generate this textual representation in order to check that the transaction signature is valid.

+

Unresolved Questions

+

None.

Future Directions and Related Material

  • Does it work with all kinds of offline wallets?
  • Generic types currently appear in the metadata multiple times, once per instantiation. It may be useful to have a generic type only once in the metadata and declare the generic parameters at their instantiation.
  • The metadata doesn't contain any kind of semantic information. This means that the offline wallet, for example, doesn't know what a balance is. The current solution for this problem is to match on the type name, but this isn't a sustainable solution.
  • MetadataDigest only provides one token and one set of decimals. However, a lot of chains support multiple tokens for paying fees, etc. This is probably more a question of having semantic information, as mentioned above.
diff --git a/text/0084-general-transaction-extrinsic-format.html b/text/0084-general-transaction-extrinsic-format.html
new file mode 100644
index 000000000..83d1d7732
--- /dev/null
+++ b/text/0084-general-transaction-extrinsic-format.html
@@ -0,0 +1,287 @@
+0084 - Polkadot Fellowship RFCs

RFC-0084: General transactions in extrinsic format

+
Start Date   12 March 2024
Description  Support more extrinsic types by updating the extrinsic format
Authors      George Pisaltu
+

Summary

+

This RFC proposes a change to the extrinsic format to incorporate a new transaction type, the "general" transaction.

+

Motivation

+

"General" transactions, a new type of transaction that this RFC aims to support, are transactions which obey the runtime's extensions and have according extension data yet do not have hard-coded signatures. They are first described in Extrinsic Horizon and supported in 3685. They enable users to authorize origins in new, more flexible ways (e.g. ZK proofs, mutations over pre-authenticated origins). As of now, all transactions are limited to the account signing model for origin authorization and any additional origin changes happen in extrinsic logic, which cannot leverage the validation process of extensions.

+

An example of a use case for such an extension would be sponsoring the transaction fee for some other user. A new extension would be put in place to verify that a part of the initial payload was signed by the author under whom the extrinsic should run and to change the origin accordingly, while the payment for the whole transaction would be handled by a sponsor's account. A POC for this can be found in 3712.

+

The new "general" transaction type would coexist with both current transaction types for a while and, therefore, the current number of supported transaction types, capped at 2, is insufficient. A new extrinsic type must be introduced alongside the current signed and unsigned types. Currently, an encoded extrinsic's first byte indicates the type of extrinsic using the most significant bit - 0 for unsigned, 1 for signed - and the 7 following bits indicate the extrinsic format version, which has been equal to 4 for a long time.

+

By taking one bit from the extrinsic format version encoding, we can support 2 additional extrinsic types while also having a minimal impact on our capability to extend and change the extrinsic format in the future.

+

Stakeholders

+
    +
  • Runtime users
  • +
  • Runtime devs
  • +
  • Wallet devs
  • +
+

Explanation

+

An extrinsic currently uses a single leading byte to identify the extrinsic type and version. This RFC aims to change the interpretation of this byte regarding the bits reserved for the extrinsic type and the version. In the following explanation, bits represented using T make up the extrinsic type and bits represented using V make up the extrinsic version.

+

Currently, the bit allocation within the leading encoded byte is 0bTVVV_VVVV. In practice in the Polkadot ecosystem, the leading byte would be 0bT000_0100 as the version has been equal to 4 for a long time.

+

This RFC proposes changing the bit allocation to 0bTTVV_VVVV. As a result, the extrinsic format version will be bumped to 5, and the extrinsic type bit representation would change as follows (a small decoding sketch follows the table below):

+
bits  type
00    unsigned
10    signed
01    reserved
11    reserved
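A hedged, non-normative sketch of how a decoder might split the leading byte under the proposed layout:

#![allow(unused)]
fn main() {
fn split_leading_byte(b: u8) -> (u8, u8) {
    let ty = (b & 0b1100_0000) >> 6; // two most significant bits: extrinsic type
    let version = b & 0b0011_1111;   // remaining six bits: extrinsic format version
    (ty, version)
}

// 0b10 = signed, format version 5 under the new scheme.
assert_eq!(split_leading_byte(0b1000_0101), (0b10, 5));
// 0b00 = unsigned, format version 5.
assert_eq!(split_leading_byte(0b0000_0101), (0b00, 5));
}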

Drawbacks

+

This change would reduce the maximum possible transaction version from the current 127 to 63. In order to bypass the new, lower limit, the extrinsic format would have to change again.

+

Testing, Security, and Privacy

+

There is no impact on testing, security or privacy.

+

Performance, Ergonomics, and Compatibility

+

This change would allow Polkadot to support new types of transactions, with the specific "general" transaction type in mind at the time of writing this proposal.

+

Performance

+

There is no performance impact.

+

Ergonomics

+

The impact on developers and end-users is minimal: parsing the extrinsic type along with the version would just require a bitmask update on their part.

+

Compatibility

+

This change breaks backwards compatibility because any transaction that is neither signed nor unsigned, but a new transaction type, would be interpreted as having a future extrinsic format version.

+

Prior Art and References

+

The original design was proposed in the TransactionExtension PR, which is also the motivation behind this effort.

+

Unresolved Questions

+

None.

Future Directions and Related Material

Following this change, the "general" transaction type will be introduced as part of the Extrinsic Horizon effort, which will shape future work.

+ + diff --git a/text/0145-remove-unnecessary-allocator-usage.md b/text/0145-remove-unnecessary-allocator-usage.md index d64d5c59d..d70804856 100644 --- a/text/0145-remove-unnecessary-allocator-usage.md +++ b/text/0145-remove-unnecessary-allocator-usage.md @@ -116,8 +116,7 @@ The function used to accept only a prefix and a limit and return a SCALE-encoded ```wat (func $ext_storage_clear_prefix_version_3 (param $maybe_prefix i64) (param $maybe_limit i64) (param $maybe_cursor_in i64) - (param $maybe_cursor_out i64) (param $backend i32) (param $unique i32) (param $loops i32) - (result i32)) + (param $maybe_cursor_out i64) (param $counters i32) (result i32)) ``` ##### Arguments @@ -125,10 +124,11 @@ The function used to accept only a prefix and a limit and return a SCALE-encoded * `maybe_prefix` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) containing a (possibly empty) storage prefix being cleared; * `maybe_limit` is an optional positive integer ([New Definition I](#new-def-i)) representing either the maximum number of backend deletions which may happen, or the _absence_ of such a limit. The number of backend iterations may surpass this limit by no more than one; * `maybe_cursor_in` is an optional pointer-size ([New Definition II](#new-def-ii)) representing the cursor returned by the previous (unfinished) call to this function. It should be _absent_ on the first call; -* `maybe_cursor_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the continuation cursor will optionally be written (see also the Result section). The value is actually stored only if the buffer is large enough. Otherwise, the buffer contents are undefined; -* `backend` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to a 4-byte buffer where a 32-bit integer representing the number of items removed from the backend database will be written; -* `unique` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to a 4-byte buffer where a 32-bit integer representing the number of unique keys removed, taking into account both the backend and the overlay; -* `loops` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to a 4-byte buffer where a 32-bit integer representing the number of iterations (each requiring a storage seek/read) which were done will be written. +* `maybe_cursor_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the continuation cursor will optionally be written (see also the Result section). The value is actually stored only if the buffer is large enough. Whenever the value is not written into the buffer, the buffer contents are unmodified; +* `counters` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to a 12-byte buffer where three low-endian 32-bit integers will be stored one after another, representing the counters, respectively: + * Of items removed from the backend database will be written; + * Of unique keys removed, taking into account both the backend and the overlay; + * Of iterations (each requiring a storage seek/read) which were done. 
##### Result @@ -256,8 +256,7 @@ The function used to accept only a child storage key and a limit and return a SC ```wat (func $ext_default_child_storage_storage_kill_version_4 (param $storage_key i64) (param $maybe_limit i64) (param $maybe_cursor_in i64) - (param $maybe_cursor_out i64) (param $backend i32) (param $unique i32) (param $loops i32) - (result i32)) + (param $maybe_cursor_out i64) (param $counters i32) (result i32)) ``` ##### Arguments @@ -265,10 +264,11 @@ The function used to accept only a child storage key and a limit and return a SC * `storage_key` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the child storage key ([Definition 219](https://spec.polkadot.network/chap-host-api#defn-child-storage-type)); * `maybe_limit` is an optional positive integer representing either the maximum number of backend deletions which may happen, or the absence of such a limit. The number of backend iterations may surpass this limit by no more than one; * `maybe_cursor_in` is an optional pointer-size representing the cursor returned by the previous (unfinished) call to this function. It should be _absent_ on the first call; -* `maybe_cursor_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the continuation cursor will optionally be written (see also the Result section). The value is actually stored only if the buffer is large enough. Otherwise, the buffer contents are undefined; -* `backend` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to a 4-byte buffer where a 32-bit integer representing the number of items removed from the backend database will be written; -* `unique` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to a 4-byte buffer where a 32-bit integer representing the number of unique keys removed, taking into account both the backend and the overlay; -* `loops` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to a 4-byte buffer where a 32-bit integer representing the number of iterations (each requiring a storage seek/read) which were done will be written. +* `maybe_cursor_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the continuation cursor will optionally be written (see also the Result section). The value is actually stored only if the buffer is large enough. Whenever the value is not written into the buffer, the buffer contents are unmodified; +* `counters` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to a 12-byte buffer where three low-endian 32-bit integers will be stored one after another, representing the counters, respectively: + * Of items removed from the backend database will be written; + * Of unique keys removed, taking into account both the backend and the overlay; + * Of iterations (each requiring a storage seek/read) which were done. 
##### Result @@ -293,8 +293,8 @@ The function used to accept (along with the child storage key) only a prefix and ```wat (func $ext_default_child_storage_clear_prefix_version_3 (param $storage_key i64) (param $prefix i64) (param $maybe_limit i64) - (param $maybe_cursor_in i64) (param $maybe_cursor_out i64) (param $backend i32) - (param $unique i32) (param $loops i32) (result i32)) + (param $maybe_cursor_in i64) (param $maybe_cursor_out i64) (param $counters i32) + (result i32)) ``` ##### Arguments @@ -303,10 +303,11 @@ The function used to accept (along with the child storage key) only a prefix and * `prefix` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) containing a storage prefix being cleared; * `maybe_limit` is an optional positive integer representing either the maximum number of backend deletions which may happen, or the absence of such a limit. The number of backend iterations may surpass this limit by no more than one; * `maybe_cursor_in` is an optional pointer-size representing the cursor returned by the previous (unfinished) call to this function. It should be _absent_ on the first call; -* `maybe_cursor_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the continuation cursor will optionally be written (see also the Result section). The value is actually stored only if the buffer is large enough. Otherwise, the buffer contents are undefined; -* `backend` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to a 4-byte buffer where a 32-bit integer representing the number of items removed from the backend database will be written; -* `unique` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to a 4-byte buffer where a 32-bit integer representing the number of unique keys removed, taking into account both the backend and the overlay; -* `loops` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to a 4-byte buffer where a 32-bit integer representing the number of iterations (each requiring a storage seek/read) which were done will be written. +* `maybe_cursor_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the continuation cursor will optionally be written (see also the Result section). The value is actually stored only if the buffer is large enough. Whenever the value is not written into the buffer, the buffer contents are unmodified; +* `counters` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to a 12-byte buffer where three low-endian 32-bit integers will be stored one after another, representing the counters, respectively: + * Of items removed from the backend database will be written; + * Of unique keys removed, taking into account both the backend and the overlay; + * Of iterations (each requiring a storage seek/read) which were done. 
##### Result From 3f91dd651853ef7888ac3c1f318a667dce12b8fb Mon Sep 17 00:00:00 2001 From: Dmitry Sinyavin Date: Mon, 17 Nov 2025 15:39:29 +0100 Subject: [PATCH 11/30] Undefined -> unchanged --- ...0145-remove-unnecessary-allocator-usage.md | 22 +++++++++---------- 1 file changed, 11 insertions(+), 11 deletions(-) diff --git a/text/0145-remove-unnecessary-allocator-usage.md b/text/0145-remove-unnecessary-allocator-usage.md index d70804856..6b1cfbcb9 100644 --- a/text/0145-remove-unnecessary-allocator-usage.md +++ b/text/0145-remove-unnecessary-allocator-usage.md @@ -91,7 +91,7 @@ The function was returning a SCALE-encoded `Option`-wrapped 32-bit integer repre ##### Arguments * `key` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the storage key being read; -* `value_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the value read should be stored. The value is actually stored only if the buffer is large enough. Otherwise, the buffer contents are undefined; +* `value_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the value read should be stored. The value is actually stored only if the buffer is large enough. Otherwise, the buffer contents are unchanged; * `value_offset` is a 32-bit offset from which the value reading should start. ##### Result @@ -156,7 +156,7 @@ The old version accepted the state version as an argument and returned a SCALE-e ##### Arguments -* `out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the SCALE-encoded storage root, calculated after committing all the existing operations, will be stored. The value is actually stored only if the buffer is large enough. Otherwise, the buffer contents are undefined. +* `out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the SCALE-encoded storage root, calculated after committing all the existing operations, will be stored. The value is actually stored only if the buffer is large enough. Otherwise, the buffer contents are unchanged. ##### Results @@ -185,7 +185,7 @@ The old version accepted the key and returned the SCALE-encoded next key in a ho ##### Arguments * `key_in` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer containing a storage key; -* `key_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to an output buffer where the next key in the storage in the lexicographical order will be written. The value is actually stored only if the buffer is large enough. Otherwise, the buffer contents are undefined. +* `key_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to an output buffer where the next key in the storage in the lexicographical order will be written. The value is actually stored only if the buffer is large enough. Otherwise, the buffer contents are unchanged. 
##### Result @@ -230,7 +230,7 @@ The function was returning a SCALE-encoded `Option`-wrapped 32-bit integer repre * `storage_key` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the child storage key ([Definition 219](https://spec.polkadot.network/chap-host-api#defn-child-storage-type)); * `key` is the storage key being read; -* `value_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the value read should be stored. The value is actually stored only if the buffer is large enough. Otherwise, the buffer contents are undefined; +* `value_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the value read should be stored. The value is actually stored only if the buffer is large enough. Otherwise, the buffer contents are unchanged; * `value_offset` is a 32-bit offset from which the value reading should start. ##### Result @@ -336,7 +336,7 @@ The old version accepted (along with the child storage key) the state version as ##### Arguments * `storage_key` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the child storage key ([Definition 219](https://spec.polkadot.network/chap-host-api#defn-child-storage-type)); -* `out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the SCALE-encoded storage root, calculated after committing all the existing operations, will be stored. The value is actually stored only if the buffer is large enough. Otherwise, the buffer contents are undefined. +* `out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the SCALE-encoded storage root, calculated after committing all the existing operations, will be stored. The value is actually stored only if the buffer is large enough. Otherwise, the buffer contents are unchanged. ##### Results @@ -366,7 +366,7 @@ The old version accepted (along with the child storage key) the key and returned * `storage_key` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the child storage key ([Definition 219](https://spec.polkadot.network/chap-host-api#defn-child-storage-type)); * `key_in` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer containing a storage key; -* `key_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to an output buffer where the next key in the storage in the lexicographical order will be written. The value is actually stored only if the buffer is large enough. Otherwise, the buffer contents are undefined. +* `key_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to an output buffer where the next key in the storage in the lexicographical order will be written. The value is actually stored only if the buffer is large enough. Otherwise, the buffer contents are unchanged. 
##### Result @@ -426,7 +426,7 @@ The function used to return the SCALE-encoded runtime version information in a h ##### Arguments * `wasm` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the Wasm blob from which the version information should be extracted; -* `out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the buffer where the SCALE-encoded extracted version information will be stored. The value is actually stored only if the buffer is large enough. Otherwise, the buffer contents are undefined. +* `out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the buffer where the SCALE-encoded extracted version information will be stored. The value is actually stored only if the buffer is large enough. Otherwise, the buffer contents are unchanged. ##### Result @@ -446,7 +446,7 @@ A new function is introduced to make it possible to fetch a cursor produced by ` ``` ##### Arguments -* `out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the buffer where the last cached cursor will be stored, if one exists. The value is actually stored only if the buffer is large enough. Otherwise, the buffer contents are undefined. +* `out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the buffer where the last cached cursor will be stored, if one exists. The value is actually stored only if the buffer is large enough. Otherwise, the buffer contents are unchanged. ##### Result @@ -779,7 +779,7 @@ A new function is introduced to replace `ext_offchain_local_storage_get`. The na * `kind` is an offchain storage kind, where `0` denotes the persistent storage ([Definition 222](https://spec.polkadot.network/chap-host-api#defn-offchain-persistent-storage)), and `1` denotes the local storage ([Definition 223](https://spec.polkadot.network/chap-host-api#defn-offchain-persistent-storage)); * `key` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the storage key being read; -* `value_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the value read should be stored. The value is actually stored only if the buffer is large enough. Otherwise, the buffer contents are undefined; +* `value_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the value read should be stored. The value is actually stored only if the buffer is large enough. Otherwise, the buffer contents are unchanged; * `offset` is a 32-bit offset from which the value reading should start. ##### Result @@ -932,7 +932,7 @@ New function to replace functionality of `ext_offchain_http_response_headers` wi * `request_id` is an i32 integer indicating the ID of the started request, as returned by `ext_offchain_http_request_start`; * `header_index` is an i32 integer indicating the index of the header requested, starting from zero; -* `out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the buffer where the header name will be stored. The value is actually stored only if the buffer is large enough. 
Otherwise, the buffer contents are undefined; +* `out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the buffer where the header name will be stored. The value is actually stored only if the buffer is large enough. Otherwise, the buffer contents are unchanged; ##### Result @@ -955,7 +955,7 @@ New function to replace functionality of `ext_offchain_http_response_headers` wi * `request_id` is an i32 integer indicating the ID of the started request, as returned by `ext_offchain_http_request_start`; * `header_index` is an i32 integer indicating the index of the header requested, starting from zero; -* `out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the buffer where the header value will be stored. The value is actually stored only if the buffer is large enough. Otherwise, the buffer contents are undefined; +* `out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the buffer where the header value will be stored. The value is actually stored only if the buffer is large enough. Otherwise, the buffer contents are unchanged; ##### Result From 4b7a8f42b89c5cebd7fb1c2ce15f18faa91fc151 Mon Sep 17 00:00:00 2001 From: Dmitry Sinyavin Date: Mon, 17 Nov 2025 16:36:49 +0100 Subject: [PATCH 12/30] Address minor discussions --- text/0145-remove-unnecessary-allocator-usage.md | 10 ++++++---- 1 file changed, 6 insertions(+), 4 deletions(-) diff --git a/text/0145-remove-unnecessary-allocator-usage.md b/text/0145-remove-unnecessary-allocator-usage.md index 6b1cfbcb9..5adb3569c 100644 --- a/text/0145-remove-unnecessary-allocator-usage.md +++ b/text/0145-remove-unnecessary-allocator-usage.md @@ -92,7 +92,7 @@ The function was returning a SCALE-encoded `Option`-wrapped 32-bit integer repre * `key` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the storage key being read; * `value_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the value read should be stored. The value is actually stored only if the buffer is large enough. Otherwise, the buffer contents are unchanged; -* `value_offset` is a 32-bit offset from which the value reading should start. +* `value_offset` is an unsigned 32-bit offset from which the value reading should start. ##### Result @@ -231,7 +231,7 @@ The function was returning a SCALE-encoded `Option`-wrapped 32-bit integer repre * `storage_key` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the child storage key ([Definition 219](https://spec.polkadot.network/chap-host-api#defn-child-storage-type)); * `key` is the storage key being read; * `value_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the value read should be stored. The value is actually stored only if the buffer is large enough. Otherwise, the buffer contents are unchanged; -* `value_offset` is a 32-bit offset from which the value reading should start. +* `value_offset` is an unsigned 32-bit offset from which the value reading should start. 
##### Result @@ -810,7 +810,7 @@ The function used to return a SCALE-encoded `Result` value in a host-allocated b `method` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the HTTP method. Possible values are “GET” and “POST”; `uri` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the URI; -`meta` is a future-reserved field containing additional, SCALE-encoded parameters. Currently, an empty array should be passed. +`meta` is a future-reserved field containing additional, SCALE-encoded parameters. Currently, its value is ignored. ##### Result @@ -1027,10 +1027,12 @@ Currently, all runtime entrypoints have the following identical Wasm function si (func $runtime_entrypoint (param $data i32) (param $len i32) (result i64)) ``` -After this RFC is implemented, such entrypoints are still supported, but considered deprecated. New entrypoints must have the following signature: +After this RFC is implemented, such entrypoints are only supported for the legacy runtimes using the host-side allocator. All the new runtimes, using runtime-side allocator, must use new entry point signature: ```wat (func $runtime_entrypoint (param $len i32) (result i64)) ``` A runtime function called through such an entrypoint gets the length of SCALE-encoded input data as its only argument. After that, the function must allocate exactly the amount of bytes it is requested, and call the `ext_input_read` host function to obtain the encoded input data. + +If a runtime happens to import both functions that allocate on the host side and functions that allocate on the runtime side, the host must not proceed with execution of such a runtime, aborting before the execution takes place. From 90a2a8301921b8c7ea136e570ec37e26b09ecc51 Mon Sep 17 00:00:00 2001 From: Dmitry Sinyavin Date: Tue, 18 Nov 2025 11:04:16 +0100 Subject: [PATCH 13/30] Address discussions: excessive verbosity --- text/0145-remove-unnecessary-allocator-usage.md | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) diff --git a/text/0145-remove-unnecessary-allocator-usage.md b/text/0145-remove-unnecessary-allocator-usage.md index 5adb3569c..d740b6a83 100644 --- a/text/0145-remove-unnecessary-allocator-usage.md +++ b/text/0145-remove-unnecessary-allocator-usage.md @@ -66,7 +66,7 @@ The Runtime optional pointer-size has exactly the same definition as Runtime poi ##### Changes -The function is considered obsolete, as it only implements a subset of functionality of `ext_storage_read` and uses host-allocated buffers. Users are encouraged to use `ext_storage_read_version_2` instead. +Considered obsolete in favor of `ext_storage_read_version_2`. Cannot be used in a runtime using the new-style of entry-point. #### ext_storage_read @@ -202,7 +202,7 @@ The result is the full length of the output key that might have been stored in ` ##### Changes -The function is considered obsolete, as it only implements a subset of functionality of `ext_default_child_storage_read` and uses host-allocated buffers. Users are encouraged to use `ext_default_child_storage_read_version_2` instead. +Considered obsolete in favor of `ext_default_child_storage_read_version_2`. Cannot be used in a runtime using the new-style of entry-point. 
#### ext_default_child_storage_read @@ -469,12 +469,12 @@ If the buffer had enough capacity and the cursor was stored successfully, the cu ##### Changes -The following functions are considered obsolete: +The following functions are considered obsolete in favor of the new `*_num_public_keys` and `*_public_key` counterparts: * `ext_crypto_ed25519_public_keys_version_1` * `ext_crypto_sr25519_public_keys_version_1` * `ext_crypto_ecdsa_public_keys_version_1` -The functions used to return a host-allocated SCALE-encoded array of public keys of the corresponding type. As it is hard to predict the size of buffer needed to store such an array, new function `*_num_public_keys` and `*_public_key` were introduced to implement iterative approach. +They cannot be used in a runtime using the new-style of entry-point. #### ext_crypto_{ed25519|sr25519|ecdsa}_num_public_keys @@ -702,7 +702,7 @@ The result is `0` for success or `-1` for failure. ##### Changes -The function is considered obsolete. The function used to return a host-allocated value that was only used partially, with the part used being fixed-size. Users are encouraged to use `ext_offchain_network_peer_id_version_1` instead. +Considered obsolete in favor of `ext_offchain_network_peer_id_version_1`. Cannot be used in a runtime using the new-style of entry-point. #### ext_offchain_network_peer_id @@ -760,7 +760,7 @@ The function used to return a host-allocated buffer containing the random seed. ##### Changes -The function is considered obsolete, as it only implements a subset of functionality of `ext_offchain_local_storage_read` and uses host-allocated buffers. Users are encouraged to use `ext_offchain_local_storage_read_version_1` instead. +Considered obsolete in favor of `ext_offchain_local_storage_read_version_1`. Cannot be used in a runtime using the new-style of entry-point. #### ext_offchain_local_storage_read @@ -913,7 +913,7 @@ The function used to return a SCALE-encoded array of request statuses in a host- ##### Changes -The function is considered obsolete in favor of `ext_offchain_http_response_header_name` and `ext_offchain_http_response_header_value`. It used to return a host-allocated SCALE-encoded array of response header names and values. As it's hard to predict what buffer size is needed to accommodate such an array, new functions offer an iterative approach instead. +Considered obsolete in favor of `ext_offchain_http_response_header_name` and `ext_offchain_http_response_header_value`. Cannot be used in a runtime using the new-style of entry-point. #### ext_offchain_http_response_header_name @@ -1000,7 +1000,7 @@ On success, the number of bytes written to the buffer is returned. A value of `0 (func $ext_allocator_free_version_1 (param $ptr i32)) ``` -The functions are considered obsolete and must not be used in new code. +The functions are considered obsolete and cannot be used in a runtime using the new-style of entry-point. 
#### ext_input_read From 7ba9eb38204d09ddfc480fd025170b22ddbb56a7 Mon Sep 17 00:00:00 2001 From: Dmitry Sinyavin Date: Tue, 18 Nov 2025 13:32:18 +0100 Subject: [PATCH 14/30] Fix Runtime Optional Positive Integer definition --- ...0145-remove-unnecessary-allocator-usage.md | 31 ++++++++++++++++++- 1 file changed, 30 insertions(+), 1 deletion(-) diff --git a/text/0145-remove-unnecessary-allocator-usage.md b/text/0145-remove-unnecessary-allocator-usage.md index d740b6a83..0775ec05d 100644 --- a/text/0145-remove-unnecessary-allocator-usage.md +++ b/text/0145-remove-unnecessary-allocator-usage.md @@ -47,7 +47,36 @@ Runtime developers, who will benefit from the improved performance and more dete #### New Definition I: Runtime Optional Positive Integer -The Runtime optional positive integer is a signed 64-bit value. Positive values in the range of [0..2³²) represent corresponding unsigned 32-bit values. The value of `-1` represents a non-existing value (an _absent_ value). All other values are invalid. +By a Runtime Optional Positive Integer we refer to an abstract value $r \in \mathcal{R}$ where $\mathcal{R} := \{\bot\} \cup \{0, 1, \dots, 2^{32} - 1\},$ and where $\bot$ denotes the _absent_ value. + +At the Host-Runtime interface this type is represented by a signed 64-bit integer $x \in \mathbb{Z}$ (thus $\mathbb{Z} \in \{-2^{63}, \dots, 2^{63} - 1\}$). + +We define the encoding function $\mathrm{Enc}_{\mathrm{ROP}} : \mathcal{R} \to \mathbb{Z}$ and decoding function $\mathrm{Dec}_{\mathrm{ROP}} : \mathbb{Z} \to \mathcal{R} \cup \{\mathrm{error}\}$ as follows. + +For $r \in \mathcal{R}$, + +$$ +\mathrm{Enc}_{\mathrm{ROP}}(r) := +\begin{cases} +-1 & \text{if } r = \bot, \\ +r & \text{if } r \in \{0, 1, \dots, 2^{32} - 1\}. +\end{cases} +$$ + +For a signed 64-bit integer $x$, + +$$ +\mathrm{Dec}_{\mathrm{ROP}}(x) := +\begin{cases} +\bot & \text{if } x = -1, \\ +x & \text{if } 0 \le x < 2^{32}, \\ +\mathrm{error} & \text{otherwise.} +\end{cases} +$$ + +A valid Runtime Optional Positive Integer at the Host-Runtime boundary is any 64-bit signed integer $x$ such that $x \in \{-1\} \cup \{0, 1, \dots, 2^{32} - 1\}$. All other 64-bit integer values are invalid for this type. + +Conforming implementations must not produce invalid values when encoding. Receivers must abort execution if decoding results in $\mathrm{error}$. #### New Definition II: Runtime Optional Pointer-Size From 4baa9c352e8de1ef79512e33b7fe759d05a2ce8d Mon Sep 17 00:00:00 2001 From: Dmitry Sinyavin Date: Tue, 18 Nov 2025 19:48:32 +0100 Subject: [PATCH 15/30] Revert "Address discussions: clear prefix" This reverts commit 858d2d73481e64a2f6a28f9ed64292189f86524e. 
--- mdbook/book.toml | 11 +- mdbook/text/0001-agile-coretime.html | 720 ----------------- mdbook/text/0005-coretime-interface.html | 345 -------- .../text/0007-system-collator-selection.html | 358 --------- mdbook/text/0008-parachain-bootnodes-dht.html | 315 -------- mdbook/text/0010-burn-coretime-revenue.html | 256 ------ ...12-process-for-adding-new-collectives.html | 313 -------- ...uilder-and-core-runtime-apis-for-mbms.html | 306 -------- ...rove-locking-mechanism-for-parachains.html | 347 --------- mdbook/text/0022-adopt-encointer-runtime.html | 268 ------- mdbook/text/0032-minimal-relay.html | 436 ----------- .../text/0042-extrinsics-state-version.html | 304 -------- .../0043-storage-proof-size-hostfunction.html | 271 ------- mdbook/text/0045-nft-deposits-asset-hub.html | 433 ----------- ...047-assignment-of-availability-chunks.html | 478 ------------ .../text/0048-session-keys-runtime-api.html | 317 -------- mdbook/text/0050-fellowship-salaries.html | 335 -------- ...0056-one-transaction-per-notification.html | 292 ------- .../0059-nodes-capabilities-discovery.html | 311 -------- mdbook/text/0078-merkleized-metadata.html | 564 -------------- ...-general-transaction-extrinsic-format.html | 271 ------- text/0001-agile-coretime.html | 736 ------------------ text/0005-coretime-interface.html | 361 --------- text/0007-system-collator-selection.html | 374 --------- text/0008-parachain-bootnodes-dht.html | 331 -------- text/0010-burn-coretime-revenue.html | 272 ------- ...12-process-for-adding-new-collectives.html | 329 -------- ...uilder-and-core-runtime-apis-for-mbms.html | 322 -------- ...rove-locking-mechanism-for-parachains.html | 363 --------- text/0022-adopt-encointer-runtime.html | 284 ------- text/0032-minimal-relay.html | 452 ----------- text/0042-extrinsics-state-version.html | 320 -------- .../0043-storage-proof-size-hostfunction.html | 287 ------- text/0045-nft-deposits-asset-hub.html | 449 ----------- ...047-assignment-of-availability-chunks.html | 494 ------------ text/0048-session-keys-runtime-api.html | 333 -------- text/0050-fellowship-salaries.html | 351 --------- ...0056-one-transaction-per-notification.html | 308 -------- text/0059-nodes-capabilities-discovery.html | 327 -------- text/0078-merkleized-metadata.html | 580 -------------- ...-general-transaction-extrinsic-format.html | 287 ------- ...0145-remove-unnecessary-allocator-usage.md | 37 +- 42 files changed, 19 insertions(+), 14829 deletions(-) delete mode 100644 mdbook/text/0001-agile-coretime.html delete mode 100644 mdbook/text/0005-coretime-interface.html delete mode 100644 mdbook/text/0007-system-collator-selection.html delete mode 100644 mdbook/text/0008-parachain-bootnodes-dht.html delete mode 100644 mdbook/text/0010-burn-coretime-revenue.html delete mode 100644 mdbook/text/0012-process-for-adding-new-collectives.html delete mode 100644 mdbook/text/0013-prepare-blockbuilder-and-core-runtime-apis-for-mbms.html delete mode 100644 mdbook/text/0014-improve-locking-mechanism-for-parachains.html delete mode 100644 mdbook/text/0022-adopt-encointer-runtime.html delete mode 100644 mdbook/text/0032-minimal-relay.html delete mode 100644 mdbook/text/0042-extrinsics-state-version.html delete mode 100644 mdbook/text/0043-storage-proof-size-hostfunction.html delete mode 100644 mdbook/text/0045-nft-deposits-asset-hub.html delete mode 100644 mdbook/text/0047-assignment-of-availability-chunks.html delete mode 100644 mdbook/text/0048-session-keys-runtime-api.html delete mode 100644 mdbook/text/0050-fellowship-salaries.html 
delete mode 100644 mdbook/text/0056-one-transaction-per-notification.html delete mode 100644 mdbook/text/0059-nodes-capabilities-discovery.html delete mode 100644 mdbook/text/0078-merkleized-metadata.html delete mode 100644 mdbook/text/0084-general-transaction-extrinsic-format.html delete mode 100644 text/0001-agile-coretime.html delete mode 100644 text/0005-coretime-interface.html delete mode 100644 text/0007-system-collator-selection.html delete mode 100644 text/0008-parachain-bootnodes-dht.html delete mode 100644 text/0010-burn-coretime-revenue.html delete mode 100644 text/0012-process-for-adding-new-collectives.html delete mode 100644 text/0013-prepare-blockbuilder-and-core-runtime-apis-for-mbms.html delete mode 100644 text/0014-improve-locking-mechanism-for-parachains.html delete mode 100644 text/0022-adopt-encointer-runtime.html delete mode 100644 text/0032-minimal-relay.html delete mode 100644 text/0042-extrinsics-state-version.html delete mode 100644 text/0043-storage-proof-size-hostfunction.html delete mode 100644 text/0045-nft-deposits-asset-hub.html delete mode 100644 text/0047-assignment-of-availability-chunks.html delete mode 100644 text/0048-session-keys-runtime-api.html delete mode 100644 text/0050-fellowship-salaries.html delete mode 100644 text/0056-one-transaction-per-notification.html delete mode 100644 text/0059-nodes-capabilities-discovery.html delete mode 100644 text/0078-merkleized-metadata.html delete mode 100644 text/0084-general-transaction-extrinsic-format.html diff --git a/mdbook/book.toml b/mdbook/book.toml index 8918476ab..2eab700e4 100644 --- a/mdbook/book.toml +++ b/mdbook/book.toml @@ -4,7 +4,7 @@ description = "An online book of RFCs approved or proposed within the Polkadot F src = "src" [build] -create-missing = true +create-missing = false [output.html] additional-css = ["theme/polkadot.css"] @@ -17,15 +17,6 @@ no-section-label = true enable = true woff = true -[output.pdf] -print-background=false -margin-top=0.5 -margin-left=0.5 -margin-bottom=0.5 -margin-right=0.5 -paper-width=8.3 -paper-height=11.7 - [preprocessor.toc] command = "mdbook-toc" renderer = ["html"] diff --git a/mdbook/text/0001-agile-coretime.html b/mdbook/text/0001-agile-coretime.html deleted file mode 100644 index 15ef03271..000000000 --- a/mdbook/text/0001-agile-coretime.html +++ /dev/null @@ -1,720 +0,0 @@ - - - - - - - 0001 - Polkadot Fellowship RFCs - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

RFC-1: Agile Coretime

-
- - - -
Start Date: 30 June 2023
Description: Agile periodic-sale-based model for assigning Coretime on the Polkadot Ubiquitous Computer.
Authors: Gavin Wood
-
-

Summary

-

This proposes a periodic, sale-based method for assigning Polkadot Coretime, the analogue of "block space" within the Polkadot Network. The method takes into account the need for long-term capital expenditure planning for teams building on Polkadot, yet also provides a means to allow Polkadot to capture long-term value in the resource which it sells. It supports the possibility of building rich and dynamic secondary markets to optimize resource allocation and largely avoids the need for parameterization.

-

Motivation

-

Present System

-

The Polkadot Ubiquitous Computer, or just Polkadot UC, represents the public service provided by the Polkadot Network. It is a trust-free, WebAssembly-based, multicore, internet-native omnipresent virtual machine which is highly resilient to interference and corruption.

-

The present system of allocating the limited resources of the Polkadot Ubiquitous Computer is through a process known as parachain slot auctions. This is a parachain-centric paradigm whereby a single core is long-term allocated to a single parachain which itself implies a Substrate/Cumulus-based chain secured and connected via the Relay-chain. Slot auctions are on-chain candle auctions which proceed for several days and result in the core being assigned to the parachain for six months at a time up to 24 months in advance. Practically speaking, we only see two year periods being bid upon and leased.

-

Funds behind the bids made in the slot auctions are merely locked; they are not consumed or paid, and they become unlocked and returned to the bidder on expiry of the lease period. A means of sharing the deposit trustlessly, known as a crowdloan, is available, allowing token holders to contribute to the overall deposit of a chain without any counterparty risk.

-

Problems

-

The present system is based on a model of one-core-per-parachain. This is a legacy interpretation of the Polkadot platform and is not a reflection of its present capabilities. By restricting ownership and usage to this model, more dynamic and resource-efficient means of utilizing the Polkadot Ubiquitous Computer are lost.

-

More specifically, it is impossible to lease out cores at anything less than six months, and apparently unrealistic to do so at anything less than two years. This removes the ability to dynamically manage the underlying resource, and generally experimentation, iteration and innovation suffer. It bakes into the platform an assumption of permanence for anything deployed into it and restricts the market's ability to find a more optimal allocation of the finite resource.

-

There is no ability to determine capital requirements for hosting a parachain beyond two years from the point of its initial deployment onto Polkadot. While it would be unreasonable to have perfect and indefinite cost predictions for any real-world platform, not having any clarity whatsoever beyond "market rates" two years hence can be a very off-putting prospect for teams to buy into.

-

However, quite possibly the most substantial problem is both a perceived and often real high barrier to entry of the Polkadot ecosystem. By forcing innovators to either raise seven-figure sums through investors or appeal to the wider token-holding community, Polkadot makes it difficult for a small band of innovators to deploy their technology into Polkadot. While not being actually permissioned, it is also far from the barrierless, permissionless ideal which an innovation platform such as Polkadot should be striving for.

-

Requirements

-
  1. The solution SHOULD provide an acceptable value-capture mechanism for the Polkadot network.
  2. The solution SHOULD allow parachains and other projects deployed on to the Polkadot UC to make long-term capital expenditure predictions for the cost of ongoing deployment.
  3. The solution SHOULD minimize the barriers to entry in the ecosystem.
  4. The solution SHOULD work well when the Polkadot UC has up to 1,000 cores.
  5. The solution SHOULD work when the number of cores which the Polkadot UC can support changes over time.
  6. The solution SHOULD facilitate the optimal allocation of work to cores of the Polkadot UC, including by facilitating the trade of regular core assignment at various intervals and for various spans.
  7. The solution SHOULD avoid creating additional dependencies on functionality which the Relay-chain need not strictly provide for the delivery of the Polkadot UC.

Furthermore, the design SHOULD be implementable and deployable in a timely fashion; three months from the acceptance of this RFC should not be unreasonable.

-

Stakeholders

-

Primary stakeholder sets are:

-
    -
  • Protocol researchers and developers, largely represented by the Polkadot Fellowship and Parity Technologies' Engineering division.
  • -
  • Polkadot Parachain teams both present and future, and their users.
  • -
  • Polkadot DOT token holders.
  • -
-

Socialization:

-

The essentials of this proposal were presented at Polkadot Decoded 2023 Copenhagen on the Main Stage. A small amount of socialization at the Parachain Summit preceded it and some substantial discussion followed it. The Parity Ecosystem team is currently soliciting views from ecosystem teams who would be key stakeholders.

-

Explanation

-

Overview

-

Upon implementation of this proposal, the parachain-centric slot auctions and associated crowdloans cease. Instead, Coretime on the Polkadot UC is sold by the Polkadot System in two separate formats: Bulk Coretime and Instantaneous Coretime.

-

When a Polkadot Core is utilized, we say it is dedicated to a Task rather than a "parachain". The Task to which a Core is dedicated may change at every Relay-chain block and while one predominant type of Task is to secure a Cumulus-based blockchain (i.e. a parachain), other types of Tasks are envisioned.

-

Bulk Coretime is sold periodically on a specialised system chain known as the Coretime-chain and allocated in advance of its usage, whereas Instantaneous Coretime is sold on the Relay-chain immediately prior to usage on a block-by-block basis.

-

This proposal does not fix what should be done with revenue from sales of Coretime and leaves it for a further RFC process.

-

Owners of Bulk Coretime are tracked on the Coretime-chain and the ownership status and properties of the owned Coretime are exposed over XCM as a non-fungible asset.

-

At the request of the owner, the Coretime-chain allows a single Bulk Coretime asset, known as a Region, to be used in various ways including transferal to another owner, allocated to a particular task (e.g. a parachain) or placed in the Instantaneous Coretime Pool. Regions can also be split out, either into non-overlapping sub-spans or exactly-overlapping spans with less regularity.

-

The Coretime-Chain periodically instructs the Relay-chain to assign its cores to alternative tasks as and when Core allocations change due to new Regions coming into effect.

-

Renewal and Migration

-

There is a renewal system which allows a Bulk Coretime assignment of a single core to be renewed unchanged with a known price increase from month to month. Renewals are processed in a period prior to regular purchases, effectively giving them precedence over a fixed number of cores available.

-

Renewals are only enabled when a core's assignment does not include an Instantaneous Coretime allocation and has not been split into shorter segments.

-

Thus, renewals are designed to ensure only that committed parachains get some guarantees about price for predicting future costs. This price-capped renewal system only allows cores to be reused for their same tasks from month to month. In any other context, Bulk Coretime would need to be purchased regularly.

-

As a migration mechanism, pre-existing leases (from the legacy lease/slots/crowdloan framework) are initialized into the Coretime-chain and cores assigned to them prior to Bulk Coretime sales. In the sale where the lease expires, the system offers a renewal, as above, to allow a priority sale of Bulk Coretime and ensure that the Parachain suffers no downtime when transitioning from the legacy framework.

-

Instantaneous Coretime

-

Processing of Instantaneous Coretime happens in part on the Polkadot Relay-chain. Credit is purchased on the Coretime-chain for regular DOT tokens, and this results in a DOT-denominated Instantaneous Coretime Credit account on the Relay-chain being credited for the same amount.

-

Though the Instantaneous Coretime Credit account records a balance for an account identifier (very likely controlled by a collator), it is non-transferable and non-refundable. It can only be consumed in order to purchase some Instantaneous Coretime with immediate availability.

-

The Relay-chain reports this usage back to the Coretime-chain in order to allow it to reward the providers of the underlying Coretime, either the Polkadot System or owners of Bulk Coretime who contributed to the Instantaneous Coretime Pool.

-

Specifically the Relay-chain is expected to be responsible for:

-
    -
  • holding non-transferable, non-refundable DOT-denominated Instantaneous Coretime Credit balance information.
  • -
  • setting and adjusting the price of Instantaneous Coretime based on usage.
  • -
  • allowing collators to consume their Instantaneous Coretime Credit at the current pricing in exchange for the ability to schedule one PoV for near-immediate usage.
  • -
  • ensuring the Coretime-Chain has timely accounting information on Instantaneous Coretime Sales revenue.
  • -
-

Coretime-chain

-

The Coretime-chain is a new system parachain. It has the responsibility of providing the Relay-chain via UMP with information of:

-
    -
  • The number of cores which should be made available.
  • -
  • Which tasks should be running on which cores and in what ratios.
  • -
  • Accounting information for Instantaneous Coretime Credit.
  • -
-

It also expects information from the Relay-chain via DMP:

-
    -
  • The number of cores available to be scheduled.
  • -
  • Account information on Instantaneous Coretime Sales.
  • -
-

The specific interface is properly described in RFC-5.

-

Detail

-

Parameters

-

This proposal includes a number of parameters which need not necessarily be fixed. Their usage is explained below, but their values are suggested or specified in the later section Parameter Values.

-

Reservations and Leases

-

The Coretime-chain includes some governance-set reservations of Coretime; these cover every System-chain. Additionally, governance is expected to initialize details of the pre-existing leased chains.

-

Regions

-

A Region is an assignable period of Coretime with a known regularity.

-

All Regions are associated with a unique Core Index, to identify which core the assignment of which ownership of the Region controls.

-

All Regions are also associated with a Core Mask, an 80-bit bitmap, to denote the regularity at which it may be scheduled on the core. If all bits are set in the Core Mask value, it is said to be Complete. 80 is selected since this results in the size of the datatype used to identify any Region of Polkadot Coretime to be a very convenient 128-bit. Additionally, if TIMESLICE (the number of Relay-chain blocks in a Timeslice) is 80, then a single bit in the Core Mask bitmap represents exactly one Core for one Relay-chain block in one Timeslice.

-

All Regions have a span. Region spans are quantized into periods of TIMESLICE blocks; BULK_PERIOD divides into TIMESLICE a whole number of times.

-

The Timeslice type is a u32 which can be multiplied by TIMESLICE to give a BlockNumber value representing the same quantity in terms of Relay-chain blocks.
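As a small illustration of this quantization, assuming the suggested TIMESLICE of 80 Relay-chain blocks (the helper below is hypothetical and not part of any pallet):

```rust
// Hypothetical helper: converting a Timeslice into Relay-chain block numbers,
// assuming the suggested TIMESLICE value of 80 blocks (8 * MINUTES at 6-second blocks).
type Timeslice = u32;
type BlockNumber = u32;

const TIMESLICE: BlockNumber = 80;

fn timeslice_to_block(t: Timeslice) -> BlockNumber {
    t * TIMESLICE
}

fn main() {
    // A Region spanning timeslices [100, 150) covers Relay-chain blocks [8_000, 12_000).
    assert_eq!(timeslice_to_block(100), 8_000);
    assert_eq!(timeslice_to_block(150), 12_000);
}
```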

-

Regions can be tasked to a TaskId (aka ParaId) or pooled into the Instantaneous Coretime Pool. This process can be Provisional or Final. If done only provisionally or not at all then they are fresh and have an Owner which is able to manipulate them further including reassignment. Once Final, then all ownership information is discarded and they cannot be manipulated further. Renewal is not possible when only provisionally tasked/pooled.

-

Bulk Sales

-

A sale of Bulk Coretime occurs on the Coretime-chain every BULK_PERIOD blocks.

-

In every sale, a BULK_LIMIT of individual Regions are offered for sale.

-

Each Region offered for sale has a different Core Index, ensuring that they each represent an independently allocatable resource on the Polkadot UC.

-

The Regions offered for sale have the same span: they last exactly BULK_PERIOD blocks, and begin immediately following the span of the previous Sale's Regions. The Regions offered for sale also have the complete, non-interlaced, Core Mask.

-

The Sale Period ends as soon as the span of the Coretime Regions being sold begins. At this point, the next Sale Price is set according to the previous Sale Price together with the number of Regions sold, compared to the desired and maximum number of Regions to be sold. See Price Setting for additional detail on this point.

-

Following the end of the previous Sale Period, there is an Interlude Period lasting INTERLUDE_PERIOD of blocks. After this period is elapsed, regular purchasing begins with the Purchasing Period.

-

This is designed to give at least two weeks worth of time for the purchased regions to be partitioned, interlaced, traded and allocated.

-

The Interlude

-

The Interlude period is a period prior to Regular Purchasing where renewals are allowed to happen. This has the effect of ensuring existing long-term tasks/parachains have a chance to secure their Bulk Coretime for a well-known price prior to general sales.

-

Regular Purchasing

-

Any account may purchase Regions of Bulk Coretime if they have the appropriate funds in place during the Purchasing Period, which is from INTERLUDE_PERIOD blocks after the end of the previous sale until the beginning of the Region of the Bulk Coretime which is for sale as long as there are Regions of Bulk Coretime left for sale (i.e. no more than BULK_LIMIT have already been sold in the Bulk Coretime Sale). The Purchasing Period is thus roughly BULK_PERIOD - INTERLUDE_PERIOD blocks in length.

-

The Sale Price varies during an initial portion of the Purchasing Period called the Leadin Period and then stays stable for the remainder. This initial portion is LEADIN_PERIOD blocks in duration. During the Leadin Period the price decreases towards the Sale Price, which it lands at by the end of the Leadin Period. The actual curve by which the price starts and descends to the Sale Price is outside the scope of this RFC, though a basic suggestion is provided in the Price Setting Notes, below.

-

Renewals

-

At any time when there are remaining Regions of Bulk Coretime to be sold, including during the Interlude Period, certain Bulk Coretime assignments may be Renewed. This is similar to a purchase in that funds must be paid and it consumes one of the Regions of Bulk Coretime which would otherwise be placed for purchase. However, there are two key differences.

-

Firstly, the price paid is the minimum of RENEWAL_PRICE_CAP more than what the purchase/renewal price was in the previous renewal and the current (or initial, if yet to begin) regular Sale Price.

-

Secondly, the purchased Region comes preassigned with exactly the same workload as before. It cannot be traded, repartitioned, interlaced or exchanged. As such, unlike with regular purchasing, the Region never has an owner.

-

Renewal is only possible for either cores which have been assigned as a result of a previous renewal, which are migrating from legacy slot leases, or which fill their Bulk Coretime with an unsegmented, fully and finally assigned workload which does not include placement in the Instantaneous Coretime Pool. The renewed workload will be the same as this initial workload.

-

Manipulation

-

Regions may be manipulated in various ways by their owner:

  1. Transferred in ownership.
  2. Partitioned into quantized, non-overlapping segments of Bulk Coretime with the same ownership.
  3. Interlaced into multiple Regions over the same period whose eventual assignments take turns to be scheduled.
  4. Assigned to a single, specific task (identified by TaskId aka ParaId). This may be either provisional or final.
  5. Pooled into the Instantaneous Coretime Pool, in return for a pro-rata amount of the revenue from the Instantaneous Coretime Sales over its period.

Enactment

-

Specific functions of the Coretime-chain

-

Several functions of the Coretime-chain SHALL be exposed through dispatchables and/or a nonfungible trait implementation integrated into XCM:

-

1. transfer

-

Regions may have their ownership transferred.

-

A transfer(region: RegionId, new_owner: AccountId) dispatchable shall have the effect of altering the current owner of the Region identified by region from the signed origin to new_owner.

-

An implementation of the nonfungible trait SHOULD include equivalent functionality. RegionId SHOULD be used for the AssetInstance value.

-

2. partition

-

Regions may be split apart into two non-overlapping interior Regions of the same Core Mask which together concatenate to the original Region.

-

A partition(region: RegionId, pivot: Timeslice) dispatchable SHALL have the effect of removing the Region identified by region and adding two new Regions of the same owner and Core Mask. One new Region will begin at the same point of the old Region but end at pivot timeslices into the Region, whereas the other will begin at this point and end at the end point of the original Region.

-

Also:

-
  • The owner field of region must be equal to the Signed origin.
  • pivot must equal neither the begin nor end fields of the region.

3. interlace

-

Regions may be decomposed into two Regions of the same span whose eventual assignments take turns on the core by virtue of having complementary Core Masks.

-

An interlace(region: RegionId, mask: CoreMask) dispatchable shall have the effect of removing the Region identified by region and creating two new Regions. The new Regions will each have the same span and owner of the original Region, but one Region will have a Core Mask equal to mask and the other will have Core Mask equal to the XOR of mask and the Core Mask of the original Region.

-

Also:

-
  • The owner field of region must be equal to the Signed origin.
  • mask must have some bits set AND must not equal the Core Mask of the old Region AND must only have bits set which are also set in the old Region's Core Mask.
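A minimal sketch of the interlace operation and its constraints, assuming a plain `[u8; 10]` bitmap for the Core Mask; this is illustrative only and not pallet code:

```rust
// Illustrative only: the interlace rule described above, over a plain 80-bit bitmap.
type CoreMask = [u8; 10]; // 80-bit bitmap

// `mask` must have some bits set, must not equal the original mask, and must be a
// subset of the original mask.
fn is_valid_interlace(original: &CoreMask, mask: &CoreMask) -> bool {
    let some_bits_set = mask.iter().any(|b| *b != 0);
    let not_equal = mask != original;
    let subset = mask.iter().zip(original.iter()).all(|(m, o)| (*m & !*o) == 0);
    some_bits_set && not_equal && subset
}

// The two resulting Regions carry `mask` and the XOR of `mask` with the original mask.
fn complement_within(original: &CoreMask, mask: &CoreMask) -> CoreMask {
    let mut out = [0u8; 10];
    for i in 0..10 {
        out[i] = original[i] ^ mask[i];
    }
    out
}

fn main() {
    let original: CoreMask = [0xFF; 10];
    let mut first_half: CoreMask = [0; 10];
    first_half[..5].copy_from_slice(&[0xFF; 5]);
    let second_half: CoreMask = [0, 0, 0, 0, 0, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF];

    assert!(is_valid_interlace(&original, &first_half));
    assert_eq!(complement_within(&original, &first_half), second_half);
}
```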

4. assign

-

Regions may be assigned to a core.

-

An assign(region: RegionId, target: TaskId, finality: Finality) dispatchable shall have the effect of placing an item in the workplan corresponding to the region's properties and assigned to the target task.

-

If the region's end has already passed (taking into account any advance notice requirements) then this operation is a no-op. If the region's beginning has already passed, then it is effectively altered to become the next schedulable timeslice.

-

finality may have the value of either Final or Provisional. If Final, then the operation is free, the region record is removed entirely from storage and renewal may be possible: if the Region's span is the entire BULK_PERIOD, then the Coretime-chain records in storage that the allocation happened during this period in order to facilitate the possibility for a renewal. (Renewal only becomes possible when the full Core Mask of a core is finally assigned for the full BULK_PERIOD.)

-

Also:

-
  • The owner field of region must be equal to the Signed origin.

5. pool

-

Regions may be consumed in exchange for a pro rata portion of the Instantaneous Coretime Sales Revenue from its period and regularity.

-

A pool(region: RegionId, beneficiary: AccountId, finality: Finality) dispatchable shall have the effect of placing an item in the workplan corresponding to the region's properties and assigned to the Instantaneous Coretime Pool. The details of the region will be recorded in order to allow for a pro rata share of the Instantaneous Coretime Sales Revenue at the time of the Region relative to any other providers in the Pool.

-

If the region's end has already passed (taking into account any advance notice requirements) then this operation is a no-op. If the region's beginning has already passed, then it is effectively altered to become the next schedulable timeslice.

-

finality may have the value of either Final or Provisional. If Final, then the operation is free and the region record is removed entirely from storage.

-

Also:

-
  • The owner field of region must be equal to the Signed origin.

6. Purchases

-

A dispatchable purchase(price_limit: Balance) shall be provided. Any account may call purchase to purchase Bulk Coretime at the maximum price of price_limit.

-

This may be called successfully only:

-
  1. during the regular Purchasing Period;
  2. when the caller is a Signed origin and their account balance is reducible by the current sale price;
  3. when the current sale price is no greater than price_limit; and
  4. when the number of cores already sold is less than BULK_LIMIT.

If successful, the caller's account balance is reduced by the current sale price and a new Region item for the following Bulk Coretime span is issued with the owner equal to the caller's account.
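A rough sketch of these checks follows; all types and names here are hypothetical and stand in for whatever the sales pallet actually uses:

```rust
// Hypothetical sketch of the conditions listed above; real code would live in the
// sales pallet and use its own origin, balance and sale-state types.
struct SaleState {
    purchasing_period_active: bool,
    current_sale_price: u128,
    cores_sold: u32,
    bulk_limit: u32,
}

fn can_purchase(sale: &SaleState, reducible_balance: u128, price_limit: u128) -> bool {
    sale.purchasing_period_active                        // 1. regular Purchasing Period
        && reducible_balance >= sale.current_sale_price  // 2. balance reducible by the price
        && sale.current_sale_price <= price_limit        // 3. price within the caller's limit
        && sale.cores_sold < sale.bulk_limit             // 4. BULK_LIMIT not yet reached
}

fn main() {
    let sale = SaleState { purchasing_period_active: true, current_sale_price: 100, cores_sold: 44, bulk_limit: 45 };
    assert!(can_purchase(&sale, 150, 120));
    assert!(!can_purchase(&sale, 150, 90)); // price_limit below the current sale price
}
```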

-

7. Renewals

-

A dispatchable renew(core: CoreIndex) shall be provided. Any account may call renew to purchase Bulk Coretime and renew an active allocation for the given core.

-

This may be called during the Interlude Period as well as the regular Purchasing Period and has the same effect as purchase followed by assign, except that:

-
  1. The price of the sale is the Renewal Price (see next).
  2. The Region comes preassigned with exactly the same workload to which the given core is currently allocated for the present Region.

Renewal is only valid where a Region's span is assigned to Tasks (not placed in the Instantaneous Coretime Pool) for the entire unsplit BULK_PERIOD over all of the Core Mask and with Finality. There are thus three possibilities of a renewal being allowed:

-
    -
  1. Purchased unsplit Coretime with final assignment to tasks over the full Core Mask.
  2. -
  3. Renewed Coretime.
  4. -
  5. A legacy lease which is ending.
  6. -
-

Renewal Price

-

The Renewal Price is the minimum of the current regular Sale Price (or the initial Sale Price if in the Interlude Period) and:

-
  • If the workload being renewed came to be through the Purchase and Assignment of Bulk Coretime, then the price paid during that Purchase operation.
  • If the workload being renewed was previously renewed, then the price paid during this previous Renewal operation plus RENEWAL_PRICE_CAP.
  • If the workload being renewed is a migration from a legacy slot auction lease, then the nominal price for a Regular Purchase (outside of the Lead-in Period) of the Sale during which the legacy lease expires.
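As a rough sketch of the rule above, assuming RENEWAL_PRICE_CAP is applied as a simple percentage (as the suggested Perbill::from_percent(2) implies); the helper and its types are hypothetical:

```rust
// Hypothetical sketch of the Renewal Price rule described above.
enum RenewalBasis {
    /// Price paid when the workload was originally purchased and assigned.
    Purchased { paid: u128 },
    /// Price paid at the previous renewal; the cap applies on top of it.
    Renewed { paid: u128 },
    /// Migrating legacy lease: use the nominal (post-Leadin) Sale Price of that sale.
    LegacyLease { nominal_sale_price: u128 },
}

fn renewal_price(basis: RenewalBasis, current_sale_price: u128, renewal_price_cap_percent: u128) -> u128 {
    let base = match basis {
        RenewalBasis::Purchased { paid } => paid,
        RenewalBasis::Renewed { paid } => paid + paid * renewal_price_cap_percent / 100,
        RenewalBasis::LegacyLease { nominal_sale_price } => nominal_sale_price,
    };
    base.min(current_sale_price)
}

fn main() {
    // Previously renewed at 1_000 with a 2% cap, while the open-market price is 1_500.
    assert_eq!(renewal_price(RenewalBasis::Renewed { paid: 1_000 }, 1_500, 2), 1_020);
}
```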

8. Instantaneous Coretime Credits

-

A dispatchable purchase_credit(amount: Balance, beneficiary: RelayChainAccountId) shall be provided. Any account with at least amount spendable funds may call this. This increases the Instantaneous Coretime Credit balance on the Relay-chain of the beneficiary by the given amount.

-

This Credit is consumable on the Relay-chain as part of the Task scheduling system and its specifics are out of the scope of this proposal. When consumed, revenue is recorded and provided to the Coretime-chain for proper distribution. The API for doing this is specified in RFC-5.

-

Notes on the Instantaneous Coretime Market

-

For an efficient market to form around the provision of Bulk-purchased Cores into the pool of cores available for Instantaneous Coretime purchase, it is crucial to ensure that price changes for the purchase of Instantaneous Coretime are reflected well in the revenues of private Coretime providers during the same period.

-

In order to ensure this, it is crucial that Instantaneous Coretime, once purchased, cannot be held indefinitely prior to eventual use since, if this were the case, a nefarious collator could purchase Coretime when cheap and utilize it some time later when expensive, depriving private Coretime providers of their revenue.

-

It must therefore be assumed that Instantaneous Coretime, once purchased, has a definite and short "shelf-life", after which it becomes unusable. This incentivizes collators to avoid purchasing Coretime unless they expect to utilize it imminently and thus helps create an efficient market-feedback mechanism whereby a higher price will actually result in material revenues for private Coretime providers who contribute to the pool of Cores available to service Instantaneous Coretime purchases.

-

Notes on Economics

-

The specific pricing mechanisms are out of scope for the present proposal. Proposals on economics should be properly described and discussed in another RFC. However, for the sake of completeness, I provide some basic illustration of how price setting could potentially work.

-

Bulk Price Progression

-

The present proposal assumes the existence of a price-setting mechanism which takes into account several parameters:

-
    -
  • OLD_PRICE: The price of the previous sale.
  • -
  • BULK_TARGET: the target number of cores to be purchased as Bulk Coretime Regions or renewed during the previous sale.
  • -
  • BULK_LIMIT: the maximum number of cores which could have been purchased/renewed during the previous sale.
  • -
  • CORES_SOLD: the actual number of cores purchased/renewed in the previous sale.
  • -
  • SELLOUT_PRICE: the price at which the most recent Bulk Coretime was purchased (not renewed) prior to selling more cores than BULK_TARGET (or immediately after, if none were purchased before). This may not have a value if no Bulk Coretime was purchased.
  • -
-

In general we would expect the price to increase the closer CORES_SOLD gets to BULK_LIMIT and to decrease the closer it gets to zero. If it is exactly equal to BULK_TARGET, then we would expect the price to remain the same.

-

In the edge case that no cores were purchased yet more cores were sold (through renewals) than the target, then we would also avoid altering the price.

-

A simple example of this would be the formula:

-
IF SELLOUT_PRICE == NULL AND CORES_SOLD > BULK_TARGET THEN
-    RETURN OLD_PRICE
-END IF
-EFFECTIVE_PRICE := IF CORES_SOLD > BULK_TARGET THEN
-    SELLOUT_PRICE
-ELSE
-    OLD_PRICE
-END IF
-NEW_PRICE := IF CORES_SOLD < BULK_TARGET THEN
-    EFFECTIVE_PRICE * MAX(CORES_SOLD, 1) / BULK_TARGET
-ELSE
-    EFFECTIVE_PRICE + EFFECTIVE_PRICE *
-        (CORES_SOLD - BULK_TARGET) / (BULK_LIMIT - BULK_TARGET)
-END IF
-
-

This exists only as a trivial example to demonstrate a basic solution exists, and should not be intended as a concrete proposal.

-

Intra-Leadin Price-decrease

-

During the Leadin Period of a sale, the effective price starts higher than the Sale Price and falls to end at the Sale Price at the end of the Leadin Period. The price can thus be defined as a simple factor above one on which the Sale Price is multiplied. A function which returns this factor would accept a factor between zero and one specifying the portion of the Leadin Period which has passed.

-

Thus we assume SALE_PRICE, then we can define PRICE as:

-
PRICE := SALE_PRICE * FACTOR((NOW - LEADIN_BEGIN) / LEADIN_PERIOD)
-
-

We can define a very simple progression where the price decreases monotonically from double the Sale Price at the beginning of the Leadin Period.

-
FACTOR(T) := 2 - T
-
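A minimal sketch of the two formulas above, using the example linear FACTOR; floating point is used purely for illustration, whereas a runtime would use fixed-point arithmetic:

```rust
// Illustration only: effective price during the Leadin Period using FACTOR(T) = 2 - T.
fn factor(t: f64) -> f64 {
    2.0 - t
}

fn leadin_price(sale_price: f64, now: f64, leadin_begin: f64, leadin_period: f64) -> f64 {
    sale_price * factor((now - leadin_begin) / leadin_period)
}

fn main() {
    // At the start of the Leadin Period the price is double the Sale Price...
    assert_eq!(leadin_price(100.0, 0.0, 0.0, 7.0), 200.0);
    // ...and it falls to the Sale Price by the end.
    assert_eq!(leadin_price(100.0, 7.0, 0.0, 7.0), 100.0);
}
```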
-

Parameter Values

-

Parameters are either suggested or specified. If suggested, it is non-binding and the proposal should not be judged on the value since other RFCs and/or the governance mechanism of Polkadot is expected to specify/maintain it. If specified, then the proposal should be judged on the merit of the value as-is.

-
- - - - - - - -
Name              | Value                    | Status
BULK_PERIOD       | 28 * DAYS                | specified
INTERLUDE_PERIOD  | 7 * DAYS                 | specified
LEADIN_PERIOD     | 7 * DAYS                 | specified
TIMESLICE         | 8 * MINUTES              | specified
BULK_TARGET       | 30                       | suggested
BULK_LIMIT        | 45                       | suggested
RENEWAL_PRICE_CAP | Perbill::from_percent(2) | suggested
-
-

Instantaneous Price Progression

-

This proposal assumes the existence of a Relay-chain-based price-setting mechanism for the Instantaneous Coretime Market which alters from block to block, taking into account several parameters: the last price, the size of the Instantaneous Coretime Pool (in terms of cores per Relay-chain block) and the amount of Instantaneous Coretime waiting for processing (in terms of Core-blocks queued).

-

The ideal situation is to have the size of the Instantaneous Coretime Pool be equal to some factor of the Instantaneous Coretime waiting. This allows all Instantaneous Coretime sales to be processed with some limited latency while giving limited flexibility over ordering to the Relay-chain apparatus which is needed for efficient operation.

-

If we set a factor of three, and thus aim to retain a queue of Instantaneous Coretime Sales which can be processed within three Relay-chain blocks, then we would increase the price if the queue goes above three times the amount of cores available, and decrease if it goes under.

-

Let us assume the values OLD_PRICE, FACTOR, QUEUE_SIZE and POOL_SIZE. A simple definition of the NEW_PRICE would be thus:

-
NEW_PRICE := IF QUEUE_SIZE < POOL_SIZE * FACTOR THEN
-    OLD_PRICE * 0.95
-ELSE
-    OLD_PRICE / 0.95
-END IF
-
-

This exists only as a trivial example to demonstrate a basic solution exists, and should not be intended as a concrete proposal.

-

Notes on Types

-

This exists only as a short illustration of a potential technical implementation and should not be treated as anything more.

-

Regions

-

This data schema achieves a number of goals:

-
    -
  • Coretime can be individually traded at a level of a single usage of a single core.
  • -
  • Coretime Regions, of arbitrary span and up to 1/80th interlacing can be exposed as NFTs and exchanged.
  • -
  • Any Coretime Region can be contributed to the Instantaneous Coretime Pool.
  • -
  • Unlimited number of individual Coretime contributors to the Instantaneous Coretime Pool. (Effectively limited only in number of cores and interlacing level; with current values this would allow 80,000 individual payees per timeslice).
  • -
  • All keys are self-describing.
  • -
  • Workload to communicate core (re-)assignments is well-bounded and low in weight.
  • -
  • All mandatory bookkeeping workload is well-bounded in weight.
  • -
-
#![allow(unused)]
-fn main() {
-type Timeslice = u32; // 80 block amounts.
-type CoreIndex = u16;
-type CoreMask = [u8; 10]; // 80-bit bitmap.
-
-// 128-bit (16 bytes)
-struct RegionId {
-    begin: Timeslice,
-    core: CoreIndex,
-    mask: CoreMask,
-}
-// 296-bit (37 bytes)
-struct RegionRecord {
-    end: Timeslice,
-    owner: AccountId,
-}
-
-map Regions = Map<RegionId, RegionRecord>;
-
-// 40-bit (5 bytes). Could be 32-bit with a more specialised type.
-enum CoreTask {
-    Off,
-    Assigned { target: TaskId },
-    InstaPool,
-}
-// 120-bit (15 bytes). Could be 14 bytes with a specialised 32-bit `CoreTask`.
-struct ScheduleItem {
-    mask: CoreMask, // 80 bit
-    task: CoreTask, // 40 bit
-}
-
-/// The work we plan on having each core do at a particular time in the future.
-type Workplan = Map<(Timeslice, CoreIndex), BoundedVec<ScheduleItem, 80>>;
-/// The current workload of each core. This gets updated with workplan as timeslices pass.
-type Workload = Map<CoreIndex, BoundedVec<ScheduleItem, 80>>;
-
-enum Contributor {
-    System,
-    Private(AccountId),
-}
-
-struct ContributionRecord {
-    begin: Timeslice,
-    end: Timeslice,
-    core: CoreIndex,
-    mask: CoreMask,
-    payee: Contributor,
-}
-type InstaPoolContribution = Map<ContributionRecord, ()>;
-
-type SignedTotalMaskBits = u32;
-type InstaPoolIo = Map<Timeslice, SignedTotalMaskBits>;
-
-type PoolSize = Value<TotalMaskBits>;
-
-/// Counter for the total CoreMask which could be dedicated to a pool. `u32` so we don't ever get
-/// an overflow.
-type TotalMaskBits = u32;
-struct InstaPoolHistoryRecord {
-    total_contributions: TotalMaskBits,
-    maybe_payout: Option<Balance>,
-}
-/// Total InstaPool rewards for each Timeslice and the number of core Mask which contributed.
-type InstaPoolHistory = Map<Timeslice, InstaPoolHistoryRecord>;
-}
-

CoreMask tracks unique "parts" of a single core. It is used with interlacing in order to give a unique identifier to each component of any possible interlacing configuration of a core, allowing for simple self-describing keys for all core ownership and allocation information. It also allows for each core's workload to be tracked and updated progressively, keeping ongoing compute costs well-bounded and low.

-

Regions are issued into the Regions map and can be transferred, partitioned and interlaced as the owner desires. Regions can only be tasked if they begin after the current scheduling deadline (if they have missed this, then the region can be auto-trimmed until it is).

-

Once tasked, they are removed from the Regions map and a record is placed in Workplan. In addition, if they are contributed to the Instantaneous Coretime Pool, then an entry is placed in InstaPoolContribution and InstaPoolIo.

-

Each timeslice, InstaPoolIo is used to update the current value of PoolSize. A new entry in InstaPoolHistory is inserted, with the total_contributions field of InstaPoolHistoryRecord being informed by the PoolSize value. Each core has its Workload mutated according to its Workplan for the upcoming timeslice.

-

When Instantaneous Coretime Market Revenues are reported for a particular timeslice from the Relay-chain, this information gets placed in the maybe_payout field of the relevant record of InstaPoolHistory.

-

Payment can be requested for any records in InstaPoolContribution whose begin is the key for a value in InstaPoolHistory whose maybe_payout is Some. In this case, the total_contributions is reduced by the ContributionRecord's mask and a pro rata amount paid. The ContributionRecord is mutated by incrementing begin, or removed if begin becomes equal to end.
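A rough sketch of the pro rata step described above; the function is hypothetical and a real implementation would mutate the InstaPoolHistory and ContributionRecord storage items in place:

```rust
// Illustration only: pro rata share of one timeslice's Instantaneous Coretime revenue.
// `contributed_mask_bits` is the number of bits set in the ContributionRecord's mask,
// `total_contributions` the pool-wide mask bits recorded for that timeslice.
fn pro_rata_payout(payout: u128, contributed_mask_bits: u32, total_contributions: u32) -> u128 {
    assert!(contributed_mask_bits <= total_contributions && total_contributions > 0);
    payout * contributed_mask_bits as u128 / total_contributions as u128
}

fn main() {
    // A contributor providing a fully-masked core (80 bits) out of a 160-bit pool
    // receives half of that timeslice's revenue.
    assert_eq!(pro_rata_payout(1_000, 80, 160), 500);
}
```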

-

Example:

-
#![allow(unused)]
-fn main() {
-// Simple example with a `u16` `CoreMask` and bulk sold in 100 timeslices.
-Regions:
-{ core: 0u16, begin: 100, mask: 0b1111_1111_1111_1111u16 } => { end: 200u32, owner: Alice };
-// First split @ 50
-Regions:
-{ core: 0u16, begin: 100, mask: 0b1111_1111_1111_1111u16 } => { end: 150u32, owner: Alice };
-{ core: 0u16, begin: 150, mask: 0b1111_1111_1111_1111u16 } => { end: 200u32, owner: Alice };
-// Share half of first 50 blocks
-Regions:
-{ core: 0u16, begin: 100, mask: 0b1111_1111_0000_0000u16 } => { end: 150u32, owner: Alice };
-{ core: 0u16, begin: 100, mask: 0b0000_0000_1111_1111u16 } => { end: 150u32, owner: Alice };
-{ core: 0u16, begin: 150, mask: 0b1111_1111_1111_1111u16 } => { end: 200u32, owner: Alice };
-// Sell half of them to Bob
-Regions:
-{ core: 0u16, begin: 100, mask: 0b1111_1111_0000_0000u16 } => { end: 150u32, owner: Alice };
-{ core: 0u16, begin: 100, mask: 0b0000_0000_1111_1111u16 } => { end: 150u32, owner: Bob };
-{ core: 0u16, begin: 150, mask: 0b1111_1111_1111_1111u16 } => { end: 200u32, owner: Alice };
-// Bob splits first 10 and assigns them to himself.
-Regions:
-{ core: 0u16, begin: 100, mask: 0b1111_1111_0000_0000u16 } => { end: 150u32, owner: Alice };
-{ core: 0u16, begin: 100, mask: 0b0000_0000_1111_1111u16 } => { end: 110u32, owner: Bob };
-{ core: 0u16, begin: 110, mask: 0b0000_0000_1111_1111u16 } => { end: 150u32, owner: Bob };
-{ core: 0u16, begin: 150, mask: 0b1111_1111_1111_1111u16 } => { end: 200u32, owner: Alice };
-// Bob shares first 10 3 ways and sells smaller shares to Charlie and Dave
-Regions:
-{ core: 0u16, begin: 100, mask: 0b1111_1111_0000_0000u16 } => { end: 150u32, owner: Alice };
-{ core: 0u16, begin: 100, mask: 0b0000_0000_1100_0000u16 } => { end: 110u32, owner: Charlie };
-{ core: 0u16, begin: 100, mask: 0b0000_0000_0011_0000u16 } => { end: 110u32, owner: Dave };
-{ core: 0u16, begin: 100, mask: 0b0000_0000_0000_1111u16 } => { end: 110u32, owner: Bob };
-{ core: 0u16, begin: 110, mask: 0b0000_0000_1111_1111u16 } => { end: 150u32, owner: Bob };
-{ core: 0u16, begin: 150, mask: 0b1111_1111_1111_1111u16 } => { end: 200u32, owner: Alice };
-// Bob assigns to his para B, Charlie and Dave assign to their paras C and D; Alice assigns first 50 to A
-Regions:
-{ core: 0u16, begin: 150, mask: 0b1111_1111_1111_1111u16 } => { end: 200u32, owner: Alice };
-Workplan:
-(100, 0) => vec![
-    { mask: 0b1111_1111_0000_0000u16, task: Assigned(A) },
-    { mask: 0b0000_0000_1100_0000u16, task: Assigned(C) },
-    { mask: 0b0000_0000_0011_0000u16, task: Assigned(D) },
-    { mask: 0b0000_0000_0000_1111u16, task: Assigned(B) },
-]
-(110, 0) => vec![{ mask: 0b0000_0000_1111_1111u16, task: Assigned(B) }]
-// Alice assigns her remaining 50 timeslices to the InstaPool paying herself:
-Regions: (empty)
-Workplan:
-(100, 0) => vec![
-    { mask: 0b1111_1111_0000_0000u16, task: Assigned(A) },
-    { mask: 0b0000_0000_1100_0000u16, task: Assigned(C) },
-    { mask: 0b0000_0000_0011_0000u16, task: Assigned(D) },
-    { mask: 0b0000_0000_0000_1111u16, task: Assigned(B) },
-]
-(110, 0) => vec![{ mask: 0b0000_0000_1111_1111u16, task: Assigned(B) }]
-(150, 0) => vec![{ mask: 0b1111_1111_1111_1111u16, task: InstaPool }]
-InstaPoolContribution:
-{ begin: 150, end: 200, core: 0, mask: 0b1111_1111_1111_1111u16, payee: Alice }
-InstaPoolIo:
-150 => 16
-200 => -16
-// Actual notifications to relay chain.
-// Assumes:
-// - Timeslice is 10 blocks.
-// - Timeslice 0 begins at block #1000.
-// - Relay needs 10 blocks notice of change.
-//
-Workload: 0 => vec![]
-PoolSize: 0
-
-// Block 990:
-Relay <= assign_core(core: 0u16, begin: 1000, assignment: vec![(A, 8), (C, 2), (D, 2), (B, 4)])
-Workload: 0 => vec![
-    { mask: 0b1111_1111_0000_0000u16, task: Assigned(A) },
-    { mask: 0b0000_0000_1100_0000u16, task: Assigned(C) },
-    { mask: 0b0000_0000_0011_0000u16, task: Assigned(D) },
-    { mask: 0b0000_0000_0000_1111u16, task: Assigned(B) },
-]
-PoolSize: 0
-
-// Block 1090:
-Relay <= assign_core(core: 0u16, begin: 1100, assignment: vec![(A, 8), (B, 8)])
-Workload: 0 => vec![
-    { mask: 0b1111_1111_0000_0000u16, task: Assigned(A) },
-    { mask: 0b0000_0000_1111_1111u16, task: Assigned(B) },
-]
-PoolSize: 0
-
-// Block 1490:
-Relay <= assign_core(core: 0u16, begin: 1500, assignment: vec![(Pool, 16)])
-Workload: 0 => vec![
-    { mask: 0b1111_1111_1111_1111u16, task: InstaPool },
-]
-PoolSize: 16
-InstaPoolIo:
-200 => -16
-InstaPoolHistory:
-150 => { total_contributions: 16, maybe_payout: None }
-
-// Sometime after block 1500:
-InstaPoolHistory:
-150 => { total_contributions: 16, maybe_payout: Some(P) }
-
-// Sometime after block 1990:
-InstaPoolIo: (empty)
-PoolSize: 0
-InstaPoolHistory:
-150 => { total_contributions: 16, maybe_payout: Some(P0) }
-151 => { total_contributions: 16, maybe_payout: Some(P1) }
-152 => { total_contributions: 16, maybe_payout: Some(P2) }
-...
-199 => { total_contributions: 16, maybe_payout: Some(P49) }
-
-// Sometime later still Alice calls for a payout
-InstaPoolContribution: (empty)
-InstaPoolHistory: (empty)
-// Alice gets rewarded P0 + P1 + ... P49.
-}
-

Rollout

-

Rollout of this proposal comes in several phases:

-
    -
  1. Finalise the specifics of implementation; this may be done through a design document or through a well-documented prototype implementation.
  2. -
  3. Implement the design, including all associated aspects such as unit tests, benchmarks and any support software needed.
  4. -
  5. If any new parachain is required, launch of this.
  6. -
  7. Formal audit of the implementation and any manual testing.
  8. -
  9. Announcement to the various stakeholders of the imminent changes.
  10. -
  11. Software integration and release.
  12. -
  13. Governance upgrade proposal(s).
  14. -
  15. Monitoring of the upgrade process.
  16. -
-

Performance, Ergonomics and Compatibility

-

No specific considerations.

-

Parachains already deployed into the Polkadot UC must have a clear plan of action to migrate to an agile Coretime market.

-

While this proposal does not introduce documentable features per se, adequate documentation must be provided to potential purchasers of Polkadot Coretime. This SHOULD include any alterations to the Polkadot-SDK software collection.

-

Testing, Security and Privacy

-

Regular testing through unit tests, integration tests, manual testnet tests, zombie-net tests and fuzzing SHOULD be conducted.

-

A regular security review SHOULD be conducted prior to deployment through a review by the Web3 Foundation economic research group.

-

Any final implementation MUST pass a professional external security audit.

-

The proposal introduces no new privacy concerns.

- -

RFC-3 proposes a means of implementing the high-level allocations within the Relay-chain.

-

RFC-5 proposes the API for interacting with Relay-chain.

-

Additional work should specify the interface for the instantaneous market revenue so that the Coretime-chain can ensure Bulk Coretime placed in the instantaneous market is properly compensated.

-

Drawbacks, Alternatives and Unknowns

-

Unknowns include the economic and resource parameterisations:

-
    -
  • The initial price of Bulk Coretime.
  • -
  • The price-change algorithm between Bulk Coretime sales.
  • -
  • The price increase per Bulk Coretime period for renewals.
  • -
  • The price decrease graph in the Leadin period for Bulk Coretime sales.
  • -
  • The initial price of Instantaneous Coretime.
  • -
  • The price-change algorithm for Instantaneous Coretime sales.
  • -
  • The percentage of cores to be sold as Bulk Coretime.
  • -
  • The fate of revenue collected.
  • -
-

Prior Art and References

-

Robert Habermeier initially wrote on the subject of Polkadot blockspace-centric in the article Polkadot Blockspace over Blockchains. While not going into details, the article served as an early reframing piece for moving beyond one-slot-per-chain models and building out secondary market infrastructure for resource allocation.

- -
diff --git a/mdbook/text/0005-coretime-interface.html b/mdbook/text/0005-coretime-interface.html
deleted file mode 100644
index 11e9a97f5..000000000
--- a/mdbook/text/0005-coretime-interface.html
+++ /dev/null
@@ -1,345 +0,0 @@

RFC-5: Coretime Interface

-
- - - -
Start Date: 06 July 2023
Description: Interface for manipulating the usage of cores on the Polkadot Ubiquitous Computer.
Authors: Gavin Wood, Robert Habermeier
-
-

Summary

-

In the Agile Coretime model of the Polkadot Ubiquitous Computer, as proposed in RFC-1 and RFC-3, it is necessary for the allocating parachain (envisioned to be one or more pallets on a specialised Brokerage System Chain) to communicate the core assignments to the Relay-chain, which is responsible for ensuring those assignments are properly enacted.

-

This is a proposal for the interface which will exist around the Relay-chain in order to communicate this information and instructions.

-

Motivation

-

The background motivation for this interface is splitting out coretime allocation functions and secondary markets from the Relay-chain onto System parachains. A well-understood and general interface is necessary for ensuring the Relay-chain receives coretime allocation instructions from one or more System chains without introducing dependencies on the implementation details of either side.

-

Requirements

-
  • The interface MUST allow the Relay-chain to be scheduled on a low-latency basis.
  • Individual cores MUST be schedulable, either in full to a single task (a ParaId or the Instantaneous Coretime Pool) or to many unique tasks in differing ratios.
  • Typical usage of the interface SHOULD NOT overload the VMP message system.
  • The interface MUST allow for the allocating chain to be notified of all accounting information relevant for making accurate rewards for contributing to the Instantaneous Coretime Pool.
  • The interface MUST allow for Instantaneous Coretime Market Credits to be communicated.
  • The interface MUST allow for the allocating chain to instruct changes to the number of cores which it is able to allocate.
  • The interface MUST allow for the allocating chain to be notified of changes to the number of cores which are able to be allocated by the allocating chain.

Stakeholders

-

Primary stakeholder sets are:

-
    -
  • Developers of the Relay-chain core-management logic.
  • -
  • Developers of the Brokerage System Chain and its pallets.
  • -
-

Socialization:

-

The content of this RFC was discussed in the Polkadot Fellows channel.

-

Explanation

-

The interface has two sections: The messages which the Relay-chain is able to receive from the allocating parachain (the UMP message types), and messages which the Relay-chain is able to send to the allocating parachain (the DMP message types). These messages are expected to be able to be implemented in a well-known pallet and called with the XCM Transact instruction.

-

Future work may include these messages being introduced into the XCM standard.

-

UMP Message Types

-

request_core_count

-

Prototype:

-
fn request_core_count(
-    count: u16,
-)
-
-

Requests the Relay-chain to alter the number of schedulable cores to count. Under normal operation, the Relay-chain SHOULD send a notify_core_count(count) message back.

-

request_revenue_info_at

-

Prototype:

-
fn request_revenue_at(
-    when: BlockNumber,
-)
-
-

Requests that the Relay-chain send a notify_revenue message back at or soon after Relay-chain block number when whose until parameter is equal to when.

-

The period into the past which when is allowed to be may be limited; if so, the limit should be understood on a channel outside of this proposal. In the case that the request cannot be serviced because when is too old a block, a notify_revenue message must still be returned, but its revenue field may be None.

-

credit_account

-

Prototype:

-
fn credit_account(
-    who: AccountId,
-    amount: Balance,
-)
-
-

Instructs the Relay-chain to add the amount of DOT to the Instantaneous Coretime Market Credit account of who.

-

It is expected that Instantaneous Coretime Market Credit on the Relay-chain is NOT transferrable and only redeemable when used to assign cores in the Instantaneous Coretime Pool.

-

assign_core

-

Prototype:

-
type PartsOf57600 = u16;
-enum CoreAssignment {
-    InstantaneousPool,
-    Task(ParaId),
-}
-fn assign_core(
-    core: CoreIndex,
-    begin: BlockNumber,
-    assignment: Vec<(CoreAssignment, PartsOf57600)>,
-    end_hint: Option<BlockNumber>,
-)
-
-

Requirements:

-
assert!(core < core_count);
-assert!(assignment.iter().map(|x| x.0).is_sorted());
-assert_eq!(assignment.iter().map(|x| x.0).unique().count(), assignment.len());
-assert_eq!(assignment.iter().map(|x| x.1).sum(), 57600);
-
-

Where:

-
    -
  • core_count is assumed to be the sole parameter in the last received notify_core_count message.
  • -
-

Instructs the Relay-chain to ensure that the core indexed as core is utilised for a number of assignments in specific ratios given by assignment starting as soon after begin as possible. Core assignments take the form of a CoreAssignment value which can either task the core to a ParaId value or indicate that the core should be used in the Instantaneous Pool. Each assignment comes with a ratio value, represented as the numerator of the fraction with a denominator of 57,600.

-

If end_hint is Some and the inner is greater than the current block number, then the Relay-chain should optimize in the expectation of receiving a new assign_core(core, ...) message at or prior to the block number of the inner value. Specific functionality should remain unchanged regardless of the end_hint value.

-

On the choice of denominator: 57,600 is a very composite number which factors into: 2 ** 8, 3 ** 2, 5 ** 2. By using it as the denominator we allow for various useful fractions to be perfectly represented including thirds, quarters, fifths, tenths, 80ths, percent and 256ths.
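As an illustration of the ratio encoding described above, here is a hypothetical half/half assignment of one core; only CoreAssignment and PartsOf57600 come from the prototype, and the rest (including the ParaId value) is made up for the example:

```rust
// Illustration only: a half/half split of one core between a task and the
// Instantaneous Pool, expressed in PartsOf57600 as described above.
type PartsOf57600 = u16;
type ParaId = u32;

enum CoreAssignment {
    InstantaneousPool,
    Task(ParaId),
}

fn main() {
    let assignment: Vec<(CoreAssignment, PartsOf57600)> = vec![
        (CoreAssignment::Task(2_000), 28_800),       // hypothetical ParaId, one half
        (CoreAssignment::InstantaneousPool, 28_800), // the other half
    ];
    // The ratios of any valid assignment must sum to the full denominator of 57,600.
    assert_eq!(assignment.iter().map(|x| x.1 as u32).sum::<u32>(), 57_600);
}
```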

-

DMP Message Types

-

notify_core_count

-

Prototype:

-
fn notify_core_count(
-    count: u16,
-)
-
-

Indicate that from this block onwards, the range of acceptable values of the core parameter of assign_core message is [0, count). assign_core will be a no-op if provided with a value for core outside of this range.

-

notify_revenue_info

-

Prototype:

-
fn notify_revenue_info(
-    until: BlockNumber,
-    revenue: Option<Balance>,
-)
-
-

Provide the amount of revenue accumulated from Instantaneous Coretime Sales from Relay-chain block number last_until to until, not including until itself. last_until is defined as being the until argument of the last notify_revenue message sent, or zero for the first call. If revenue is None, this indicates that the information is no longer available.

-

This explicitly disregards the possibility of multiple parachains requesting and being notified of revenue information. The Relay-chain must be configured to ensure that only a single revenue information destination exists.

-

Realistic Limits of the Usage

-

For request_revenue_info, a successful request should be possible if when is no less than the Relay-chain block number on arrival of the message less 100,000.

-

For assign_core, a successful request should be possible if begin is no less than the Relay-chain block number on arrival of the message plus 10 and workload contains no more than 100 items.

-

Performance, Ergonomics and Compatibility

-

No specific considerations.

-

Testing, Security and Privacy

-

Standard Polkadot testing and security auditing applies.

-

The proposal introduces no new privacy concerns.

- -

RFC-1 proposes a means of determining allocation of Coretime using this interface.

-

RFC-3 proposes a means of implementing the high-level allocations within the Relay-chain.

-

Drawbacks, Alternatives and Unknowns

-

None at present.

-

Prior Art and References

-

None.

- -
diff --git a/mdbook/text/0007-system-collator-selection.html b/mdbook/text/0007-system-collator-selection.html
deleted file mode 100644
index e6ea8cf48..000000000
--- a/mdbook/text/0007-system-collator-selection.html
+++ /dev/null
@@ -1,358 +0,0 @@

RFC-0007: System Collator Selection

-
- - - -
Start Date: 07 July 2023
Description: Mechanism for selecting collators of system chains.
Authors: Joe Petrowski
-
-

Summary

-

As core functionality moves from the Relay Chain into system chains, so increases the reliance on the liveness of these chains for the use of the network. It is not economically scalable, nor necessary from a game-theoretic perspective, to pay collators large rewards. This RFC proposes a mechanism -- part technical and part social -- for ensuring reliable collator sets that are resilient to attempts to stop any subsystem of the Polkadot protocol.

-

Motivation

-

In order to guarantee access to Polkadot's system, the collators on its system chains must propose -blocks (provide liveness) and allow all transactions to eventually be included. That is, some -collators may censor transactions, but there must exist one collator in the set who will include a -given transaction. In fact, all collators may censor varying subsets of transactions, but as long -as no transaction is in the intersection of every subset, it will eventually be included. The -objective of this RFC is to propose a mechanism to select such a set on each system chain.

-

While the network as a whole uses staking (and inflationary rewards) to attract validators, -collators face different challenges in scale and have lower security assumptions than validators. -Regarding scale, there exist many system chains, and it is economically expensive to pay collators -a premium. Likewise, any staked DOT for collation is not staked for validation. Since collator -sets do not need to meet Byzantine Fault Tolerance criteria, staking as the primary mechanism for -collator selection would remove stake that is securing BFT assumptions, making the network less -secure.

-

Another problem with economic scalability relates to the increasing number of system chains, and -corresponding increase in need for collators (i.e., increase in collator slots). "Good" (highly -available, non-censoring) collators will not want to compete in elections on many chains when they -could use their resources to compete in the more profitable validator election. Such dilution -decreases the required bond on each chain, leaving them vulnerable to takeover by hostile -collator groups.

-

This RFC proposes a system whereby collation is primarily an infrastructure service, with the -on-chain Treasury reimbursing costs of semi-trusted node operators, referred to as "Invulnerables". -The system need not trust the individual operators, only that as a set they would be resilient to -coordinated attempts to stop a single chain from halting or to censor a particular subset of -transactions.

-

In the case that users do not trust this set, this RFC also proposes that each chain always have -available collator positions that can be acquired by anyone by placing a bond.

-

Requirements

-
  • System MUST have at least one valid collator for every chain.
  • System MUST allow anyone to become a collator, provided they reserve/hold enough DOT.
  • System SHOULD select a set of collators with reasonable expectation that the set will not collude to censor any subset of transactions.
  • Collators selected by governance SHOULD have a reasonable expectation that the Treasury will reimburse their operating costs.

Stakeholders

-
    -
  • Infrastructure providers (people who run validator/collator nodes)
  • -
  • Polkadot Treasury
  • -
-

Explanation

-

This protocol builds on the existing -Collator Selection pallet -and its notion of Invulnerables. Invulnerables are collators (identified by their AccountIds) who -will be selected as part of the collator set every session. Operations relating to the management -of the Invulnerables are done through privileged, governance origins. The implementation should -maintain an API for adding and removing Invulnerable collators.

-

In addition to Invulnerables, there are also open slots for "Candidates". Anyone can register as a -Candidate by placing a fixed bond. However, with a fixed bond and fixed number of slots, there is -an obvious selection problem: The slots fill up without any logic to replace their occupants.

-

This RFC proposes that the collator selection protocol allow Candidates to increase (and decrease) their individual bonds, sort the Candidates according to bond, and select the top N Candidates, as sketched below. The selection and changeover should be coordinated by the session manager.
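A minimal sketch of the selection step proposed here, with hypothetical types; the actual implementation is expected to lean on the Bags List pallet's SortedListProvider, as noted in the next paragraph:

```rust
// Illustration only: pick Invulnerables plus the top-N bonded Candidates.
type AccountId = u64;
type Balance = u128;

fn select_collators(
    invulnerables: Vec<AccountId>,
    mut candidates: Vec<(AccountId, Balance)>,
    desired_candidates: usize,
) -> Vec<AccountId> {
    // Sort candidates by bond, highest first, and keep the top N.
    candidates.sort_by(|a, b| b.1.cmp(&a.1));
    invulnerables
        .into_iter()
        .chain(candidates.into_iter().take(desired_candidates).map(|(who, _)| who))
        .collect()
}

fn main() {
    let set = select_collators(vec![1, 2], vec![(10, 50), (11, 200), (12, 100)], 2);
    assert_eq!(set, vec![1, 2, 11, 12]);
}
```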

-

A FRAME pallet already exists for sorting ("bagging") "top N" groups, the -Bags List pallet. -This pallet's SortedListProvider should be integrated into the session manager of the Collator -Selection pallet.

-

Despite the lack of apparent economic incentives (i.e., inflation), several reasons exist why one -may want to bond funds to participate in the Candidates election, for example:

-
  • They want to build credibility to be selected as Invulnerable;
  • They want to ensure availability of an application, e.g. a stablecoin issuer might run a collator on Asset Hub to ensure transactions in its asset are included in blocks;
  • They fear censorship themselves, e.g. a voter might think their votes are being censored from governance, so they run a collator on the governance chain to include their votes.
-

Unlike the fixed-bond mechanism that fills up its Candidates, the election mechanism ensures that -anyone can join the collator set by placing the Nth highest bond.

-

Set Size

-

In order to achieve the requirements listed under Motivation, it is reasonable to have -approximately:

-
  • 20 collators per system chain,
  • of which 15 are Invulnerable, and
  • five are elected by bond.
-

Drawbacks

-

The primary drawback is a reliance on governance for continued treasury funding of infrastructure -costs for Invulnerable collators.

-

Testing, Security, and Privacy

-

The vast majority of cases can be covered by unit testing. Integration tests should ensure that the Collator Selection UpdateOrigin, which has permission to modify the Invulnerables and desired number of Candidates, can handle updates over XCM from the system's governance location.

-

Performance, Ergonomics, and Compatibility

-

This proposal has very little impact on most users of Polkadot, and should improve the performance -of system chains by reducing the number of missed blocks.

-

Performance

-

As chains have strict PoV size limits, care must be taken regarding the PoV impact of the session manager. Appropriate benchmarking and tests should ensure that conservative limits are placed on the number of Invulnerables and Candidates.

-

Ergonomics

-

The primary group affected is Candidate collators, who, after implementation of this RFC, will need -to compete in a bond-based election rather than a race to claim a Candidate spot.

-

Compatibility

-

This RFC is compatible with the existing implementation and can be handled via upgrades and -migration.

-

Prior Art and References

-

Written Discussions

- -

Prior Feedback and Input From

-
  • Kian Paimani
  • Jeff Burdges
  • Rob Habermeier
  • SR Labs Auditors
  • Current collators, including Paranodes, Stake Plus, Turboflakes, Peter Mensik, SIK, and many more.
-

Unresolved Questions

-

None at this time.

- -

There may exist in the future system chains for which this model of collator selection is not -appropriate. These chains should be evaluated on a case-by-case basis.

- -
diff --git a/mdbook/text/0008-parachain-bootnodes-dht.html b/mdbook/text/0008-parachain-bootnodes-dht.html
deleted file mode 100644
index 5c7c2e68e..000000000
--- a/mdbook/text/0008-parachain-bootnodes-dht.html
+++ /dev/null
@@ -1,315 +0,0 @@

RFC-0008: Store parachain bootnodes in relay chain DHT

-
Start Date: 2023-07-14
Description: Parachain bootnodes shall register themselves in the DHT of the relay chain
Authors: Pierre Krieger
-
-

Summary

-

The full nodes of the Polkadot peer-to-peer network maintain a distributed hash table (DHT), which is currently used for full nodes discovery and validators discovery purposes.

-

This RFC proposes to extend this DHT to be used to discover full nodes of the parachains of Polkadot.

-

Motivation

-

The maintenance of bootnodes has long been an annoyance for everyone.

-

When a bootnode is newly-deployed or removed, every chain specification must be updated in order to take the update into account. This has led to various non-optimal solutions, such as pulling chain specifications from GitHub repositories. When it comes to RPC nodes, UX developers often have trouble finding up-to-date addresses of parachain RPC nodes. With the ongoing migration from RPC nodes to light clients, similar problems would happen with chain specifications as well.

-

Furthermore, there exist multiple different possible variants of a certain chain specification: with the non-raw storage, with the raw storage, with just the genesis trie root hash, with or without checkpoint, etc. All of this creates confusion. Removing the need for parachain developers to be aware of and manage these different versions would be beneficial.

-

Since the PeerId and addresses of bootnodes need to be stable, extra maintenance work is required from the chain maintainers. For example, they need to be extra careful when migrating nodes within their infrastructure. In some situations, bootnodes are put behind domain names, which also requires maintenance work.

-

Because the list of bootnodes in chain specifications is so annoying to modify, the consequence is that the number of bootnodes is rather low (typically between 2 and 15). In order to better resist downtimes and DoS attacks, a better solution would be to use every node of a certain chain as potential bootnode, rather than special-casing some specific nodes.

-

While this RFC doesn't solve these problems for relay chains, it aims to solve them for parachains by storing the list of all the full nodes of a parachain in the relay chain DHT.

-

Assuming that this RFC is implemented, and that light clients are used, deploying a parachain wouldn't require more work than registering it onto the relay chain and starting the collators. There wouldn't be any need for special infrastructure nodes anymore.

-

Stakeholders

-

This RFC has been opened on my own initiative because I think that this is a good technical solution to a usability problem that many people are encountering and that they don't realize can be solved.

-

Explanation

-

The content of this RFC only applies to parachains and parachain nodes that are "Substrate-compatible". It is in no way mandatory for parachains to comply with this RFC.

-

Note that "Substrate-compatible" is very loosely defined as "implements the same mechanisms and networking protocols as Substrate". The author of this RFC believes that "Substrate-compatible" should be very precisely specified, but there is controversy on this topic.

-

While a lot of this RFC concerns the implementation of parachain nodes, it makes use of the resources of the Polkadot chain, and as such it is important to describe them in the Polkadot specification.

-

This RFC adds two mechanisms: a registration in the DHT, and a new networking protocol.

-

DHT provider registration

-

This RFC heavily relies on the functionalities of the Kademlia DHT already in use by Polkadot. You can find a link to the specification here.

-

Full nodes of a parachain registered on Polkadot should register themselves onto the Polkadot DHT as the providers of a key corresponding to the parachain that they are serving, as described in the Content provider advertisement section of the specification. This uses the ADD_PROVIDER system of libp2p-kademlia.

-

This key is: sha256(concat(scale_compact(para_id), randomness)) where the value of randomness can be found in the randomness field when calling the BabeApi_currentEpoch function. For example, for a para_id equal to 1000, and at the time of writing of this RFC (July 14th 2023 at 09:13 UTC), it is sha256(0xa10f12872447958d50aa7b937b0106561a588e0e2628d33f81b5361b13dbcf8df708), which is equal to 0x483dd8084d50dbbbc962067f216c37b627831d9339f5a6e426a32e3076313d87.
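As an illustration, here is a small sketch of the key derivation in Rust, assuming the `parity-scale-codec`, `sha2` and `hex` crates. The `randomness` value below is a placeholder; in practice it comes from `BabeApi_currentEpoch` (or `BabeApi_nextEpoch` for the secondary key).

```rust
use parity_scale_codec::{Compact, Encode};
use sha2::{Digest, Sha256};

fn provider_key(para_id: u32, randomness: &[u8; 32]) -> [u8; 32] {
    // scale_compact(para_id); for para_id 1000 this is 0xa10f.
    let mut data = Compact(para_id).encode();
    // concat(scale_compact(para_id), randomness)
    data.extend_from_slice(randomness);
    // sha256(...)
    Sha256::digest(&data).into()
}

fn main() {
    let randomness = [0u8; 32]; // placeholder epoch randomness
    println!("0x{}", hex::encode(provider_key(1000, &randomness)));
}
```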

-

In order to avoid downtime when the key changes, parachain full nodes should also register themselves as a secondary key that uses a value of randomness equal to the randomness field when calling BabeApi_nextEpoch.

-

Implementers should be aware that their implementation of Kademlia might already hash the key before XOR'ing it. The key is not meant to be hashed twice.

-

The compact SCALE encoding has been chosen in order to avoid problems related to the number of bytes and endianness of the para_id.

-

New networking protocol

-

A new request-response protocol should be added, whose name is /91b171bb158e2d3848fa23a9f1c25182fb8e20313b2c1eb49219da7a70ce90c3/paranode (that hexadecimal number is the genesis hash of the Polkadot chain, and should be adjusted appropriately for Kusama and others).

-

The request consists of a SCALE-compact-encoded para_id. For example, for a para_id equal to 1000, this is 0xa10f.

-

Note that because this is a request-response protocol, the request is always prefixed with its length in bytes. While the body of the request is simply the SCALE-compact-encoded para_id, the data actually sent onto the substream is both the length and body.

-

The response consists of a protobuf struct, defined as:

-
syntax = "proto2";
-
-message Response {
-    // Peer ID of the node on the parachain side.
-    bytes peer_id = 1;
-
-    // Multiaddresses of the parachain side of the node. The list and format are the same as for the `listenAddrs` field of the `identify` protocol.
-    repeated bytes addrs = 2;
-
-    // Genesis hash of the parachain. Used to determine the name of the networking protocol to connect to the parachain. Untrusted.
-    bytes genesis_hash = 3;
-
-    // So-called "fork ID" of the parachain. Used to determine the name of the networking protocol to connect to the parachain. Untrusted.
-    optional string fork_id = 4;
-};
-
-

The maximum size of a response is set to an arbitrary 16kiB. The responding side should make sure to conform to this limit. Given that fork_id is typically very small and that the only variable-length field is addrs, this is easily achieved by limiting the number of addresses.

-

Implementers should be aware that addrs might be very large, and are encouraged to limit the number of addrs to an implementation-defined value.

-

Drawbacks

-

The peer_id and addrs fields are in theory not strictly needed, as the PeerId and addresses could always be equal to the PeerId and addresses of the node being registered as the provider and serving the response. However, the Cumulus implementation currently uses two different networking stacks, one for the parachain and one for the relay chain, using two separate PeerIds and addresses, and as such the PeerId and addresses of the other networking stack must be indicated. Asking them to use only one networking stack wouldn't be feasible in a realistic time frame.

-

The values of the genesis_hash and fork_id fields cannot be verified by the requester and are expected to be unused at the moment. Instead, a client that desires connecting to a parachain is expected to obtain the genesis hash and fork ID of the parachain from the parachain chain specification. These fields are included in the networking protocol nonetheless in case an acceptable solution is found in the future, and in order to allow use cases such as discovering parachains in a not-strictly-trusted way.

-

Testing, Security, and Privacy

-

Because not all nodes want to be used as bootnodes, implementers are encouraged to provide a way to disable this mechanism. However, it is very much encouraged to leave this mechanism on by default for all parachain nodes.

-

This mechanism doesn't add or remove any security by itself, as it relies on existing mechanisms. -However, if the principle of chain specification bootnodes is entirely replaced with the mechanism described in this RFC (which is the objective), then it becomes important whether the mechanism in this RFC can be abused in order to make a parachain unreachable.

-

Due to the way Kademlia works, it would become the responsibility of the 20 Polkadot nodes whose sha256(peer_id) is closest to the key (described in the explanations section) to store the list of bootnodes of each parachain. -Furthermore, when a large number of providers (here, a provider is a bootnode) are registered, only the providers closest to the key are kept, up to a certain implementation-defined limit.

-

For this reason, an attacker can abuse this mechanism by randomly generating libp2p PeerIds until they find the 20 entries closest to the key representing the target parachain. They are then in control of the parachain bootnodes. Because the key changes periodically and isn't predictable, and assuming that the Polkadot DHT is sufficiently large, it is not realistic for an attack like this to be maintained in the long term.
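For readers unfamiliar with Kademlia, the following self-contained sketch (assuming only the `sha2` crate; the function names are illustrative) shows the notion of "closeness" being exploited here: peers are ranked by the XOR distance between sha256(peer_id) and the key, and only the closest ones end up responsible for the provider records.

```rust
use sha2::{Digest, Sha256};

// XOR distance between two 32-byte identifiers, compared as big-endian numbers.
fn xor_distance(a: &[u8; 32], b: &[u8; 32]) -> [u8; 32] {
    let mut out = [0u8; 32];
    for i in 0..32 {
        out[i] = a[i] ^ b[i];
    }
    out
}

/// Returns the hashes of the `n` peers closest to `key`.
fn closest_peers(key: &[u8; 32], peer_ids: &[Vec<u8>], n: usize) -> Vec<[u8; 32]> {
    let mut hashed: Vec<[u8; 32]> = peer_ids.iter().map(|p| Sha256::digest(p).into()).collect();
    hashed.sort_by_key(|h| xor_distance(h, key));
    hashed.truncate(n);
    hashed
}
```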

-

Furthermore, parachain clients are expected to cache a list of known good nodes on their disk. If the mechanism described in this RFC went down, it would only prevent new nodes from accessing the parachain, while clients that have connected before would not be affected.

-

Performance, Ergonomics, and Compatibility

-

Performance

-

The DHT mechanism generally has a low overhead, especially given that publishing providers is done only every 24 hours.

-

Doing a Kademlia iterative query then sending a provider record shouldn't take more than around 50 kiB in total of bandwidth for the parachain bootnode.

-

Assuming 1000 parachain full nodes, the 20 Polkadot full nodes corresponding to a specific parachain will each receive a sudden spike of a few megabytes of networking traffic when the key rotates. Again, this is relatively negligible. If this becomes a problem, one can add a random delay before a parachain full node registers itself to be the provider of the key corresponding to BabeApi_next_epoch.

-

Maybe the biggest uncertainty is the traffic that the 20 Polkadot full nodes will receive from light clients that desire knowing the bootnodes of a parachain. Light clients are generally encouraged to cache the peers that they use between restarts, so they should only query these 20 Polkadot full nodes at their first initialization. If this ever becomes a problem, this value of 20 is an arbitrary constant that can be increased for more redundancy.

-

Ergonomics

-

Irrelevant.

-

Compatibility

-

Irrelevant.

-

Prior Art and References

-

None.

-

Unresolved Questions

-

While it fundamentally doesn't change much to this RFC, using BabeApi_currentEpoch and BabeApi_nextEpoch might be inappropriate. I'm not familiar enough with good practices within the runtime to have an opinion here. Should it be an entirely new pallet?

- -

It is possible that in the future a client could connect to a parachain without having to rely on a trusted parachain specification.

- -
diff --git a/mdbook/text/0010-burn-coretime-revenue.html b/mdbook/text/0010-burn-coretime-revenue.html
deleted file mode 100644
index 469e692ae..000000000
--- a/mdbook/text/0010-burn-coretime-revenue.html
+++ /dev/null
@@ -1,256 +0,0 @@

RFC-0010: Burn Coretime Revenue

-
Start Date: 19.07.2023
Description: Revenue from Coretime sales should be burned
Authors: Jonas Gehrlein
-
-

Summary

-

The Polkadot UC will generate revenue from the sale of available Coretime. The question then arises: how should we handle these revenues? Broadly, there are two reasonable paths – burning the revenue and thereby removing it from total issuance, or diverting it to the Treasury. This Request for Comment (RFC) presents arguments favoring burning as the preferred mechanism for handling revenues from Coretime sales.

-

Motivation

-

How to handle the revenue accrued from Coretime sales is an important economic question that influences the value of DOT and should be properly discussed before deciding for either of the options. Now is the best time to start this discussion.

-

Stakeholders

-

Polkadot DOT token holders.

-

Explanation

-

This RFC discusses potential benefits of burning the revenue accrued from Coretime sales instead of diverting it to the Treasury. The arguments in favour are laid out below.

-

It's in the interest of the Polkadot community to have a consistent and predictable Treasury income, because volatility in the inflow can be damaging, especially in situations when it is insufficient. As such, this RFC operates under the presumption of a steady and sustainable Treasury income flow, which is crucial for the Polkadot community's stability. The assurance of a predictable Treasury income, as outlined in a prior discussion here, or through other equally effective measures, serves as a baseline assumption for this argument.

-

Consequently, we need not concern ourselves with this particular issue here. This naturally begs the question - why should we introduce additional volatility to the Treasury by aligning it with the variable Coretime sales? It's worth noting that Coretime revenues often exhibit an inverse relationship with periods when Treasury spending should ideally be ramped up. During periods of low Coretime utilization (indicated by lower revenue), Treasury should spend more on projects and endeavours to increase the demand for Coretime. This pattern underscores that Coretime sales, by their very nature, are an inconsistent and unpredictable source of funding for the Treasury. Given the importance of maintaining a steady and predictable inflow, it's unnecessary to rely on another volatile mechanism. Some might argue that we could have both: a steady inflow (from inflation) and some added bonus from Coretime sales, but burning the revenue would offer further benefits as described below.

-
  • Balancing Inflation: While DOT as a utility token inherently profits from a (reasonable) net inflation, it also benefits from a deflationary force that functions as a counterbalance to the overall inflation. Right now, the only mechanism on Polkadot that burns fees is the one for underutilized DOT in the Treasury. Finding other, more direct targets for burns makes sense, and the Coretime market is a good option.

  • Clear incentives: By burning the revenue accrued from Coretime sales, prices paid by buyers are clearly costs. This removes distortion from the market that might arise when the paid tokens end up somewhere else within the network. In that case, some actors might have secondary motives for influencing the price of Coretime sales, because they benefit down the line. For example, actors that actively participate in the Coretime sales are likely to also benefit from a higher Treasury balance, because they might frequently request funds for their projects. While those effects might appear far-fetched, they could accumulate. Burning the revenues makes sure that the prices paid are clearly costs to the actors themselves.

  • Collective Value Accrual: Following the previous argument, burning the revenue also generates a positive externality, because it reduces the overall issuance of DOT and thereby increases the value of each remaining token. In contrast to the aforementioned argument, this benefits all token holders collectively and equally. Therefore, I'd consider this the preferable option, because burning lets all token holders participate in Polkadot's success as Coretime usage increases.
- -
diff --git a/mdbook/text/0012-process-for-adding-new-collectives.html b/mdbook/text/0012-process-for-adding-new-collectives.html
deleted file mode 100644
index c5d97d2a7..000000000
--- a/mdbook/text/0012-process-for-adding-new-collectives.html
+++ /dev/null
@@ -1,313 +0,0 @@

RFC-0012: Process for Adding New System Collectives

-
Start Date: 24 July 2023
Description: A process for adding new (and removing existing) system collectives.
Authors: Joe Petrowski
-
-

Summary

-

Since the introduction of the Collectives parachain, many groups have expressed interest in forming new -- or migrating existing groups into -- on-chain collectives. While adding a new collective is relatively simple from a technical standpoint, the Fellowship will need to merge new pallets into the Collectives parachain for each new collective. This RFC proposes a means for the network to ratify a new collective, thus instructing the Fellowship to instate it in the runtime.

-

Motivation

-

Many groups have expressed interest in representing collectives on-chain. Some of these include:

-
  • Parachain technical fellowship (new)
  • Fellowship(s) for media, education, and evangelism (new)
  • Polkadot Ambassador Program (existing)
  • Anti-Scam Team (existing)
-

Collectives that form part of the core Polkadot protocol should have a mandate to serve the Polkadot network. However, as part of the Polkadot protocol, the Fellowship, in its capacity of maintaining system runtimes, will need to include modules and configurations for each collective.

-

Once a group has developed a value proposition for the Polkadot network, it should have a clear path to having its collective accepted on-chain as part of the protocol. Acceptance should direct the Fellowship to include the new collective with a given initial configuration into the runtime. However, the network, not the Fellowship, should ultimately decide which collectives are in the interest of the network.

-

Stakeholders

-
  • Polkadot stakeholders who would like to organize on-chain.
  • Technical Fellowship, in its role of maintaining system runtimes.
-

Explanation

-

The group that wishes to operate an on-chain collective should publish the following information:

-
  • Charter, including the collective's mandate and how it benefits Polkadot. This would be similar to the Fellowship Manifesto.
  • Seeding recommendation.
  • Member types, i.e. should members be individuals or organizations.
  • Member management strategy, i.e. how do members join and get promoted, if applicable.
  • How much, if at all, members should get paid in salary.
  • Any special origins this Collective should have outside itself. For example, the Fellowship can whitelist calls for referenda via the WhitelistOrigin.
-

This information could all be in a single document or, for example, a GitHub repository.

-

After publication, members should seek feedback from the community and Technical Fellowship, and make any revisions needed. When the collective believes the proposal is ready, they should bring a remark with the text APPROVE_COLLECTIVE("{collective name}, {commitment}") to a Root origin referendum. The proposer should provide instructions for generating commitment. The passing of this referendum would be unequivocal direction to the Fellowship that this collective should be part of the Polkadot runtime.

-

Note: There is no need for a REJECT referendum. Proposals that have not been approved are simply -not included in the runtime.

-

Removing Collectives

-

If someone believes that an existing collective is not acting in the interest of the network or in accordance with its charter, they should likewise have a means to instruct the Fellowship to remove that collective from Polkadot.

-

An on-chain remark from the Root origin with the text REMOVE_COLLECTIVE("{collective name}, {para ID}, [{pallet indices}]") would instruct the Fellowship to remove the collective via the listed pallet indices on paraId. Should someone want to construct such a remark, they should have a reasonable expectation that a member of the Fellowship would help them identify the pallet indices associated with a given collective, whether or not the Fellowship member agrees with removal.

-

Collective removal may also come with other governance calls, for example voiding any scheduled -Treasury spends that would fund the given collective.

-

Drawbacks

-

Passing a Root origin referendum is slow. However, given the network's investment (in terms of code -maintenance and salaries) in a new collective, this is an appropriate step.

-

Testing, Security, and Privacy

-

No impacts.

-

Performance, Ergonomics, and Compatibility

-

Generally, all new collectives will be in the Collectives parachain. Thus, performance impacts should strictly be limited to this parachain and not affect others. As the majority of logic for collectives is generalized and reusable, we expect most collectives to be instances of similar subsets of modules. That is, new collectives should generally be compatible with UIs and other services that provide collective-related functionality, with little modification needed to support new ones.

-

Prior Art and References

-

The launch of the Technical Fellowship, see the -initial forum post.

-

Unresolved Questions

-

None at this time.

- -
diff --git a/mdbook/text/0013-prepare-blockbuilder-and-core-runtime-apis-for-mbms.html b/mdbook/text/0013-prepare-blockbuilder-and-core-runtime-apis-for-mbms.html
deleted file mode 100644
index b98838301..000000000
--- a/mdbook/text/0013-prepare-blockbuilder-and-core-runtime-apis-for-mbms.html
+++ /dev/null
@@ -1,306 +0,0 @@

RFC-0013: Prepare Core runtime API for MBMs

-
Start Date: July 24, 2023
Description: Prepare the Core Runtime API for Multi-Block-Migrations
Authors: Oliver Tale-Yazdi
-
-

Summary

-

Introduces breaking changes to the Core runtime API by letting Core::initialize_block return an enum. The version of Core is bumped from 4 to 5.

-

Motivation

-

The main feature that motivates this RFC is Multi-Block-Migrations (MBM); these make it possible to split a migration over multiple blocks.
Further, it would be nice not to hinder the possibility of implementing a new hook poll, that runs at the beginning of the block when there are no MBMs and has access to AllPalletsWithSystem. This hook can then be used to replace the use of on_initialize and on_finalize for non-deadline-critical logic.
In a similar fashion, it should not hinder the future addition of a System::PostInherents callback that always runs after all inherents were applied.

-

Stakeholders

-
  • Substrate Maintainers: They have to implement this, including tests, audit and maintenance burden.
  • Polkadot Runtime developers: They will have to adapt the runtime files to this breaking change.
  • Polkadot Parachain Teams: They have to adapt to the breaking changes but then eventually have multi-block migrations available.
-

Explanation

-

Core::initialize_block

-

This runtime API function is changed from returning () to ExtrinsicInclusionMode:

-
 fn initialize_block(header: &<Block as BlockT>::Header)
+  -> ExtrinsicInclusionMode;
-

ExtrinsicInclusionMode is defined as:

-
enum ExtrinsicInclusionMode {
  /// All extrinsics are allowed in this block.
  AllExtrinsics,
  /// Only inherents are allowed in this block.
  OnlyInherents,
}
-

A block author MUST respect the ExtrinsicInclusionMode that is returned by initialize_block. The runtime MUST reject blocks that have non-inherent extrinsics in them while OnlyInherents was returned.
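As a hedged illustration (this is not the actual block-builder code; the enum mirrors the definition above, while the Extrinsic type and filter function are invented for the example), an author-side filter could look like this:

```rust
enum ExtrinsicInclusionMode {
    AllExtrinsics,
    OnlyInherents,
}

enum Extrinsic {
    Inherent(Vec<u8>),
    Transaction(Vec<u8>),
}

/// Keep only the extrinsics that the runtime allows in this block.
fn select_extrinsics(mode: ExtrinsicInclusionMode, pool: Vec<Extrinsic>) -> Vec<Extrinsic> {
    pool.into_iter()
        .filter(|xt| match (&mode, xt) {
            // Inherents are always allowed.
            (_, Extrinsic::Inherent(_)) => true,
            // User transactions only when the runtime permits them.
            (ExtrinsicInclusionMode::AllExtrinsics, Extrinsic::Transaction(_)) => true,
            (ExtrinsicInclusionMode::OnlyInherents, Extrinsic::Transaction(_)) => false,
        })
        .collect()
}
```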

-

Coming back to the motivations and how they can be implemented with this runtime API change:

-

1. Multi-Block-Migrations: The runtime is being put into lock-down mode for the duration of the migration process by returning OnlyInherents from initialize_block. This ensures that no user provided transaction can interfere with the migration process. It is absolutely necessary to ensure this, otherwise a transaction could call into un-migrated storage and violate storage invariants.

-

2. poll is possible by using apply_extrinsic as entry-point and not hindered by this approach. It would not be possible to use a pallet inherent like System::last_inherent to achieve this for two reasons: First is that pallets do not have access to AllPalletsWithSystem which is required to invoke the poll hook on all pallets. Second is that the runtime does currently not enforce an order of inherents.

-

3. System::PostInherents can be done in the same manner as poll.

-

Drawbacks

-

The previous drawback of cementing the order of inherents has been addressed and removed by redesigning the approach. No further drawbacks have been identified thus far.

-

Testing, Security, and Privacy

-

The new logic of initialize_block can be tested by checking that the block-builder will skip transactions when OnlyInherents is returned.

-

Security: n/a

-

Privacy: n/a

-

Performance, Ergonomics, and Compatibility

-

Performance

-

The performance overhead is minimal in the sense that no clutter was added after fulfilling the -requirements. The only performance difference is that initialize_block also returns an enum that needs to be passed through the WASM boundary. This should be negligible.

-

Ergonomics

-

The new interface allows for more extensible runtime logic. In the future, this will be utilized for -multi-block-migrations which should be a huge ergonomic advantage for parachain developers.

-

Compatibility

-

The advice here is OPTIONAL and outside of the RFC. To not degrade -user experience, it is recommended to ensure that an updated node can still import historic blocks.

-

Prior Art and References

-

The RFC is currently being implemented in polkadot-sdk#1781 (formerly substrate#14275). Related issues and merge -requests:

- -

Unresolved Questions

-

Please suggest a better name for BlockExecutiveMode. We already tried: RuntimeExecutiveMode, ExtrinsicInclusionMode. The names of the modes Normal and Minimal were also called AllExtrinsics and OnlyInherents, so if you have naming preferences, please post them.
=> renamed to ExtrinsicInclusionMode

-

Is post_inherents more consistent instead of last_inherent? Then we should change it.
-=> renamed to last_inherent

- -

The long-term future here is to move the block building logic into the runtime. Currently there is a tight dance between the block author and the runtime; the author has to call into different runtime functions in quick succession and exact order. Any misstep causes the block to be invalid.
-This can be unified and simplified by moving both parts into the runtime.

- -
diff --git a/mdbook/text/0014-improve-locking-mechanism-for-parachains.html b/mdbook/text/0014-improve-locking-mechanism-for-parachains.html
deleted file mode 100644
index 33efabc07..000000000
--- a/mdbook/text/0014-improve-locking-mechanism-for-parachains.html
+++ /dev/null
@@ -1,347 +0,0 @@

RFC-0014: Improve locking mechanism for parachains

-
Start Date: July 25, 2023
Description: Improve locking mechanism for parachains
Authors: Bryan Chen
-
-

Summary

-

This RFC proposes a set of changes to the parachain lock mechanism. The goal is to allow a parachain manager to self-service the parachain without root track governance action.

-

This is achieved by removing the existing lock conditions and only locking a parachain when:

-
  • A parachain manager explicitly locks the parachain
  • OR a parachain block is produced successfully
-

Motivation

-

The manager of a parachain has permission to manage the parachain when the parachain is unlocked. Parachains are by default locked when onboarded to a slot. This means the parachain wasm/genesis must be valid; otherwise, a root track governance action on the relaychain is required to update the parachain.

-

The current reliance on root track governance actions for managing parachains can be time-consuming and burdensome. This RFC aims to address this technical difficulty by allowing parachain managers to take self-service actions, rather than relying on general public voting.

-

The key scenarios this RFC seeks to improve are:

-
  1. Rescue a parachain with invalid wasm/genesis.
-

While we have various resources and templates to build a new parachain, it is still not a trivial task. It is very easy to make a mistake resulting in an invalid wasm/genesis. With a lack of tools to help detect those issues1, it is very likely that the issues are only discovered after the parachain is onboarded on a slot. In this case, the parachain is locked and the parachain team has to go through a lengthy governance process to rescue the parachain.

-
  2. Perform lease renewal for an existing parachain.
-

One way to perform lease renewal for a parachain is by doing a lease swap with another parachain with a longer lease. This requires that the other parachain be operational and able to perform an XCM Transact call into the relaychain to dispatch the swap call. Combined with the overhead of setting up a new parachain, this is a time-consuming and expensive process. Ideally, the parachain manager should be able to perform the lease swap call without having a running parachain2.

-

Requirements

-
  • A parachain manager SHOULD be able to rescue a parachain by updating the wasm/genesis without root track governance action.
  • A parachain manager MUST NOT be able to update the wasm/genesis if the parachain is locked.
  • A parachain SHOULD be locked when it successfully produced the first block.
  • A parachain manager MUST be able to perform lease swap without having a running parachain.
-

Stakeholders

-
  • Parachain teams
  • Parachain users
-

Explanation

-

Status quo

-

A parachain can either be locked or unlocked3. With parachain locked, the parachain manager does not have any privileges. With parachain unlocked, the parachain manager can perform following actions with the paras_registrar pallet:

-
  • deregister: Deregister a Para Id, freeing all data and returning any deposit.
  • swap: Initiate or confirm lease swap with another parachain.
  • add_lock: Lock the parachain.
  • schedule_code_upgrade: Schedule a parachain upgrade to update parachain wasm.
  • set_current_head: Set the parachain's current head.
-

Currently, a parachain can be locked with following conditions:

-
  • From add_lock call, which can be dispatched by relaychain Root origin, the parachain, or the parachain manager.
  • When a parachain is onboarded on a slot4.
  • When a crowdloan is created.
-

Only the relaychain Root origin or the parachain itself can unlock the lock5.

-

This creates an issue: if the parachain is unable to produce blocks, the parachain manager is unable to do anything and has to rely on the relaychain Root origin to manage the parachain.

-

Proposed changes

-

This RFC proposes to change the lock and unlock conditions.

-

A parachain can be locked only with following conditions:

-
  • Relaychain governance MUST be able to lock any parachain.
  • A parachain MUST be able to lock its own lock.
  • A parachain manager SHOULD be able to lock the parachain.
  • A parachain SHOULD be locked when it successfully produced a block for the first time.
-

A parachain can be unlocked only with following conditions:

-
  • Relaychain governance MUST be able to unlock any parachain.
  • A parachain MUST be able to unlock its own lock.
-

Note that creating a crowdloan MUST NOT lock the parachain, and onboarding a parachain SHOULD NOT lock it until a new block is successfully produced.

-

Migration

-

A one-off migration is proposed in order to apply this change retrospectively so that existing parachains can also benefit from this RFC. This migration will unlock parachains that meet all of the following conditions (see the sketch after this list):

-
  • Parachain is locked.
  • Parachain never produced a block, including from expired leases.
  • Parachain manager never explicitly locked the parachain.
-
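A minimal sketch of that filter, assuming hypothetical per-parachain flags (the struct and field names are illustrative, not the real storage items):

```rust
struct ParaInfo {
    locked: bool,
    ever_produced_block: bool,
    manager_explicitly_locked: bool,
}

/// A parachain is unlocked by the one-off migration only if all three conditions hold.
fn should_unlock(info: &ParaInfo) -> bool {
    info.locked && !info.ever_produced_block && !info.manager_explicitly_locked
}

fn migrate(paras: &mut [(u32, ParaInfo)]) {
    for (_para_id, info) in paras.iter_mut() {
        if should_unlock(info) {
            info.locked = false;
        }
    }
}
```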

Drawbacks

-

Parachain locks are designed in such way to ensure the decentralization of parachains. If parachains are not locked when it should be, it could introduce centralization risk for new parachains.

-

For example, one possible scenario is that a collective may decide to launch a parachain fully decentralized. However, if the parachain is unable to produce block, the parachain manager will be able to replace the wasm and genesis without the consent of the collective.

-

It is considered this risk is tolerable as it requires the wasm/genesis to be invalid at first place. It is not yet practically possible to develop a parachain without any centralized risk currently.

-

Another case is that a parachain team may decide to use a crowdloan to help secure a slot lease. Previously, creating a crowdloan would lock a parachain. This means crowdloan participants would know exactly the genesis of the parachain for the crowdloan they are participating in. However, this actually provides little assurance to crowdloan participants. For example, if the genesis block is determined before a crowdloan is started, it is not possible to have an onchain mechanism to enforce reward distributions for crowdloan participants. They always have to rely on the parachain team to fulfill the promise after the parachain is alive.

-

Existing operational parachains will not be impacted.

-

Testing, Security, and Privacy

-

The implementation of this RFC will be tested on testnets (Rococo and Westend) first.

-

An audit maybe required to ensure the implementation does not introduce unwanted side effects.

-

There is no privacy related concerns.

-

Performance

-

This RFC should not introduce any performance impact.

-

Ergonomics

-

This RFC should improve the developer experiences for new and existing parachain teams

-

Compatibility

-

This RFC is fully compatible with existing interfaces.

-

Prior Art and References

-
  • Parachain Slot Extension Story: https://github.com/paritytech/polkadot/issues/4758
  • Allow parachain to renew lease without actually running another parachain: https://github.com/paritytech/polkadot/issues/6685
  • Always treat a parachain that never produced a block for a significant amount of time as unlocked: https://github.com/paritytech/polkadot/issues/7539
-

Unresolved Questions

-

None at this stage.

- -

This RFC is only intended to be a short term solution. Slots will be removed in future and lock mechanism is likely going to be replaced with a more generalized parachain manage & recovery system in future. Therefore long term impacts of this RFC are not considered.

-
1. https://github.com/paritytech/cumulus/issues/377
2. https://github.com/paritytech/polkadot/issues/6685
3. https://github.com/paritytech/polkadot/blob/994af3de79af25544bf39644844cbe70a7b4d695/runtime/common/src/paras_registrar.rs#L51-L52C15
4. https://github.com/paritytech/polkadot/blob/994af3de79af25544bf39644844cbe70a7b4d695/runtime/common/src/paras_registrar.rs#L473-L475
5. https://github.com/paritytech/polkadot/blob/994af3de79af25544bf39644844cbe70a7b4d695/runtime/common/src/paras_registrar.rs#L333-L340

-
- -
diff --git a/mdbook/text/0022-adopt-encointer-runtime.html b/mdbook/text/0022-adopt-encointer-runtime.html
deleted file mode 100644
index 864811931..000000000
--- a/mdbook/text/0022-adopt-encointer-runtime.html
+++ /dev/null
@@ -1,268 +0,0 @@

RFC-0022: Adopt Encointer Runtime

-
Start Date: Aug 22nd 2023
Description: Permanently move the Encointer runtime into the Fellowship runtimes repo.
Authors: @brenzi for Encointer Association, 8000 Zurich, Switzerland
-
-

Summary

-

Encointer has been a system chain on Kusama since January 2022 and has been developed and maintained by the Encointer association. This RFC proposes to treat Encointer like any other system chain and include it in the fellowship repo with this PR.

-

Motivation

-

Encointer does not seek to be in control of its runtime repository. As a decentralized system, the fellowship has a more suitable structure to maintain a system chain runtime repo than the Encointer association does.

-

Also, Encointer aims to update its runtime in batches with other system chains in order to have consistency for interoperability across system chains.

-

Stakeholders

-
  • Fellowship: Will continue to take upon them the review and auditing work for the Encointer runtime, but the process is streamlined with other system chains and therefore less time-consuming compared to the separate repo and CI process we currently have.
  • Kusama Network: Tokenholders can easily see the changes of all system chains in one place.
  • Encointer Association: Further decentralization of the Encointer Network necessities like devops.
  • Encointer devs: Being able to work directly in the Fellowship runtimes repo to streamline and synergize with other developers.
-

Explanation

-

Our PR has all details about our runtime and how we would move it into the fellowship repo.

-

Noteworthy: All Encointer-specific pallets will still be located in encointer's repo for the time being: https://github.com/encointer/pallets

-

It will still be the duty of the Encointer team to keep its runtime up to date and provide adequate test fixtures. Frequent dependency bumps with Polkadot releases would be beneficial for interoperability and could be streamlined with other system chains but that will not be a duty of fellowship. Whenever possible, all system chains could be upgraded jointly (including Encointer) with a batch referendum.

-

Further notes:

-
  • Encointer will publish all its crates on crates.io
  • Encointer does not carry out external auditing of its runtime nor pallets. It would be beneficial but not a requirement from our side if Encointer could join the auditing process of other system chains.
-

Drawbacks

-

Unlike all other system chains, development and maintenance of the Encointer Network are mainly financed by the KSM Treasury and possibly the DOT Treasury in the future. Encointer is dedicated to maintaining its network and runtime code for as long as possible, but there is a dependency on funding which is not in the hands of the fellowship. The only risk in the context of funding, however, is that the Encointer runtime will see less frequent updates if there's less funding.

-

Testing, Security, and Privacy

-

No changes to the existing system are proposed. Only changes to how maintenance is organized.

-

Performance, Ergonomics, and Compatibility

-

No changes

-

Prior Art and References

-

Existing Encointer runtime repo

-

Unresolved Questions

-

None identified

- -

More info on Encointer: encointer.org

- -
diff --git a/mdbook/text/0032-minimal-relay.html b/mdbook/text/0032-minimal-relay.html
deleted file mode 100644
index 1e438af9a..000000000
--- a/mdbook/text/0032-minimal-relay.html
+++ /dev/null
@@ -1,436 +0,0 @@

RFC-0032: Minimal Relay

-
Start Date: 20 September 2023
Description: Proposal to minimise Relay Chain functionality.
Authors: Joe Petrowski, Gavin Wood
-
-

Summary

-

The Relay Chain contains most of the core logic for the Polkadot network. While this was necessary prior to the launch of parachains and development of XCM, most of this logic can exist in parachains. This is a proposal to migrate several subsystems into system parachains.

-

Motivation

-

Polkadot's scaling approach allows many distinct state machines (known generally as parachains) to operate with common guarantees about the validity and security of their state transitions. Polkadot provides these common guarantees by executing the state transitions on a strict subset (a backing group) of the Relay Chain's validator set.

-

However, state transitions on the Relay Chain need to be executed by all validators. If any of those state transitions can occur on parachains, then the resources of the complement of a single backing group could be used to offer more cores. As in, they could be offering more coretime (a.k.a. blockspace) to the network.

-

By minimising state transition logic on the Relay Chain by migrating it into "system chains" -- a set of parachains that, with the Relay Chain, make up the Polkadot protocol -- the Polkadot Ubiquitous Computer can maximise its primary offering: secure blockspace.

-

Stakeholders

-
  • Parachains that interact with affected logic on the Relay Chain;
  • Core protocol and XCM format developers;
  • Tooling, block explorer, and UI developers.
-

Explanation

-

The following pallets and subsystems are good candidates to migrate from the Relay Chain:

-
  • Identity
  • Balances
  • Staking
      • Staking
      • Election Provider
      • Bags List
      • NIS
      • Nomination Pools
      • Fast Unstake
  • Governance
      • Treasury and Bounties
      • Conviction Voting
      • Referenda
-

Note: The Auctions and Crowdloan pallets will be replaced by Coretime, its system chain and interface described in RFC-1 and RFC-5, respectively.

-

Migrations

-

Some subsystems are simpler to move than others. For example, migrating Identity can be done by -simply preventing state changes in the Relay Chain, using the Identity-related state as the genesis -for a new chain, and launching that new chain with the genesis and logic (pallet) needed.

-

Other subsystems cannot experience any downtime like this because they are essential to the -network's functioning, like Staking and Governance. However, these can likely coexist with a -similarly-permissioned system chain for some time, much like how "Gov1" and "OpenGov" coexisted at -the latter's introduction.

-

Specific migration plans will be included in release notes of runtimes from the Polkadot Fellowship -when beginning the work of migrating a particular subsystem.

-

Interfaces

-

The Relay Chain, in many cases, will still need to interact with these subsystems, especially -Staking and Governance. These subsystems will require making some APIs available either via -dispatchable calls accessible to XCM Transact or possibly XCM Instructions in future versions.

-

For example, Staking provides a pallet-API to register points (e.g. for block production) and offences (e.g. equivocation). With Staking in a system chain, that chain would need to allow the Relay Chain to update validator points periodically so that it can correctly calculate rewards.
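A purely illustrative sketch of the kind of interface the Relay Chain would need once Staking lives on a system chain; the trait and method names here are hypothetical (not an existing pallet API), and in practice the calls would be delivered as XCM Transact messages or dedicated XCM instructions:

```rust
type AccountId = [u8; 32];
type SessionIndex = u32;

/// Hypothetical interface the Relay Chain would drive on the Staking chain.
trait RemoteStakingInterface {
    /// Report era/block-production points earned by validators in a session.
    fn note_points(&mut self, session: SessionIndex, points: Vec<(AccountId, u32)>);
    /// Report an offence (e.g. equivocation) so the staking chain can slash.
    fn report_offence(&mut self, session: SessionIndex, offender: AccountId);
}
```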

-

A pub-sub protocol may also lend itself to these types of interactions.

-

Functional Architecture

-

This RFC proposes that system chains form individual components within the system's architecture and that these components are chosen as functional groups. This approach allows synchronous composability where it is most valuable, but isolates logic in such a way that provides flexibility for optimal resource allocation (see Resource Allocation). For the subsystems discussed in this RFC, namely Identity, Governance, and Staking, this would mean:

-
  • People Chain, for identity and personhood logic, providing functionality related to the attributes of single actors;
  • Governance Chain, for governance and system collectives, providing functionality for pluralities to express their voices within the system;
  • Staking Chain, for Polkadot's staking system, including elections, nominations, reward distribution, slashing, and non-interactive staking; and
  • Asset Hub, for fungible and non-fungible assets, including DOT.
-

The Collectives chain and Asset Hub already exist, so implementation of this RFC would mean two new -chains (People and Staking), with Governance moving to the currently-known-as Collectives chain -and Asset Hub being increasingly used for DOT over the Relay Chain.

-

Note that one functional group will likely include many pallets, as we do not know how pallet -configurations and interfaces will evolve over time.

-

Resource Allocation

-

The system should minimise wasted blockspace. These three (and other) subsystems may not each -consistently require a dedicated core. However, core scheduling is far more agile than functional -grouping. While migrating functionality from one chain to another can be a multi-month endeavour, -cores can be rescheduled almost on-the-fly.

-

Migrations are also breaking changes to some use cases, for example other parachains that need to -route XCM programs to particular chains. It is thus preferable to do them a single time in migrating -off the Relay Chain, reducing the risk of needing parachain splits in the future.

-

Therefore, chain boundaries should be based on functional grouping where synchronous composability is most valuable; and efficient resource allocation should be managed by the core scheduling protocol.

-

Many of these system chains (including Asset Hub) could often share a single core in a semi-round -robin fashion (the coretime may not be uniform). When needed, for example during NPoS elections or -slashing events, the scheduler could allocate a dedicated core to the chain in need of more -throughput.

-

Deployment

-

Actual migrations should happen based on some prioritization. This RFC proposes to migrate Identity, -Staking, and Governance as the systems to work on first. A brief discussion on the factors involved -in each one:

-

Identity

-

Identity will be one of the simpler pallets to migrate into a system chain, as its logic is largely -self-contained and it does not "share" balances with other subsystems. As in, any DOT is held in -reserve as a storage deposit and cannot be simultaneously used the way locked DOT can be locked for -multiple purposes.

-

Therefore, migration can take place as follows:

-
  1. The pallet can be put in a locked state, blocking most calls to the pallet and preventing updates to identity info.
  2. The frozen state will form the genesis of a new system parachain.
  3. Functions will be added to the pallet that allow migrating the deposit to the parachain. The parachain deposit is on the order of 1/100th of the Relay Chain's. Therefore, this will result in freeing up Relay State as well as most of each user's reserved balance.
  4. The pallet and any leftover state can be removed from the Relay Chain.
-

User interfaces that render Identity information will need to source their data from the new system -parachain.

-

Note: In the future, it may make sense to decommission Kusama's Identity chain and do all account -identities via Polkadot's. However, the Kusama chain will serve as a dress rehearsal for Polkadot.

-

Staking

-

Migrating the staking subsystem will likely be the most complex technical undertaking, as the -Staking system cannot stop (the system MUST always have a validator set) nor run in parallel (the -system MUST have only one validator set) and the subsystem itself is made up of subsystems in the -runtime and the node. For example, if offences are reported to the Staking parachain, validator -nodes will need to submit their reports there.

-

Handling balances also introduces complications. The same balance can be used for staking and -governance. Ideally, all balances stay on Asset Hub, and only report "credits" to system chains like -Staking and Governance. However, staking mutates balances by issuing new DOT on era changes and for -rewards. Allowing DOT directly on the Staking parachain would simplify staking changes.

-

Given the complexity, it would be pragmatic to include the Balances pallet in the Staking parachain -in its first version. Any other systems that use overlapping locks, most notably governance, will -need to recognise DOT held on both Asset Hub and the Staking parachain.

-

There is more discussion about staking in a parachain in Moving Staking off the Relay -Chain.

-

Governance

-

Migrating governance into a parachain will be less complicated than staking. Most of the primitives -needed for the migration already exist. The Treasury supports spending assets on remote chains and -collectives like the Polkadot Technical Fellowship already function in a parachain. That is, XCM -already provides the ability to express system origins across chains.

-

Therefore, actually moving the governance logic into a parachain will be simple. It can run in -parallel with the Relay Chain's governance, which can be removed when the parachain has demonstrated -sufficient functionality. It's possible that the Relay Chain maintain a Root-level emergency track -for situations like parachains -halting.

-

The only complication arises from the fact that both Asset Hub and the Staking parachain will have -DOT balances; therefore, the Governance chain will need to be able to credit users' voting power -based on balances from both locations. This is not expected to be difficult to handle.

-

Kusama

-

Although Polkadot and Kusama both have system chains running, they have to date only been used for -introducing new features or bodies, for example fungible assets or the Technical Fellowship. There -has not yet been a migration of logic/state from the Relay Chain into a parachain. Given its more -realistic network conditions than testnets, Kusama is the best stage for rehearsal.

-

In the case of identity, Polkadot's system may be sufficient for the ecosystem. Therefore, Kusama -should be used to test the migration of logic and state from Relay Chain to parachain, but these -features may be (at the will of Kusama's governance) dropped from Kusama entirely after a successful -migration on Polkadot.

-

For Governance, Polkadot already has the Collectives parachain, which would become the Governance -parachain. The entire group of DOT holders is itself a collective (the legislative body), and -governance provides the means to express voice. Launching a Kusama Governance chain would be -sensible to rehearse a migration.

-

The Staking subsystem is perhaps where Kusama would provide the most value in its canary capacity. -Staking is the subsystem most constrained by PoV limits. Ensuring that elections, payouts, session -changes, offences/slashes, etc. work in a parachain on Kusama -- with its larger validator set -- -will give confidence to the chain's robustness on Polkadot.

-

Drawbacks

-

These subsystems will have fewer resources available in their cores than they had on the Relay Chain. Staking in particular may require some optimizations to deal with constraints.

-

Testing, Security, and Privacy

-

Standard audit/review requirements apply. More powerful multi-chain integration test tools would be useful in development.

-

Performance, Ergonomics, and Compatibility

-

Describe the impact of the proposal on the exposed functionality of Polkadot.

-

Performance

-

This is an optimization. The removal of public/user transactions on the Relay Chain ensures that its -primary resources are allocated to system performance.

-

Ergonomics

-

This proposal alters very little for coretime users (e.g. parachain developers). Application -developers will need to interact with multiple chains, making ergonomic light client tools -particularly important for application development.

-

For existing parachains that interact with these subsystems, they will need to configure their -runtimes to recognize the new locations in the network.

-

Compatibility

-

Implementing this proposal will require some changes to pallet APIs and/or a pub-sub protocol. -Application developers will need to interact with multiple chains in the network.

-

Prior Art and References

- -

Unresolved Questions

-

There remain some implementation questions, like how to use balances for both Staking and -Governance. See, for example, Moving Staking off the Relay -Chain.

- -

Ideally the Relay Chain becomes transactionless, such that not even balances are represented there. -With Staking and Governance off the Relay Chain, this is not an unreasonable next step.

-

With Identity on Polkadot, Kusama may opt to drop its People Chain.

- -
diff --git a/mdbook/text/0042-extrinsics-state-version.html b/mdbook/text/0042-extrinsics-state-version.html
deleted file mode 100644
index e1583b0a0..000000000
--- a/mdbook/text/0042-extrinsics-state-version.html
+++ /dev/null
@@ -1,304 +0,0 @@

RFC-0042: Add System version that replaces StateVersion on RuntimeVersion

|                 |                                              |
| --------------- | -------------------------------------------- |
| **Start Date**  | 25th October 2023                            |
| **Description** | Add System Version and remove State Version  |
| **Authors**     | Vedhavyas Singareddi                         |

Summary

-

At the moment, we have the system_version field on RuntimeVersion that determines which state version is used for the Storage. We have a use case where we want the extrinsics root to be derived using StateVersion::V1. Without defining a new field under RuntimeVersion, we would like to propose adding system_version, which can be used to derive both the storage and extrinsic state version.

-

Motivation

-

Since the extrinsic state version is always StateVersion::V0, deriving the extrinsic root requires the full extrinsic data. This is problematic when we need to verify the extrinsics root and the extrinsics are large. This problem is further explored in https://github.com/polkadot-fellows/RFCs/issues/19

-

For the Subspace project, we have an enshrined rollup called Domain with optimistic verification, and Fraud proofs are used to detect malicious behavior. One of the Fraud proof variants derives the Domain block extrinsic root on Subspace's consensus chain. Since StateVersion::V0 requires full extrinsic data, we are forced to pass all the extrinsics through the Fraud proof. One of the main challenges here is that some extrinsics could be big enough that this variant of Fraud proof may not be included in the consensus block due to the block's weight restriction. If the extrinsic root is derived using StateVersion::V1, then we do not need to pass the full extrinsic data but rather, at maximum, 32 bytes of extrinsic data.

-

Stakeholders

-
* Technical Fellowship, in its role of maintaining system runtimes.

Explanation

-

In order to use a project-specific StateVersion for extrinsic roots, we proposed an implementation that introduced a parameter to frame_system::Config, but that unfortunately did not feel correct. So we would like to propose adding this change to the RuntimeVersion object. The system version, if introduced, will be used to derive both the storage and extrinsic state version. If the system version is 0, then both the Storage and Extrinsic State version would use V0. If the system version is 1, then the Storage State version would use V1 and the Extrinsic State version would use V0. If the system version is 2, then both the Storage and Extrinsic State version would use V1.

-

If implemented, the new RuntimeVersion definition would look something similar to

-
/// Runtime version (Rococo).
#[sp_version::runtime_version]
pub const VERSION: RuntimeVersion = RuntimeVersion {
	spec_name: create_runtime_str!("rococo"),
	impl_name: create_runtime_str!("parity-rococo-v2.0"),
	authoring_version: 0,
	spec_version: 10020,
	impl_version: 0,
	apis: RUNTIME_API_VERSIONS,
	transaction_version: 22,
	system_version: 1,
};
-
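For clarity, the mapping between the proposed system_version values and the two state versions could be written down as a small helper. This is only an illustrative sketch of the rule stated above; the function name and return type are hypothetical and not part of the proposed API.

```rust
/// Illustrative only: derive (storage state version, extrinsics state version)
/// from the proposed `system_version` value.
fn state_versions(system_version: u8) -> (u8, u8) {
    match system_version {
        0 => (0, 0), // storage V0, extrinsics V0
        1 => (1, 0), // storage V1, extrinsics V0
        2 => (1, 1), // storage V1, extrinsics V1
        v => panic!("unsupported system_version: {v}"),
    }
}
```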

Drawbacks

-

There should be no drawbacks, as it would replace state_version with the same behavior, but documentation should be updated so that chains know which system_version to use.

-

Testing, Security, and Privacy

-

AFAIK, should not have any impact on the security or privacy.

-

Performance, Ergonomics, and Compatibility

-

These changes should be compatible with existing chains as long as they use their current state_version value for system_version.

-

Performance

-

I do not believe there is any performance hit with this change.

-

Ergonomics

-

This does not break any exposed Apis.

-

Compatibility

-

This change should not break any compatibility.

-

Prior Art and References

-

We proposed introducing a similar change by adding a parameter to frame_system::Config, but did not feel that was the correct way of introducing this change.

-

Unresolved Questions

-

I do not have any specific questions about this change at the moment.

- -

IMO, this change is pretty self-contained and there won't be any future work necessary.

diff --git a/mdbook/text/0043-storage-proof-size-hostfunction.html b/mdbook/text/0043-storage-proof-size-hostfunction.html
deleted file mode 100644
index 6a83e6b9e..000000000

RFC-0043: Introduce storage_proof_size Host Function for Improved Parachain Block Utilization

|                 |                                                               |
| --------------- | ------------------------------------------------------------- |
| **Start Date**  | 30 October 2023                                               |
| **Description** | Host function to provide the storage proof size to runtimes.  |
| **Authors**     | Sebastian Kunert                                              |

Summary

-

This RFC proposes a new host function for parachains, storage_proof_size. It shall provide the size of the currently recorded storage proof to the runtime. Runtime authors can use the proof size to improve block utilization by retroactively reclaiming unused storage weight.

-

Motivation

-

The number of extrinsics that are included in a parachain block is limited by two constraints: execution time and proof size. FRAME weights cover both concepts, and block-builders use them to decide how many extrinsics to include in a block. However, these weights are calculated ahead of time by benchmarking on a machine with reference hardware. The execution-time properties of the state-trie and its storage items are unknown at benchmarking time. Therefore, we make some assumptions about the state-trie:

-
* Trie Depth: We assume a trie depth to account for intermediary nodes.
* Storage Item Size: We make a pessimistic assumption based on the MaxEncodedLen trait.

These pessimistic assumptions lead to an overestimation of storage weight, negatively impacting block utilization on parachains.

-

In addition, the current model does not account for multiple accesses to the same storage items. While these repetitive accesses will not increase storage-proof size, the runtime-side weight monitoring will account for them multiple times. Since the proof size is completely opaque to the runtime, we can not implement retroactive storage weight correction.

-

A solution must provide a way for the runtime to track the exact storage-proof size consumed on a per-extrinsic basis.

-

Stakeholders

-
* Parachain Teams: They MUST include this host function in their runtime and node.
* Light-client Implementors: They SHOULD include this host function in their runtime and node.

Explanation

-

This RFC proposes a new host function that exposes the storage-proof size to the runtime. As a result, runtimes can implement storage weight reclaiming mechanisms that improve block utilization.

-

This RFC proposes the following host function signature:

-
fn ext_storage_proof_size_version_1() -> u64;
-

The host function MUST return an unsigned 64-bit integer value representing the current proof size. In block-execution and block-import contexts, this function MUST return the current size of the proof. To achieve this, parachain node implementors need to enable proof recording for block imports. In other contexts, this function MUST return 18446744073709551615 (u64::MAX), which represents disabled proof recording.
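To illustrate how a runtime might consume this value, the sketch below tracks the proof size around a single extrinsic and computes how much proof size was actually used. This is a hedged sketch only: storage_proof_size() is a local stand-in stub for however the runtime interface ends up exposing the host function, and the reclaim step is merely indicated by a comment.

```rust
/// Stand-in for the proposed host function; in a real runtime this would be
/// provided through a runtime interface, not defined locally.
fn storage_proof_size() -> u64 {
    u64::MAX
}

/// Measure the proof size consumed while applying one extrinsic.
fn apply_with_proof_tracking(apply: impl FnOnce()) {
    let before = storage_proof_size();
    apply();
    let after = storage_proof_size();

    // u64::MAX signals that proof recording is disabled (e.g. an off-chain
    // context), so there is nothing to measure or reclaim.
    if before != u64::MAX && after != u64::MAX {
        let consumed = after.saturating_sub(before);
        // Compare `consumed` against the benchmarked proof-size weight of the
        // extrinsic and refund any overestimation to the block weight here.
        let _ = consumed;
    }
}
```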

-

Performance, Ergonomics, and Compatibility

-

Performance

-

Parachain nodes need to enable proof recording during block import to correctly implement the proposed host function. Benchmarking conducted with balance transfers has shown a performance reduction of around 0.6% when proof recording is enabled.

-

Ergonomics

-

The host function proposed in this RFC allows parachain runtime developers to keep track of the proof size. Typical usage patterns would be to keep track of the overall proof size or the difference between subsequent calls to the host function.

-

Compatibility

-

Parachain teams will need to include this host function to upgrade.

-

Prior Art and References

diff --git a/mdbook/text/0045-nft-deposits-asset-hub.html b/mdbook/text/0045-nft-deposits-asset-hub.html
deleted file mode 100644
index d38f813e8..000000000

RFC-0045: Lowering NFT Deposits on Asset Hub

|                 |                                                                                                                 |
| --------------- | --------------------------------------------------------------------------------------------------------------- |
| **Start Date**  | 2 November 2023                                                                                                 |
| **Description** | A proposal to reduce the minimum deposit required for collection creation on the Polkadot and Kusama Asset Hubs. |
| **Authors**     | Aurora Poppyseed, Just_Luuuu, Viki Val, Joe Petrowski                                                           |

Summary

-

This RFC proposes changing the current deposit requirements on the Polkadot and Kusama Asset Hub for creating an NFT collection, minting an individual NFT, and lowering its corresponding metadata and attribute deposits. The objective is to lower the barrier to entry for NFT creators, fostering a more inclusive and vibrant ecosystem while maintaining network integrity and preventing spam.

-

Motivation

-

The current deposit of 10 DOT for collection creation (along with 0.01 DOT for item deposit and 0.2 DOT for metadata and attribute deposits) on the Polkadot Asset Hub and 0.1 KSM on Kusama Asset Hub presents a significant financial barrier for many NFT creators. By lowering the deposit requirements, we aim to encourage more NFT creators to participate in the Polkadot NFT ecosystem, thereby enriching the diversity and vibrancy of the community and its offerings.

The initial introduction of a 10 DOT deposit was an arbitrary starting point that does not consider the actual storage footprint of an NFT collection. This proposal aims to adjust the deposit first to a value based on the deposit function, which calculates a deposit based on the number of keys introduced to storage and the size of corresponding values stored.

Further, it suggests a direction for a future of calculating deposits variably based on adoption and/or market conditions. There is a discussion on tradeoffs of setting deposits too high or too low.

-

Requirements

-
* Deposits SHOULD be derived from the deposit function, adjusted by a corresponding pricing mechanism.

Stakeholders

-
* NFT Creators: Primary beneficiaries of the proposed change, particularly those who found the current deposit requirements prohibitive.
* NFT Platforms: As the facilitator of artists' relations, NFT marketplaces have a vested interest in onboarding new users and making their platforms more accessible.
* dApp Developers: Making the blockspace more accessible will encourage developers to create and build unique dApps in the Polkadot ecosystem.
* Polkadot Community: Stands to benefit from an influx of artists, creators, and diverse NFT collections, enhancing the overall ecosystem.

Previous discussions have been held within the Polkadot Forum, with artists expressing their concerns about the deposit amounts.

-

Explanation

-

This RFC proposes a revision of the deposit constants in the configuration of the NFTs pallet on the -Polkadot Asset Hub. The new deposit amounts would be determined by a standard deposit formula.

-

As of v1.1.1, the Collection Deposit is 10 DOT and the Item Deposit is 0.01 DOT (see -here).

-

Based on the storage footprint of these items, this RFC proposes changing them to:

-
pub const NftsCollectionDeposit: Balance = system_para_deposit(1, 130);
pub const NftsItemDeposit: Balance = system_para_deposit(1, 164);
-

This results in the following deposits (calculated using this repository):

-

Polkadot

-
| Name                 | Current Rate (DOT) | Calculated with Function (DOT) |
| -------------------- | ------------------ | ------------------------------ |
| collectionDeposit    | 10                 | 0.20064                        |
| itemDeposit          | 0.01               | 0.20081                        |
| metadataDepositBase  | 0.20129            | 0.20076                        |
| attributeDepositBase | 0.2                | 0.2                            |
-

Similarly, the prices for Kusama were calculated as:

-

Kusama:

-
| Name                 | Current Rate (KSM) | Calculated with Function (KSM) |
| -------------------- | ------------------ | ------------------------------ |
| collectionDeposit    | 0.1                | 0.006688                       |
| itemDeposit          | 0.001              | 0.000167                       |
| metadataDepositBase  | 0.006709666617     | 0.0006709666617                |
| attributeDepositBase | 0.00666666666      | 0.000666666666                 |
-

Enhanced Approach to Further Lower Barriers for Entry

-

This RFC proposes further lowering these deposits below the rate normally charged for such a storage footprint. This is based on the economic argument that sub-rate deposits are a subsidy for growth and adoption of a specific technology. If the NFT functionality on Polkadot gains adoption, it makes it more attractive for future entrants, who would be willing to pay the non-subsidized rate because of the existing community.

-

Proposed Rate Adjustments

-
parameter_types! {
	pub const NftsCollectionDeposit: Balance = system_para_deposit(1, 130);
	pub const NftsItemDeposit: Balance = system_para_deposit(1, 164) / 40;
	pub const NftsMetadataDepositBase: Balance = system_para_deposit(1, 129) / 10;
	pub const NftsAttributeDepositBase: Balance = system_para_deposit(1, 0) / 10;
	pub const NftsDepositPerByte: Balance = system_para_deposit(0, 1);
}
-

This adjustment would result in the following DOT and KSM deposit values:

-
| Name                 | Proposed Rate Polkadot | Proposed Rate Kusama |
| -------------------- | ---------------------- | -------------------- |
| collectionDeposit    | 0.20064 DOT            | 0.006688 KSM         |
| itemDeposit          | 0.005 DOT              | 0.000167 KSM         |
| metadataDepositBase  | 0.002 DOT              | 0.0006709666617 KSM  |
| attributeDepositBase | 0.002 DOT              | 0.000666666666 KSM   |
-

Short- and Long-Term Plans

-

The plan presented above is recommended as an immediate step to make Polkadot a more attractive place to launch NFTs, although one would note that a forty-fold reduction in the Item Deposit is just as arbitrary as the value it was replacing. As explained earlier, this is meant as a subsidy to gain more momentum for NFTs on Polkadot.

In the long term, an implementation should account for what should happen to the deposit rates assuming that the subsidy is successful and attracts a lot of deployments. Many options are discussed in the Addendum.

The deposit should be calculated as a function of the number of existing collections, with maximum DOT and stablecoin values limiting the amount. With asset rates available via the Asset Conversion pallet, the system could take the lower value required. A sigmoid curve would make sense for this application to avoid sudden rate changes, as in:

-

$$ minDeposit + \frac{\min(DotDeposit, StableDeposit) - minDeposit}{1 + e^{a - b x}} $$

-

where the constant a moves the inflection to lower or higher x values, the constant b adjusts the rate of the deposit increase, and the independent variable x is the number of collections or items, depending on application.
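A minimal numeric sketch of that curve follows; all arguments are placeholders rather than proposed parameter values.

```rust
/// Illustrative sketch of the sigmoid deposit curve described above.
/// None of the constants are proposed values; real ones would be set by governance.
fn dynamic_deposit(min_deposit: f64, dot_deposit: f64, stable_deposit: f64, a: f64, b: f64, x: f64) -> f64 {
    // Take the lower of the DOT- and stablecoin-denominated caps.
    let cap = dot_deposit.min(stable_deposit);
    min_deposit + (cap - min_deposit) / (1.0 + (a - b * x).exp())
}
```

Assuming positive a and b, the deposit stays near minDeposit while x (the number of collections or items) is small, and approaches the lower of the two caps as x grows.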

-

Drawbacks

-

Modifying deposit requirements necessitates a balanced assessment of the potential drawbacks. Highlighted below are cogent points extracted from the discourse on the Polkadot Forum conversation, which provide critical perspectives on the implications of such changes.

-

Adjusting NFT deposit requirements on Polkadot and Kusama Asset Hubs involves key challenges:

-
1. State Growth and Technical Concerns: Lowering deposit requirements can lead to increased blockchain state size, potentially causing state bloat. This growth needs to be managed to prevent strain on the network's resources and maintain operational efficiency. As stated earlier, the deposit levels proposed here are intentionally low with the thesis that future participants would pay the standard rate.

2. Network Security and Market Response: Adapting to the cryptocurrency market's volatility is crucial. The mechanism for setting deposit amounts must be responsive yet stable, avoiding undue complexity for users.

3. Economic Impact on Previous Stakeholders: The change could have varied economic effects on previous (before the change) creators, platform operators, and investors. Balancing these interests is essential to ensure the adjustment benefits the ecosystem without negatively impacting its value dynamics. However, in the particular case of the Polkadot and Kusama Asset Hubs this does not pose a concern, since there are very few collections currently and thus previous stakeholders wouldn't be much affected. As of 9th January 2024 there are 42 collections on Polkadot Asset Hub and 191 on Kusama Asset Hub, with relatively low volume.

Testing, Security, and Privacy

-

Security concerns

-

As noted above, state bloat is a security concern. In the case of abuse, governance could adapt by -increasing deposit rates and/or using forceDestroy on collections agreed to be spam.

-

Performance, Ergonomics, and Compatibility

-

Performance

-

The primary performance consideration stems from the potential for state bloat due to increased -activity from lower deposit requirements. It's vital to monitor and manage this to avoid any -negative impact on the chain's performance. Strategies for mitigating state bloat, including -efficient data management and periodic reviews of storage requirements, will be essential.

-

Ergonomics

-

The proposed change aims to enhance the user experience for artists, traders, and utilizers of -Kusama and Polkadot Asset Hubs, making Polkadot and Kusama more accessible and user-friendly.

-

Compatibility

-

The change does not impact compatibility as a redeposit function is already implemented.

-

Unresolved Questions

-

If this RFC is accepted, there should not be any unresolved questions regarding how to adapt the -implementation of deposits for NFT collections.

-

Addendum

-

Several innovative proposals have been considered to enhance the network's adaptability and manage -deposit requirements more effectively. The RFC recommends a mixture of the function-based model and -the stablecoin model, but some tradeoffs of each are maintained here for those interested.

-

Enhanced Weak Governance Origin Model

-

The concept of a weak governance origin, controlled by a consortium like a system collective, has been proposed. This model would allow for dynamic adjustments of NFT deposit requirements in response to market conditions, adhering to storage deposit norms.

* Responsiveness: To address concerns about delayed responses, the model could incorporate automated triggers based on predefined market indicators, ensuring timely adjustments.
* Stability vs. Flexibility: Balancing stability with the need for flexibility is challenging. To mitigate the issue of frequent changes in DOT-based deposits, a mechanism for gradual and predictable adjustments could be introduced.
* Scalability: The model's scalability is a concern, given the numerous deposits across the system. A more centralized approach to deposit management might be needed to avoid constant, decentralized adjustments.

Function-Based Pricing Model

-

Another proposal is to use a mathematical function to regulate deposit prices, initially allowing low prices to encourage participation, followed by a gradual increase to prevent network bloat.

* Choice of Function: A logarithmic or sigmoid function is favored over an exponential one, as these functions increase prices at a rate that encourages participation while preventing prohibitive costs.
* Adjustment of Constants: To finely tune the pricing rise, one of the function's constants could correlate with the total number of NFTs on Asset Hub. This would align the deposit requirements with the actual usage and growth of the network.

Linking Deposit to USD(x) Value

-

This approach suggests pegging the deposit value to a stable currency like the USD, introducing predictability and stability for network users.

* Market Dynamics: One perspective is that fluctuations in native currency value naturally balance user participation and pricing, deterring network spam while encouraging higher-value collections. Conversely, there's an argument for allowing broader participation if the DOT/KSM value increases.
* Complexity and Risks: Implementing a USD-based pricing system could add complexity and potential risks. The implementation needs to be carefully designed to avoid unintended consequences, such as excessive reliance on external financial systems or currencies.

Each of these proposals offers unique advantages and challenges. The optimal approach may involve a -combination of these ideas, carefully adjusted to address the specific needs and dynamics of the -Polkadot and Kusama networks.

diff --git a/mdbook/text/0047-assignment-of-availability-chunks.html b/mdbook/text/0047-assignment-of-availability-chunks.html
deleted file mode 100644
index 479c75db8..000000000

RFC-0047: Assignment of availability chunks to validators

|                 |                                                                                |
| --------------- | ------------------------------------------------------------------------------ |
| **Start Date**  | 03 November 2023                                                               |
| **Description** | An evenly-distributing indirection layer between availability chunks and validators. |
| **Authors**     | Alin Dima                                                                      |

Summary

-

Propose a way of permuting the availability chunk indices assigned to validators, in the context of recovering available data from systematic chunks, with the purpose of fairly distributing network bandwidth usage.

-

Motivation

-

Currently, the ValidatorIndex is always identical to the ChunkIndex. Since the validator array is only shuffled once per session, naively using the ValidatorIndex as the ChunkIndex would pose an unreasonable stress on the first N/3 validators during an entire session, when favouring availability recovery from systematic chunks.

Therefore, the relay chain node needs a deterministic way of evenly distributing the first ~(N_VALIDATORS / 3) systematic availability chunks to different validators, based on the relay chain block and core. The main purpose is to ensure fair distribution of network bandwidth usage for availability recovery in general and in particular for systematic chunk holders.

-

Stakeholders

-

Relay chain node core developers.

-

Explanation

-

Systematic erasure codes

-

An erasure coding algorithm is considered systematic if it preserves the original unencoded data as part of the resulting code. The implementation of the erasure coding algorithm used for polkadot's availability data is systematic. Roughly speaking, the first N_VALIDATORS/3 chunks of data can be cheaply concatenated to retrieve the original data, without running the resource-intensive and time-consuming reconstruction algorithm.

You can find the concatenation procedure of systematic chunks for polkadot's erasure coding algorithm here.

In a nutshell, it performs a column-wise concatenation with 2-byte chunks. The output could be zero-padded at the end, so scale decoding must be aware of the expected length in bytes and ignore trailing zeros (this assertion is already being made for regular reconstruction).

-

Availability recovery at present

-

According to the polkadot protocol spec:

-
-

> A validator should request chunks by picking peers randomly and must recover at least f+1 chunks, where n=3f+k and k in {1,2,3}.

-
-

For parity's polkadot node implementation, the process was further optimised. At this moment, it works differently based on the estimated size of the available data:

(a) for small PoVs (up to 128 Kib), sequentially try requesting the unencoded data from the backing group, in a random order. If this fails, fallback to option (b).

(b) for large PoVs (over 128 Kib), launch N parallel requests for the erasure coded chunks (currently, N has an upper limit of 50), until enough chunks were recovered. Validators are tried in a random order. Then, reconstruct the original data.

All options require that after reconstruction, validators then re-encode the data and re-create the erasure chunks trie in order to check the erasure root.

-

Availability recovery from systematic chunks

-

As part of the effort of increasing polkadot's resource efficiency, scalability and performance, work is under way to modify the Availability Recovery protocol by leveraging systematic chunks. See this comment for preliminary performance results.

In this scheme, the relay chain node will first attempt to retrieve the ~N/3 systematic chunks from the validators that should hold them, before falling back to recovering from regular chunks, as before.

A re-encoding step is still needed for verifying the erasure root, so the erasure coding overhead cannot be completely brought down to 0.

Not being able to retrieve even one systematic chunk would make systematic reconstruction impossible. Therefore, backers can be used as a backup to retrieve a couple of missing systematic chunks, before falling back to retrieving regular chunks.

-

Chunk assignment function

-

Properties

-

The function that decides the chunk index for a validator will be parameterized by at least (validator_index, core_index) and have the following properties:

-
1. deterministic
2. relatively quick to compute and resource-efficient.
3. when considering a fixed core_index, the function should describe a permutation of the chunk indices
4. the validators that map to the first N/3 chunk indices should have as little overlap as possible for different cores.

In other words, we want a uniformly distributed, deterministic mapping from ValidatorIndex to ChunkIndex per core.

-

It's desirable to not embed this function in the runtime, for performance and complexity reasons. However, this means that the function needs to be kept very simple and with minimal or no external dependencies. Any change to this function could result in parachains being stalled and needs to be coordinated via a runtime upgrade or governance call.

-

Proposed function

-

Pseudocode:

-
pub fn get_chunk_index(
  n_validators: u32,
  validator_index: ValidatorIndex,
  core_index: CoreIndex
) -> ChunkIndex {
  let threshold = systematic_threshold(n_validators); // Roughly n_validators/3
  let core_start_pos = core_index * threshold;

  (core_start_pos + validator_index) % n_validators
}
-
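Conversely, a requester that wants a specific chunk (for example, one of the first ~N/3 systematic chunks) needs to know which validator to ask. Under the mapping above, that inverse can be computed as in the following sketch; it is illustrative only, and the helper name and plain integer types are not part of the RFC.

```rust
/// Illustrative inverse of the proposed mapping: which validator holds
/// `chunk_index` for the given core? Derived from
/// chunk = (core_index * threshold + validator_index) % n_validators.
fn validator_for_chunk(n_validators: u32, threshold: u32, core_index: u32, chunk_index: u32) -> u32 {
    let core_start_pos = (core_index * threshold) % n_validators;
    (chunk_index + n_validators - core_start_pos) % n_validators
}
```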

Network protocol

-

The request-response /req_chunk protocol will be bumped to a new version (from v1 to v2). For v1, the request and response payloads are:

-
/// Request an availability chunk.
pub struct ChunkFetchingRequest {
	/// Hash of candidate we want a chunk for.
	pub candidate_hash: CandidateHash,
	/// The index of the chunk to fetch.
	pub index: ValidatorIndex,
}

/// Receive a requested erasure chunk.
pub enum ChunkFetchingResponse {
	/// The requested chunk data.
	Chunk(ChunkResponse),
	/// Node was not in possession of the requested chunk.
	NoSuchChunk,
}

/// This omits the chunk's index because it is already known by
/// the requester and by not transmitting it, we ensure the requester is going to use his index
/// value for validating the response, thus making sure he got what he requested.
pub struct ChunkResponse {
	/// The erasure-encoded chunk of data belonging to the candidate block.
	pub chunk: Vec<u8>,
	/// Proof for this chunk's branch in the Merkle tree.
	pub proof: Proof,
}
-

Version 2 will add an index field to ChunkResponse:

-
#[derive(Debug, Clone, Encode, Decode)]
pub struct ChunkResponse {
	/// The erasure-encoded chunk of data belonging to the candidate block.
	pub chunk: Vec<u8>,
	/// Proof for this chunk's branch in the Merkle tree.
	pub proof: Proof,
	/// Chunk index.
	pub index: ChunkIndex
}
-

An important thing to note is that in version 1, the ValidatorIndex value is always equal to the ChunkIndex. Until the chunk rotation feature is enabled, this will also be true for version 2. However, after the feature is enabled, this will generally not be true.

The requester will send the request to the validator with index V. The responder will map the V validator index to the C chunk index and respond with the C-th chunk. This mapping can be seamless, by having each validator store their chunk by ValidatorIndex (just as before).

The protocol implementation MAY check the returned ChunkIndex against the expected mapping to ensure that it received the right chunk. In practice, this is desirable during availability-distribution and systematic chunk recovery. However, regular recovery may not check this index, which is particularly useful when participating in disputes that don't allow for easy access to the validator->chunk mapping. See Appendix A for more details.

-

In any case, the requester MUST verify the chunk's proof using the provided index.

-

During availability-recovery, given that the requester may not know (if the mapping is not available) whether the received chunk corresponds to the requested validator index, it has to keep track of received chunk indices and ignore duplicates. Such duplicates should be considered the same as an invalid/garbage response (drop it and move on to the next validator - we can't punish via reputation changes, because we don't know which validator misbehaved).

-

Upgrade path

-

Step 1: Enabling new network protocol

-

In the beginning, both /req_chunk/1 and /req_chunk/2 will be supported, until all validators and collators have upgraded to use the new version. V1 will be considered deprecated. During this step, the mapping will still be 1:1 (ValidatorIndex == ChunkIndex), regardless of protocol. Once all nodes are upgraded, a new release will be cut that removes the v1 protocol. Only once all nodes have upgraded to this version will step 2 commence.

-

Step 2: Enabling the new validator->chunk mapping

-

Considering that the Validator->Chunk mapping is critical to para consensus, the change needs to be enacted atomically via governance, only after all validators have upgraded the node to a version that is aware of this mapping, functionality-wise. It needs to be explicitly stated that after the governance enactment, validators that run older client versions that don't support this mapping will not be able to participate in parachain consensus.

-

Additionally, an error will be logged when starting a validator with an older version, after the feature was enabled.

-

On the other hand, collators will not be required to upgrade in this step (but are still required to upgrade for step 1), as regular chunk recovery will work as before, granted that version 1 of the networking protocol has been removed. Note that collators only perform availability-recovery in rare, adversarial scenarios, so it is fine to not optimise for this case and let them upgrade at their own pace.

-

To support enabling this feature via the runtime, we will use the NodeFeatures bitfield of the HostConfiguration struct (added in https://github.com/paritytech/polkadot-sdk/pull/2177). Adding and enabling a feature with this scheme does not require a runtime upgrade, but only a referendum that issues a Configuration::set_node_feature extrinsic. Once the feature is enabled and the new configuration is live, the validator->chunk mapping ceases to be a 1:1 mapping and systematic recovery may begin.

-

Drawbacks

-
* Getting access to the core_index that used to be occupied by a candidate in some parts of the dispute protocol is very complicated (see Appendix A). This RFC assumes that availability-recovery processes initiated during disputes will only use regular recovery, as before. This is acceptable since disputes are rare occurrences in practice and is something that can be optimised later, if need be. Adding the core_index to the CandidateReceipt would mitigate this problem and will likely be needed in the future for CoreJam and/or Elastic scaling. Related discussion about updating CandidateReceipt
* It's a breaking change that requires all validators and collators to upgrade their node version at least once.

Testing, Security, and Privacy

-

Extensive testing will be conducted - both automated and manual. -This proposal doesn't affect security or privacy.

-

Performance, Ergonomics, and Compatibility

-

Performance

-

This is a necessary data availability optimisation, as reed-solomon erasure coding has proven to be a top consumer of CPU time in polkadot as we scale up the parachain block size and number of availability cores.

With this optimisation, preliminary performance results show that CPU time used for reed-solomon coding/decoding can be halved and total PoV recovery time decreased by 80% for large PoVs. See more here.

-

Ergonomics

-

Not applicable.

-

Compatibility

-

This is a breaking change. See upgrade path section above. -All validators and collators need to have upgraded their node versions before the feature will be enabled via a -governance call.

-

Prior Art and References

-

See comments on the tracking issue and the -in-progress PR

-

Unresolved Questions

-

Not applicable.

- -

This enables future optimisations for the performance of availability recovery, such as retrieving batched systematic -chunks from backers/approval-checkers.

-

Appendix A

-

This appendix details the intricacies of getting access to the core index of a candidate in parity's polkadot node.

-

Here, core_index refers to the index of the core that a candidate was occupying while it was pending availability -(from backing to inclusion).

-

Availability-recovery can currently be triggered by the following phases in the polkadot protocol:

-
1. During the approval voting process.
2. By other collators of the same parachain.
3. During disputes.

Getting the right core index for a candidate can be troublesome. Here's a breakdown of how different parts of the node implementation can get access to it:

-
1. The approval-voting process for a candidate begins after observing that the candidate was included. Therefore, the node has easy access to the block where the candidate got included (and also the core that it occupied).

2. The pov_recovery task of the collators starts availability recovery in response to noticing a candidate getting backed, which enables easy access to the core index the candidate started occupying.

3. Disputes may be initiated on a number of occasions:

   3.a. is initiated by the validator as a result of finding an invalid candidate while participating in the approval-voting protocol. In this case, availability-recovery is not needed, since the validator already issued their vote.

   3.b is initiated by the validator noticing dispute votes recorded on-chain. In this case, we can safely assume that the backing event for that candidate has been recorded and kept in memory.

   3.c is initiated as a result of getting a dispute statement from another validator. It is possible that the dispute is happening on a fork that was not yet imported by this validator, so the subsystem may not have seen this candidate being backed.

A naive attempt of solving 3.c would be to add a new version for the disputes request-response networking protocol. Blindly passing the core index in the network payload would not work, since there is no way of validating that the reported core_index was indeed the one occupied by the candidate at the respective relay parent.

Another attempt could be to include in the message the relay block hash where the candidate was included. This information would be used in order to query the runtime API and retrieve the core index that the candidate was occupying. However, considering it's part of an unimported fork, the validator cannot call a runtime API on that block.

Adding the core_index to the CandidateReceipt would solve this problem and would enable systematic recovery for all dispute scenarios.

diff --git a/mdbook/text/0048-session-keys-runtime-api.html b/mdbook/text/0048-session-keys-runtime-api.html
deleted file mode 100644
index 85a84a69e..000000000

RFC-0048: Generate ownership proof for SessionKeys

|                 |                                                                                                        |
| --------------- | -------------------------------------------------------------------------------------------------------- |
| **Start Date**  | 13 November 2023                                                                                       |
| **Description** | Change SessionKeys runtime api to support generating an ownership proof for the on chain registration. |
| **Authors**     | Bastian Köcher                                                                                         |

Summary

-

This RFC proposes changes to the SessionKeys::generate_session_keys runtime api interface. This runtime api is used by validator operators to generate new session keys on a node. The public session keys are then registered manually on chain by the validator operator. Before this RFC it was not possible for the on chain logic to ensure that the account setting the public session keys is also in possession of the private session keys. To solve this, the RFC proposes to pass the account id of the account doing the registration on chain to generate_session_keys. Further, this RFC proposes to change the return value of the generate_session_keys function to return not only the public session keys, but also the proof of ownership for the private session keys. The validator operator will then need to send the public session keys and the proof together when registering new session keys on chain.

-

Motivation

-

When submitting the new public session keys to the on chain logic there doesn't exist any verification of possession of the private session keys. This means that users can basically register any kind of public session keys on chain. While the on chain logic ensures that there are no duplicate keys, someone could try to prevent others from registering new session keys by setting them first. While this wouldn't bring the "attacker" any kind of advantage, more like disadvantages (potential slashes on their account), it could prevent someone from e.g. changing their session key in the event of a private session key leak.

-

After this RFC this kind of attack would not be possible anymore, because the on chain logic can verify that the sending account -is in ownership of the private session keys.

-

Stakeholders

-
* Polkadot runtime implementors
* Polkadot node implementors
* Validator operators

Explanation

-

We are first going to explain the proof format being used:

-
type Proof = (Signature, Signature, ..);
-

The proof is a SCALE encoded tuple of the signatures of each private session key signing the account_id. The actual type of each signature depends on the corresponding session key cryptographic algorithm. The order of the signatures in the proof is the same as the order of the session keys in the SessionKeys type declared in the runtime.

-

The version of the SessionKeys needs to be bumped to 1 to reflect the changes to the -signature of SessionKeys_generate_session_keys:

-
pub struct OpaqueGeneratedSessionKeys {
	pub keys: Vec<u8>,
	pub proof: Vec<u8>,
}

fn SessionKeys_generate_session_keys(account_id: Vec<u8>, seed: Option<Vec<u8>>) -> OpaqueGeneratedSessionKeys;
-

The default calling convention for runtime apis is applied, meaning the parameters are passed as a SCALE encoded array together with the length of the encoded array. The return value is the SCALE encoded return value packed into a u64 (array_ptr | length << 32). So, the actual exported function signature looks like:

-
fn SessionKeys_generate_session_keys(array: *const u8, len: usize) -> u64;
-

The on chain logic for setting the SessionKeys needs to be changed as well. It already gets the proof passed as Vec<u8>. This proof needs to be decoded to the actual Proof type as explained above. The proof and the SCALE encoded account_id of the sender are used to verify the ownership of the SessionKeys.
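A rough sketch of that verification step follows. It is illustrative only: the PublicKey, Signature and verify placeholders stand in for the scheme-specific key, signature, and signature-check of each session key type, and are not part of the proposed interface.

```rust
/// Placeholder types; real session keys use several schemes (e.g. sr25519,
/// ed25519), each with its own key and signature types.
struct PublicKey;
struct Signature;

/// Placeholder for the scheme-specific signature check.
fn verify(_key: &PublicKey, _sig: &Signature, _msg: &[u8]) -> bool {
    unimplemented!("scheme-specific verification")
}

/// Illustrative sketch: every private session key must have signed the
/// SCALE-encoded account id of the sender, in declaration order.
fn verify_ownership(account_id_encoded: &[u8], keys: &[PublicKey], proof: &[Signature]) -> bool {
    keys.len() == proof.len()
        && keys.iter().zip(proof).all(|(key, sig)| verify(key, sig, account_id_encoded))
}
```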

-

Drawbacks

-

Validator operators need to pass their account id when rotating their session keys in a node. This will require updating some high level docs and making users familiar with the slightly changed ergonomics.

-

Testing, Security, and Privacy

-

Testing of the new changes only requires passing an appropriate owner for the current testing context. -The changes to the proof generation and verification got audited to ensure they are correct.

-

Performance, Ergonomics, and Compatibility

-

Performance

-

The session key generation is an offchain process and thus doesn't influence the performance of the chain. Verifying the proof is done on chain as part of the transaction logic for setting the session keys. Verifying the proof requires one signature verification per individual session key. As setting the session keys happens quite rarely, it should not influence the overall system performance.

-

Ergonomics

-

The interfaces have been optimized to make it as easy as possible to generate the ownership proof.

-

Compatibility

-

Introduces a new version of the SessionKeys runtime api. Thus, nodes should be updated before a runtime is enacted that contains these changes, otherwise they will fail to generate session keys. The RPC that exists around this runtime api needs to be updated to support passing the account id and returning the ownership proof alongside the public session keys.

-

UIs would need to be updated to support the new RPC and the changed on chain logic.

-

Prior Art and References

-

None.

-

Unresolved Questions

-

None.

- -

Substrate implementation of the RFC.

diff --git a/mdbook/text/0050-fellowship-salaries.html b/mdbook/text/0050-fellowship-salaries.html
deleted file mode 100644
index 3029d20b8..000000000

RFC-0050: Fellowship Salaries

|                 |                                                      |
| --------------- | ---------------------------------------------------- |
| **Start Date**  | 15 November 2023                                     |
| **Description** | Proposal to set rank-based Fellowship salary levels. |
| **Authors**     | Joe Petrowski, Gavin Wood                            |

Summary

-

The Fellowship Manifesto states that members should receive a monthly allowance on par with gross -income in OECD countries. This RFC proposes concrete amounts.

-

Motivation

-

One motivation for the Technical Fellowship is to provide an incentive mechanism that can induct and -retain technical talent for the continued progress of the network.

-

In order for members to uphold their commitment to the network, they should receive support to -ensure that their needs are met such that they have the time to dedicate to their work on Polkadot. -Given the high expectations of Fellows, it is reasonable to consider contributions and requirements -on par with a full-time job. Providing a livable wage to those making such contributions makes it -pragmatic to work full-time on Polkadot.

-

Note: Goals of the Fellowship, expectations for each Dan, and conditions for promotion and demotion -are all explained in the Manifesto. This RFC is only to propose concrete values for allowances.

-

Stakeholders

-
* Fellowship members
* Polkadot Treasury

Explanation

-

This RFC proposes agreeing on salaries relative to a single level, the III Dan. As such, changes to -the amount or asset used would only be on a single value, and all others would adjust relatively. A -III Dan is someone whose contributions match the expectations of a full-time individual contributor. -The salary at this level should be reasonably close to averages in OECD countries.

-
| Dan  | Factor |
| ---- | ------ |
| I    | 0.125  |
| II   | 0.25   |
| III  | 1      |
| IV   | 1.5    |
| V    | 2.0    |
| VI   | 2.5    |
| VII  | 2.5    |
| VIII | 2.5    |
| IX   | 2.5    |
-

Note that there is a sizable increase between II Dan (Proficient) and III Dan (Fellow). By the third -Dan, it is generally expected that one is working on Polkadot as their primary focus in a full-time -capacity.

-

Salary Asset

-

Although the Manifesto (Section 8) specifies a monthly allowance in DOT, this RFC proposes the use -of USDT instead. The allowance is meant to provide members stability in meeting their day-to-day -needs and recognize contributions. Using USDT provides more stability and less speculation.

-

This RFC proposes that a III Dan earn 80,000 USDT per year. The salary at this level is commensurate -with average salaries in OECD countries (note: 77,000 USD in the U.S., with an average engineer at -100,000 USD). The other ranks would thus earn:

-
| Dan  | Annual Salary |
| ---- | ------------- |
| I    | 10,000        |
| II   | 20,000        |
| III  | 80,000        |
| IV   | 120,000       |
| V    | 160,000       |
| VI   | 200,000       |
| VII  | 200,000       |
| VIII | 200,000       |
| IX   | 200,000       |
-

The salary levels for Architects (IV, V, and VI Dan) are typical of senior engineers.

-

Allowances will be managed by the Salary pallet.

-

Projections

-

Based on the current membership, the maximum yearly and monthly costs are shown below:

-
| Dan   | Salary  | Members | Yearly    | Monthly |
| ----- | ------- | ------- | --------- | ------- |
| I     | 10,000  | 27      | 270,000   | 22,500  |
| II    | 20,000  | 11      | 220,000   | 18,333  |
| III   | 80,000  | 8       | 640,000   | 53,333  |
| IV    | 120,000 | 3       | 360,000   | 30,000  |
| V     | 160,000 | 5       | 800,000   | 66,667  |
| VI    | 200,000 | 3       | 600,000   | 50,000  |
| > VI  | 200,000 | 0       | 0         | 0       |
| Total |         |         | 2,890,000 | 240,833 |
-

Note that these are the maximum amounts; members may choose to take a passive (lower) level. On the -other hand, more people will likely join the Fellowship in the coming years.

-

Updates

-

Updates to these levels, whether relative ratios, the asset used, or the amount, shall be done via -RFC.

-

Drawbacks

-

By not using DOT for payment, the protocol relies on the stability of other assets and the ability -to acquire them. However, the asset of choice can be changed in the future.

-

Testing, Security, and Privacy

-

N/A.

-

Performance, Ergonomics, and Compatibility

-

Performance

-

N/A

-

Ergonomics

-

N/A

-

Compatibility

-

N/A

-

Prior Art and References

- -

Unresolved Questions

-

None at present.

diff --git a/mdbook/text/0056-one-transaction-per-notification.html b/mdbook/text/0056-one-transaction-per-notification.html
deleted file mode 100644
index b1132aa96..000000000

RFC-0056: Enforce only one transaction per notification

|                 |                                                                                       |
| --------------- | --------------------------------------------------------------------------------------- |
| **Start Date**  | 2023-11-30                                                                            |
| **Description** | Modify the transactions notifications protocol to always send only one transaction at a time |
| **Authors**     | Pierre Krieger                                                                        |

Summary

-

When two peers connect to each other, they open (amongst other things) a so-called "notifications protocol" substream dedicated to gossiping transactions to each other.

-

Each notification on this substream currently consists in a SCALE-encoded Vec<Transaction> where Transaction is defined in the runtime.

-

This RFC proposes to modify the format of the notification to become (Compact(1), Transaction). This maintains backwards compatibility, as this new format decodes as a Vec of length equal to 1.

-

Motivation

-

There exists three motivations behind this change:

-
* It is technically impossible to decode a SCALE-encoded Vec<Transaction> into a list of SCALE-encoded transactions without knowing how to decode a Transaction. That's because a Vec<Transaction> consists in several Transactions one after the other in memory, without any delimiter that indicates the end of a transaction and the start of the next. Unfortunately, the format of a Transaction is runtime-specific. This means that the code that receives notifications is necessarily tied to a specific runtime, and it is not possible to write runtime-agnostic code.

* Notifications protocols are already designed to be optimized to send many items. Currently, when it comes to transactions, each item is a Vec<Transaction> that consists in multiple sub-items of type Transaction. This two-steps hierarchy is completely unnecessary, and was originally written at a time when the networking protocol of Substrate didn't have proper multiplexing.

* It makes the implementation way more straight-forward by not having to repeat code related to back-pressure. See explanations below.

Stakeholders

-

Low-level developers.

-

Explanation

-

To give an example, if you send one notification with three transactions, the bytes that are sent on the wire are:

-
concat(
    leb128(total-size-in-bytes-of-the-rest),
    scale(compact(3)), scale(transaction1), scale(transaction2), scale(transaction3)
)
-
-

But you can also send three notifications of one transaction each, in which case it is:

-
concat(
    leb128(size(scale(transaction1)) + 1), scale(compact(1)), scale(transaction1),
    leb128(size(scale(transaction2)) + 1), scale(compact(1)), scale(transaction2),
    leb128(size(scale(transaction3)) + 1), scale(compact(1)), scale(transaction3)
)
-
-

Right now the sender can choose which of the two encoding to use. This RFC proposes to make the second encoding mandatory.

-

The format of the notification would become a SCALE-encoded (Compact(1), Transaction). A SCALE-compact encoded 1 is one byte of value 4. In other words, the format of the notification would become concat(&[4], scale_encoded_transaction). This is equivalent to forcing the Vec<Transaction> to always have a length of 1, and I expect the Substrate implementation to simply modify the sending side to add a for loop that sends one notification per item in the Vec.
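A minimal sketch of what this means in practice, assuming the parity-scale-codec crate for the Compact encoding (the function names are illustrative and not part of any existing API):

```rust
use parity_scale_codec::{Compact, Encode};

/// Build the bytes of one notification under the proposed format:
/// a compact-encoded length of 1 followed by the opaque SCALE-encoded transaction.
fn encode_notification(scale_encoded_transaction: &[u8]) -> Vec<u8> {
    let mut out = Compact(1u32).encode(); // encodes to the single byte 0x04
    out.extend_from_slice(scale_encoded_transaction);
    out
}

/// On the receiving side, the opaque transaction bytes can be extracted
/// without knowing how to decode a Transaction: just strip the leading 0x04.
fn extract_transaction(notification: &[u8]) -> Option<&[u8]> {
    notification.strip_prefix(&[4])
}
```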

-

As explained in the motivation section, this allows extracting scale(transaction) items without having to know how to decode them.

-

By "flattening" the two-steps hierarchy, an implementation only needs to back-pressure individual notifications rather than back-pressure notifications and transactions within notifications.

-

Drawbacks

-

This RFC chooses to maintain backwards compatibility at the cost of introducing a very small wart (the Compact(1)).

-

An alternative could be to introduce a new version of the transactions notifications protocol that sends one Transaction per notification, but this is significantly more complicated to implement and can always be done later in case the Compact(1) is bothersome.

-

Testing, Security, and Privacy

-

Irrelevant.

-

Performance, Ergonomics, and Compatibility

-

Performance

-

Irrelevant.

-

Ergonomics

-

Irrelevant.

-

Compatibility

-

The change is backwards compatible if done in two steps: modify the sender to always send one transaction per notification, then, after a while, modify the receiver to enforce the new format.

-

Prior Art and References

-

Irrelevant.

-

Unresolved Questions

-

None.

- -

None. This is a simple isolated change.

diff --git a/mdbook/text/0059-nodes-capabilities-discovery.html b/mdbook/text/0059-nodes-capabilities-discovery.html
deleted file mode 100644
index 5260d49b1..000000000

RFC-0059: Add a discovery mechanism for nodes based on their capabilities

|                 |                                                                                  |
| --------------- | ---------------------------------------------------------------------------------- |
| **Start Date**  | 2023-12-18                                                                       |
| **Description** | Nodes having certain capabilities register themselves in the DHT to be discoverable |
| **Authors**     | Pierre Krieger                                                                   |

Summary

-

This RFC proposes to make the mechanism of RFC #8 more generic by introducing the concept of "capabilities".

-

Implementations can implement certain "capabilities", such as serving old block headers or being a parachain bootnode.

-

The discovery mechanism of RFC #8 is extended to be able to discover nodes of specific capabilities.

-

Motivation

-

The Polkadot peer-to-peer network is made of nodes. Not all these nodes are equal. Some nodes store only the headers of recent blocks, some nodes store all the block headers and bodies since the genesis, some nodes store the storage of all blocks since the genesis, and so on.

-

It is currently not possible to know ahead of time (without connecting to it and asking) which nodes have which data available, and it is not easily possible to build a list of nodes that have a specific piece of data available.

-

If you want to download for example the header of block 500, you have to connect to a randomly-chosen node, ask it for block 500, and if it says that it doesn't have the block, disconnect and try another randomly-chosen node. In certain situations such as downloading the storage of old blocks, nodes that have the information are relatively rare, and finding through trial and error a node that has the data can take a long time.

-

This RFC attempts to solve this problem by giving the possibility to build a list of nodes that are capable of serving specific data.

-

Stakeholders

-

Low-level client developers. People interested in accessing the archive of the chain.

-

Explanation

-

Reading RFC #8 first might help with comprehension, as this RFC is very similar.

-

Please keep in mind while reading that everything below applies for both relay chains and parachains, except mentioned otherwise.

-

Capabilities

-

This RFC defines a list of so-called capabilities:

-
* Head of chain provider. An implementation with this capability must be able to serve to other nodes block headers, block bodies, justifications, calls proofs, and storage proofs of "recent" (see below) blocks, and, for relay chains, to serve to other nodes warp sync proofs where the starting block is a session change block and must participate in Grandpa and Beefy gossip.
* History provider. An implementation with this capability must be able to serve to other nodes block headers and block bodies of any block since the genesis, and must be able to serve to other nodes justifications of any session change block since the genesis up until and including their currently finalized block.
* Archive provider. This capability is a superset of History provider. In addition to the requirements of History provider, an implementation with this capability must be able to serve call proofs and storage proof requests of any block since the genesis up until and including their currently finalized block.
* Parachain bootnode (only for relay chains). An implementation with this capability must be able to serve the network request described in RFC 8.

More capabilities might be added in the future.

-

In the context of the head of chain provider, the word "recent" means: any not-finalized-yet block that is equal to or an ancestor of a block that it has announced through a block announce, and any finalized block whose height is superior to its current finalized block minus 16. This does not include blocks that have been pruned because they're not a descendant of its current finalized block. In other words, blocks that aren't a descendant of the current finalized block can be thrown away. A gap of blocks is required due to race conditions: when a node finalizes a block, it takes some time for its peers to be made aware of this, during which they might send requests concerning older blocks. The choice of the number of blocks in this gap is arbitrary.

-

Substrate is currently by default a head of chain provider. After it has finished warp syncing, it downloads the list of old blocks, after which it becomes a history provider. If Substrate is instead configured as an archive node, then it downloads all blocks since the genesis and builds their state, after which it becomes an archive provider, history provider, and head of chain provider. If blocks pruning is enabled and the chain is a relay chain, then Substrate unfortunately doesn't implement any of these capabilities, not even head of chain provider. This is considered a bug that should be fixed, see https://github.com/paritytech/polkadot-sdk/issues/2733.

-

DHT provider registration

-

This RFC heavily relies on the functionalities of the Kademlia DHT already in use by Polkadot. You can find a link to the specification here.

-

Implementations that have the history provider capability should register themselves as providers under the key sha256(concat("history", randomness)).

-

Implementations that have the archive provider capability should register themselves as providers under the key sha256(concat("archive", randomness)).

-

Implementations that have the parachain bootnode capability should register themselves as providers under the key sha256(concat(scale_compact(para_id), randomness)), as described in RFC 8.

-

"Register themselves as providers" consists in sending ADD_PROVIDER requests to nodes close to the key, as described in the Content provider advertisement section of the specification.

-

The value of randomness can be found in the randomness field when calling the BabeApi_currentEpoch function.

-

In order to avoid downtime when the key changes, nodes should also register themselves under a secondary key that uses a value of randomness equal to the randomness field returned by BabeApi_nextEpoch.

-

Implementers should be aware that their implementation of Kademlia might already hash the key before XOR'ing it. The key is not meant to be hashed twice.
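To illustrate how these registration keys could be derived, here is a hedged sketch using the sha2 crate; the capability string and the 32-byte randomness value come from the rules above, while the helper name and its exact shape are purely illustrative:

```rust
use sha2::{Digest, Sha256};

/// Illustrative helper: derives the DHT provider key for a capability,
/// e.g. `provider_key(b"history", &randomness)` or `provider_key(b"archive", &randomness)`.
/// `randomness` is assumed to be the `randomness` field returned by
/// `BabeApi_currentEpoch` (or `BabeApi_nextEpoch` for the secondary key).
fn provider_key(capability: &[u8], randomness: &[u8; 32]) -> Vec<u8> {
    let mut hasher = Sha256::new();
    hasher.update(capability);
    hasher.update(randomness);
    // The resulting digest is used as-is in ADD_PROVIDER requests; it must not
    // be hashed a second time by the Kademlia implementation.
    hasher.finalize().to_vec()
}
```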

-

Implementations must not register themselves if they don't fulfill the capability yet. For example, a node configured to be an archive node but that is still building its archive state in the background must register itself only after it has finished building its archive.

-

Secondary DHTs

-

Implementations that have the history provider capability must also participate in a secondary DHT that comprises only nodes with that capability. The protocol name of that secondary DHT must be /<genesis-hash>/kad/history.

-

Similarly, implementations that have the archive provider capability must also participate in a secondary DHT that comprises only nodes with that capability and whose protocol name is /<genesis-hash>/kad/archive.

-

Just like implementations must not register themselves if they don't fulfill their capability yet, they must also not participate in the secondary DHT if they don't fulfill their capability yet.

-

Head of the chain providers

-

Implementations that have the head of the chain provider capability do not register themselves as providers, but instead are the nodes that participate in the main DHT. In other words, they are the nodes that serve requests of the /<genesis_hash>/kad protocol.

-

Any implementation that isn't a head of the chain provider (read: light clients) must not participate in the main DHT. This is already presently the case.

-

Implementations must not participate in the main DHT if they don't fulfill the capability yet. For example, a node that is still in the process of warp syncing must not participate in the main DHT. However, assuming that warp syncing doesn't last more than a few seconds, it is acceptable to ignore this requirement in order to avoid complicating implementations too much.

-

Drawbacks

-

None that I can see.

-

Testing, Security, and Privacy

-

The content of this section is basically the same as the one in RFC 8.

-

This mechanism doesn't add or remove any security by itself, as it relies on existing mechanisms.

-

Due to the way Kademlia works, it would become the responsibility of the 20 Polkadot nodes whose sha256(peer_id) is closest to the key (described in the explanations section) to store the list of nodes that have specific capabilities. Furthermore, when a large number of providers are registered, only the providers closest to the key are kept, up to a certain implementation-defined limit.

-

For this reason, an attacker can abuse this mechanism by randomly generating libp2p PeerIds until they find the 20 entries closest to the key representing the target capability. They are then in control of the list of nodes with that capability. While being in control of this list is not in itself directly harmful, it could facilitate eclipse attacks.

-

Because the key changes periodically and isn't predictable, and assuming that the Polkadot DHT is sufficiently large, it is not realistic for an attack like this to be maintained in the long term.

-

Performance, Ergonomics, and Compatibility

-

Performance

-

The DHT mechanism generally has a low overhead, especially given that publishing providers is done only every 24 hours.

-

Doing a Kademlia iterative query then sending a provider record shouldn't take more than around 50 kiB in total of bandwidth for the parachain bootnode.

-

Assuming 1000 nodes with a specific capability, the 20 Polkadot full nodes corresponding to that capability will each receive a sudden spike of a few megabytes of networking traffic when the key rotates. Again, this is relatively negligible. If this becomes a problem, one can add a random delay before a node registers itself to be the provider of the key corresponding to BabeApi_next_epoch.

-

Maybe the biggest uncertainty is the traffic that the 20 Polkadot full nodes will receive from light clients that want to know the nodes with a capability. If this ever becomes a problem, the value of 20 is an arbitrary constant that can be increased for more redundancy.

-

Ergonomics

-

Irrelevant.

-

Compatibility

-

Irrelevant.

-

Prior Art and References

-

Unknown.

-

Unresolved Questions

-

While it fundamentally doesn't change much in this RFC, using BabeApi_currentEpoch and BabeApi_nextEpoch might be inappropriate. I'm not familiar enough with good practices within the runtime to have an opinion here. Should it be an entirely new pallet?

- -

This RFC would make it possible to reliably discover archive nodes, which would make it possible to reliably send archive node requests, something that isn't currently possible. This could solve the problem of finding archive RPC node providers by migrating archive-related requests to the native peer-to-peer protocol rather than JSON-RPC.

-

If we ever decide to break backwards compatibility, we could divide the "history" and "archive" capabilities into two, between nodes capable of serving older blocks and nodes capable of serving newer blocks. We could even add to the peer-to-peer network nodes that are only capable of serving older blocks (by reading from a database) but do not participate in the head of the chain, and that just exist for historical purposes.

diff --git a/mdbook/text/0078-merkleized-metadata.html b/mdbook/text/0078-merkleized-metadata.html
deleted file mode 100644
index ff4a1f731..000000000
--- a/mdbook/text/0078-merkleized-metadata.html
+++ /dev/null
@@ -1,564 +0,0 @@

RFC-0078: Merkleized Metadata

-
| | |
| --------------- | --------------------------------------------------------------------------------------------- |
| **Start Date**  | 22 February 2024                                                                                |
| **Description** | Include merkleized metadata hash in extrinsic signature for trust-less metadata verification.   |
| **Authors**     | Zondax AG, Parity Technologies                                                                  |
-
-

Summary

-

To interact with chains in the Polkadot ecosystem it is required to know how transactions are encoded and how to read state. For doing this, Polkadot-SDK, the framework used by most of the chains in the Polkadot ecosystem, exposes metadata about the runtime to the outside. UIs, wallets, and others can use this metadata to interact with these chains. This makes the metadata a crucial piece of the transaction encoding as users are relying on the interacting software to encode the transactions in the correct format.

-

It gets even more important when the user signs the transaction in an offline wallet, as the device by its nature cannot get access to the metadata without relying on the online wallet to provide it. This means that the offline wallet needs to trust an online party, rendering the security assumptions of the offline device moot.

-

This RFC proposes a way for offline wallets to leverage metadata within the constraints of these devices. The design idea is that the metadata is chunked and these chunks are put into a merkle tree. The root hash of this merkle tree represents the metadata. The offline wallets can use the root hash to decode transactions by getting proofs for the individual chunks of the metadata. This root hash is also included in the signed data of the transaction (but not sent as part of the transaction). The runtime then includes its known metadata root hash when verifying the transaction. If the metadata root hash known by the runtime differs from the one that the offline wallet used, it very likely means that the online wallet provided some fake data and the verification of the transaction fails.

-

Users depend on offline wallets to correctly display decoded transactions before signing. With merkleized metadata, they can be assured of the transaction's legitimacy, as incorrect transactions will be rejected by the runtime.

-

Motivation

-

Polkadot's innovative design (both relay chain and parachains) gives developers the ability to upgrade their network as frequently as they need. These systems manage to keep integrations working after upgrades with the help of FRAME Metadata. This Metadata, which is in the order of half a MiB for most Polkadot-SDK chains, completely describes chain interfaces and properties. Securing this metadata is key for users to be able to interact with the Polkadot-SDK chain in the expected way.

-

On the other hand, offline wallets provide a secure way for blockchain users to hold their own keys (some do a better job than others). These devices seldom get upgraded, usually account for one particular network, and have very small internal memories. Currently, in the Polkadot ecosystem there is no secure way for these offline devices to know the latest Metadata of the Polkadot-SDK chain they are interacting with. This results in a plethora of similar yet slightly different offline wallets for all different Polkadot-SDK chains, as well as the impediment of keeping these regularly updated, thus not fully leveraging Polkadot-SDK's unique forkless upgrade feature.

-

The two main reasons why this is not possible today are:

-
    -
  1. Metadata is too large for offline devices. Currently, Polkadot-SDK metadata is on average 500 KiB, which is more than what most widely adopted offline devices can hold.
  2. Metadata is not authenticated. Even if there was enough space on offline devices to hold the metadata, the user would be trusting the entity providing this metadata to the hardware wallet. In the Polkadot ecosystem, this is how Polkadot Vault currently works.
-

This RFC proposes a solution to make FRAME Metadata compatible with offline signers in a secure way. As it leverages FRAME Metadata, it does not only ensure that offline devices can always keep up to date with every FRAME based chain, but also that every offline wallet will be compatible with all FRAME based chains, avoiding the need of per-chain implementations.

-

Requirements

-
    -
  1. Metadata's integrity MUST be preserved. If any compromise were to happen, extrinsics sent with compromised metadata SHOULD fail.
  2. Metadata information that could be used in signable extrinsic decoding MAY be included in digest, yet its inclusion MUST be indicated in signed extensions.
  3. Digest MUST be deterministic with respect to metadata.
  4. Digest MUST be cryptographically strong against pre-image, both first (finding an input that results in given digest) and second (finding an input that results in same digest as some other input given).
  5. Extra-metadata information necessary for extrinsic decoding and constant within runtime version MUST be included in digest.
  6. It SHOULD be possible to quickly withdraw offline signing mechanism without access to cold signing devices.
  7. Digest format SHOULD be versioned.
  8. Work necessary for proving metadata authenticity MAY be omitted at discretion of signer device design (to support automation tools).
-

Reduce metadata size

-

Metadata should be stripped from parts that are not necessary to parse a signable extrinsic, then it should be separated into a finite set of self-descriptive chunks. Thus, a subset of chunks necessary for signable extrinsic decoding and rendering could be sent, possibly in small portions (ultimately, one at a time), to cold devices together with the proof.

-
    -
  1. Single chunk with proof payload size SHOULD fit within a few kB;
  2. Chunks handling mechanism SHOULD support chunks being sent in any order without memory utilization overhead;
  3. Unused enum variants MUST be stripped (this has great impact on transmitted metadata size; examples: era enum, enum with all calls for call batching).
-

Stakeholders

-
    -
  • Runtime implementors
  • -
  • UI/wallet implementors
  • -
  • Offline wallet implementors
  • -
-

The idea for this RFC was brought up by runtime implementors and was extensively discussed with offline wallet implementors. It was designed in such a way that it can work easily with the existing offline wallet solutions in the Polkadot ecosystem.

-

Explanation

-

The FRAME metadata provides a wide range of information about a FRAME based runtime. It contains information about the pallets, the calls per pallet, the storage entries per pallet, runtime APIs, and type information about most of the types that are used in the runtime. For decoding extrinsics on an offline wallet, what is mainly required is type information. Most of the other information in the FRAME metadata is actually not required for decoding extrinsics and thus it can be removed. Therefore, the following is a proposal on a custom representation of the metadata and how this custom metadata is chunked, ensuring that only the needed chunks required for decoding a particular extrinsic are sent to the offline wallet. The necessary information to transform the FRAME metadata type information into the type information presented in this RFC will be provided. However, not every single detail on how to convert from FRAME metadata into the RFC type information is described.

-

First, the MetadataDigest is introduced. After that, ExtrinsicMetadata is covered and finally the actual format of the type information. Then pruning of unrelated type information is covered and how to generate the TypeRefs. In the last step, the merkle tree calculation is explained.

-

Metadata digest

-

The metadata digest is the compact representation of the metadata. The hash of this digest is the metadata hash. Below the type declaration of the Hash type and the MetadataDigest itself can be found:

-
type Hash = [u8; 32];

enum MetadataDigest {
    #[index = 1]
    V1 {
        type_information_tree_root: Hash,
        extrinsic_metadata_hash: Hash,
        spec_version: u32,
        spec_name: String,
        base58_prefix: u16,
        decimals: u8,
        token_symbol: String,
    },
}

The Hash is 32 bytes long and blake3 is used for calculating it. The hash of the MetadataDigest is calculated by blake3(SCALE(MetadataDigest)). Therefore, MetadataDigest is at first SCALE encoded, and then those bytes are hashed.
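As a hedged sketch of that computation (assuming the blake3 and parity-scale-codec crates, and that MetadataDigest implements Encode, which the declaration above does not spell out), the hash would be computed like this:

```rust
use parity_scale_codec::Encode;

/// Sketch only: SCALE-encode the digest first, then hash the encoded bytes with blake3.
fn metadata_hash(digest: &MetadataDigest) -> [u8; 32] {
    *blake3::hash(&digest.encode()).as_bytes()
}
```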

-

The MetadataDigest itself is represented as an enum. This is done to make it future proof, because a SCALE encoded enum is prefixed by the index of the variant. This index represents the version of the digest. As seen above, there is no index zero and it starts directly with one. Version one of the digest contains the following elements:

-
    -
  • type_information_tree_root: The root of the merkleized type information tree.
  • -
  • extrinsic_metadata_hash: The hash of the extrinsic metadata.
  • -
  • spec_version: The spec_version of the runtime as found in the RuntimeVersion when generating the metadata. While this information can also be found in the metadata, it is hidden in a big blob of data. To avoid transferring this big blob of data, we directly add this information here.
  • -
  • spec_name: Similar to spec_version, but being the spec_name found in the RuntimeVersion.
  • -
  • base58_prefix: The SS58 prefix used for address encoding.
  • -
  • decimals: The number of decimals for the token.
  • -
  • token_symbol: The symbol of the token.
  • -
-

Extrinsic metadata

-

For decoding an extrinsic, more information on what types are being used is required. The actual format of the extrinsic is the format as described in the Polkadot specification. The metadata for an extrinsic is as follows:

-
struct ExtrinsicMetadata {
    version: u8,
    address_ty: TypeRef,
    call_ty: TypeRef,
    signature_ty: TypeRef,
    signed_extensions: Vec<SignedExtensionMetadata>,
}

struct SignedExtensionMetadata {
    identifier: String,
    included_in_extrinsic: TypeRef,
    included_in_signed_data: TypeRef,
}

To begin with, TypeRef. This is a unique identifier for a type as found in the type information. Using this TypeRef, it is possible to look up the type in the type information tree. More details on this process can be found in the section Generating TypeRef.

-

The actual ExtrinsicMetadata contains the following information:

-
    -
  • version: The version of the extrinsic format. As of writing this, the latest version is 4.
  • -
  • address_ty: The address type used by the chain.
  • -
  • call_ty: The call type used by the chain. The call in FRAME based runtimes represents the type of transaction being executed on chain. It references the actual function to execute and the parameters of this function.
  • -
  • signature_ty: The signature type used by the chain.
  • -
  • signed_extensions: FRAME based runtimes can extend the base extrinsic with extra information. This extra information that is put into an extrinsic is called "signed extensions". These extensions offer the runtime developer the possibility to include data directly into the extrinsic, like nonce, tip, amongst others. This means that this data is sent alongside the extrinsic to the runtime. The other possibility these extensions offer is to include extra information only in the signed data that is signed by the sender. This means that this data needs to be known by both sides, the signing side and the verification side. An example for this kind of data is the genesis hash that ensures that extrinsics are unique per chain. Another example is the metadata hash itself that will also be included in the signed data. The offline wallets need to know which signed extensions are present in the chain and this is communicated to them using this field.
  • -
-

The SignedExtensionMetadata provides information about a signed extension:

-
    -
  • identifier: The identifier of the signed extension. An identifier is required to be unique in the Polkadot ecosystem as otherwise extrinsics may be built incorrectly.
  • -
  • included_in_extrinsic: The type that will be included in the extrinsic by this signed extension.
  • -
  • included_in_signed_data: The type that will be included in the signed data by this signed extension.
  • -
-

Type Information

-

As SCALE is not self descriptive like JSON, a decoder always needs to know the format of the type to decode it properly. This is where the type information comes into play. The format of the extrinsic is fixed as described above and ExtrinsicMetadata provides information on which type information is required for which part of the extrinsic. So, offline wallets only need access to the actual type information. It is a requirement that the type information can be chunked into logical pieces to reduce the amount of data that is sent to the offline wallets for decoding the extrinsics. So, the type information is structured in the following way:

-
struct Type {
    path: Vec<String>,
    type_def: TypeDef,
    type_id: Compact<u32>,
}

enum TypeDef {
    Composite(Vec<Field>),
    Enumeration(EnumerationVariant),
    Sequence(TypeRef),
    Array(Array),
    Tuple(Vec<TypeRef>),
    BitSequence(BitSequence),
}

struct Field {
    name: Option<String>,
    ty: TypeRef,
    type_name: Option<String>,
}

struct Array {
    len: u32,
    type_param: TypeRef,
}

struct BitSequence {
    num_bytes: u8,
    least_significant_bit_first: bool,
}

struct EnumerationVariant {
    name: String,
    fields: Vec<Field>,
    index: Compact<u32>,
}

enum TypeRef {
    Bool,
    Char,
    Str,
    U8,
    U16,
    U32,
    U64,
    U128,
    U256,
    I8,
    I16,
    I32,
    I64,
    I128,
    I256,
    CompactU8,
    CompactU16,
    CompactU32,
    CompactU64,
    CompactU128,
    CompactU256,
    Void,
    PerId(Compact<u32>),
}

The Type declares the structure of a type. The type has the following fields:

-
    -
  • path: A path declares the position of a type locally to the place where it is defined. The path is not globally unique, this means that there can be multiple types with the same path.
  • -
  • type_def: The high-level type definition, e.g. the type is a composition of fields where each field has a type, the type is a composition of different types as tuple etc.
  • -
  • type_id: The unique identifier of this type.
  • -
-

Every Type is composed of multiple different types. Each of these "sub types" can reference either a full Type again or reference one of the primitive types. This is where TypeRef becomes relevant as the type referencing information. To reference a Type in the type information, a unique identifier is used. As primitive types can be represented using a single byte, they are not put as separate types into the type information. Instead the primitive types are directly part of TypeRef to not require the overhead of referencing them in an extra Type. The special primitive type Void represents a type that encodes to nothing and can be decoded from nothing. As FRAME doesn't support Compact as primitive type it requires a more involved implementation to convert a FRAME type to a Compact primitive type. SCALE only supports u8, u16, u32, u64 and u128 as Compact which maps onto the primitive type declaration in the RFC. One special case is a Compact that wraps an empty Tuple which is expressed as primitive type Void.

-

The TypeDef variants have the following meaning:

-
    -
  • Composite: A struct like type that is composed of multiple different fields. Each Field can have its own type. The order of the fields is significant. A Composite with no fields is expressed as primitive type Void.
  • -
  • Enumeration: Stores an EnumerationVariant. An EnumerationVariant is a struct that is described by a name, an index, and a vector of Fields, each of which can have its own type. Typically Enumerations have more than just one variant, and in those cases Enumeration will appear multiple times in the type information, each time with a different variant. Enumerations can become quite large, yet usually only one variant is required for decoding a type, therefore this design brings optimizations and helps reduce the size of the proof. An Enumeration with no variants is expressed as the primitive type Void.
  • -
  • Sequence: A vector like type wrapping the given type.
  • -
  • BitSequence: A vector storing bits. num_bytes represents the size in bytes of the internal storage. If least_significant_bit_first is true the least significant bit is first, otherwise the most significant bit is first.
  • -
  • Array: A fixed-length array of a specific type.
  • -
  • Tuple: A composition of multiple types. A Tuple that is composed of no types is expressed as primitive type Void.
  • -
-

Using the type information together with the SCALE specification provides enough information on how to decode types.

-

Prune unrelated Types

-

The FRAME metadata contains not only the type information for decoding extrinsics, but also type information about storage types. The scope of this RFC is only about decoding transactions on offline wallets. Thus, a lot of type information can be pruned. To know which type information is required to decode all possible extrinsics, ExtrinsicMetadata has been defined. The extrinsic metadata contains all the types that define the layout of an extrinsic. Therefore, all the types that are accessible from the types declared in the extrinsic metadata can be collected. Collecting all accessible types requires recursively iterating over all types, starting from the types in ExtrinsicMetadata. Note that some types are accessible but don't appear in the final type information and thus can be pruned as well. These are, for example, the inner types of Compact or the types referenced by BitSequence. The result of collecting these accessible types is a list of all the types that are required to decode each possible extrinsic.
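The collection step can be pictured with a small sketch; the closure standing in for "give me the sub-types of this type" is hypothetical and not a FRAME metadata API:

```rust
use std::collections::BTreeSet;

/// Sketch: walk the type graph starting from the ids referenced by
/// ExtrinsicMetadata and collect every reachable type id exactly once.
fn collect_accessible(start: &[u32], sub_types_of: impl Fn(u32) -> Vec<u32>) -> BTreeSet<u32> {
    let mut seen = BTreeSet::new();
    let mut stack: Vec<u32> = start.to_vec();
    while let Some(id) = stack.pop() {
        if seen.insert(id) {
            // Follow every reference of this type recursively.
            stack.extend(sub_types_of(id));
        }
    }
    seen
}

fn main() {
    // Toy type graph: 0 -> [1, 2], 1 -> [2], 2 -> [].
    let edges = vec![vec![1, 2], vec![2], vec![]];
    let reachable = collect_accessible(&[0], |id| edges[id as usize].clone());
    assert_eq!(reachable.into_iter().collect::<Vec<_>>(), vec![0, 1, 2]);
}
```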

-

Generating TypeRef

-

Each TypeRef basically references one of the following types:

-
    -
  • One of the primitive types. All primitive types can be represented by 1 byte and thus, they are directly part of the TypeRef itself to remove an extra level of indirection.
  • -
  • A Type using its unique identifier.
  • -
-

In FRAME metadata a primitive type is represented like any other type. So, the first step is to remove all the primitive-only types from the list of types that were generated in the previous section. The resulting list of types is sorted using the id provided by FRAME metadata. In the last step the TypeRefs are created. Each reference to a primitive type is replaced by one of the corresponding TypeRef primitive type variants, and every other reference is replaced by the type's unique identifier. The unique identifier of a type is the index of the type in our sorted list. For Enumerations all variants have the same unique identifier, even though they are represented as multiple entries in the type information. All variants need to have the same unique identifier because the reference doesn't know which variant will appear in the actual encoded data.

-
let mut pruned_types = get_pruned_types();

for ty in pruned_types {
    if ty.is_primitive_type() {
        pruned_types.remove(ty);
    }
}

pruned_types.sort(|(left, right)|
    if left.frame_metadata_id() == right.frame_metadata_id() {
        left.variant_index() < right.variant_index()
    } else {
        left.frame_metadata_id() < right.frame_metadata_id()
    }
);

fn generate_type_ref(ty, ty_list) -> TypeRef {
    if ty.is_primitive_type() {
        return TypeRef::primitive_from_ty(ty);
    }

    TypeRef::from_id(
        // Determine the id by using the position of the type in the
        // list of unique frame metadata ids.
        ty_list.position_by_frame_metadata_id(ty.frame_metadata_id())
    )
}

fn replace_all_sub_types_with_type_refs(ty, ty_list) -> Type {
    for sub_ty in ty.sub_types() {
        replace_all_sub_types_with_type_refs(sub_ty, ty_list);
        sub_ty = generate_type_ref(sub_ty, ty_list)
    }

    ty
}

let mut final_ty_list = Vec::new();
for ty in pruned_types {
    final_ty_list.push(replace_all_sub_types_with_type_refs(ty, ty_list))
}

Building the Merkle Tree Root

-

A complete binary merkle tree with blake3 as the hashing function is proposed. For building the merkle tree root, the initial data has to be hashed as a first step. This initial data is referred to as the leaves of the merkle tree. The leaves need to be sorted to make the tree root deterministic. The type information is sorted using its unique identifiers and, for Enumerations, variants are sorted using their index. After sorting and hashing all leaves, two leaves have to be combined into one hash. The combination of these two hashes is referred to as a node.

-
let nodes = leaves;
while nodes.len() > 1 {
    let right = nodes.pop_back();
    let left = nodes.pop_back();
    nodes.push_front(blake3::hash(scale::encode((left, right))));
}

let merkle_tree_root = if nodes.is_empty() { [0u8; 32] } else { nodes.back() };
-

The merkle_tree_root in the end is the last node left in the list of nodes. If there are no nodes left in the list, it means that the initial data set was empty. In this case, an all-zeros hash is used to represent the empty tree.

-

Building a tree with 5 leaves (numbered 0 to 4):

-
nodes: 0 1 2 3 4

nodes: [3, 4] 0 1 2

nodes: [1, 2] [3, 4] 0

nodes: [[3, 4], 0] [1, 2]

nodes: [[[3, 4], 0], [1, 2]]
-

The resulting tree visualized:

-
     [root]
     /    \
    *      *
   / \    / \
  *   0  1   2
 / \
3   4
-

Building a tree with 6 leaves (numbered 0 to 5):

-
nodes: 0 1 2 3 4 5

nodes: [4, 5] 0 1 2 3

nodes: [2, 3] [4, 5] 0 1

nodes: [0, 1] [2, 3] [4, 5]

nodes: [[2, 3], [4, 5]] [0, 1]

nodes: [[[2, 3], [4, 5]], [0, 1]]
-

The resulting tree visualized:

-
       [root]
      /      \
     *        *
   /   \     / \
  *     *   0   1
 / \   / \
2   3 4   5
-

Inclusion in an Extrinsic

-

To ensure that the offline wallet used the correct metadata to show the extrinsic to the user the metadata hash needs to be included in the extrinsic. The metadata hash is generated by hashing the SCALE encoded MetadataDigest:

-
blake3::hash(SCALE::encode(MetadataDigest::V1 { .. }))
-

For the runtime the metadata hash is generated at compile time. Wallets will have to generate the hash using the FRAME metadata.

-

The signing side should control whether it wants to add the metadata hash or omit it. To accomplish this, one extra byte is added to the extrinsic itself. If this byte is 0 the metadata hash is not required, and if the byte is 1 the metadata hash is added using V1 of the MetadataDigest. This leaves room for future versions of the MetadataDigest format. When the metadata hash should be included, it is only added to the data that is signed. This brings the advantage of not requiring the inclusion of 32 bytes in the extrinsic itself, because the runtime knows the metadata hash as well and can add it to the signed data if required. This is similar to the genesis hash, although that one isn't added conditionally to the signed data.
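A minimal sketch of the signing-side behaviour described above (function and parameter names are illustrative, not taken from any existing crate):

```rust
/// Sketch: the mode byte travels in the extrinsic, while the 32-byte metadata
/// hash is appended only to the data that is signed, never transmitted.
fn signed_payload(call_and_extension_data: &[u8], mode: u8, metadata_hash: [u8; 32]) -> Vec<u8> {
    let mut payload = call_and_extension_data.to_vec();
    payload.push(mode); // 0 = no metadata hash, 1 = hash of MetadataDigest::V1
    if mode == 1 {
        payload.extend_from_slice(&metadata_hash);
    }
    payload
}
```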

-

Drawbacks

-

The chunking may not be the optimal case for every kind of offline wallet.

-

Testing, Security, and Privacy

-

All implementations are required to strictly follow the RFC to generate the metadata hash. This includes which hash function to use and how to construct the metadata types tree. So, all implementations are following the same security criteria. As the chains will calculate the metadata hash at compile time, the build process needs to be trusted. However, this is already a solved problem in the Polkadot ecosystem by using reproducible builds. So, anyone can rebuild a chain runtime to ensure that a proposal is actually containing the changes as advertised.

-

Implementations can also be tested easily against each other by taking some metadata and ensuring that they all come to the same metadata hash.

-

Privacy of users should also not be impacted. This assumes that wallets will generate the metadata hash locally and don't leak any information to third party services about which chunks a user will send to their offline wallet. Besides that, there is no leak of private information as getting the raw metadata from the chain is an operation that is done by almost everyone.

-

Performance, Ergonomics, and Compatibility

-

Performance

-

There should be no measurable impact on performance to Polkadot or any other chain using this feature. The metadata root hash is calculated at compile time and at runtime it is optionally used when checking the signature of a transaction. This means that at runtime no performance heavy operations are done.

-

Ergonomics & Compatibility

-

The proposal alters the way a transaction is built, signed, and verified. So, this imposes some required changes to any kind of developer who wants to construct transactions for Polkadot or any chain using this feature. As the developer can pass 0 for disabling the verification of the metadata root hash, it can be easily ignored.

-

Prior Art and References

-

RFC 46 produced by the Alzymologist team is a previous work reference that goes in this direction as well.

-

On other ecosystems, there are other solutions to the problem of trusted signing. Cosmos for example has a standardized way of transforming a transaction into some textual representation and this textual representation is included in the signed data. Basically achieving the same as what the RFC proposes, but it requires that for every transaction applied in a block, every node in the network always has to generate this textual representation to ensure the transaction signature is valid.

-

Unresolved Questions

-

None.

Future Directions and Related Material
    -
  • Does it work with all kind of offline wallets?
  • -
  • Generic types currently appear multiple times in the metadata, once per instantiation. It may be useful to have a generic type appear only once in the metadata and declare the generic parameters at each instantiation.
  • -
  • The metadata doesn't contain any kind of semantic information. This means that the offline wallet, for example, doesn't know what a balance is. The current solution for this problem is to match on the type name, but this isn't a sustainable solution.
  • -
  • MetadataDigest only provides one token symbol and decimal value. However, a lot of chains support multiple tokens for paying fees etc. This is probably more a question of having semantic information, as mentioned above.
  • -
diff --git a/mdbook/text/0084-general-transaction-extrinsic-format.html b/mdbook/text/0084-general-transaction-extrinsic-format.html
deleted file mode 100644
index 4d5f1ba5f..000000000
--- a/mdbook/text/0084-general-transaction-extrinsic-format.html
+++ /dev/null
@@ -1,271 +0,0 @@

RFC-0084: General transactions in extrinsic format

-
| | |
| --------------- | -------------------------------------------------------------- |
| **Start Date**  | 12 March 2024                                                   |
| **Description** | Support more extrinsic types by updating the extrinsic format   |
| **Authors**     | George Pisaltu                                                  |
-
-

Summary

-

This RFC proposes a change to the extrinsic format to incorporate a new transaction type, the "general" transaction.

-

Motivation

-

"General" transactions, a new type of transaction that this RFC aims to support, are transactions which obey the runtime's extensions and have according extension data yet do not have hard-coded signatures. They are first described in Extrinsic Horizon and supported in 3685. They enable users to authorize origins in new, more flexible ways (e.g. ZK proofs, mutations over pre-authenticated origins). As of now, all transactions are limited to the account signing model for origin authorization and any additional origin changes happen in extrinsic logic, which cannot leverage the validation process of extensions.

-

An example of a use case for such an extension would be sponsoring the transaction fee for some other user. A new extension would be put in place to verify that a part of the initial payload was signed by the author under whom the extrinsic should run and to change the origin accordingly, while the payment for the whole transaction would be handled by a sponsor's account. A POC for this can be found in 3712.

-

The new "general" transaction type would coexist with both current transaction types for a while and, therefore, the current number of supported transaction types, capped at 2, is insufficient. A new extrinsic type must be introduced alongside the current signed and unsigned types. Currently, an encoded extrinsic's first byte indicate the type of extrinsic using the most significant bit - 0 for unsigned, 1 for signed - and the 7 following bits indicate the extrinsic format version, which has been equal to 4 for a long time.

-

By taking one bit from the extrinsic format version encoding, we can support 2 additional extrinsic types while also having a minimal impact on our capability to extend and change the extrinsic format in the future.

-

Stakeholders

-
    -
  • Runtime users
  • -
  • Runtime devs
  • -
  • Wallet devs
  • -
-

Explanation

-

An extrinsic is currently encoded as one byte to identify the extrinsic type and version. This RFC aims to change the interpretation of this byte regarding the reserved bits for the extrinsic type and version. In the following explanation, bits represented using T make up the extrinsic type and bits represented using V make up the extrinsic version.

-

Currently, the bit allocation within the leading encoded byte is 0bTVVV_VVVV. In practice in the Polkadot ecosystem, the leading byte would be 0bT000_0100 as the version has been equal to 4 for a long time.

-

This RFC proposes for the bit allocation to change to 0bTTVV_VVVV. As a result, the extrinsic format version will be bumped to 5 and the extrinsic type bit representation would change as follows:

-
| bits | type     |
| ---- | -------- |
| 00   | unsigned |
| 10   | signed   |
| 01   | reserved |
| 11   | reserved |
-
-
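As an illustration of the proposed layout (my own sketch, not part of the RFC text), a decoder would split the leading byte as follows:

```rust
/// Sketch: split the leading byte under the proposed 0bTTVV_VVVV layout.
fn parse_leading_byte(byte: u8) -> (u8, u8) {
    let extrinsic_type = byte >> 6;          // two most significant bits
    let format_version = byte & 0b0011_1111; // six remaining bits
    (extrinsic_type, format_version)
}

fn main() {
    // 0b10 = signed, format version 5 under the proposed scheme.
    assert_eq!(parse_leading_byte(0b1000_0101), (0b10, 5));
}
```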

Drawbacks

-

This change would reduce the maximum possible transaction version from the current 127 to 63. In order to bypass the new, lower limit, the extrinsic format would have to change again.

-

Testing, Security, and Privacy

-

There is no impact on testing, security or privacy.

-

Performance, Ergonomics, and Compatibility

-

This change would allow Polkadot to support new types of transactions, with the specific "general" transaction type in mind at the time of writing this proposal.

-

Performance

-

There is no performance impact.

-

Ergonomics

-

The impact to developers and end-users is minimal as it would just be a bitmask update on their part for parsing the extrinsic type along with the version.

-

Compatibility

-

This change breaks backwards compatibility because any transaction that is neither signed nor unsigned, but a new transaction type, would be interpreted as having a future extrinsic format version.

-

Prior Art and References

-

The original design was proposed in the TransactionExtension PR, which is also the motivation behind this effort.

-

Unresolved Questions

-

None.

Future Directions and Related Material

Following this change, the "general" transaction type will be introduced as part of the Extrinsic Horizon effort, which will shape future work.

diff --git a/text/0001-agile-coretime.html b/text/0001-agile-coretime.html
deleted file mode 100644
index ad3d589e6..000000000
--- a/text/0001-agile-coretime.html
+++ /dev/null
@@ -1,736 +0,0 @@

RFC-1: Agile Coretime

-
| | |
| --------------- | -------------------------------------------------------------------------------------------- |
| **Start Date**  | 30 June 2023                                                                                   |
| **Description** | Agile periodic-sale-based model for assigning Coretime on the Polkadot Ubiquitous Computer.   |
| **Authors**     | Gavin Wood                                                                                     |
-
-

Summary

-

This proposes a periodic, sale-based method for assigning Polkadot Coretime, the analogue of "block space" within the Polkadot Network. The method takes into account the need for long-term capital expenditure planning for teams building on Polkadot, yet also provides a means to allow Polkadot to capture long-term value in the resource which it sells. It supports the possibility of building rich and dynamic secondary markets to optimize resource allocation and largely avoids the need for parameterization.

-

Motivation

-

Present System

-

The Polkadot Ubiquitous Computer, or just Polkadot UC, represents the public service provided by the Polkadot Network. It is a trust-free, WebAssembly-based, multicore, internet-native omnipresent virtual machine which is highly resilient to interference and corruption.

-

The present system of allocating the limited resources of the Polkadot Ubiquitous Computer is through a process known as parachain slot auctions. This is a parachain-centric paradigm whereby a single core is long-term allocated to a single parachain which itself implies a Substrate/Cumulus-based chain secured and connected via the Relay-chain. Slot auctions are on-chain candle auctions which proceed for several days and result in the core being assigned to the parachain for six months at a time up to 24 months in advance. Practically speaking, we only see two year periods being bid upon and leased.

-

Funds behind the bids made in the slot auctions are merely locked, they are not consumed or paid and become unlocked and returned to the bidder on expiry of the lease period. A means of sharing the deposit trustlessly known as a crowdloan is available allowing token holders to contribute to the overall deposit of a chain without any counterparty risk.

-

Problems

-

The present system is based on a model of one-core-per-parachain. This is a legacy interpretation of the Polkadot platform and is not a reflection of its present capabilities. By restricting ownership and usage to this model, more dynamic and resource-efficient means of utilizing the Polkadot Ubiquitous Computer are lost.

-

More specifically, it is impossible to lease out cores at anything less than six months, and apparently unrealistic to do so at anything less than two years. This removes the ability to dynamically manage the underlying resource, and generally experimentation, iteration and innovation suffer. It bakes into the platform an assumption of permanence for anything deployed into it and restricts the market's ability to find a more optimal allocation of the finite resource.

-

There is no ability to determine capital requirements for hosting a parachain beyond two years from the point of its initial deployment onto Polkadot. While it would be unreasonable to have perfect and indefinite cost predictions for any real-world platform, not having any clarity whatsoever beyond "market rates" two years hence can be a very off-putting prospect for teams to buy into.

-

However, quite possibly the most substantial problem is both a perceived and often real high barrier to entry of the Polkadot ecosystem. By forcing innovators to either raise seven-figure sums through investors or appeal to the wider token-holding community, Polkadot makes it difficult for a small band of innovators to deploy their technology into Polkadot. While not being actually permissioned, it is also far from the barrierless, permissionless ideal which an innovation platform such as Polkadot should be striving for.

-

Requirements

-
    -
  1. The solution SHOULD provide an acceptable value-capture mechanism for the Polkadot network.
  2. The solution SHOULD allow parachains and other projects deployed on to the Polkadot UC to make long-term capital expenditure predictions for the cost of ongoing deployment.
  3. The solution SHOULD minimize the barriers to entry in the ecosystem.
  4. The solution SHOULD work well when the Polkadot UC has up to 1,000 cores.
  5. The solution SHOULD work when the number of cores which the Polkadot UC can support changes over time.
  6. The solution SHOULD facilitate the optimal allocation of work to cores of the Polkadot UC, including by facilitating the trade of regular core assignment at various intervals and for various spans.
  7. The solution SHOULD avoid creating additional dependencies on functionality which the Relay-chain need not strictly provide for the delivery of the Polkadot UC.
-

Furthermore, the design SHOULD be implementable and deployable in a timely fashion; three months from the acceptance of this RFC should not be unreasonable.

-

Stakeholders

-

Primary stakeholder sets are:

-
    -
  • Protocol researchers and developers, largely represented by the Polkadot Fellowship and Parity Technologies' Engineering division.
  • -
  • Polkadot Parachain teams both present and future, and their users.
  • -
  • Polkadot DOT token holders.
  • -
-

Socialization:

-

The essentials of this proposal were presented at Polkadot Decoded 2023 Copenhagen on the Main Stage. A small amount of socialization at the Parachain Summit preceded it and some substantial discussion followed it. The Parity Ecosystem team is currently soliciting views from ecosystem teams who would be key stakeholders.

-

Explanation

-

Overview

-

Upon implementation of this proposal, the parachain-centric slot auctions and associated crowdloans cease. Instead, Coretime on the Polkadot UC is sold by the Polkadot System in two separate formats: Bulk Coretime and Instantaneous Coretime.

-

When a Polkadot Core is utilized, we say it is dedicated to a Task rather than a "parachain". The Task to which a Core is dedicated may change at every Relay-chain block and while one predominant type of Task is to secure a Cumulus-based blockchain (i.e. a parachain), other types of Tasks are envisioned.

-

Bulk Coretime is sold periodically on a specialised system chain known as the Coretime-chain and allocated in advance of its usage, whereas Instantaneous Coretime is sold on the Relay-chain immediately prior to usage on a block-by-block basis.

-

This proposal does not fix what should be done with revenue from sales of Coretime and leaves it for a further RFC process.

-

Owners of Bulk Coretime are tracked on the Coretime-chain and the ownership status and properties of the owned Coretime are exposed over XCM as a non-fungible asset.

-

At the request of the owner, the Coretime-chain allows a single Bulk Coretime asset, known as a Region, to be used in various ways including transferal to another owner, allocated to a particular task (e.g. a parachain) or placed in the Instantaneous Coretime Pool. Regions can also be split out, either into non-overlapping sub-spans or exactly-overlapping spans with less regularity.

-

The Coretime-Chain periodically instructs the Relay-chain to assign its cores to alternative tasks as and when Core allocations change due to new Regions coming into effect.

-

Renewal and Migration

-

There is a renewal system which allows a Bulk Coretime assignment of a single core to be renewed unchanged with a known price increase from month to month. Renewals are processed in a period prior to regular purchases, effectively giving them precedence over a fixed number of cores available.

-

Renewals are only enabled when a core's assignment does not include an Instantaneous Coretime allocation and has not been split into shorter segments.

-

Thus, renewals are designed to ensure only that committed parachains get some guarantees about price for predicting future costs. This price-capped renewal system only allows cores to be reused for their same tasks from month to month. In any other context, Bulk Coretime would need to be purchased regularly.

-

As a migration mechanism, pre-existing leases (from the legacy lease/slots/crowdloan framework) are initialized into the Coretime-chain and cores assigned to them prior to Bulk Coretime sales. In the sale where the lease expires, the system offers a renewal, as above, to allow a priority sale of Bulk Coretime and ensure that the Parachain suffers no downtime when transitioning from the legacy framework.

-

Instantaneous Coretime

-

Processing of Instantaneous Coretime happens in part on the Polkadot Relay-chain. Credit is purchased on the Coretime-chain for regular DOT tokens, and this results in a DOT-denominated Instantaneous Coretime Credit account on the Relay-chain being credited for the same amount.

-

Though the Instantaneous Coretime Credit account records a balance for an account identifier (very likely controlled by a collator), it is non-transferable and non-refundable. It can only be consumed in order to purchase some Instantaneous Coretime with immediate availability.

-

The Relay-chain reports this usage back to the Coretime-chain in order to allow it to reward the providers of the underlying Coretime, either the Polkadot System or owners of Bulk Coretime who contributed to the Instantaneous Coretime Pool.

-

Specifically the Relay-chain is expected to be responsible for:

-
    -
  • holding non-transferable, non-refundable DOT-denominated Instantaneous Coretime Credit balance information.
  • -
  • setting and adjusting the price of Instantaneous Coretime based on usage.
  • -
  • allowing collators to consume their Instantaneous Coretime Credit at the current pricing in exchange for the ability to schedule one PoV for near-immediate usage.
  • -
  • ensuring the Coretime-Chain has timely accounting information on Instantaneous Coretime Sales revenue.
  • -
-

Coretime-chain

-

The Coretime-chain is a new system parachain. It has the responsibility of providing the Relay-chain via UMP with information of:

-
    -
  • The number of cores which should be made available.
  • -
  • Which tasks should be running on which cores and in what ratios.
  • -
  • Accounting information for Instantaneous Coretime Credit.
  • -
-

It also expects information from the Relay-chain via DMP:

-
    -
  • The number of cores available to be scheduled.
  • -
  • Account information on Instantaneous Coretime Sales.
  • -
-

The specific interface is properly described in RFC-5.

-

Detail

-

Parameters

-

This proposal includes a number of parameters which need not necessarily be fixed. Their usage is explained below, but their values are suggested or specified in the later section Parameter Values.

-

Reservations and Leases

-

The Coretime-chain includes some governance-set reservations of Coretime; these cover every System-chain. Additionally, governance is expected to initialize details of the pre-existing leased chains.

-

Regions

-

A Region is an assignable period of Coretime with a known regularity.

-

All Regions are associated with a unique Core Index, to identify which core the assignment of which ownership of the Region controls.

-

All Regions are also associated with a Core Mask, an 80-bit bitmap, to denote the regularity at which it may be scheduled on the core. If all bits are set in the Core Mask value, it is said to be Complete. 80 is selected since this results in the size of the datatype used to identify any Region of Polkadot Coretime to be a very convenient 128-bit. Additionally, if TIMESLICE (the number of Relay-chain blocks in a Timeslice) is 80, then a single bit in the Core Mask bitmap represents exactly one Core for one Relay-chain block in one Timeslice.

-

All Regions have a span. Region spans are quantized into periods of TIMESLICE blocks; BULK_PERIOD divides into TIMESLICE a whole number of times.

-

The Timeslice type is a u32 which can be multiplied by TIMESLICE to give a BlockNumber value representing the same quantity in terms of Relay-chain blocks.
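Putting the sizes above together, one possible layout of the 128-bit Region identifier looks like this (field names are illustrative and a 16-bit core index is assumed so that the three parts sum to the 128 bits mentioned above; the RFC text only fixes the widths of the Core Mask and the Timeslice):

```rust
type Timeslice = u32;      // 32 bits
type CoreIndex = u16;      // 16 bits (assumed width for illustration)
type CoreMask = [u8; 10];  // 80 bits

/// Sketch: 32 + 16 + 80 bits = 128 bits in total.
struct RegionId {
    begin: Timeslice,
    core: CoreIndex,
    mask: CoreMask,
}
```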

-

Regions can be tasked to a TaskId (aka ParaId) or pooled into the Instantaneous Coretime Pool. This process can be Provisional or Final. If done only provisionally or not at all then they are fresh and have an Owner which is able to manipulate them further including reassignment. Once Final, then all ownership information is discarded and they cannot be manipulated further. Renewal is not possible when only provisionally tasked/pooled.

-

Bulk Sales

-

A sale of Bulk Coretime occurs on the Coretime-chain every BULK_PERIOD blocks.

-

In every sale, a BULK_LIMIT of individual Regions are offered for sale.

-

Each Region offered for sale has a different Core Index, ensuring that they each represent an independently allocatable resource on the Polkadot UC.

-

The Regions offered for sale have the same span: they last exactly BULK_PERIOD blocks, and begin immediately following the span of the previous Sale's Regions. The Regions offered for sale also have the complete, non-interlaced, Core Mask.

-

The Sale Period ends immediately as soon as span of the Coretime Regions that are being sold begins. At this point, the next Sale Price is set according to the previous Sale Price together with the number of Regions sold compared to the desired and maximum amount of Regions to be sold. See Price Setting for additional detail on this point.

-

Following the end of the previous Sale Period, there is an Interlude Period lasting INTERLUDE_PERIOD of blocks. After this period is elapsed, regular purchasing begins with the Purchasing Period.

-

This is designed to give at least two weeks worth of time for the purchased regions to be partitioned, interlaced, traded and allocated.

-

The Interlude

-

The Interlude period is a period prior to Regular Purchasing where renewals are allowed to happen. This has the effect of ensuring existing long-term tasks/parachains have a chance to secure their Bulk Coretime for a well-known price prior to general sales.

-

Regular Purchasing

-

Any account may purchase Regions of Bulk Coretime if they have the appropriate funds in place during the Purchasing Period, which is from INTERLUDE_PERIOD blocks after the end of the previous sale until the beginning of the Region of the Bulk Coretime which is for sale as long as there are Regions of Bulk Coretime left for sale (i.e. no more than BULK_LIMIT have already been sold in the Bulk Coretime Sale). The Purchasing Period is thus roughly BULK_PERIOD - INTERLUDE_PERIOD blocks in length.

-

The Sale Price varies during an initial portion of the Purchasing Period called the Leadin Period and then stays stable for the remainder. This initial portion is LEADIN_PERIOD blocks in duration. During the Leadin Period the price decreases towards the Sale Price, which it lands at by the end of the Leadin Period. The actual curve by which the price starts and descends to the Sale Price is outside the scope of this RFC, though a basic suggestion is provided in the Price Setting Notes, below.

-

Renewals

-

At any time when there are remaining Regions of Bulk Coretime to be sold, including during the Interlude Period, certain Bulk Coretime assignments may be Renewed. This is similar to a purchase in that funds must be paid and it consumes one of the Regions of Bulk Coretime which would otherwise be placed for purchase. However, there are two key differences.

-

Firstly, the price paid is the minimum of two values: RENEWAL_PRICE_CAP more than the price paid at the previous purchase/renewal, and the current (or initial, if the sale has yet to begin) regular Sale Price.

-

Secondly, the purchased Region comes preassigned with exactly the same workload as before. It cannot be traded, repartitioned, interlaced or exchanged. As such unlike regular purchasing the Region never has an owner.

-

Renewal is only possible for either cores which have been assigned as a result of a previous renewal, which are migrating from legacy slot leases, or which fill their Bulk Coretime with an unsegmented, fully and finally assigned workload which does not include placement in the Instantaneous Coretime Pool. The renewed workload will be the same as this initial workload.
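The price rule above can be sketched as follows (RENEWAL_PRICE_CAP is treated here as a percentage increase purely for illustration; its exact form and value are parameters of this proposal and are not defined by this sketch):

```rust
/// Sketch: the renewal price is the previous price plus the allowed cap,
/// but never more than the current regular Sale Price.
fn renewal_price(previous_price: u128, sale_price: u128, cap_percent: u128) -> u128 {
    let capped = previous_price + previous_price * cap_percent / 100;
    capped.min(sale_price)
}
```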

-

Manipulation

-

Regions may be manipulated in various ways by their owner:

-
    -
  1. Transferred in ownership.
  2. Partitioned into quantized, non-overlapping segments of Bulk Coretime with the same ownership.
  3. Interlaced into multiple Regions over the same period whose eventual assignments take turns to be scheduled.
  4. Assigned to a single, specific task (identified by TaskId aka ParaId). This may be either provisional or final.
  5. Pooled into the Instantaneous Coretime Pool, in return for a pro-rata amount of the revenue from the Instantaneous Coretime Sales over its period.
-

Enactment

-

Specific functions of the Coretime-chain

-

Several functions of the Coretime-chain SHALL be exposed through dispatchables and/or a nonfungible trait implementation integrated into XCM:

-

1. transfer

-

Regions may have their ownership transferred.

-

A transfer(region: RegionId, new_owner: AccountId) dispatchable shall have the effect of altering the current owner of the Region identified by region from the signed origin to new_owner.

-

An implementation of the nonfungible trait SHOULD include equivalent functionality. RegionId SHOULD be used for the AssetInstance value.

-

2. partition

-

Regions may be split apart into two non-overlapping interior Regions of the same Core Mask which together concatenate to the original Region.

-

A partition(region: RegionId, pivot: Timeslice) dispatchable SHALL have the effect of removing the Region identified by region and adding two new Regions of the same owner and Core Mask. One new Region will begin at the same point of the old Region but end at pivot timeslices into the Region, whereas the other will begin at this point and end at the end point of the original Region.

-

Also:

-
* The owner field of region must be equal to the Signed origin.
* pivot must equal neither the begin nor end fields of the region.

3. interlace

-

Regions may be decomposed into two Regions of the same span whose eventual assignments take turns on the core by virtue of having complementary Core Masks.

-

An interlace(region: RegionId, mask: CoreMask) dispatchable shall have the effect of removing the Region identified by region and creating two new Regions. The new Regions will each have the same span and owner of the original Region, but one Region will have a Core Mask equal to mask and the other will have Core Mask equal to the XOR of mask and the Core Mask of the original Region.
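A minimal sketch of the mask arithmetic involved is given below; the helper names are illustrative assumptions, and CoreMask is the 80-bit bitmap described later in the Notes on Types.

```rust
// Hypothetical sketch: the two complementary Core Masks produced by `interlace`.
type CoreMask = [u8; 10]; // 80-bit bitmap

fn xor_mask(a: CoreMask, b: CoreMask) -> CoreMask {
    let mut out = [0u8; 10];
    for i in 0..10 {
        out[i] = a[i] ^ b[i];
    }
    out
}

fn interlace_masks(original: CoreMask, mask: CoreMask) -> Option<(CoreMask, CoreMask)> {
    let some_bits_set = mask.iter().any(|&b| b != 0);
    let is_strict_subset =
        mask != original && mask.iter().zip(original.iter()).all(|(m, o)| m & !o == 0);
    if !some_bits_set || !is_strict_subset {
        return None; // the validity conditions listed under "Also" below
    }
    // One new Region keeps `mask`, the other receives the XOR with the original mask.
    Some((mask, xor_mask(original, mask)))
}
```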

-

Also:

-
* The owner field of region must be equal to the Signed origin.
* mask must have some bits set AND must not equal the Core Mask of the old Region AND must only have bits set which are also set in the old Region's Core Mask.

4. assign

-

Regions may be assigned to a core.

-

An assign(region: RegionId, target: TaskId, finality: Finality) dispatchable shall have the effect of placing an item in the workplan corresponding to the region's properties and assigned to the target task.

-

If the region's end has already passed (taking into account any advance notice requirements) then this operation is a no-op. If the region's beginning has already passed, then it is effectively altered to become the next schedulable timeslice.

-

finality may have the value of either Final or Provisional. If Final, then the operation is free, the region record is removed entirely from storage and renewal may be possible: if the Region's span is the entire BULK_PERIOD, then the Coretime-chain records in storage that the allocation happened during this period in order to facilitate the possibility for a renewal. (Renewal only becomes possible when the full Core Mask of a core is finally assigned for the full BULK_PERIOD.)

-

Also:

-
* The owner field of region must be equal to the Signed origin.

5. pool

-

Regions may be consumed in exchange for a pro rata portion of the Instantaneous Coretime Sales Revenue from its period and regularity.

-

A pool(region: RegionId, beneficiary: AccountId, finality: Finality) dispatchable shall have the effect of placing an item in the workplan corresponding to the region's properties and assigned to the Instantaneous Coretime Pool. The details of the region will be recorded in order to allow for a pro rata share of the Instantaneous Coretime Sales Revenue at the time of the Region relative to any other providers in the Pool.

-

If the region's end has already passed (taking into account any advance notice requirements) then this operation is a no-op. If the region's beginning has already passed, then it is effectively altered to become the next schedulable timeslice.

-

finality may have the value of either Final or Provisional. If Final, then the operation is free and the region record is removed entirely from storage.

-

Also:

-
* The owner field of region must be equal to the Signed origin.

6. Purchases

-

A dispatchable purchase(price_limit: Balance) shall be provided. Any account may call purchase to purchase Bulk Coretime at the maximum price of price_limit.

-

This may be called successfully only:

-
1. during the regular Purchasing Period;
2. when the caller is a Signed origin and their account balance is reducible by the current sale price;
3. when the current sale price is no greater than price_limit; and
4. when the number of cores already sold is less than BULK_LIMIT.

If successful, the caller's account balance is reduced by the current sale price and a new Region item for the following Bulk Coretime span is issued with the owner equal to the caller's account.
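The checks described above can be summarised in a short sketch; the SaleState shape and names are assumptions made for illustration, not the dispatchable's real signature.

```rust
// Illustrative-only sketch of the conditions under which `purchase` succeeds.
type Balance = u128;

struct SaleState {
    in_purchasing_period: bool,
    current_price: Balance,
    cores_sold: u32,
    bulk_limit: u32,
}

fn can_purchase(sale: &SaleState, reducible_balance: Balance, price_limit: Balance) -> bool {
    sale.in_purchasing_period
        && reducible_balance >= sale.current_price // caller can pay the current sale price
        && sale.current_price <= price_limit       // price is within the caller's limit
        && sale.cores_sold < sale.bulk_limit       // Regions are still left for sale
}
```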

-

7. Renewals

-

A dispatchable renew(core: CoreIndex) shall be provided. Any account may call renew to purchase Bulk Coretime and renew an active allocation for the given core.

-

This may be called during the Interlude Period as well as the regular Purchasing Period and has the same effect as purchase followed by assign, except that:

-
1. The price of the sale is the Renewal Price (see below).
2. The Region is allocated exactly as the given core is currently allocated for the present Region.

Renewal is only valid where a Region's span is assigned to Tasks (not placed in the Instantaneous Coretime Pool) for the entire unsplit BULK_PERIOD over all of the Core Mask and with Finality. There are thus three possibilities of a renewal being allowed:

-
1. Purchased unsplit Coretime with final assignment to tasks over the full Core Mask.
2. Renewed Coretime.
3. A legacy lease which is ending.

Renewal Price

-

The Renewal Price is the minimum of the current regular Sale Price (or the initial Sale Price if in the Interlude Period) and the following, as sketched in the code below:

* If the workload being renewed came to be through the Purchase and Assignment of Bulk Coretime, then the price paid during that Purchase operation.
* If the workload being renewed was previously renewed, then the price paid during this previous Renewal operation plus RENEWAL_PRICE_CAP.
* If the workload being renewed is a migration from a legacy slot auction lease, then the nominal price for a Regular Purchase (outside of the Lead-in Period) of the Sale during which the legacy lease expires.
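A sketch of this computation, under the assumption that RENEWAL_PRICE_CAP is applied as the Perbill given in the Parameter Values table (i.e. as a percentage of the previously paid price); the names and shapes below are illustrative only.

```rust
// Hypothetical sketch of the Renewal Price; not the actual implementation.
type Balance = u128;
const PERBILL_DENOM: u128 = 1_000_000_000;

enum RenewalBasis {
    /// Workload created by a Purchase and final Assignment: the price paid then.
    Purchased { paid: Balance },
    /// Previously renewed workload: previous price plus RENEWAL_PRICE_CAP.
    Renewed { paid: Balance },
    /// Migration from a legacy slot lease: the nominal regular price of the sale
    /// in which the lease expires.
    LegacyLease { nominal_regular_price: Balance },
}

fn renewal_price(basis: RenewalBasis, cap_perbill: u128, current_sale_price: Balance) -> Balance {
    let base = match basis {
        RenewalBasis::Purchased { paid } => paid,
        RenewalBasis::Renewed { paid } => {
            paid.saturating_add(paid.saturating_mul(cap_perbill) / PERBILL_DENOM)
        }
        RenewalBasis::LegacyLease { nominal_regular_price } => nominal_regular_price,
    };
    base.min(current_sale_price)
}
```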

8. Instantaneous Coretime Credits

-

A dispatchable purchase_credit(amount: Balance, beneficiary: RelayChainAccountId) shall be provided. Any account with at least amount spendable funds may call this. This increases the Instantaneous Coretime Credit balance on the Relay-chain of the beneficiary by the given amount.

-

This Credit is consumable on the Relay-chain as part of the Task scheduling system and its specifics are out of the scope of this proposal. When consumed, revenue is recorded and provided to the Coretime-chain for proper distribution. The API for doing this is specified in RFC-5.

-

Notes on the Instantaneous Coretime Market

-

For an efficient market to form around the provision of Bulk-purchased Cores into the pool of cores available for Instantaneous Coretime purchase, it is crucial to ensure that price changes for the purchase of Instantaneous Coretime are reflected well in the revenues of private Coretime providers during the same period.

-

In order to ensure this, it is crucial that Instantaneous Coretime, once purchased, cannot be held indefinitely prior to eventual use since, if this were the case, a nefarious collator could purchase Coretime when cheap and utilize it some time later when expensive, depriving private Coretime providers of their revenue.

-

It must therefore be assumed that Instantaneous Coretime, once purchased, has a definite and short "shelf-life", after which it becomes unusable. This incentivizes collators to avoid purchasing Coretime unless they expect to utilize it imminently and thus helps create an efficient market-feedback mechanism whereby a higher price will actually result in material revenues for private Coretime providers who contribute to the pool of Cores available to service Instantaneous Coretime purchases.

-

Notes on Economics

-

The specific pricing mechanisms are out of scope for the present proposal. Proposals on economics should be properly described and discussed in another RFC. However, for the sake of completeness, I provide some basic illustration of how price setting could potentially work.

-

Bulk Price Progression

-

The present proposal assumes the existence of a price-setting mechanism which takes into account several parameters:

-
* OLD_PRICE: The price of the previous sale.
* BULK_TARGET: the target number of cores to be purchased as Bulk Coretime Regions or renewed during the previous sale.
* BULK_LIMIT: the maximum number of cores which could have been purchased/renewed during the previous sale.
* CORES_SOLD: the actual number of cores purchased/renewed in the previous sale.
* SELLOUT_PRICE: the price at which the most recent Bulk Coretime was purchased (not renewed) prior to selling more cores than BULK_TARGET (or immediately after, if none were purchased before). This may not have a value if no Bulk Coretime was purchased.

In general we would expect the price to increase the closer CORES_SOLD gets to BULK_LIMIT and to decrease the closer it gets to zero. If it is exactly equal to BULK_TARGET, then we would expect the price to remain the same.

-

In the edge case that no cores were purchased yet more cores were sold (through renewals) than the target, then we would also avoid altering the price.

-

A simple example of this would be the formula:

-
IF SELLOUT_PRICE == NULL AND CORES_SOLD > BULK_TARGET THEN
-    RETURN OLD_PRICE
-END IF
-EFFECTIVE_PRICE := IF CORES_SOLD > BULK_TARGET THEN
-    SELLOUT_PRICE
-ELSE
-    OLD_PRICE
-END IF
-NEW_PRICE := IF CORES_SOLD < BULK_TARGET THEN
-    EFFECTIVE_PRICE * MAX(CORES_SOLD, 1) / BULK_TARGET
-ELSE
-    EFFECTIVE_PRICE + EFFECTIVE_PRICE *
-        (CORES_SOLD - BULK_TARGET) / (BULK_LIMIT - BULK_TARGET)
-END IF
-
-

This exists only as a trivial example to demonstrate a basic solution exists, and should not be intended as a concrete proposal.
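A direct Rust rendering of the same illustrative formula is given below; it assumes BULK_LIMIT > BULK_TARGET and integer balances, and is no more a concrete proposal than the pseudocode above.

```rust
// Illustrative translation of the price-setting example above.
type Balance = u128;

fn new_bulk_price(
    old_price: Balance,
    sellout_price: Option<Balance>,
    cores_sold: u32,
    bulk_target: u32,
    bulk_limit: u32, // assumed strictly greater than `bulk_target`
) -> Balance {
    let effective = if cores_sold > bulk_target {
        match sellout_price {
            Some(p) => p,
            // Renewals alone exceeded the target and nothing was purchased: keep the price.
            None => return old_price,
        }
    } else {
        old_price
    };
    if cores_sold < bulk_target {
        effective * cores_sold.max(1) as Balance / bulk_target as Balance
    } else {
        effective
            + effective * (cores_sold - bulk_target) as Balance
                / (bulk_limit - bulk_target) as Balance
    }
}
```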

-

Intra-Leadin Price-decrease

-

During the Leadin Period of a sale, the effective price starts higher than the Sale Price and falls to end at the Sale Price at the end of the Leadin Period. The price can thus be defined as a simple factor above one on which the Sale Price is multiplied. A function which returns this factor would accept a factor between zero and one specifying the portion of the Leadin Period which has passed.

-

Thus, assuming SALE_PRICE, we can define PRICE as:

-
PRICE := SALE_PRICE * FACTOR((NOW - LEADIN_BEGIN) / LEADIN_PERIOD)
-
-

We can define a very simple progression where the price decreases monotonically from double the Sale Price at the beginning of the Leadin Period.

-
FACTOR(T) := 2 - T
-
-
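For completeness, the same progression expressed as a small sketch (floating point is used purely for illustration; a real implementation would use fixed-point arithmetic):

```rust
// Illustrative only: linear lead-in factor FACTOR(T) = 2 - T applied to the Sale Price.
fn leadin_factor(t: f64) -> f64 {
    2.0 - t.clamp(0.0, 1.0)
}

fn leadin_price(sale_price: f64, now: u64, leadin_begin: u64, leadin_period: u64) -> f64 {
    let t = now.saturating_sub(leadin_begin) as f64 / leadin_period as f64;
    sale_price * leadin_factor(t)
}
```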

Parameter Values

-

Parameters are either suggested or specified. If suggested, it is non-binding and the proposal should not be judged on the value since other RFCs and/or the governance mechanism of Polkadot is expected to specify/maintain it. If specified, then the proposal should be judged on the merit of the value as-is.

-
| Name              | Value                    |           |
| ----------------- | ------------------------ | --------- |
| BULK_PERIOD       | 28 * DAYS                | specified |
| INTERLUDE_PERIOD  | 7 * DAYS                 | specified |
| LEADIN_PERIOD     | 7 * DAYS                 | specified |
| TIMESLICE         | 8 * MINUTES              | specified |
| BULK_TARGET       | 30                       | suggested |
| BULK_LIMIT        | 45                       | suggested |
| RENEWAL_PRICE_CAP | Perbill::from_percent(2) | suggested |

Instantaneous Price Progression

-

This proposal assumes the existence of a Relay-chain-based price-setting mechanism for the Instantaneous Coretime Market which alters from block to block, taking into account several parameters: the last price, the size of the Instantaneous Coretime Pool (in terms of cores per Relay-chain block) and the amount of Instantaneous Coretime waiting for processing (in terms of Core-blocks queued).

-

The ideal situation is to have the size of the Instantaneous Coretime Pool be equal to some factor of the Instantaneous Coretime waiting. This allows all Instantaneous Coretime sales to be processed with some limited latency while giving limited flexibility over ordering to the Relay-chain apparatus which is needed for efficient operation.

-

If we set a factor of three, and thus aim to retain a queue of Instantaneous Coretime Sales which can be processed within three Relay-chain blocks, then we would increase the price if the queue goes above three times the amount of cores available, and decrease if it goes under.

-

Let us assume the values OLD_PRICE, FACTOR, QUEUE_SIZE and POOL_SIZE. A simple definition of the NEW_PRICE would be thus:

-
NEW_PRICE := IF QUEUE_SIZE < POOL_SIZE * FACTOR THEN
-    OLD_PRICE * 0.95
-ELSE
-    OLD_PRICE / 0.95
-END IF
-
-

This exists only as a trivial example to demonstrate a basic solution exists, and should not be intended as a concrete proposal.

-

Notes on Types

-

This exists only as a short illustration of a potential technical implementation and should not be treated as anything more.

-

Regions

-

This data schema achieves a number of goals:

-
* Coretime can be individually traded at a level of a single usage of a single core.
* Coretime Regions, of arbitrary span and up to 1/80th interlacing, can be exposed as NFTs and exchanged.
* Any Coretime Region can be contributed to the Instantaneous Coretime Pool.
* Unlimited number of individual Coretime contributors to the Instantaneous Coretime Pool. (Effectively limited only in number of cores and interlacing level; with current values this would allow 80,000 individual payees per timeslice.)
* All keys are self-describing.
* Workload to communicate core (re-)assignments is well-bounded and low in weight.
* All mandatory bookkeeping workload is well-bounded in weight.
-
#![allow(unused)]
-fn main() {
-type Timeslice = u32; // 80 block amounts.
-type CoreIndex = u16;
-type CoreMask = [u8; 10]; // 80-bit bitmap.
-
-// 128-bit (16 bytes)
-struct RegionId {
-    begin: Timeslice,
-    core: CoreIndex,
-    mask: CoreMask,
-}
-// 296-bit (37 bytes)
-struct RegionRecord {
-    end: Timeslice,
-    owner: AccountId,
-}
-
-map Regions = Map<RegionId, RegionRecord>;
-
-// 40-bit (5 bytes). Could be 32-bit with a more specialised type.
-enum CoreTask {
-    Off,
-    Assigned { target: TaskId },
-    InstaPool,
-}
-// 120-bit (15 bytes). Could be 14 bytes with a specialised 32-bit `CoreTask`.
-struct ScheduleItem {
-    mask: CoreMask, // 80 bit
-    task: CoreTask, // 40 bit
-}
-
-/// The work we plan on having each core do at a particular time in the future.
-type Workplan = Map<(Timeslice, CoreIndex), BoundedVec<ScheduleItem, 80>>;
-/// The current workload of each core. This gets updated with workplan as timeslices pass.
-type Workload = Map<CoreIndex, BoundedVec<ScheduleItem, 80>>;
-
-enum Contributor {
-    System,
-    Private(AccountId),
-}
-
-struct ContributionRecord {
-    begin: Timeslice,
-    end: Timeslice,
-    core: CoreIndex,
-    mask: CoreMask,
-    payee: Contributor,
-}
-type InstaPoolContribution = Map<ContributionRecord, ()>;
-
-type SignedTotalMaskBits = u32;
-type InstaPoolIo = Map<Timeslice, SignedTotalMaskBits>;
-
-type PoolSize = Value<TotalMaskBits>;
-
-/// Counter for the total CoreMask which could be dedicated to a pool. `u32` so we don't ever get
-/// an overflow.
-type TotalMaskBits = u32;
-struct InstaPoolHistoryRecord {
-    total_contributions: TotalMaskBits,
-    maybe_payout: Option<Balance>,
-}
-/// Total InstaPool rewards for each Timeslice and the number of core Mask which contributed.
-type InstaPoolHistory = Map<Timeslice, InstaPoolHistoryRecord>;
-}
-

CoreMask tracks unique "parts" of a single core. It is used with interlacing in order to give a unique identifier to each component of any possible interlacing configuration of a core, allowing for simple self-describing keys for all core ownership and allocation information. It also allows for each core's workload to be tracked and updated progressively, keeping ongoing compute costs well-bounded and low.

-

Regions are issued into the Regions map and can be transferred, partitioned and interlaced as the owner desires. Regions can only be tasked if they begin after the current scheduling deadline (if they have missed this, then the region can be auto-trimmed until it is).

-

Once tasked, they are removed from there and a record is placed in Workplan. In addition, if they are contributed to the Instantaneous Coretime Pool, then an entry is placed in InstaPoolContribution and InstaPoolIo.

-

Each timeslice, InstaPoolIo is used to update the current value of PoolSize. A new entry in InstaPoolHistory is inserted, with the total_contributions field of InstaPoolHistoryRecord being informed by the PoolSize value. Each core has its Workload mutated according to its Workplan for the upcoming timeslice.

-

When Instantaneous Coretime Market Revenues are reported for a particular timeslice from the Relay-chain, this information gets placed in the maybe_payout field of the relevant record of InstaPoolHistory.

-

Payment can be requested for any record in InstaPoolContribution whose begin is the key for a value in InstaPoolHistory whose maybe_payout is Some. In this case, total_contributions is reduced by the number of bits set in the ContributionRecord's mask and a pro rata amount is paid. The ContributionRecord is mutated by incrementing begin, or removed if begin becomes equal to end.
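A simplified sketch of one such payout step follows; the field names mirror the types above, but the function itself is an illustrative assumption, not pallet code.

```rust
// Hypothetical sketch: pro-rata share owed to one contribution for one timeslice.
type Balance = u128;

struct HistoryRecord {
    total_contributions: u32,      // total contributing mask bits for the timeslice
    maybe_payout: Option<Balance>, // revenue reported by the Relay-chain, if any
}

/// Returns the amount owed for the record's current `begin` timeslice, or None if
/// revenue has not been reported yet. The caller then bumps the contribution's
/// `begin` (removing it once begin == end) and reduces `total_contributions`
/// by the contribution's mask bits.
fn payout_step(contribution_mask_bits: u32, history: &HistoryRecord) -> Option<Balance> {
    let payout = history.maybe_payout?;
    if history.total_contributions == 0 {
        return None;
    }
    Some(payout * contribution_mask_bits as Balance / history.total_contributions as Balance)
}
```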

-

Example:

-
#![allow(unused)]
-fn main() {
-// Simple example with a `u16` `CoreMask` and bulk sold in 100 timeslices.
-Regions:
-{ core: 0u16, begin: 100, mask: 0b1111_1111_1111_1111u16 } => { end: 200u32, owner: Alice };
-// First split @ 50
-Regions:
-{ core: 0u16, begin: 100, mask: 0b1111_1111_1111_1111u16 } => { end: 150u32, owner: Alice };
-{ core: 0u16, begin: 150, mask: 0b1111_1111_1111_1111u16 } => { end: 200u32, owner: Alice };
-// Share half of first 50 blocks
-Regions:
-{ core: 0u16, begin: 100, mask: 0b1111_1111_0000_0000u16 } => { end: 150u32, owner: Alice };
-{ core: 0u16, begin: 100, mask: 0b0000_0000_1111_1111u16 } => { end: 150u32, owner: Alice };
-{ core: 0u16, begin: 150, mask: 0b1111_1111_1111_1111u16 } => { end: 200u32, owner: Alice };
-// Sell half of them to Bob
-Regions:
-{ core: 0u16, begin: 100, mask: 0b1111_1111_0000_0000u16 } => { end: 150u32, owner: Alice };
-{ core: 0u16, begin: 100, mask: 0b0000_0000_1111_1111u16 } => { end: 150u32, owner: Bob };
-{ core: 0u16, begin: 150, mask: 0b1111_1111_1111_1111u16 } => { end: 200u32, owner: Alice };
-// Bob splits first 10 and assigns them to himself.
-Regions:
-{ core: 0u16, begin: 100, mask: 0b1111_1111_0000_0000u16 } => { end: 150u32, owner: Alice };
-{ core: 0u16, begin: 100, mask: 0b0000_0000_1111_1111u16 } => { end: 110u32, owner: Bob };
-{ core: 0u16, begin: 110, mask: 0b0000_0000_1111_1111u16 } => { end: 150u32, owner: Bob };
-{ core: 0u16, begin: 150, mask: 0b1111_1111_1111_1111u16 } => { end: 200u32, owner: Alice };
-// Bob shares first 10 3 ways and sells smaller shares to Charlie and Dave
-Regions:
-{ core: 0u16, begin: 100, mask: 0b1111_1111_0000_0000u16 } => { end: 150u32, owner: Alice };
-{ core: 0u16, begin: 100, mask: 0b0000_0000_1100_0000u16 } => { end: 110u32, owner: Charlie };
-{ core: 0u16, begin: 100, mask: 0b0000_0000_0011_0000u16 } => { end: 110u32, owner: Dave };
-{ core: 0u16, begin: 100, mask: 0b0000_0000_0000_1111u16 } => { end: 110u32, owner: Bob };
-{ core: 0u16, begin: 110, mask: 0b0000_0000_1111_1111u16 } => { end: 150u32, owner: Bob };
-{ core: 0u16, begin: 150, mask: 0b1111_1111_1111_1111u16 } => { end: 200u32, owner: Alice };
-// Bob assigns to his para B, Charlie and Dave assign to their paras C and D; Alice assigns first 50 to A
-Regions:
-{ core: 0u16, begin: 150, mask: 0b1111_1111_1111_1111u16 } => { end: 200u32, owner: Alice };
-Workplan:
-(100, 0) => vec![
-    { mask: 0b1111_1111_0000_0000u16, task: Assigned(A) },
-    { mask: 0b0000_0000_1100_0000u16, task: Assigned(C) },
-    { mask: 0b0000_0000_0011_0000u16, task: Assigned(D) },
-    { mask: 0b0000_0000_0000_1111u16, task: Assigned(B) },
-]
-(110, 0) => vec![{ mask: 0b0000_0000_1111_1111u16, task: Assigned(B) }]
-// Alice assigns her remaining 50 timeslices to the InstaPool paying herself:
-Regions: (empty)
-Workplan:
-(100, 0) => vec![
-    { mask: 0b1111_1111_0000_0000u16, task: Assigned(A) },
-    { mask: 0b0000_0000_1100_0000u16, task: Assigned(C) },
-    { mask: 0b0000_0000_0011_0000u16, task: Assigned(D) },
-    { mask: 0b0000_0000_0000_1111u16, task: Assigned(B) },
-]
-(110, 0) => vec![{ mask: 0b0000_0000_1111_1111u16, task: Assigned(B) }]
-(150, 0) => vec![{ mask: 0b1111_1111_1111_1111u16, task: InstaPool }]
-InstaPoolContribution:
-{ begin: 150, end: 200, core: 0, mask: 0b1111_1111_1111_1111u16, payee: Alice }
-InstaPoolIo:
-150 => 16
-200 => -16
-// Actual notifications to relay chain.
-// Assumes:
-// - Timeslice is 10 blocks.
-// - Timeslice 0 begins at block #1000.
-// - Relay needs 10 blocks notice of change.
-//
-Workload: 0 => vec![]
-PoolSize: 0
-
-// Block 990:
-Relay <= assign_core(core: 0u16, begin: 1000, assignment: vec![(A, 8), (C, 2), (D, 2), (B, 4)])
-Workload: 0 => vec![
-    { mask: 0b1111_1111_0000_0000u16, task: Assigned(A) },
-    { mask: 0b0000_0000_1100_0000u16, task: Assigned(C) },
-    { mask: 0b0000_0000_0011_0000u16, task: Assigned(D) },
-    { mask: 0b0000_0000_0000_1111u16, task: Assigned(B) },
-]
-PoolSize: 0
-
-// Block 1090:
-Relay <= assign_core(core: 0u16, begin: 1100, assignment: vec![(A, 8), (B, 8)])
-Workload: 0 => vec![
-    { mask: 0b1111_1111_0000_0000u16, task: Assigned(A) },
-    { mask: 0b0000_0000_1111_1111u16, task: Assigned(B) },
-]
-PoolSize: 0
-
-// Block 1490:
-Relay <= assign_core(core: 0u16, begin: 1500, assignment: vec![(Pool, 16)])
-Workload: 0 => vec![
-    { mask: 0b1111_1111_1111_1111u16, task: InstaPool },
-]
-PoolSize: 16
-InstaPoolIo:
-200 => -16
-InstaPoolHistory:
-150 => { total_contributions: 16, maybe_payout: None }
-
-// Sometime after block 1500:
-InstaPoolHistory:
-150 => { total_contributions: 16, maybe_payout: Some(P) }
-
-// Sometime after block 1990:
-InstaPoolIo: (empty)
-PoolSize: 0
-InstaPoolHistory:
-150 => { total_contributions: 16, maybe_payout: Some(P0) }
-151 => { total_contributions: 16, maybe_payout: Some(P1) }
-152 => { total_contributions: 16, maybe_payout: Some(P2) }
-...
-199 => { total_contributions: 16, maybe_payout: Some(P49) }
-
-// Sometime later still Alice calls for a payout
-InstaPoolContribution: (empty)
-InstaPoolHistory: (empty)
-// Alice gets rewarded P0 + P1 + ... P49.
-}
-

Rollout

-

Rollout of this proposal comes in several phases:

-
1. Finalise the specifics of implementation; this may be done through a design document or through a well-documented prototype implementation.
2. Implement the design, including all associated aspects such as unit tests, benchmarks and any support software needed.
3. If any new parachain is required, launch of this.
4. Formal audit of the implementation and any manual testing.
5. Announcement to the various stakeholders of the imminent changes.
6. Software integration and release.
7. Governance upgrade proposal(s).
8. Monitoring of the upgrade process.

Performance, Ergonomics and Compatibility

-

No specific considerations.

-

Parachains already deployed into the Polkadot UC must have a clear plan of action to migrate to an agile Coretime market.

-

While this proposal does not introduce documentable features per se, adequate documentation must be provided to potential purchasers of Polkadot Coretime. This SHOULD include any alterations to the Polkadot-SDK software collection.

-

Testing, Security and Privacy

-

Regular testing through unit tests, integration tests, manual testnet tests, zombie-net tests and fuzzing SHOULD be conducted.

-

A regular security review SHOULD be conducted prior to deployment through a review by the Web3 Foundation economic research group.

-

Any final implementation MUST pass a professional external security audit.

-

The proposal introduces no new privacy concerns.

- -

RFC-3 proposes a means of implementing the high-level allocations within the Relay-chain.

-

RFC-5 proposes the API for interacting with Relay-chain.

-

Additional work should specify the interface for the instantaneous market revenue so that the Coretime-chain can ensure Bulk Coretime placed in the instantaneous market is properly compensated.

-

Drawbacks, Alternatives and Unknowns

-

Unknowns include the economic and resource parameterisations:

-
* The initial price of Bulk Coretime.
* The price-change algorithm between Bulk Coretime sales.
* The price increase per Bulk Coretime period for renewals.
* The price decrease graph in the Leadin period for Bulk Coretime sales.
* The initial price of Instantaneous Coretime.
* The price-change algorithm for Instantaneous Coretime sales.
* The percentage of cores to be sold as Bulk Coretime.
* The fate of revenue collected.

Prior Art and References

-

Robert Habermeier initially wrote on the subject of Polkadot's blockspace-centric model in the article Polkadot Blockspace over Blockchains. While not going into details, the article served as an early reframing piece for moving beyond one-slot-per-chain models and building out secondary market infrastructure for resource allocation.

diff --git a/text/0005-coretime-interface.html b/text/0005-coretime-interface.html
deleted file mode 100644
index 1d49ef9fb..000000000
--- a/text/0005-coretime-interface.html
+++ /dev/null
@@ -1,361 +0,0 @@

RFC-5: Coretime Interface

-
| | |
| --------------- | ----------------------------------------------------------------------------------- |
| **Start Date**  | 06 July 2023                                                                          |
| **Description** | Interface for manipulating the usage of cores on the Polkadot Ubiquitous Computer.   |
| **Authors**     | Gavin Wood, Robert Habermeier                                                         |

Summary

-

In the Agile Coretime model of the Polkadot Ubiquitous Computer, as proposed in RFC-1 and RFC-3, it is necessary for the allocating parachain (envisioned to be one or more pallets on a specialised Brokerage System Chain) to communicate the core assignments to the Relay-chain, which is responsible for ensuring those assignments are properly enacted.

-

This is a proposal for the interface which will exist around the Relay-chain in order to communicate this information and instructions.

-

Motivation

-

The background motivation for this interface is splitting out coretime allocation functions and secondary markets from the Relay-chain onto System parachains. A well-understood and general interface is necessary for ensuring the Relay-chain receives coretime allocation instructions from one or more System chains without introducing dependencies on the implementation details of either side.

-

Requirements

-
* The interface MUST allow the Relay-chain to be scheduled on a low-latency basis.
* Individual cores MUST be schedulable, both in full to a single task (a ParaId or the Instantaneous Coretime Pool) or to many unique tasks in differing ratios.
* Typical usage of the interface SHOULD NOT overload the VMP message system.
* The interface MUST allow for the allocating chain to be notified of all accounting information relevant for making accurate rewards for contributing to the Instantaneous Coretime Pool.
* The interface MUST allow for Instantaneous Coretime Market Credits to be communicated.
* The interface MUST allow for the allocating chain to instruct changes to the number of cores which it is able to allocate.
* The interface MUST allow for the allocating chain to be notified of changes to the number of cores which are able to be allocated by the allocating chain.

Stakeholders

-

Primary stakeholder sets are:

-
* Developers of the Relay-chain core-management logic.
* Developers of the Brokerage System Chain and its pallets.

Socialization:

-

The content of this RFC was discussed in the Polkadot Fellows channel.

-

Explanation

-

The interface has two sections: The messages which the Relay-chain is able to receive from the allocating parachain (the UMP message types), and messages which the Relay-chain is able to send to the allocating parachain (the DMP message types). These messages are expected to be able to be implemented in a well-known pallet and called with the XCM Transact instruction.

-

Future work may include these messages being introduced into the XCM standard.

-

UMP Message Types

-

request_core_count

-

Prototype:

-
fn request_core_count(
-    count: u16,
-)
-
-

Requests the Relay-chain to alter the number of schedulable cores to count. Under normal operation, the Relay-chain SHOULD send a notify_core_count(count) message back.

-

request_revenue_info_at

-

Prototype:

-
fn request_revenue_at(
-    when: BlockNumber,
-)
-
-

Requests that the Relay-chain send a notify_revenue message back, at or soon after Relay-chain block number when, whose until parameter is equal to when.

-

The period into the past which when is allowed to reach may be limited; if so, the limit should be understood on a channel outside of this proposal. In the case that the request cannot be serviced because when is too old a block, then a notify_revenue message must still be returned, but its revenue field may be None.

-

credit_account

-

Prototype:

-
fn credit_account(
-    who: AccountId,
-    amount: Balance,
-)
-
-

Instructs the Relay-chain to add the amount of DOT to the Instantaneous Coretime Market Credit account of who.

-

It is expected that Instantaneous Coretime Market Credit on the Relay-chain is NOT transferrable and only redeemable when used to assign cores in the Instantaneous Coretime Pool.

-

assign_core

-

Prototype:

-
type PartsOf57600 = u16;
-enum CoreAssignment {
-    InstantaneousPool,
-    Task(ParaId),
-}
-fn assign_core(
-    core: CoreIndex,
-    begin: BlockNumber,
-    assignment: Vec<(CoreAssignment, PartsOf57600)>,
-    end_hint: Option<BlockNumber>,
-)
-
-

Requirements:

-
assert!(core < core_count);
-assert!(targets.iter().map(|x| x.0).is_sorted());
-assert_eq!(targets.iter().map(|x| x.0).unique().count(), targets.len());
-assert_eq!(targets.iter().map(|x| x.1).sum(), 57600);
-
-

Where:

-
* core_count is assumed to be the sole parameter in the last received notify_core_count message.

Instructs the Relay-chain to ensure that the core indexed as core is utilised for a number of assignments in specific ratios given by assignment starting as soon after begin as possible. Core assignments take the form of a CoreAssignment value which can either task the core to a ParaId value or indicate that the core should be used in the Instantaneous Pool. Each assignment comes with a ratio value, represented as the numerator of the fraction with a denominator of 57,600.

-

If end_hint is Some and the inner is greater than the current block number, then the Relay-chain should optimize in the expectation of receiving a new assign_core(core, ...) message at or prior to the block number of the inner value. Specific functionality should remain unchanged regardless of the end_hint value.

-

On the choice of denominator: 57,600 is a very composite number which factors into: 2 ** 8, 3 ** 2, 5 ** 2. By using it as the denominator we allow for various useful fractions to be perfectly represented including thirds, quarters, fifths, tenths, 80ths, percent and 256ths.
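As an illustration of why this denominator is convenient for the Coretime-chain described in RFC-1: an 80-bit Core Mask maps onto it exactly, with each mask bit worth 57,600 / 80 = 720 parts. The conversion below is an assumption about how an allocating chain might perform this mapping, not part of the interface itself.

```rust
// Illustrative only: mapping an 80-bit Core Mask onto the 57,600-part ratio.
type PartsOf57600 = u16;
type CoreMask = [u8; 10]; // 80-bit bitmap

fn mask_to_parts(mask: CoreMask) -> PartsOf57600 {
    let bits: u32 = mask.iter().map(|b| b.count_ones()).sum();
    (bits * (57_600 / 80)) as PartsOf57600 // 720 parts per mask bit
}
```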

-

DMP Message Types

-

notify_core_count

-

Prototype:

-
fn notify_core_count(
-    count: u16,
-)
-
-

Indicates that, from this block onwards, the range of acceptable values of the core parameter of the assign_core message is [0, count). assign_core will be a no-op if provided with a value for core outside of this range.

-

notify_revenue_info

-

Prototype:

-
fn notify_revenue_info(
-    until: BlockNumber,
-    revenue: Option<Balance>,
-)
-
-

Provide the amount of revenue accumulated from Instantaneous Coretime Sales from Relay-chain block number last_until to until, not including until itself. last_until is defined as being the until argument of the last notify_revenue message sent, or zero for the first call. If revenue is None, this indicates that the information is no longer available.

-

This explicitly disregards the possibility of multiple parachains requesting and being notified of revenue information. The Relay-chain must be configured to ensure that only a single revenue information destination exists.

-

Realistic Limits of the Usage

-

For request_revenue_info, a successful request should be possible if when is no less than the Relay-chain block number on arrival of the message less 100,000.

-

For assign_core, a successful request should be possible if begin is no less than the Relay-chain block number on arrival of the message plus 10 and workload contains no more than 100 items.

-

Performance, Ergonomics and Compatibility

-

No specific considerations.

-

Testing, Security and Privacy

-

Standard Polkadot testing and security auditing applies.

-

The proposal introduces no new privacy concerns.

- -

RFC-1 proposes a means of determining allocation of Coretime using this interface.

-

RFC-3 proposes a means of implementing the high-level allocations within the Relay-chain.

-

Drawbacks, Alternatives and Unknowns

-

None at present.

-

Prior Art and References

-

None.

diff --git a/text/0007-system-collator-selection.html b/text/0007-system-collator-selection.html
deleted file mode 100644
index 9ca1343ae..000000000
--- a/text/0007-system-collator-selection.html
+++ /dev/null
@@ -1,374 +0,0 @@

RFC-0007: System Collator Selection

-
| | |
| --------------- | --------------------------------------------------- |
| **Start Date**  | 07 July 2023                                         |
| **Description** | Mechanism for selecting collators of system chains.  |
| **Authors**     | Joe Petrowski                                         |

Summary

-

As core functionality moves from the Relay Chain into system chains, so increases the reliance on the liveness of these chains for the use of the network. It is not economically scalable, nor necessary from a game-theoretic perspective, to pay collators large rewards. This RFC proposes a mechanism -- part technical and part social -- for ensuring reliable collator sets that are resilient to attempts to stop any subsystem of the Polkadot protocol.

-

Motivation

-

In order to guarantee access to Polkadot's system, the collators on its system chains must propose blocks (provide liveness) and allow all transactions to eventually be included. That is, some collators may censor transactions, but there must exist one collator in the set who will include a given transaction. In fact, all collators may censor varying subsets of transactions, but as long as no transaction is in the intersection of every subset, it will eventually be included. The objective of this RFC is to propose a mechanism to select such a set on each system chain.

While the network as a whole uses staking (and inflationary rewards) to attract validators, collators face different challenges in scale and have lower security assumptions than validators. Regarding scale, there exist many system chains, and it is economically expensive to pay collators a premium. Likewise, any staked DOT for collation is not staked for validation. Since collator sets do not need to meet Byzantine Fault Tolerance criteria, staking as the primary mechanism for collator selection would remove stake that is securing BFT assumptions, making the network less secure.

Another problem with economic scalability relates to the increasing number of system chains, and corresponding increase in need for collators (i.e., increase in collator slots). "Good" (highly available, non-censoring) collators will not want to compete in elections on many chains when they could use their resources to compete in the more profitable validator election. Such dilution decreases the required bond on each chain, leaving them vulnerable to takeover by hostile collator groups.

This RFC proposes a system whereby collation is primarily an infrastructure service, with the on-chain Treasury reimbursing costs of semi-trusted node operators, referred to as "Invulnerables". The system need not trust the individual operators, only that as a set they would be resilient to coordinated attempts to stop a single chain from halting or to censor a particular subset of transactions.

In the case that users do not trust this set, this RFC also proposes that each chain always have available collator positions that can be acquired by anyone by placing a bond.

-

Requirements

-
* System MUST have at least one valid collator for every chain.
* System MUST allow anyone to become a collator, provided they reserve/hold enough DOT.
* System SHOULD select a set of collators with reasonable expectation that the set will not collude to censor any subset of transactions.
* Collators selected by governance SHOULD have a reasonable expectation that the Treasury will reimburse their operating costs.

Stakeholders

-
* Infrastructure providers (people who run validator/collator nodes)
* Polkadot Treasury

Explanation

-

This protocol builds on the existing Collator Selection pallet and its notion of Invulnerables. Invulnerables are collators (identified by their AccountIds) who will be selected as part of the collator set every session. Operations relating to the management of the Invulnerables are done through privileged, governance origins. The implementation should maintain an API for adding and removing Invulnerable collators.

In addition to Invulnerables, there are also open slots for "Candidates". Anyone can register as a Candidate by placing a fixed bond. However, with a fixed bond and fixed number of slots, there is an obvious selection problem: The slots fill up without any logic to replace their occupants.

-

This RFC proposes that the collator selection protocol allow Candidates to increase (and decrease) their individual bonds, sort the Candidates according to bond, and select the top N Candidates. The selection and changeover should be coordinated by the session manager.
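A sketch of the selection rule (illustrative only; the actual logic lives in the Collator Selection pallet's session manager and the Bags List integration mentioned below):

```rust
// Hypothetical sketch: sort Candidates by bond and take the top N.
type Balance = u128;
type AccountId = u64; // placeholder account type for the example

fn select_candidates(mut candidates: Vec<(AccountId, Balance)>, n: usize) -> Vec<AccountId> {
    // Highest bond first; `sort_by` is stable, so equal bonds keep their prior order.
    candidates.sort_by(|a, b| b.1.cmp(&a.1));
    candidates.into_iter().take(n).map(|(who, _)| who).collect()
}
```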

-

A FRAME pallet already exists for sorting ("bagging") "top N" groups, the Bags List pallet. This pallet's SortedListProvider should be integrated into the session manager of the Collator Selection pallet.

Despite the lack of apparent economic incentives (i.e., inflation), several reasons exist why one may want to bond funds to participate in the Candidates election, for example:

-
* They want to build credibility to be selected as Invulnerable;
* They want to ensure availability of an application, e.g. a stablecoin issuer might run a collator on Asset Hub to ensure transactions in its asset are included in blocks;
* They fear censorship themselves, e.g. a voter might think their votes are being censored from governance, so they run a collator on the governance chain to include their votes.

Unlike the fixed-bond mechanism that fills up its Candidates, the election mechanism ensures that anyone can join the collator set by placing the Nth highest bond.

-

Set Size

-

In order to achieve the requirements listed under Motivation, it is reasonable to have approximately:

* 20 collators per system chain,
* of which 15 are Invulnerable, and
* five are elected by bond.

Drawbacks

-

The primary drawback is a reliance on governance for continued treasury funding of infrastructure costs for Invulnerable collators.

-

Testing, Security, and Privacy

-

The vast majority of cases can be covered by unit testing. Integration tests should ensure that the Collator Selection UpdateOrigin, which has permission to modify the Invulnerables and desired number of Candidates, can handle updates over XCM from the system's governance location.

-

Performance, Ergonomics, and Compatibility

-

This proposal has very little impact on most users of Polkadot, and should improve the performance of system chains by reducing the number of missed blocks.

Performance

As chains have strict PoV size limits, care must be taken in the PoV impact of the session manager. Appropriate benchmarking and tests should ensure that conservative limits are placed on the number of Invulnerables and Candidates.

Ergonomics

The primary group affected is Candidate collators, who, after implementation of this RFC, will need to compete in a bond-based election rather than a race to claim a Candidate spot.

Compatibility

This RFC is compatible with the existing implementation and can be handled via upgrades and migration.

-

Prior Art and References

-

Written Discussions

- -

Prior Feedback and Input From

-
* Kian Paimani
* Jeff Burdges
* Rob Habermeier
* SR Labs Auditors
* Current collators including Paranodes, Stake Plus, Turboflakes, Peter Mensik, SIK, and many more.

Unresolved Questions

-

None at this time.

- -

There may exist in the future system chains for which this model of collator selection is not appropriate. These chains should be evaluated on a case-by-case basis.

diff --git a/text/0008-parachain-bootnodes-dht.html b/text/0008-parachain-bootnodes-dht.html
deleted file mode 100644
index b5f8f237b..000000000
--- a/text/0008-parachain-bootnodes-dht.html
+++ /dev/null
@@ -1,331 +0,0 @@

RFC-0008: Store parachain bootnodes in relay chain DHT

-
| | |
| --------------- | ---------------------------------------------------------------------------- |
| **Start Date**  | 2023-07-14                                                                     |
| **Description** | Parachain bootnodes shall register themselves in the DHT of the relay chain   |
| **Authors**     | Pierre Krieger                                                                 |

Summary

-

The full nodes of the Polkadot peer-to-peer network maintain a distributed hash table (DHT), which is currently used for full nodes discovery and validators discovery purposes.

-

This RFC proposes to extend this DHT to be used to discover full nodes of the parachains of Polkadot.

-

Motivation

-

The maintenance of bootnodes has long been an annoyance for everyone.

-

When a bootnode is newly-deployed or removed, every chain specification must be updated in order to take the update into account. This has led to various non-optimal solutions, such as pulling chain specifications from GitHub repositories. When it comes to RPC nodes, UX developers often have trouble finding up-to-date addresses of parachain RPC nodes. With the ongoing migration from RPC nodes to light clients, similar problems would happen with chain specifications as well.

-

Furthermore, there exist multiple different possible variants of a certain chain specification: with the non-raw storage, with the raw storage, with just the genesis trie root hash, with or without checkpoint, etc. All of this creates confusion. Removing the need for parachain developers to be aware of and manage these different versions would be beneficial.

-

Since the PeerId and addresses of bootnodes need to be stable, extra maintenance work is required from the chain maintainers. For example, they need to be extra careful when migrating nodes within their infrastructure. In some situations, bootnodes are put behind domain names, which also requires maintenance work.

-

Because the list of bootnodes in chain specifications is so annoying to modify, the consequence is that the number of bootnodes is rather low (typically between 2 and 15). In order to better resist downtimes and DoS attacks, a better solution would be to use every node of a certain chain as potential bootnode, rather than special-casing some specific nodes.

-

While this RFC doesn't solve these problems for relay chains, it aims at solving it for parachains by storing the list of all the full nodes of a parachain on the relay chain DHT.

-

Assuming that this RFC is implemented, and that light clients are used, deploying a parachain wouldn't require more work than registering it onto the relay chain and starting the collators. There wouldn't be any need for special infrastructure nodes anymore.

-

Stakeholders

-

This RFC has been opened on my own initiative because I think that this is a good technical solution to a usability problem that many people are encountering and that they don't realize can be solved.

-

Explanation

-

The content of this RFC only applies for parachains and parachain nodes that are "Substrate-compatible". It is in no way mandatory for parachains to comply to this RFC.

-

Note that "Substrate-compatible" is very loosely defined as "implements the same mechanisms and networking protocols as Substrate". The author of this RFC believes that "Substrate-compatible" should be very precisely specified, but there is controversy on this topic.

-

While a lot of this RFC concerns the implementation of parachain nodes, it makes use of the resources of the Polkadot chain, and as such it is important to describe them in the Polkadot specification.

-

This RFC adds two mechanisms: a registration in the DHT, and a new networking protocol.

-

DHT provider registration

-

This RFC heavily relies on the functionalities of the Kademlia DHT already in use by Polkadot. You can find a link to the specification here.

-

Full nodes of a parachain registered on Polkadot should register themselves onto the Polkadot DHT as the providers of a key corresponding to the parachain that they are serving, as described in the Content provider advertisement section of the specification. This uses the ADD_PROVIDER system of libp2p-kademlia.

-

This key is: sha256(concat(scale_compact(para_id), randomness)) where the value of randomness can be found in the randomness field when calling the BabeApi_currentEpoch function. For example, for a para_id equal to 1000, and at the time of writing of this RFC (July 14th 2023 at 09:13 UTC), it is sha256(0xa10f12872447958d50aa7b937b0106561a588e0e2628d33f81b5361b13dbcf8df708), which is equal to 0x483dd8084d50dbbbc962067f216c37b627831d9339f5a6e426a32e3076313d87.
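A sketch of this derivation is shown below. The specification above only fixes the byte layout (sha256 over the SCALE-compact para_id concatenated with the epoch randomness); the particular crates used here (parity-scale-codec, sha2) are an implementation assumption.

```rust
// Illustrative derivation of the provider key.
use parity_scale_codec::{Compact, Encode};
use sha2::{Digest, Sha256};

fn paranode_provider_key(para_id: u32, epoch_randomness: &[u8; 32]) -> [u8; 32] {
    let mut hasher = Sha256::new();
    hasher.update(Compact(para_id).encode()); // para_id 1000 encodes to 0xa10f
    hasher.update(epoch_randomness);
    hasher.finalize().into()
}
```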

-

In order to avoid downtime when the key changes, parachain full nodes should also register themselves as a secondary key that uses a value of randomness equal to the randomness field when calling BabeApi_nextEpoch.

-

Implementers should be aware that their implementation of Kademlia might already hash the key before XOR'ing it. The key is not meant to be hashed twice.

-

The compact SCALE encoding has been chosen in order to avoid problems related to the number of bytes and endianness of the para_id.

-

New networking protocol

-

A new request-response protocol should be added, whose name is /91b171bb158e2d3848fa23a9f1c25182fb8e20313b2c1eb49219da7a70ce90c3/paranode (that hexadecimal number is the genesis hash of the Polkadot chain, and should be adjusted appropriately for Kusama and others).

-

The request consists of a SCALE-compact-encoded para_id. For example, for a para_id equal to 1000, this is 0xa10f.

-

Note that because this is a request-response protocol, the request is always prefixed with its length in bytes. While the body of the request is simply the SCALE-compact-encoded para_id, the data actually sent onto the substream is both the length and body.

-

The response consists of a protobuf struct, defined as:

-
syntax = "proto2";
-
-message Response {
-    // Peer ID of the node on the parachain side.
-    bytes peer_id = 1;
-
-    // Multiaddresses of the parachain side of the node. The list and format are the same as for the `listenAddrs` field of the `identify` protocol.
-    repeated bytes addrs = 2;
-
-    // Genesis hash of the parachain. Used to determine the name of the networking protocol to connect to the parachain. Untrusted.
-    bytes genesis_hash = 3;
-
-    // So-called "fork ID" of the parachain. Used to determine the name of the networking protocol to connect to the parachain. Untrusted.
-    optional string fork_id = 4;
-};
-
-

The maximum size of a response is set to an arbitrary 16kiB. The responding side should make sure to conform to this limit. Given that fork_id is typically very small and that the only variable-length field is addrs, this is easily achieved by limiting the number of addresses.

-

Implementers should be aware that addrs might be very large, and are encouraged to limit the number of addrs to an implementation-defined value.

-

Drawbacks

-

The peer_id and addrs fields are in theory not strictly needed, as the PeerId and addresses could always be equal to the PeerId and addresses of the node being registered as the provider and serving the response. However, the Cumulus implementation currently uses two different networking stacks, one for the parachain and one for the relay chain, using two separate PeerIds and addresses, and as such the PeerId and addresses of the other networking stack must be indicated. Asking them to use only one networking stack wouldn't be feasible in a realistic time frame.

-

The values of the genesis_hash and fork_id fields cannot be verified by the requester and are expected to be unused at the moment. Instead, a client that desires connecting to a parachain is expected to obtain the genesis hash and fork ID of the parachain from the parachain chain specification. These fields are included in the networking protocol nonetheless in case an acceptable solution is found in the future, and in order to allow use cases such as discovering parachains in a not-strictly-trusted way.

-

Testing, Security, and Privacy

-

Because not all nodes want to be used as bootnodes, implementers are encouraged to provide a way to disable this mechanism. However, it is very much encouraged to leave this mechanism on by default for all parachain nodes.

-

This mechanism doesn't add or remove any security by itself, as it relies on existing mechanisms. However, if the principle of chain specification bootnodes is entirely replaced with the mechanism described in this RFC (which is the objective), then it becomes important whether the mechanism in this RFC can be abused in order to make a parachain unreachable.

Due to the way Kademlia works, it would become the responsibility of the 20 Polkadot nodes whose sha256(peer_id) is closest to the key (described in the explanations section) to store the list of bootnodes of each parachain. Furthermore, when a large number of providers (here, a provider is a bootnode) are registered, only the providers closest to the key are kept, up to a certain implementation-defined limit.

For this reason, an attacker can abuse this mechanism by randomly generating libp2p PeerIds until they find the 20 entries closest to the key representing the target parachain. They are then in control of the parachain bootnodes. Because the key changes periodically and isn't predictable, and assuming that the Polkadot DHT is sufficiently large, it is not realistic for an attack like this to be maintained in the long term.

-

Furthermore, parachain clients are expected to cache a list of known good nodes on their disk. If the mechanism described in this RFC went down, it would only prevent new nodes from accessing the parachain, while clients that have connected before would not be affected.

-

Performance, Ergonomics, and Compatibility

-

Performance

-

The DHT mechanism generally has a low overhead, especially given that publishing providers is done only every 24 hours.

-

Doing a Kademlia iterative query then sending a provider record shouldn't take more than around 50 kiB in total of bandwidth for the parachain bootnode.

-

Assuming 1000 parachain full nodes, the 20 Polkadot full nodes corresponding to a specific parachain will each receive a sudden spike of a few megabytes of networking traffic when the key rotates. Again, this is relatively negligible. If this becomes a problem, one can add a random delay before a parachain full node registers itself to be the provider of the key corresponding to BabeApi_next_epoch.

-

Maybe the biggest uncertainty is the traffic that the 20 Polkadot full nodes will receive from light clients that desire knowing the bootnodes of a parachain. Light clients are generally encouraged to cache the peers that they use between restarts, so they should only query these 20 Polkadot full nodes at their first initialization. If this ever becomes a problem, the value of 20 is an arbitrary constant that can be increased for more redundancy.

-

Ergonomics

-

Irrelevant.

-

Compatibility

-

Irrelevant.

-

Prior Art and References

-

None.

-

Unresolved Questions

-

While it fundamentally doesn't change much to this RFC, using BabeApi_currentEpoch and BabeApi_nextEpoch might be inappropriate. I'm not familiar enough with good practices within the runtime to have an opinion here. Should it be an entirely new pallet?

- -

It is possible that in the future a client could connect to a parachain without having to rely on a trusted parachain specification.

diff --git a/text/0010-burn-coretime-revenue.html b/text/0010-burn-coretime-revenue.html
deleted file mode 100644
index 1f2cea890..000000000
--- a/text/0010-burn-coretime-revenue.html
+++ /dev/null
@@ -1,272 +0,0 @@

RFC-0010: Burn Coretime Revenue

-
| | |
| --------------- | --------------------------------------------- |
| **Start Date**  | 19.07.2023                                     |
| **Description** | Revenue from Coretime sales should be burned   |
| **Authors**     | Jonas Gehrlein                                 |

Summary

-

The Polkadot UC will generate revenue from the sale of available Coretime. The question then arises: how should we handle these revenues? Broadly, there are two reasonable paths – burning the revenue and thereby removing it from total issuance, or diverting it to the Treasury. This Request for Comment (RFC) presents arguments favoring burning as the preferred mechanism for handling revenues from Coretime sales.

-

Motivation

-

How to handle the revenue accrued from Coretime sales is an important economic question that influences the value of DOT and should be properly discussed before deciding for either of the options. Now is the best time to start this discussion.

-

Stakeholders

-

Polkadot DOT token holders.

-

Explanation

-

This RFC discusses potential benefits of burning the revenue accrued from Coretime sales instead of diverting it to the Treasury. The arguments for doing so are as follows.

-

It's in the interest of the Polkadot community to have a consistent and predictable Treasury income, because volatility in the inflow can be damaging, especially in situations when it is insufficient. As such, this RFC operates under the presumption of a steady and sustainable Treasury income flow, which is crucial for the Polkadot community's stability. The assurance of a predictable Treasury income, as outlined in a prior discussion here, or through other equally effective measures, serves as a baseline assumption for this argument.

-

Consequently, we need not concern ourselves with this particular issue here. This naturally begs the question - why should we introduce additional volatility to the Treasury by aligning it with the variable Coretime sales? It's worth noting that Coretime revenues often exhibit an inverse relationship with periods when Treasury spending should ideally be ramped up. During periods of low Coretime utilization (indicated by lower revenue), Treasury should spend more on projects and endeavours to increase the demand for Coretime. This pattern underscores that Coretime sales, by their very nature, are an inconsistent and unpredictable source of funding for the Treasury. Given the importance of maintaining a steady and predictable inflow, it's unnecessary to rely on another volatile mechanism. Some might argue that we could have both: a steady inflow (from inflation) and some added bonus from Coretime sales, but burning the revenue would offer further benefits as described below.

-
* Balancing Inflation: While DOT as a utility token inherently profits from a (reasonable) net inflation, it also benefits from a deflationary force that functions as a counterbalance to the overall inflation. Right now, the only mechanism on Polkadot that burns fees is the one for underutilized DOT in the Treasury. Finding other, more direct targets for burns makes sense, and the Coretime market is a good option.

* Clear incentives: By burning the revenue accrued on Coretime sales, prices paid by buyers are clearly costs. This removes distortion from the market that might arise when the paid tokens end up elsewhere within the network. In that case, some actors might have secondary motives of influencing the price of Coretime sales, because they benefit down the line. For example, actors that actively participate in the Coretime sales are likely to also benefit from a higher Treasury balance, because they might frequently request funds for their projects. While those effects might appear far-fetched, they could accumulate. Burning the revenues makes sure that the prices paid are clearly costs to the actors themselves.

* Collective Value Accrual: Following the previous argument, burning the revenue also generates some externality, because it reduces the overall issuance of DOT and thereby increases the value of each remaining token. In contrast to the aforementioned argument, this benefits all token holders collectively and equally. Therefore, I'd consider this the preferable option, because burns let all token holders participate in Polkadot's success as Coretime usage increases.
diff --git a/text/0012-process-for-adding-new-collectives.html b/text/0012-process-for-adding-new-collectives.html
deleted file mode 100644
index 51f60e25b..000000000
--- a/text/0012-process-for-adding-new-collectives.html
+++ /dev/null
@@ -1,329 +0,0 @@

RFC-0012: Process for Adding New System Collectives

Start Date: 24 July 2023
Description: A process for adding new (and removing existing) system collectives.
Authors: Joe Petrowski

Summary

-

Since the introduction of the Collectives parachain, many groups have expressed interest in forming new -- or migrating existing groups into -- on-chain collectives. While adding a new collective is relatively simple from a technical standpoint, the Fellowship will need to merge new pallets into the Collectives parachain for each new collective. This RFC proposes a means for the network to ratify a new collective, thus instructing the Fellowship to instate it in the runtime.

-

Motivation

-

Many groups have expressed interest in representing collectives on-chain. Some of these include:

-
  • Parachain technical fellowship (new)
  • Fellowship(s) for media, education, and evangelism (new)
  • Polkadot Ambassador Program (existing)
  • Anti-Scam Team (existing)

Collectives that form part of the core Polkadot protocol should have a mandate to serve the -Polkadot network. However, as part of the Polkadot protocol, the Fellowship, in its capacity of -maintaining system runtimes, will need to include modules and configurations for each collective.

-

Once a group has developed a value proposition for the Polkadot network, it should have a clear -path to having its collective accepted on-chain as part of the protocol. Acceptance should direct -the Fellowship to include the new collective with a given initial configuration into the runtime. -However, the network, not the Fellowship, should ultimately decide which collectives are in the -interest of the network.

-

Stakeholders

-
    -
  • Polkadot stakeholders who would like to organize on-chain.
  • -
  • Technical Fellowship, in its role of maintaining system runtimes.
  • -
-

Explanation

-

The group that wishes to operate an on-chain collective should publish the following information:

-
  • Charter, including the collective's mandate and how it benefits Polkadot. This would be similar to the Fellowship Manifesto.
  • Seeding recommendation.
  • Member types, i.e. should members be individuals or organizations.
  • Member management strategy, i.e. how do members join and get promoted, if applicable.
  • How much, if at all, members should get paid in salary.
  • Any special origins this Collective should have outside itself. For example, the Fellowship can whitelist calls for referenda via the WhitelistOrigin.

This information could all be in a single document or, for example, a GitHub repository.

-

After publication, members should seek feedback from the community and Technical Fellowship, and make any revisions needed. When the collective believes the proposal is ready, they should bring a remark with the text APPROVE_COLLECTIVE("{collective name}, {commitment}") to a Root origin referendum. The proposer should provide instructions for generating the commitment. The passing of this referendum would be unequivocal direction to the Fellowship that this collective should be part of the Polkadot runtime.
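As a purely hypothetical illustration (the collective name and commitment placeholder below are invented for this example), such a remark could read:

```
APPROVE_COLLECTIVE("Example Media Fellowship, <commitment generated per the proposer's instructions>")
```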

-

Note: There is no need for a REJECT referendum. Proposals that have not been approved are simply -not included in the runtime.

-

Removing Collectives

-

If someone believes that an existing collective is not acting in the interest of the network or in -accordance with its charter, they should likewise have a means to instruct the Fellowship to -remove that collective from Polkadot.

-

An on-chain remark from the Root origin with the text REMOVE_COLLECTIVE("{collective name}, {para ID}, [{pallet indices}]") would instruct the Fellowship to remove the collective via the listed pallet indices on paraId. Should someone want to construct such a remark, they should have a reasonable expectation that a member of the Fellowship would help them identify the pallet indices associated with a given collective, whether or not the Fellowship member agrees with removal.
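Again purely as a hypothetical example (the collective name, para ID, and pallet indices are invented for illustration), a removal remark might read:

```
REMOVE_COLLECTIVE("Example Media Fellowship, 1234, [50, 51]")
```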

-

Collective removal may also come with other governance calls, for example voiding any scheduled -Treasury spends that would fund the given collective.

-

Drawbacks

-

Passing a Root origin referendum is slow. However, given the network's investment (in terms of code -maintenance and salaries) in a new collective, this is an appropriate step.

-

Testing, Security, and Privacy

-

No impacts.

-

Performance, Ergonomics, and Compatibility

-

Generally all new collectives will be in the Collectives parachain. Thus, performance impacts -should strictly be limited to this parachain and not affect others. As the majority of logic for -collectives is generalized and reusable, we expect most collectives to be instances of similar -subsets of modules. That is, new collectives should generally be compatible with UIs and other -services that provide collective-related functionality, with little modifications to support new -ones.

-

Prior Art and References

-

The launch of the Technical Fellowship, see the -initial forum post.

-

Unresolved Questions

-

None at this time.

diff --git a/text/0013-prepare-blockbuilder-and-core-runtime-apis-for-mbms.html b/text/0013-prepare-blockbuilder-and-core-runtime-apis-for-mbms.html
deleted file mode 100644
index 7af178b42..000000000
--- a/text/0013-prepare-blockbuilder-and-core-runtime-apis-for-mbms.html
+++ /dev/null
@@ -1,322 +0,0 @@

RFC-0013: Prepare Core runtime API for MBMs

Start Date: July 24, 2023
Description: Prepare the Core Runtime API for Multi-Block-Migrations
Authors: Oliver Tale-Yazdi

Summary

-

Introduces breaking changes to the Core runtime API by letting Core::initialize_block return an enum. The version of Core is bumped from 4 to 5.

-

Motivation

-

The main feature that motivates this RFC is Multi-Block-Migrations (MBMs); these make it possible to split a migration over multiple blocks.
Further, it would be nice not to hinder the possibility of implementing a new hook poll, which runs at the beginning of the block when there are no MBMs and has access to AllPalletsWithSystem. This hook can then be used to replace the use of on_initialize and on_finalize for non-deadline-critical logic.
In a similar fashion, it should not hinder the future addition of a System::PostInherents callback that always runs after all inherents were applied.

-

Stakeholders

-
    -
  • Substrate Maintainers: They have to implement this, including tests, audit and -maintenance burden.
  • -
  • Polkadot Runtime developers: They will have to adapt the runtime files to this breaking change.
  • -
  • Polkadot Parachain Teams: They have to adapt to the breaking changes but then eventually have -multi-block migrations available.
  • -
-

Explanation

-

Core::initialize_block

-

This runtime API function is changed from returning () to ExtrinsicInclusionMode:

-
fn initialize_block(header: &<Block as BlockT>::Header)
+  -> ExtrinsicInclusionMode;
-

With ExtrinsicInclusionMode defined as:

-
enum ExtrinsicInclusionMode {
  /// All extrinsics are allowed in this block.
  AllExtrinsics,
  /// Only inherents are allowed in this block.
  OnlyInherents,
}

A block author MUST respect the ExtrinsicInclusionMode that is returned by initialize_block. The runtime MUST reject blocks that have non-inherent extrinsics in them while OnlyInherents was returned.
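As a rough, non-authoritative sketch of what honouring this rule on the author side could look like (the function and parameter names are assumptions, not the actual block-builder code):

```rust
// Minimal sketch: inherents always go in; pool transactions only when allowed.
enum ExtrinsicInclusionMode {
    AllExtrinsics,
    OnlyInherents,
}

fn build_block(
    mode: ExtrinsicInclusionMode,
    inherents: Vec<Vec<u8>>,
    pool_transactions: Vec<Vec<u8>>,
) -> Vec<Vec<u8>> {
    let mut block = inherents;
    if matches!(mode, ExtrinsicInclusionMode::AllExtrinsics) {
        block.extend(pool_transactions);
    }
    block
}
```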

-

Coming back to the motivations and how they can be implemented with this runtime API change:

-

1. Multi-Block-Migrations: The runtime is being put into lock-down mode for the duration of the migration process by returning OnlyInherents from initialize_block. This ensures that no user-provided transaction can interfere with the migration process. It is absolutely necessary to ensure this, otherwise a transaction could call into un-migrated storage and violate storage invariants. A sketch of this lock-down follows this list.

-

2. poll is possible by using apply_extrinsic as entry-point and not hindered by this approach. It would not be possible to use a pallet inherent like System::last_inherent to achieve this for two reasons: First is that pallets do not have access to AllPalletsWithSystem which is required to invoke the poll hook on all pallets. Second is that the runtime does currently not enforce an order of inherents.

-

3. System::PostInherents can be done in the same manner as poll.
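Referring back to point 1, a minimal runtime-side sketch of the lock-down during Multi-Block-Migrations, assuming a hypothetical migrations_ongoing() check (the real pallet and hook names may differ):

```rust
enum ExtrinsicInclusionMode {
    AllExtrinsics,
    OnlyInherents,
}

// Assumed helper: reports whether a multi-block migration is still in progress.
fn migrations_ongoing() -> bool {
    false
}

fn initialize_block() -> ExtrinsicInclusionMode {
    if migrations_ongoing() {
        // Lock-down: only inherents may be applied until the migration finishes.
        ExtrinsicInclusionMode::OnlyInherents
    } else {
        ExtrinsicInclusionMode::AllExtrinsics
    }
}
```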

-

Drawbacks

-

The previous drawback of cementing the order of inherents has been addressed and removed by redesigning the approach. No further drawbacks have been identified thus far.

-

Testing, Security, and Privacy

-

The new logic of initialize_block can be tested by checking that the block-builder will skip transactions when OnlyInherents is returned.

-

Security: n/a

-

Privacy: n/a

-

Performance, Ergonomics, and Compatibility

-

Performance

-

The performance overhead is minimal in the sense that no clutter was added after fulfilling the -requirements. The only performance difference is that initialize_block also returns an enum that needs to be passed through the WASM boundary. This should be negligible.

-

Ergonomics

-

The new interface allows for more extensible runtime logic. In the future, this will be utilized for -multi-block-migrations which should be a huge ergonomic advantage for parachain developers.

-

Compatibility

-

The advice here is OPTIONAL and outside of the RFC. To not degrade -user experience, it is recommended to ensure that an updated node can still import historic blocks.

-

Prior Art and References

-

The RFC is currently being implemented in polkadot-sdk#1781 (formerly substrate#14275). Related issues and merge -requests:

- -

Unresolved Questions

-

Please suggest a better name for BlockExecutiveMode. We already tried: RuntimeExecutiveMode, -ExtrinsicInclusionMode. The names of the modes Normal and Minimal were also called -AllExtrinsics and OnlyInherents, so if you have naming preferences; please post them.
-=> renamed to ExtrinsicInclusionMode

-

Is post_inherents more consistent instead of last_inherent? Then we should change it.
-=> renamed to last_inherent

- -

The long-term future here is to move the block building logic into the runtime. Currently there is a tight dance between the block author and the runtime; the author has to call into different runtime functions in quick succession and exact order. Any misstep causes the block to be invalid.
-This can be unified and simplified by moving both parts into the runtime.

diff --git a/text/0014-improve-locking-mechanism-for-parachains.html b/text/0014-improve-locking-mechanism-for-parachains.html
deleted file mode 100644
index e17404040..000000000
--- a/text/0014-improve-locking-mechanism-for-parachains.html
+++ /dev/null
@@ -1,363 +0,0 @@

RFC-0014: Improve locking mechanism for parachains

Start Date: July 25, 2023
Description: Improve locking mechanism for parachains
Authors: Bryan Chen

Summary

-

This RFC proposes a set of changes to the parachain lock mechanism. The goal is to allow a parachain manager to self-service the parachain without root track governance action.

-

This is achieved by removing the existing lock conditions and only locking a parachain when:

  • A parachain manager explicitly locks the parachain
  • OR a parachain block is produced successfully

Motivation

-

The manager of a parachain has permission to manage the parachain when the parachain is unlocked. Parachains are by default locked when onboarded to a slot. This requires the parachain wasm/genesis must be valid, otherwise a root track governance action on relaychain is required to update the parachain.

-

The current reliance on root track governance actions for managing parachains can be time-consuming and burdensome. This RFC aims to address this technical difficulty by allowing parachain managers to take self-service actions, rather than relying on general public voting.

-

The key scenarios this RFC seeks to improve are:

-
    -
  1. Rescue a parachain with invalid wasm/genesis.
  2. -
-

While we have various resources and templates to build a new parachain, it is still not a trivial task. It is very easy to make a mistake resulting in an invalid wasm/genesis. With a lack of tools to help detect those issues1, it is very likely that the issues are only discovered after the parachain is onboarded on a slot. In this case, the parachain is locked and the parachain team has to go through a lengthy governance process to rescue the parachain.

-
    -
  1. Perform lease renewal for an existing parachain.
  2. -
-

One way to perform lease renewal for a parachain is by doing a lease swap with another parachain with a longer lease. This requires that the other parachain be operational and able to perform an XCM Transact call into the relaychain to dispatch the swap call. Combined with the overhead of setting up a new parachain, this is a time-consuming and expensive process. Ideally, the parachain manager should be able to perform the lease swap call without having a running parachain2.

-

Requirements

-
    -
  • A parachain manager SHOULD be able to rescue a parachain by updating the wasm/genesis without root track governance action.
  • -
  • A parachain manager MUST NOT be able to update the wasm/genesis if the parachain is locked.
  • -
  • A parachain SHOULD be locked when it successfully produced the first block.
  • -
  • A parachain manager MUST be able to perform lease swap without having a running parachain.
  • -
-

Stakeholders

-
    -
  • Parachain teams
  • -
  • Parachain users
  • -
-

Explanation

-

Status quo

-

A parachain can either be locked or unlocked3. With parachain locked, the parachain manager does not have any privileges. With parachain unlocked, the parachain manager can perform following actions with the paras_registrar pallet:

-
    -
  • deregister: Deregister a Para Id, freeing all data and returning any deposit.
  • -
  • swap: Initiate or confirm lease swap with another parachain.
  • -
  • add_lock: Lock the parachain.
  • -
  • schedule_code_upgrade: Schedule a parachain upgrade to update parachain wasm.
  • -
  • set_current_head: Set the parachain's current head.
  • -
-

Currently, a parachain can be locked with following conditions:

-
    -
  • From add_lock call, which can be dispatched by relaychain Root origin, the parachain, or the parachain manager.
  • -
  • When a parachain is onboarded on a slot4.
  • -
  • When a crowdloan is created.
  • -
-

Only the relaychain Root origin or the parachain itself can unlock the lock5.

-

This creates an issue that if the parachain is unable to produce a block, the parachain manager is unable to do anything and has to rely on the relaychain Root origin to manage the parachain.

-

Proposed changes

-

This RFC proposes to change the lock and unlock conditions.

-

A parachain can be locked only with following conditions:

-
    -
  • Relaychain governance MUST be able to lock any parachain.
  • -
  • A parachain MUST be able to lock its own lock.
  • -
  • A parachain manager SHOULD be able to lock the parachain.
  • -
  • A parachain SHOULD be locked when it successfully produced a block for the first time.
  • -
-

A parachain can be unlocked only with following conditions:

-
    -
  • Relaychain governance MUST be able to unlock any parachain.
  • -
  • A parachain MUST be able to unlock its own lock.
  • -
-

Note that creating a crowdloan MUST NOT lock the parachain, and onboarding a parachain SHOULD NOT lock it until a new block is successfully produced.
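A simplified sketch of the proposed lock transitions (the storage layout and function names below are assumptions for illustration, not the actual paras_registrar code):

```rust
use std::collections::HashMap;

type ParaId = u32;

#[derive(Default)]
struct LockState {
    /// Whether the parachain manager is currently locked out of managing the para.
    locked: HashMap<ParaId, bool>,
}

impl LockState {
    /// Onboarding (and crowdloan creation) leaves the para unlocked.
    fn on_onboard(&mut self, para: ParaId) {
        self.locked.insert(para, false);
    }

    /// Lock once the para successfully produces its first block.
    fn on_first_block_produced(&mut self, para: ParaId) {
        self.locked.insert(para, true);
    }

    /// Explicit lock by the manager, the parachain itself, or relaychain governance.
    fn explicit_lock(&mut self, para: ParaId) {
        self.locked.insert(para, true);
    }
}
```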

-

Migration

-

A one-off migration is proposed in order to apply this change retrospectively so that existing parachains can also benefit from this RFC. This migration will unlock parachains that satisfy all of the following conditions (a sketch follows the list):

-
  • Parachain is locked.
  • Parachain never produced a block, including from expired leases.
  • Parachain manager never explicitly locked the parachain.
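A minimal sketch of such a one-off migration, assuming hypothetical helpers is_locked, ever_produced_block, manager_explicitly_locked, and unlock:

```rust
type ParaId = u32;

// Hypothetical helpers; a real implementation would read paras_registrar state.
fn is_locked(_para: ParaId) -> bool { true }
fn ever_produced_block(_para: ParaId) -> bool { false }
fn manager_explicitly_locked(_para: ParaId) -> bool { false }
fn unlock(_para: ParaId) { /* clear the lock flag in storage */ }

/// One-off migration: unlock every locked para that never produced a block
/// and was never explicitly locked by its manager.
fn migrate(all_paras: &[ParaId]) {
    for &para in all_paras {
        if is_locked(para) && !ever_produced_block(para) && !manager_explicitly_locked(para) {
            unlock(para);
        }
    }
}
```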

Drawbacks

-

Parachain locks are designed in such way to ensure the decentralization of parachains. If parachains are not locked when it should be, it could introduce centralization risk for new parachains.

-

For example, one possible scenario is that a collective may decide to launch a parachain fully decentralized. However, if the parachain is unable to produce block, the parachain manager will be able to replace the wasm and genesis without the consent of the collective.

-

It is considered this risk is tolerable as it requires the wasm/genesis to be invalid at first place. It is not yet practically possible to develop a parachain without any centralized risk currently.

-

Another case is that a parachain team may decide to use a crowdloan to help secure a slot lease. Previously, creating a crowdloan would lock a parachain. This means crowdloan participants will know exactly the genesis of the parachain for the crowdloan they are participating in. However, this actually provides little assurance to crowdloan participants. For example, if the genesis block is determined before a crowdloan is started, it is not possible to have an on-chain mechanism to enforce reward distributions for crowdloan participants. They always have to rely on the parachain team to fulfill the promise after the parachain is live.

-

Existing operational parachains will not be impacted.

-

Testing, Security, and Privacy

-

The implementation of this RFC will be tested on testnets (Rococo and Westend) first.

-

An audit maybe required to ensure the implementation does not introduce unwanted side effects.

-

There is no privacy related concerns.

-

Performance

-

This RFC should not introduce any performance impact.

-

Ergonomics

-

This RFC should improve the developer experiences for new and existing parachain teams

-

Compatibility

-

This RFC is fully compatible with existing interfaces.

-

Prior Art and References

-
    -
  • Parachain Slot Extension Story: https://github.com/paritytech/polkadot/issues/4758
  • -
  • Allow parachain to renew lease without actually run another parachain: https://github.com/paritytech/polkadot/issues/6685
  • -
  • Always treat parachain that never produced block for a significant amount of time as unlocked: https://github.com/paritytech/polkadot/issues/7539
  • -
-

Unresolved Questions

-

None at this stage.

- -

This RFC is only intended to be a short term solution. Slots will be removed in future and lock mechanism is likely going to be replaced with a more generalized parachain manage & recovery system in future. Therefore long term impacts of this RFC are not considered.

-
1. https://github.com/paritytech/cumulus/issues/377
2. https://github.com/paritytech/polkadot/issues/6685
3. https://github.com/paritytech/polkadot/blob/994af3de79af25544bf39644844cbe70a7b4d695/runtime/common/src/paras_registrar.rs#L51-L52C15
4. https://github.com/paritytech/polkadot/blob/994af3de79af25544bf39644844cbe70a7b4d695/runtime/common/src/paras_registrar.rs#L473-L475
5. https://github.com/paritytech/polkadot/blob/994af3de79af25544bf39644844cbe70a7b4d695/runtime/common/src/paras_registrar.rs#L333-L340
diff --git a/text/0022-adopt-encointer-runtime.html b/text/0022-adopt-encointer-runtime.html
deleted file mode 100644
index a5b9febc4..000000000
--- a/text/0022-adopt-encointer-runtime.html
+++ /dev/null
@@ -1,284 +0,0 @@

RFC-0022: Adopt Encointer Runtime

Start Date: Aug 22nd 2023
Description: Permanently move the Encointer runtime into the Fellowship runtimes repo.
Authors: @brenzi for Encointer Association, 8000 Zurich, Switzerland

Summary

-

Encointer has been a system chain on Kusama since January 2022 and has been developed and maintained by the Encointer Association. This RFC proposes to treat Encointer like any other system chain and include it in the fellowship repo with this PR.

-

Motivation

-

Encointer does not seek to be in control of its runtime repository. As a decentralized system, the fellowship has a more suitable structure to maintain a system chain runtime repo than the Encointer association does.

-

Also, Encointer aims to update its runtime in batches with other system chains in order to have consistency for interoperability across system chains.

-

Stakeholders

-
    -
  • Fellowship: Will continue to take upon them the review and auditing work for the Encointer runtime, but the process is streamlined with other system chains and therefore less time-consuming compared to the separate repo and CI process we currently have.
  • -
  • Kusama Network: Tokenholders can easily see the changes of all system chains in one place.
  • -
  • Encointer Association: Further decentralization of the Encointer Network necessities like devops.
  • -
  • Encointer devs: Being able to work directly in the Fellowship runtimes repo to streamline and synergize with other developers.
  • -
-

Explanation

-

Our PR has all details about our runtime and how we would move it into the fellowship repo.

-

Noteworthy: All Encointer-specific pallets will still be located in encointer's repo for the time being: https://github.com/encointer/pallets

-

It will still be the duty of the Encointer team to keep its runtime up to date and provide adequate test fixtures. Frequent dependency bumps with Polkadot releases would be beneficial for interoperability and could be streamlined with other system chains but that will not be a duty of fellowship. Whenever possible, all system chains could be upgraded jointly (including Encointer) with a batch referendum.

-

Further notes:

-
    -
  • Encointer will publish all its crates crates.io
  • -
  • Encointer does not carry out external auditing of its runtime nor pallets. It would be beneficial but not a requirement from our side if Encointer could join the auditing process of other system chains.
  • -
-

Drawbacks

-

Other than all other system chains, development and maintenance of the Encointer Network is mainly financed by the KSM Treasury and possibly the DOT Treasury in the future. Encointer is dedicated to maintaining its network and runtime code for as long as possible, but there is a dependency on funding which is not in the hands of the fellowship. The only risk in the context of funding, however, is that the Encointer runtime will see less frequent updates if there's less funding.

-

Testing, Security, and Privacy

-

No changes to the existing system are proposed. Only changes to how maintenance is organized.

-

Performance, Ergonomics, and Compatibility

-

No changes

-

Prior Art and References

-

Existing Encointer runtime repo

-

Unresolved Questions

-

None identified

- -

More info on Encointer: encointer.org

diff --git a/text/0032-minimal-relay.html b/text/0032-minimal-relay.html
deleted file mode 100644
index 9aae130f3..000000000
--- a/text/0032-minimal-relay.html
+++ /dev/null
@@ -1,452 +0,0 @@

RFC-0032: Minimal Relay

Start Date: 20 September 2023
Description: Proposal to minimise Relay Chain functionality.
Authors: Joe Petrowski, Gavin Wood

Summary

-

The Relay Chain contains most of the core logic for the Polkadot network. While this was necessary -prior to the launch of parachains and development of XCM, most of this logic can exist in -parachains. This is a proposal to migrate several subsystems into system parachains.

-

Motivation

-

Polkadot's scaling approach allows many distinct state machines (known generally as parachains) to -operate with common guarantees about the validity and security of their state transitions. Polkadot -provides these common guarantees by executing the state transitions on a strict subset (a backing -group) of the Relay Chain's validator set.

-

However, state transitions on the Relay Chain need to be executed by all validators. If any of -those state transitions can occur on parachains, then the resources of the complement of a single -backing group could be used to offer more cores. As in, they could be offering more coretime (a.k.a. -blockspace) to the network.

-

By minimising state transition logic on the Relay Chain by migrating it into "system chains" -- a -set of parachains that, with the Relay Chain, make up the Polkadot protocol -- the Polkadot -Ubiquitous Computer can maximise its primary offering: secure blockspace.

-

Stakeholders

-
    -
  • Parachains that interact with affected logic on the Relay Chain;
  • -
  • Core protocol and XCM format developers;
  • -
  • Tooling, block explorer, and UI developers.
  • -
-

Explanation

-

The following pallets and subsystems are good candidates to migrate from the Relay Chain:

-
    -
  • Identity
  • -
  • Balances
  • -
  • Staking -
      -
    • Staking
    • -
    • Election Provider
    • -
    • Bags List
    • -
    • NIS
    • -
    • Nomination Pools
    • -
    • Fast Unstake
    • -
    -
  • -
  • Governance -
      -
    • Treasury and Bounties
    • -
    • Conviction Voting
    • -
    • Referenda
    • -
    -
  • -
-

Note: The Auctions and Crowdloan pallets will be replaced by Coretime, its system chain and -interface described in RFC-1 and RFC-5, respectively.

-

Migrations

-

Some subsystems are simpler to move than others. For example, migrating Identity can be done by -simply preventing state changes in the Relay Chain, using the Identity-related state as the genesis -for a new chain, and launching that new chain with the genesis and logic (pallet) needed.

-

Other subsystems cannot experience any downtime like this because they are essential to the -network's functioning, like Staking and Governance. However, these can likely coexist with a -similarly-permissioned system chain for some time, much like how "Gov1" and "OpenGov" coexisted at -the latter's introduction.

-

Specific migration plans will be included in release notes of runtimes from the Polkadot Fellowship -when beginning the work of migrating a particular subsystem.

-

Interfaces

-

The Relay Chain, in many cases, will still need to interact with these subsystems, especially -Staking and Governance. These subsystems will require making some APIs available either via -dispatchable calls accessible to XCM Transact or possibly XCM Instructions in future versions.

-

For example, Staking provides a pallet-API to register points (e.g. for block production) and offences (e.g. equivocation). With Staking in a system chain, that chain would need to allow the Relay Chain to update validator points periodically so that it can correctly calculate rewards.
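As a hedged illustration of the kind of interface meant here (the trait and method names below are assumptions for the sketch, not an existing API), the Staking chain could expose something like:

```rust
type ValidatorId = [u8; 32];

/// Hypothetical interface the Staking system chain could expose to the Relay
/// Chain, e.g. as dispatchable calls reachable through XCM Transact.
trait RelayStakingInterface {
    /// Credit era points for block production or other rewardable work.
    fn note_points(&mut self, points: Vec<(ValidatorId, u32)>);

    /// Report an offence (e.g. equivocation) together with its slash fraction,
    /// expressed here in parts per billion.
    fn report_offence(&mut self, offender: ValidatorId, slash_fraction_ppb: u32);
}
```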

-

A pub-sub protocol may also lend itself to these types of interactions.

-

Functional Architecture

-

This RFC proposes that system chains form individual components within the system's architecture and -that these components are chosen as functional groups. This approach allows synchronous -composibility where it is most valuable, but isolates logic in such a way that provides flexibility -for optimal resource allocation (see Resource Allocation). For the -subsystems discussed in this RFC, namely Identity, Governance, and Staking, this would mean:

-
    -
  • People Chain, for identity and personhood logic, providing functionality related to the attributes -of single actors;
  • -
  • Governance Chain, for governance and system collectives, providing functionality for pluralities -to express their voices within the system;
  • -
  • Staking Chain, for Polkadot's staking system, including elections, nominations, reward -distribution, slashing, and non-interactive staking; and
  • -
  • Asset Hub, for fungible and non-fungible assets, including DOT.
  • -
-

The Collectives chain and Asset Hub already exist, so implementation of this RFC would mean two new -chains (People and Staking), with Governance moving to the currently-known-as Collectives chain -and Asset Hub being increasingly used for DOT over the Relay Chain.

-

Note that one functional group will likely include many pallets, as we do not know how pallet -configurations and interfaces will evolve over time.

-

Resource Allocation

-

The system should minimise wasted blockspace. These three (and other) subsystems may not each -consistently require a dedicated core. However, core scheduling is far more agile than functional -grouping. While migrating functionality from one chain to another can be a multi-month endeavour, -cores can be rescheduled almost on-the-fly.

-

Migrations are also breaking changes to some use cases, for example other parachains that need to -route XCM programs to particular chains. It is thus preferable to do them a single time in migrating -off the Relay Chain, reducing the risk of needing parachain splits in the future.

-

Therefore, chain boundaries should be based on functional grouping where synchronous composibility -is most valuable; and efficient resource allocation should be managed by the core scheduling -protocol.

-

Many of these system chains (including Asset Hub) could often share a single core in a semi-round -robin fashion (the coretime may not be uniform). When needed, for example during NPoS elections or -slashing events, the scheduler could allocate a dedicated core to the chain in need of more -throughput.

-

Deployment

-

Actual migrations should happen based on some prioritization. This RFC proposes to migrate Identity, -Staking, and Governance as the systems to work on first. A brief discussion on the factors involved -in each one:

-

Identity

-

Identity will be one of the simpler pallets to migrate into a system chain, as its logic is largely -self-contained and it does not "share" balances with other subsystems. As in, any DOT is held in -reserve as a storage deposit and cannot be simultaneously used the way locked DOT can be locked for -multiple purposes.

-

Therefore, migration can take place as follows:

-
    -
  1. The pallet can be put in a locked state, blocking most calls to the pallet and preventing updates -to identity info.
  2. -
  3. The frozen state will form the genesis of a new system parachain.
  4. -
  5. Functions will be added to the pallet that allow migrating the deposit to the parachain. The -parachain deposit is on the order of 1/100th of the Relay Chain's. Therefore, this will result in -freeing up Relay State as well as most of each user's reserved balance.
  6. -
  7. The pallet and any leftover state can be removed from the Relay Chain.
  8. -
-

User interfaces that render Identity information will need to source their data from the new system -parachain.

-

Note: In the future, it may make sense to decommission Kusama's Identity chain and do all account -identities via Polkadot's. However, the Kusama chain will serve as a dress rehearsal for Polkadot.

-

Staking

-

Migrating the staking subsystem will likely be the most complex technical undertaking, as the -Staking system cannot stop (the system MUST always have a validator set) nor run in parallel (the -system MUST have only one validator set) and the subsystem itself is made up of subsystems in the -runtime and the node. For example, if offences are reported to the Staking parachain, validator -nodes will need to submit their reports there.

-

Handling balances also introduces complications. The same balance can be used for staking and -governance. Ideally, all balances stay on Asset Hub, and only report "credits" to system chains like -Staking and Governance. However, staking mutates balances by issuing new DOT on era changes and for -rewards. Allowing DOT directly on the Staking parachain would simplify staking changes.

-

Given the complexity, it would be pragmatic to include the Balances pallet in the Staking parachain -in its first version. Any other systems that use overlapping locks, most notably governance, will -need to recognise DOT held on both Asset Hub and the Staking parachain.

-

There is more discussion about staking in a parachain in Moving Staking off the Relay -Chain.

-

Governance

-

Migrating governance into a parachain will be less complicated than staking. Most of the primitives -needed for the migration already exist. The Treasury supports spending assets on remote chains and -collectives like the Polkadot Technical Fellowship already function in a parachain. That is, XCM -already provides the ability to express system origins across chains.

-

Therefore, actually moving the governance logic into a parachain will be simple. It can run in -parallel with the Relay Chain's governance, which can be removed when the parachain has demonstrated -sufficient functionality. It's possible that the Relay Chain maintain a Root-level emergency track -for situations like parachains -halting.

-

The only complication arises from the fact that both Asset Hub and the Staking parachain will have -DOT balances; therefore, the Governance chain will need to be able to credit users' voting power -based on balances from both locations. This is not expected to be difficult to handle.

-

Kusama

-

Although Polkadot and Kusama both have system chains running, they have to date only been used for -introducing new features or bodies, for example fungible assets or the Technical Fellowship. There -has not yet been a migration of logic/state from the Relay Chain into a parachain. Given its more -realistic network conditions than testnets, Kusama is the best stage for rehearsal.

-

In the case of identity, Polkadot's system may be sufficient for the ecosystem. Therefore, Kusama -should be used to test the migration of logic and state from Relay Chain to parachain, but these -features may be (at the will of Kusama's governance) dropped from Kusama entirely after a successful -migration on Polkadot.

-

For Governance, Polkadot already has the Collectives parachain, which would become the Governance -parachain. The entire group of DOT holders is itself a collective (the legislative body), and -governance provides the means to express voice. Launching a Kusama Governance chain would be -sensible to rehearse a migration.

-

The Staking subsystem is perhaps where Kusama would provide the most value in its canary capacity. -Staking is the subsystem most constrained by PoV limits. Ensuring that elections, payouts, session -changes, offences/slashes, etc. work in a parachain on Kusama -- with its larger validator set -- -will give confidence to the chain's robustness on Polkadot.

-

Drawbacks

-

These subsystems will have reduced resources in cores than on the Relay Chain. Staking in particular -may require some optimizations to deal with constraints.

-

Testing, Security, and Privacy

-

Standard audit/review requirements apply. More powerful multi-chain integration test tools would be -useful in developement.

-

Performance, Ergonomics, and Compatibility

-

Describe the impact of the proposal on the exposed functionality of Polkadot.

-

Performance

-

This is an optimization. The removal of public/user transactions on the Relay Chain ensures that its -primary resources are allocated to system performance.

-

Ergonomics

-

This proposal alters very little for coretime users (e.g. parachain developers). Application -developers will need to interact with multiple chains, making ergonomic light client tools -particularly important for application development.

-

For existing parachains that interact with these subsystems, they will need to configure their -runtimes to recognize the new locations in the network.

-

Compatibility

-

Implementing this proposal will require some changes to pallet APIs and/or a pub-sub protocol. -Application developers will need to interact with multiple chains in the network.

-

Prior Art and References

- -

Unresolved Questions

-

There remain some implementation questions, like how to use balances for both Staking and -Governance. See, for example, Moving Staking off the Relay -Chain.

- -

Ideally the Relay Chain becomes transactionless, such that not even balances are represented there. -With Staking and Governance off the Relay Chain, this is not an unreasonable next step.

-

With Identity on Polkadot, Kusama may opt to drop its People Chain.

diff --git a/text/0042-extrinsics-state-version.html b/text/0042-extrinsics-state-version.html
deleted file mode 100644
index f4f8cc4ac..000000000
--- a/text/0042-extrinsics-state-version.html
+++ /dev/null
@@ -1,320 +0,0 @@

RFC-0042: Add System version that replaces StateVersion on RuntimeVersion

Start Date: 25th October 2023
Description: Add System Version and remove State Version
Authors: Vedhavyas Singareddi

Summary

-

At the moment, we have the state_version field on RuntimeVersion that determines which state version is used for the Storage. We have a use case where we want the extrinsics root to be derived using StateVersion::V1. Without defining a new field under RuntimeVersion, we would like to propose adding system_version, which can be used to derive both the storage and extrinsic state version.

-

Motivation

-

Since the extrinsic state version is always StateVersion::V0, deriving extrinsic root requires full extrinsic data. -This would be problematic when we need to verify the extrinsics root if the extrinsic sizes are bigger. This problem is -further explored in https://github.com/polkadot-fellows/RFCs/issues/19

-

For Subspace project, we have an enshrined rollups called Domain with optimistic verification and Fraud proofs are -used to detect malicious behavior. -One of the Fraud proof variant is to derive Domain block extrinsic root on Subspace's consensus chain. -Since StateVersion::V0 requires full extrinsic data, we are forced to pass all the extrinsics through the Fraud proof. -One of the main challenge here is some extrinsics could be big enough that this variant of Fraud proof may not be -included in the Consensus block due to Block's weight restriction. -If the extrinsic root is derived using StateVersion::V1, then we do not need to pass the full extrinsic data but -rather at maximum, 32 byte of extrinsic data.

-

Stakeholders

-
    -
  • Technical Fellowship, in its role of maintaining system runtimes.
  • -
-

Explanation

-

In order to use a project-specific StateVersion for extrinsic roots, we proposed an implementation that introduced a parameter to frame_system::Config, but that unfortunately did not feel correct. So we would like to propose adding this change to the RuntimeVersion object. The system version, if introduced, will be used to derive both the storage and extrinsic state version. If the system version is 0, then both Storage and Extrinsic State version would use V0. If the system version is 1, then the Storage State version would use V1 and the Extrinsic State version would use V0. If the system version is 2, then both Storage and Extrinsic State version would use V1.
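A small sketch of that mapping (the StateVersion enum here merely stands in for the real type; treating values above 2 like 2 is an assumption of the sketch):

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
enum StateVersion {
    V0,
    V1,
}

/// Returns (storage_state_version, extrinsic_state_version) for a given
/// system_version, following the rules described above.
fn state_versions(system_version: u8) -> (StateVersion, StateVersion) {
    match system_version {
        0 => (StateVersion::V0, StateVersion::V0),
        1 => (StateVersion::V1, StateVersion::V0),
        _ => (StateVersion::V1, StateVersion::V1),
    }
}
```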

-

If implemented, the new RuntimeVersion definition would look something similar to

-
/// Runtime version (Rococo).
#[sp_version::runtime_version]
pub const VERSION: RuntimeVersion = RuntimeVersion {
	spec_name: create_runtime_str!("rococo"),
	impl_name: create_runtime_str!("parity-rococo-v2.0"),
	authoring_version: 0,
	spec_version: 10020,
	impl_version: 0,
	apis: RUNTIME_API_VERSIONS,
	transaction_version: 22,
	system_version: 1,
};

Drawbacks

-

There should be no drawbacks, as it would replace state_version with the same behavior, but documentation should be updated so that chains know which system_version to use.

-

Testing, Security, and Privacy

-

AFAIK, should not have any impact on the security or privacy.

-

Performance, Ergonomics, and Compatibility

-

These changes should be compatible for existing chains if they use the state_version value for system_version.

-

Performance

-

I do not believe there is any performance hit with this change.

-

Ergonomics

-

This does not break any exposed Apis.

-

Compatibility

-

This change should not break any compatibility.

-

Prior Art and References

-

We proposed introducing a similar change by introducing a -parameter to frame_system::Config but did not feel that -is the correct way of introducing this change.

-

Unresolved Questions

-

I do not have any specific questions about this change at the moment.

- -

IMO, this change is pretty self-contained and there won't be any future work necessary.

diff --git a/text/0043-storage-proof-size-hostfunction.html b/text/0043-storage-proof-size-hostfunction.html
deleted file mode 100644
index ae2eb89f9..000000000
--- a/text/0043-storage-proof-size-hostfunction.html
+++ /dev/null
@@ -1,287 +0,0 @@

RFC-0043: Introduce storage_proof_size Host Function for Improved Parachain Block Utilization

Start Date: 30 October 2023
Description: Host function to provide the storage proof size to runtimes.
Authors: Sebastian Kunert

Summary

-

This RFC proposes a new host function for parachains, storage_proof_size. It shall provide the size of the currently recorded storage proof to the runtime. Runtime authors can use the proof size to improve block utilization by retroactively reclaiming unused storage weight.

-

Motivation

-

The number of extrinsics that are included in a parachain block is limited by two constraints: execution time and proof size. FRAME weights cover both concepts, and block-builders use them to decide how many extrinsics to include in a block. However, these weights are calculated ahead of time by benchmarking on a machine with reference hardware. The execution-time properties of the state-trie and its storage items are unknown at benchmarking time. Therefore, we make some assumptions about the state-trie:

-
    -
  • Trie Depth: We assume a trie depth to account for intermediary nodes.
  • -
  • Storage Item Size: We make a pessimistic assumption based on the MaxEncodedLen trait.
  • -
-

These pessimistic assumptions lead to an overestimation of storage weight, negatively impacting block utilization on parachains.

-

In addition, the current model does not account for multiple accesses to the same storage items. While these repetitive accesses will not increase storage-proof size, the runtime-side weight monitoring will account for them multiple times. Since the proof size is completely opaque to the runtime, we can not implement retroactive storage weight correction.

-

A solution must provide a way for the runtime to track the exact storage-proof size consumed on a per-extrinsic basis.

-

Stakeholders

-
    -
  • Parachain Teams: They MUST include this host function in their runtime and node.
  • -
  • Light-client Implementors: They SHOULD include this host function in their runtime and node.
  • -
-

Explanation

-

This RFC proposes a new host function that exposes the storage-proof size to the runtime. As a result, runtimes can implement storage weight reclaiming mechanisms that improve block utilization.

-

This RFC proposes the following host function signature:

-
fn ext_storage_proof_size_version_1() -> u64;

The host function MUST return an unsigned 64-bit integer value representing the current proof size. In block-execution and block-import contexts, this function MUST return the current size of the proof. To achieve this, parachain node implementors need to enable proof recording for block imports. In other contexts, this function MUST return 18446744073709551615 (u64::MAX), which represents disabled proof recording.
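A tiny sketch of how runtime code might interpret that sentinel value (the storage_proof_size() wrapper name is an assumption of this sketch):

```rust
/// Assumed thin wrapper around the proposed host function.
fn storage_proof_size() -> u64 {
    u64::MAX
}

/// Returns None when proof recording is disabled in the current context.
fn current_proof_size() -> Option<u64> {
    match storage_proof_size() {
        u64::MAX => None,
        size => Some(size),
    }
}
```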

-

Performance, Ergonomics, and Compatibility

-

Performance

-

Parachain nodes need to enable proof recording during block import to correctly implement the proposed host function. Benchmarking conducted with balance transfers has shown a performance reduction of around 0.6% when proof recording is enabled.

-

Ergonomics

-

The host function proposed in this RFC allows parachain runtime developers to keep track of the proof size. Typical usage patterns would be to keep track of the overall proof size or the difference between subsequent calls to the host function.
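For example, a hedged sketch of per-extrinsic tracking via the difference between two calls (again assuming a storage_proof_size() wrapper around the host function):

```rust
fn storage_proof_size() -> u64 {
    // Assumed wrapper around the host function; returns the current proof size.
    0
}

/// Measures how many proof bytes a closure (e.g. a single extrinsic) added,
/// which a runtime could compare against the benchmarked proof-size weight
/// and use to reclaim the unused portion.
fn measure_proof_bytes<R>(f: impl FnOnce() -> R) -> (R, u64) {
    let before = storage_proof_size();
    let result = f();
    let after = storage_proof_size();
    (result, after.saturating_sub(before))
}
```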

-

Compatibility

-

Parachain teams will need to include this host function to upgrade.

-

Prior Art and References

diff --git a/text/0045-nft-deposits-asset-hub.html b/text/0045-nft-deposits-asset-hub.html
deleted file mode 100644
index a84ea7588..000000000
--- a/text/0045-nft-deposits-asset-hub.html
+++ /dev/null
@@ -1,449 +0,0 @@

RFC-0045: Lowering NFT Deposits on Asset Hub

Start Date: 2 November 2023
Description: A proposal to reduce the minimum deposit required for collection creation on the Polkadot and Kusama Asset Hubs.
Authors: Aurora Poppyseed, Just_Luuuu, Viki Val, Joe Petrowski

Summary

-

This RFC proposes changing the current deposit requirements on the Polkadot and Kusama Asset Hub for -creating an NFT collection, minting an individual NFT, and lowering its corresponding metadata and -attribute deposits. The objective is to lower the barrier to entry for NFT creators, fostering a -more inclusive and vibrant ecosystem while maintaining network integrity and preventing spam.

-

Motivation

-

The current deposit of 10 DOT for collection creation (along with 0.01 DOT for item deposit and 0.2 -DOT for metadata and attribute deposits) on the Polkadot Asset Hub and 0.1 KSM on Kusama Asset Hub -presents a significant financial barrier for many NFT creators. By lowering the deposit -requirements, we aim to encourage more NFT creators to participate in the Polkadot NFT ecosystem, -thereby enriching the diversity and vibrancy of the community and its offerings.

-

The initial introduction of a 10 DOT deposit was an arbitrary starting point that does not consider -the actual storage footprint of an NFT collection. This proposal aims to adjust the deposit first to -a value based on the deposit function, which calculates a deposit based on the number of keys -introduced to storage and the size of corresponding values stored.

-

Further, it suggests a direction for a future of calculating deposits variably based on adoption -and/or market conditions. There is a discussion on tradeoffs of setting deposits too high or too -low.

-

Requirements

-
    -
  • Deposits SHOULD be derived from the deposit function, adjusted by a corresponding pricing mechanism.
  • -
-

Stakeholders

-
    -
  • NFT Creators: Primary beneficiaries of the proposed change, particularly those who found the -current deposit requirements prohibitive.
  • -
  • NFT Platforms: As the facilitator of artists' relations, NFT marketplaces have a vested -interest in onboarding new users and making their platforms more accessible.
  • -
  • dApp Developers: Making the blockspace more accessible will encourage developers to create and -build unique dApps in the Polkadot ecosystem.
  • -
  • Polkadot Community: Stands to benefit from an influx of artists, creators, and diverse NFT -collections, enhancing the overall ecosystem.
  • -
-

Previous discussions have been held within the Polkadot -Forum, with -artists expressing their concerns about the deposit amounts.

-

Explanation

-

This RFC proposes a revision of the deposit constants in the configuration of the NFTs pallet on the Polkadot Asset Hub. The new deposit amounts would be determined by a standard deposit formula.
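The standard formula charges per storage item and per byte stored. A simplified sketch (the constants below are placeholders for illustration, not the actual Asset Hub values):

```rust
type Balance = u128;

// Placeholder per-item and per-byte prices; the real runtime derives these
// from its currency constants, at roughly 1/100th of the Relay Chain rates.
const ITEM_DEPOSIT: Balance = 1_000_000_000;
const BYTE_DEPOSIT: Balance = 10_000_000;

/// Deposit charged for `items` new storage keys holding `bytes` bytes in total.
const fn system_para_deposit(items: u32, bytes: u32) -> Balance {
    items as Balance * ITEM_DEPOSIT + bytes as Balance * BYTE_DEPOSIT
}
```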

-

As of v1.1.1, the Collection Deposit is 10 DOT and the Item Deposit is 0.01 DOT (see -here).

-

Based on the storage footprint of these items, this RFC proposes changing them to:

-
pub const NftsCollectionDeposit: Balance = system_para_deposit(1, 130);
pub const NftsItemDeposit: Balance = system_para_deposit(1, 164);

This results in the following deposits (calculated using this repository):

-

Polkadot

-
Name                  Current Rate (DOT)   Calculated with Function (DOT)
collectionDeposit     10                   0.20064
itemDeposit           0.01                 0.20081
metadataDepositBase   0.20129              0.20076
attributeDepositBase  0.2                  0.2
-
-

Similarly, the prices for Kusama were calculated as:

-

Kusama:

-
Name                  Current Rate (KSM)   Calculated with Function (KSM)
collectionDeposit     0.1                  0.006688
itemDeposit           0.001                0.000167
metadataDepositBase   0.006709666617       0.0006709666617
attributeDepositBase  0.00666666666        0.000666666666
-
-

Enhanced Approach to Further Lower Barriers for Entry

-

This RFC proposes further lowering these deposits below the rate normally charged for such a storage footprint. This is based on the economic argument that sub-rate deposits are a subsidy for growth and adoption of a specific technology. If the NFT functionality on Polkadot gains adoption, it makes it more attractive for future entrants, who would be willing to pay the non-subsidized rate because of the existing community.

-

Proposed Rate Adjustments

-
parameter_types! {
	pub const NftsCollectionDeposit: Balance = system_para_deposit(1, 130);
	pub const NftsItemDeposit: Balance = system_para_deposit(1, 164) / 40;
	pub const NftsMetadataDepositBase: Balance = system_para_deposit(1, 129) / 10;
	pub const NftsAttributeDepositBase: Balance = system_para_deposit(1, 0) / 10;
	pub const NftsDepositPerByte: Balance = system_para_deposit(0, 1);
}

This adjustment would result in the following DOT and KSM deposit values:

-
Name                  Proposed Rate Polkadot   Proposed Rate Kusama
collectionDeposit     0.20064 DOT              0.006688 KSM
itemDeposit           0.005 DOT                0.000167 KSM
metadataDepositBase   0.002 DOT                0.0006709666617 KSM
attributeDepositBase  0.002 DOT                0.000666666666 KSM
-
-

Short- and Long-Term Plans

-

The plan presented above is recommended as an immediate step to make Polkadot a more attractive -place to launch NFTs, although one would note that a forty fold reduction in the Item Deposit is -just as arbitrary as the value it was replacing. As explained earlier, this is meant as a subsidy to -gain more momentum for NFTs on Polkadot.

-

In the long term, an implementation should account for what should happen to the deposit rates -assuming that the subsidy is successful and attracts a lot of deployments. Many options are -discussed in the Addendum.

-

The deposit should be calculated as a function of the number of existing collections with maximum -DOT and stablecoin values limiting the amount. With asset rates available via the Asset Conversion -pallet, the system could take the lower value required. A sigmoid curve would make sense for this -application to avoid sudden rate changes, as in:

-

$$ \mathrm{minDeposit} + \frac{\min(\mathrm{DotDeposit}, \mathrm{StableDeposit}) - \mathrm{minDeposit}}{1 + e^{a - b \cdot x}} $$

-

where the constant a moves the inflection to lower or higher x values, the constant b adjusts -the rate of the deposit increase, and the independent variable x is the number of collections or -items, depending on application.
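A small numeric sketch of that curve (all constants below are chosen arbitrarily for illustration):

```rust
/// Sigmoid-shaped deposit between `min_deposit` and the lower of the DOT- and
/// stablecoin-denominated caps, as a function of the collection/item count `x`.
fn dynamic_deposit(min_deposit: f64, dot_cap: f64, stable_cap: f64, a: f64, b: f64, x: f64) -> f64 {
    let cap = dot_cap.min(stable_cap);
    min_deposit + (cap - min_deposit) / (1.0 + (a - b * x).exp())
}

fn main() {
    // With few collections the deposit stays near `min_deposit`; as `x` grows
    // past the inflection point (around a / b) it approaches the cap.
    for x in [0.0, 100.0, 1_000.0, 10_000.0] {
        println!("x = {x}: deposit = {:.4}", dynamic_deposit(0.05, 0.2, 0.25, 5.0, 0.001, x));
    }
}
```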

-

Drawbacks

-

Modifying deposit requirements necessitates a balanced assessment of the potential drawbacks. -Highlighted below are cogent points extracted from the discourse on the Polkadot Forum -conversation, -which provide critical perspectives on the implications of such changes.

-

Adjusting NFT deposit requirements on Polkadot and Kusama Asset Hubs involves key challenges:

-
    -
  1. -

    State Growth and Technical Concerns: Lowering deposit requirements can lead to increased -blockchain state size, potentially causing state bloat. This growth needs to be managed to -prevent strain on the network's resources and maintain operational efficiency. As stated earlier, -the deposit levels proposed here are intentionally low with the thesis that future participants -would pay the standard rate.

    -
  2. -
  3. -

    Network Security and Market Response: Adapting to the cryptocurrency market's volatility is -crucial. The mechanism for setting deposit amounts must be responsive yet stable, avoiding undue -complexity for users.

    -
  4. -
  5. -

    Economic Impact on Previous Stakeholders: The change could have varied economic effects on -previous (before the change) creators, platform operators, and investors. Balancing these -interests is essential to ensure the adjustment benefits the ecosystem without negatively -impacting its value dynamics. However in the particular case of Polkadot and Kusama Asset Hub -this does not pose a concern since there are very few collections currently and thus previous -stakeholders wouldn't be much affected. As of date 9th January 2024 there are 42 collections on -Polkadot Asset Hub and 191 on Kusama Asset Hub with a relatively low volume.

    -
  6. -
-

Testing, Security, and Privacy

-

Security concerns

-

As noted above, state bloat is a security concern. In the case of abuse, governance could adapt by -increasing deposit rates and/or using forceDestroy on collections agreed to be spam.

-

Performance, Ergonomics, and Compatibility

-

Performance

-

The primary performance consideration stems from the potential for state bloat due to increased -activity from lower deposit requirements. It's vital to monitor and manage this to avoid any -negative impact on the chain's performance. Strategies for mitigating state bloat, including -efficient data management and periodic reviews of storage requirements, will be essential.

-

Ergonomics

-

The proposed change aims to enhance the user experience for artists, traders, and utilizers of -Kusama and Polkadot Asset Hubs, making Polkadot and Kusama more accessible and user-friendly.

-

Compatibility

-

The change does not impact compatibility as a redeposit function is already implemented.

-

Unresolved Questions

-

If this RFC is accepted, there should not be any unresolved questions regarding how to adapt the -implementation of deposits for NFT collections.

-

Addendum

-

Several innovative proposals have been considered to enhance the network's adaptability and manage -deposit requirements more effectively. The RFC recommends a mixture of the function-based model and -the stablecoin model, but some tradeoffs of each are maintained here for those interested.

-

Enhanced Weak Governance Origin Model

-

The concept of a weak governance origin, controlled by a consortium like a system collective, has -been proposed. This model would allow for dynamic adjustments of NFT deposit requirements in -response to market conditions, adhering to storage deposit norms.

-
    -
  • Responsiveness: To address concerns about delayed responses, the model could incorporate -automated triggers based on predefined market indicators, ensuring timely adjustments.
  • -
  • Stability vs. Flexibility: Balancing stability with the need for flexibility is challenging. -To mitigate the issue of frequent changes in DOT-based deposits, a mechanism for gradual and -predictable adjustments could be introduced.
  • -
  • Scalability: The model's scalability is a concern, given the numerous deposits across the -system. A more centralized approach to deposit management might be needed to avoid constant, -decentralized adjustments.
  • -
-

Function-Based Pricing Model

-

Another proposal is to use a mathematical function to regulate deposit prices, initially allowing -low prices to encourage participation, followed by a gradual increase to prevent network bloat.

-
    -
  • Choice of Function: A logarithmic or sigmoid function is favored over an exponential one, as -these functions increase prices at a rate that encourages participation while preventing -prohibitive costs.
  • -
  • Adjustment of Constants: To finely tune the pricing rise, one of the function's constants -could correlate with the total number of NFTs on Asset Hub. This would align the deposit -requirements with the actual usage and growth of the network.
  • -
-

Linking Deposit to USD(x) Value

-

This approach suggests pegging the deposit value to a stable currency like the USD, introducing -predictability and stability for network users.

-
    -
  • Market Dynamics: One perspective is that fluctuations in native currency value naturally -balance user participation and pricing, deterring network spam while encouraging higher-value -collections. Conversely, there's an argument for allowing broader participation if the DOT/KSM -value increases.
  • -
  • Complexity and Risks: Implementing a USD-based pricing system could add complexity and -potential risks. The implementation needs to be carefully designed to avoid unintended -consequences, such as excessive reliance on external financial systems or currencies.
  • -
-

Each of these proposals offers unique advantages and challenges. The optimal approach may involve a -combination of these ideas, carefully adjusted to address the specific needs and dynamics of the -Polkadot and Kusama networks.

diff --git a/text/0047-assignment-of-availability-chunks.html b/text/0047-assignment-of-availability-chunks.html
deleted file mode 100644
index 64aeba5b6..000000000
--- a/text/0047-assignment-of-availability-chunks.html
+++ /dev/null
@@ -1,494 +0,0 @@

RFC-0047: Assignment of availability chunks to validators

-
- - - -
Start Date: 03 November 2023
Description: An evenly-distributing indirection layer between availability chunks and validators.
Authors: Alin Dima
-
-

Summary

-

Propose a way of permuting the availability chunk indices assigned to validators, in the context of -recovering available data from systematic chunks, with the -purpose of fairly distributing network bandwidth usage.

-

Motivation

-

Currently, the ValidatorIndex is always identical to the ChunkIndex. Since the validator array is only shuffled once per session, naively using the ValidatorIndex as the ChunkIndex would place unreasonable stress on the first N/3 validators during an entire session when favouring availability recovery from systematic chunks.

-

Therefore, the relay chain node needs a deterministic way of evenly distributing the first ~(N_VALIDATORS / 3) -systematic availability chunks to different validators, based on the relay chain block and core. -The main purpose is to ensure fair distribution of network bandwidth usage for availability recovery in general and in -particular for systematic chunk holders.

-

Stakeholders

-

Relay chain node core developers.

-

Explanation

-

Systematic erasure codes

-

An erasure coding algorithm is considered systematic if it preserves the original unencoded data as part of the -resulting code. -The implementation of the erasure coding algorithm used for polkadot's availability data is systematic. -Roughly speaking, the first N_VALIDATORS/3 chunks of data can be cheaply concatenated to retrieve the original data, -without running the resource-intensive and time-consuming reconstruction algorithm.

-

You can find the concatenation procedure of systematic chunks for polkadot's erasure coding algorithm -here

-

In a nutshell, it performs a column-wise concatenation with 2-byte chunks. -The output could be zero-padded at the end, so scale decoding must be aware of the expected length in bytes and ignore -trailing zeros (this assertion is already being made for regular reconstruction).

-
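A rough sketch of that concatenation, assuming the first k (systematic) chunks are available in order and all have the same even length; `expected_len` stands for the data length known from the candidate receipt and is only named here for illustration:

```rust
/// Column-wise concatenation of 2-byte words across the systematic chunks,
/// followed by truncation of the zero padding added by the encoder.
fn concat_systematic_chunks(chunks: &[Vec<u8>], expected_len: usize) -> Vec<u8> {
    let words_per_chunk = chunks[0].len() / 2;
    let mut out = Vec::with_capacity(chunks.len() * chunks[0].len());
    for word in 0..words_per_chunk {
        for chunk in chunks {
            // Take the `word`-th 2-byte column entry of every chunk, in order.
            out.extend_from_slice(&chunk[word * 2..word * 2 + 2]);
        }
    }
    // SCALE decoding must not see the trailing zero padding.
    out.truncate(expected_len);
    out
}
```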

Availability recovery at present

-

According to the polkadot protocol spec:

-
-

A validator should request chunks by picking peers randomly and must recover at least f+1 chunks, where -n=3f+k and k in {1,2,3}.

-
-

For parity's polkadot node implementation, the process was further optimised. At this moment, it works differently based -on the estimated size of the available data:

-

(a) for small PoVs (up to 128 KiB), sequentially try requesting the unencoded data from the backing group, in a random order. If this fails, fall back to option (b).

-

(b) for large PoVs (over 128 KiB), launch N parallel requests for the erasure-coded chunks (currently, N has an upper limit of 50), until enough chunks have been recovered. Validators are tried in a random order. Then, reconstruct the original data.

-

All options require that after reconstruction, validators then re-encode the data and re-create the erasure chunks trie -in order to check the erasure root.

-

Availability recovery from systematic chunks

-

As part of the effort of -increasing polkadot's resource efficiency, scalability and performance, -work is under way to modify the Availability Recovery protocol by leveraging systematic chunks. See -this comment for preliminary -performance results.

-

In this scheme, the relay chain node will first attempt to retrieve the ~N/3 systematic chunks from the validators that -should hold them, before falling back to recovering from regular chunks, as before.

-

A re-encoding step is still needed for verifying the erasure root, so the erasure coding overhead cannot be completely -brought down to 0.

-

Not being able to retrieve even one systematic chunk would make systematic reconstruction impossible. Therefore, backers -can be used as a backup to retrieve a couple of missing systematic chunks, before falling back to retrieving regular -chunks.

-

Chunk assignment function

-

Properties

-

The function that decides the chunk index for a validator will be parameterized by at least -(validator_index, core_index) -and have the following properties:

-
1. deterministic
2. relatively quick to compute and resource-efficient.
3. when considering a fixed core_index, the function should describe a permutation of the chunk indices
4. the validators that map to the first N/3 chunk indices should have as little overlap as possible for different cores.
-

In other words, we want a uniformly distributed, deterministic mapping from ValidatorIndex to ChunkIndex per core.

-

It's desirable to not embed this function in the runtime, for performance and complexity reasons. -However, this means that the function needs to be kept very simple and with minimal or no external dependencies. -Any change to this function could result in parachains being stalled and needs to be coordinated via a runtime upgrade -or governance call.

-

Proposed function

-

Pseudocode:

-
#![allow(unused)]
-fn main() {
-pub fn get_chunk_index(
-  n_validators: u32,
-  validator_index: ValidatorIndex,
-  core_index: CoreIndex
-) -> ChunkIndex {
-  let threshold = systematic_threshold(n_validators); // Roughly n_validators/3
-  let core_start_pos = core_index * threshold;
-
-  (core_start_pos + validator_index) % n_validators
-}
-}
-
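For example, with n_validators = 10 (threshold = 3), validator 0 is assigned chunk 0 for core 0 but chunk 3 for core 1, and validator 9 is assigned chunk 9 for core 0 but chunk 2 for core 1: for a fixed core the mapping is a permutation of the chunk indices, while the validators holding the first ~n/3 systematic chunks differ from core to core.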

Network protocol

-

The request-response /req_chunk protocol will be bumped to a new version (from v1 to v2). -For v1, the request and response payloads are:

-
#![allow(unused)]
-fn main() {
-/// Request an availability chunk.
-pub struct ChunkFetchingRequest {
-	/// Hash of candidate we want a chunk for.
-	pub candidate_hash: CandidateHash,
-	/// The index of the chunk to fetch.
-	pub index: ValidatorIndex,
-}
-
-/// Receive a requested erasure chunk.
-pub enum ChunkFetchingResponse {
-	/// The requested chunk data.
-	Chunk(ChunkResponse),
-	/// Node was not in possession of the requested chunk.
-	NoSuchChunk,
-}
-
-/// This omits the chunk's index because it is already known by
-/// the requester and by not transmitting it, we ensure the requester is going to use his index
-/// value for validating the response, thus making sure he got what he requested.
-pub struct ChunkResponse {
-	/// The erasure-encoded chunk of data belonging to the candidate block.
-	pub chunk: Vec<u8>,
-	/// Proof for this chunk's branch in the Merkle tree.
-	pub proof: Proof,
-}
-}
-

Version 2 will add an index field to ChunkResponse:

-
#![allow(unused)]
-fn main() {
-#[derive(Debug, Clone, Encode, Decode)]
-pub struct ChunkResponse {
-	/// The erasure-encoded chunk of data belonging to the candidate block.
-	pub chunk: Vec<u8>,
-	/// Proof for this chunk's branch in the Merkle tree.
-	pub proof: Proof,
-	/// Chunk index.
-	pub index: ChunkIndex
-}
-}
-

An important thing to note is that in version 1, the ValidatorIndex value is always equal to the ChunkIndex. -Until the chunk rotation feature is enabled, this will also be true for version 2. However, after the feature is -enabled, this will generally not be true.

-

The requester will send the request to validator with index V. The responder will map the V validator index to the -C chunk index and respond with the C-th chunk. This mapping can be seamless, by having each validator store their -chunk by ValidatorIndex (just as before).

-
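A minimal sketch of the responder side under the new protocol, in the spirit of the pseudocode above; `lookup_chunk` is illustrative, and the responder is assumed to already know the core the candidate occupied:

```rust
// Illustrative responder logic: the requester addresses the validator by its
// ValidatorIndex, and the responder translates that into the chunk it stores.
fn answer_chunk_request(
    n_validators: u32,
    requested: ValidatorIndex,
    core_index: CoreIndex, // known to the responder from the candidate's inclusion
    lookup_chunk: impl Fn(ChunkIndex) -> Option<(Vec<u8>, Proof)>,
) -> ChunkFetchingResponse {
    let chunk_index = get_chunk_index(n_validators, requested, core_index);
    match lookup_chunk(chunk_index) {
        // v2 responses carry the chunk index so the requester can cross-check it.
        Some((chunk, proof)) => ChunkFetchingResponse::Chunk(ChunkResponse {
            chunk,
            proof,
            index: chunk_index,
        }),
        None => ChunkFetchingResponse::NoSuchChunk,
    }
}
```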

The protocol implementation MAY check the returned ChunkIndex against the expected mapping to ensure that -it received the right chunk. -In practice, this is desirable during availability-distribution and systematic chunk recovery. However, regular -recovery may not check this index, which is particularly useful when participating in disputes that don't allow -for easy access to the validator->chunk mapping. See Appendix A for more details.

-

In any case, the requester MUST verify the chunk's proof using the provided index.

-

During availability-recovery, given that the requester may not know (if the mapping is not available) whether the -received chunk corresponds to the requested validator index, it has to keep track of received chunk indices and ignore -duplicates. Such duplicates should be considered the same as an invalid/garbage response (drop it and move on to the -next validator - we can't punish via reputation changes, because we don't know which validator misbehaved).

-

Upgrade path

-

Step 1: Enabling new network protocol

-

In the beginning, both /req_chunk/1 and /req_chunk/2 will be supported, until all validators and -collators have upgraded to use the new version. V1 will be considered deprecated. During this step, the mapping will -still be 1:1 (ValidatorIndex == ChunkIndex), regardless of protocol. -Once all nodes are upgraded, a new release will be cut that removes the v1 protocol. Only once all nodes have upgraded -to this version will step 2 commence.

-

Step 2: Enabling the new validator->chunk mapping

-

Considering that the Validator->Chunk mapping is critical to para consensus, the change needs to be enacted atomically -via governance, only after all validators have upgraded the node to a version that is aware of this mapping, -functionality-wise. -It needs to be explicitly stated that after the governance enactment, validators that run older client versions that -don't support this mapping will not be able to participate in parachain consensus.

-

Additionally, an error will be logged when starting a validator with an older version, after the feature was enabled.

-

On the other hand, collators will not be required to upgrade in this step (but are still required to upgrade for step 1), as regular chunk recovery will work as before, provided that version 1 of the networking protocol has been removed. Note that collators only perform availability-recovery in rare, adversarial scenarios, so it is fine not to optimise for this case and let them upgrade at their own pace.

-

To support enabling this feature via the runtime, we will use the NodeFeatures bitfield of the HostConfiguration -struct (added in https://github.com/paritytech/polkadot-sdk/pull/2177). Adding and enabling a feature -with this scheme does not require a runtime upgrade, but only a referendum that issues a -Configuration::set_node_feature extrinsic. Once the feature is enabled and new configuration is live, the -validator->chunk mapping ceases to be a 1:1 mapping and systematic recovery may begin.

-

Drawbacks

-
    -
  • Getting access to the core_index that used to be occupied by a candidate in some parts of the dispute protocol is -very complicated (See appendix A). This RFC assumes that availability-recovery processes initiated during -disputes will only use regular recovery, as before. This is acceptable since disputes are rare occurrences in practice -and is something that can be optimised later, if need be. Adding the core_index to the CandidateReceipt would -mitigate this problem and will likely be needed in the future for CoreJam and/or Elastic scaling. -Related discussion about updating CandidateReceipt
  • -
  • It's a breaking change that requires all validators and collators to upgrade their node version at least once.
  • -
-

Testing, Security, and Privacy

-

Extensive testing will be conducted - both automated and manual. -This proposal doesn't affect security or privacy.

-

Performance, Ergonomics, and Compatibility

-

Performance

-

This is a necessary data availability optimisation, as reed-solomon erasure coding has proven to be a top consumer of -CPU time in polkadot as we scale up the parachain block size and number of availability cores.

-

With this optimisation, preliminary performance results show that CPU time used for reed-solomon coding/decoding can be -halved and total POV recovery time decrease by 80% for large POVs. See more -here.

-

Ergonomics

-

Not applicable.

-

Compatibility

-

This is a breaking change. See upgrade path section above. -All validators and collators need to have upgraded their node versions before the feature will be enabled via a -governance call.

-

Prior Art and References

-

See comments on the tracking issue and the -in-progress PR

-

Unresolved Questions

-

Not applicable.

Future Directions and Related Material

This enables future optimisations for the performance of availability recovery, such as retrieving batched systematic -chunks from backers/approval-checkers.

-

Appendix A

-

This appendix details the intricacies of getting access to the core index of a candidate in parity's polkadot node.

-

Here, core_index refers to the index of the core that a candidate was occupying while it was pending availability -(from backing to inclusion).

-

Availability-recovery can currently be triggered by the following phases in the polkadot protocol:

-
1. During the approval voting process.
2. By other collators of the same parachain.
3. During disputes.
-

Getting the right core index for a candidate can be troublesome. Here's a breakdown of how different parts of the -node implementation can get access to it:

-
1. The approval-voting process for a candidate begins after observing that the candidate was included. Therefore, the node has easy access to the block where the candidate got included (and also the core that it occupied).

2. The pov_recovery task of the collators starts availability recovery in response to noticing a candidate getting backed, which enables easy access to the core index the candidate started occupying.

3. Disputes may be initiated on a number of occasions:

   3.a. is initiated by the validator as a result of finding an invalid candidate while participating in the approval-voting protocol. In this case, availability-recovery is not needed, since the validator already issued their vote.

   3.b. is initiated by the validator noticing dispute votes recorded on-chain. In this case, we can safely assume that the backing event for that candidate has been recorded and kept in memory.

   3.c. is initiated as a result of getting a dispute statement from another validator. It is possible that the dispute is happening on a fork that was not yet imported by this validator, so the subsystem may not have seen this candidate being backed.
-

A naive attempt of solving 3.c would be to add a new version for the disputes request-response networking protocol. -Blindly passing the core index in the network payload would not work, since there is no way of validating that -the reported core_index was indeed the one occupied by the candidate at the respective relay parent.

-

Another attempt could be to include in the message the relay block hash where the candidate was included. -This information would be used in order to query the runtime API and retrieve the core index that the candidate was -occupying. However, considering it's part of an unimported fork, the validator cannot call a runtime API on that block.

-

Adding the core_index to the CandidateReceipt would solve this problem and would enable systematic recovery for all -dispute scenarios.

diff --git a/text/0048-session-keys-runtime-api.html b/text/0048-session-keys-runtime-api.html
deleted file mode 100644
index 2c1778125..000000000
--- a/text/0048-session-keys-runtime-api.html
+++ /dev/null
@@ -1,333 +0,0 @@

RFC-0048: Generate ownership proof for SessionKeys

-
- - - -
Start Date: 13 November 2023
Description: Change SessionKeys runtime api to support generating an ownership proof for the on chain registration.
Authors: Bastian Köcher
-
-

Summary

-

This RFC proposes a change to the SessionKeys::generate_session_keys runtime api interface. This runtime api is used by validator operators to generate new session keys on a node. The public session keys are then registered manually on chain by the validator operator. Before this RFC, the on chain logic could not ensure that the account setting the public session keys is also in possession of the private session keys. To solve this, the RFC proposes to pass the account id of the account doing the on chain registration to generate_session_keys. Furthermore, this RFC proposes to change the return value of the generate_session_keys function so that it returns not only the public session keys, but also the proof of ownership for the private session keys. The validator operator will then need to send the public session keys and the proof together when registering new session keys on chain.

-

Motivation

-

When submitting new public session keys to the on chain logic, there is no verification of possession of the private session keys. This means that users can register essentially any public session keys on chain. While the on chain logic ensures that there are no duplicate keys, someone could try to prevent others from registering new session keys by setting them first. While this wouldn't bring the "attacker" any advantage (rather disadvantages, such as potential slashes on their account), it could prevent someone from e.g. changing their session keys in the event of a private session key leak.

-

After this RFC, this kind of attack is no longer possible, because the on chain logic can verify that the sending account is in possession of the private session keys.

-

Stakeholders

-
    -
  • Polkadot runtime implementors
  • -
  • Polkadot node implementors
  • -
  • Validator operators
  • -
-

Explanation

-

We are first going to explain the proof format being used:

-
#![allow(unused)]
-fn main() {
-type Proof = (Signature, Signature, ..);
-}
-

The proof is a SCALE-encoded tuple of the signatures produced by each private session key signing the account_id. The actual type of each signature depends on the corresponding session key's cryptographic algorithm. The order of the signatures in the proof is the same as the order of the session keys in the SessionKeys type declared in the runtime.

-
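As an illustration, a minimal sketch of the on-chain verification for a hypothetical runtime whose SessionKeys consist of one sr25519 and one ed25519 key (the actual composition is runtime-specific); it assumes the parity-scale-codec and sp-core crates, and the function name is purely illustrative:

```rust
use codec::Decode;
use sp_core::{ed25519, sr25519, Pair};

// For a runtime with SessionKeys = (sr25519 key, ed25519 key), the proof is the
// tuple of signatures over the SCALE-encoded account id, in the same order.
type OwnershipProof = (sr25519::Signature, ed25519::Signature);

fn verify_ownership_proof(
    encoded_account_id: &[u8],
    sr25519_key: &sr25519::Public,
    ed25519_key: &ed25519::Public,
    encoded_proof: &[u8],
) -> bool {
    let Ok((sig0, sig1)) = OwnershipProof::decode(&mut &encoded_proof[..]) else {
        return false;
    };
    // Each private session key signed the SCALE-encoded account id; verify the
    // signatures in the order the keys appear in `SessionKeys`.
    sr25519::Pair::verify(&sig0, encoded_account_id, sr25519_key)
        && ed25519::Pair::verify(&sig1, encoded_account_id, ed25519_key)
}
```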

The version of the SessionKeys needs to be bumped to 1 to reflect the changes to the -signature of SessionKeys_generate_session_keys:

-
#![allow(unused)]
-fn main() {
-pub struct OpaqueGeneratedSessionKeys {
-	pub keys: Vec<u8>,
-	pub proof: Vec<u8>,
-}
-
-fn SessionKeys_generate_session_keys(account_id: Vec<u8>, seed: Option<Vec<u8>>) -> OpaqueGeneratedSessionKeys;
-}
-

The default calling convention for runtime apis applies: the parameters are passed as a pointer to a SCALE-encoded array plus the length of that array, and the return value is a u64 packing the pointer to the SCALE-encoded return value and its length (array_ptr | length << 32). So, the actual exported function signature looks like:

-
#![allow(unused)]
-fn main() {
-fn SessionKeys_generate_session_keys(array: *const u8, len: usize) -> u64;
-}
-
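For illustration, this is how a host-side caller would unpack that u64 into the pointer and length of the SCALE-encoded OpaqueGeneratedSessionKeys (the function name is illustrative):

```rust
/// Split the packed return value `array_ptr | length << 32` back into its parts.
fn unpack_runtime_api_return(ret: u64) -> (u32, u32) {
    let ptr = (ret & 0xffff_ffff) as u32; // lower 32 bits: pointer into VM memory
    let len = (ret >> 32) as u32;         // upper 32 bits: length in bytes
    (ptr, len)
}
```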

The on chain logic for setting the SessionKeys needs to be changed as well. It -already gets the proof passed as Vec<u8>. This proof needs to be decoded to -the actual Proof type as explained above. The proof and the SCALE encoded -account_id of the sender are used to verify the ownership of the SessionKeys.

-

Drawbacks

-

Validator operators need to pass their account id when rotating their session keys on a node. This will require updating some high level docs and making users familiar with the slightly changed ergonomics.

-

Testing, Security, and Privacy

-

Testing of the new changes only requires passing an appropriate owner for the current testing context. -The changes to the proof generation and verification got audited to ensure they are correct.

-

Performance, Ergonomics, and Compatibility

-

Performance

-

Session key generation is an offchain process and thus doesn't influence the performance of the chain. Verifying the proof is done on chain as part of the transaction logic for setting the session keys. Verifying the proof requires one signature verification per individual session key. As setting the session keys happens quite rarely, it should not influence the overall system performance.

-

Ergonomics

-

The interfaces have been optimized to make it as easy as possible to generate the ownership proof.

-

Compatibility

-

Introduces a new version of the SessionKeys runtime api. Thus, nodes should be updated before -a runtime is enacted that contains these changes otherwise they will fail to generate session keys. -The RPC that exists around this runtime api needs to be updated to support passing the account id -and for returning the ownership proof alongside the public session keys.

-

UIs would need to be updated to support the new RPC and the changed on chain logic.

-

Prior Art and References

-

None.

-

Unresolved Questions

-

None.

Future Directions and Related Material

Substrate implementation of the RFC.

diff --git a/text/0050-fellowship-salaries.html b/text/0050-fellowship-salaries.html
deleted file mode 100644
index 1b4419977..000000000
--- a/text/0050-fellowship-salaries.html
+++ /dev/null
@@ -1,351 +0,0 @@

RFC-0050: Fellowship Salaries

-
- - - -
Start Date: 15 November 2023
Description: Proposal to set rank-based Fellowship salary levels.
Authors: Joe Petrowski, Gavin Wood
-
-

Summary

-

The Fellowship Manifesto states that members should receive a monthly allowance on par with gross -income in OECD countries. This RFC proposes concrete amounts.

-

Motivation

-

One motivation for the Technical Fellowship is to provide an incentive mechanism that can induct and -retain technical talent for the continued progress of the network.

-

In order for members to uphold their commitment to the network, they should receive support to -ensure that their needs are met such that they have the time to dedicate to their work on Polkadot. -Given the high expectations of Fellows, it is reasonable to consider contributions and requirements -on par with a full-time job. Providing a livable wage to those making such contributions makes it -pragmatic to work full-time on Polkadot.

-

Note: Goals of the Fellowship, expectations for each Dan, and conditions for promotion and demotion -are all explained in the Manifesto. This RFC is only to propose concrete values for allowances.

-

Stakeholders

-
    -
  • Fellowship members
  • -
  • Polkadot Treasury
  • -
-

Explanation

-

This RFC proposes agreeing on salaries relative to a single level, the III Dan. As such, changes to -the amount or asset used would only be on a single value, and all others would adjust relatively. A -III Dan is someone whose contributions match the expectations of a full-time individual contributor. -The salary at this level should be reasonably close to averages in OECD countries.

-
- - - - - - - - - -
Dan  | Factor
I    | 0.125
II   | 0.25
III  | 1
IV   | 1.5
V    | 2.0
VI   | 2.5
VII  | 2.5
VIII | 2.5
IX   | 2.5
-
-

Note that there is a sizable increase between II Dan (Proficient) and III Dan (Fellow). By the third -Dan, it is generally expected that one is working on Polkadot as their primary focus in a full-time -capacity.

-

Salary Asset

-

Although the Manifesto (Section 8) specifies a monthly allowance in DOT, this RFC proposes the use -of USDT instead. The allowance is meant to provide members stability in meeting their day-to-day -needs and recognize contributions. Using USDT provides more stability and less speculation.

-

This RFC proposes that a III Dan earn 80,000 USDT per year. The salary at this level is commensurate -with average salaries in OECD countries (note: 77,000 USD in the U.S., with an average engineer at -100,000 USD). The other ranks would thus earn:

-
- - - - - - - - - -
Dan  | Annual Salary
I    | 10,000
II   | 20,000
III  | 80,000
IV   | 120,000
V    | 160,000
VI   | 200,000
VII  | 200,000
VIII | 200,000
IX   | 200,000
-
-

The salary levels for Architects (IV, V, and VI Dan) are typical of senior engineers.

-

Allowances will be managed by the Salary pallet.

-

Projections

-

Based on the current membership, the maximum yearly and monthly costs are shown below:

-
- - - - - - - - - -
Dan   | Salary  | Members | Yearly    | Monthly
I     | 10,000  | 27      | 270,000   | 22,500
II    | 20,000  | 11      | 220,000   | 18,333
III   | 80,000  | 8       | 640,000   | 53,333
IV    | 120,000 | 3       | 360,000   | 30,000
V     | 160,000 | 5       | 800,000   | 66,667
VI    | 200,000 | 3       | 600,000   | 50,000
> VI  | 200,000 | 0       | 0         | 0
Total |         |         | 2,890,000 | 240,833
-
-

Note that these are the maximum amounts; members may choose to take a passive (lower) level. On the -other hand, more people will likely join the Fellowship in the coming years.

-

Updates

-

Updates to these levels, whether relative ratios, the asset used, or the amount, shall be done via -RFC.

-

Drawbacks

-

By not using DOT for payment, the protocol relies on the stability of other assets and the ability -to acquire them. However, the asset of choice can be changed in the future.

-

Testing, Security, and Privacy

-

N/A.

-

Performance, Ergonomics, and Compatibility

-

Performance

-

N/A

-

Ergonomics

-

N/A

-

Compatibility

-

N/A

-

Prior Art and References

- -

Unresolved Questions

-

None at present.

diff --git a/text/0056-one-transaction-per-notification.html b/text/0056-one-transaction-per-notification.html
deleted file mode 100644
index 7215d137f..000000000
--- a/text/0056-one-transaction-per-notification.html
+++ /dev/null
@@ -1,308 +0,0 @@

RFC-0056: Enforce only one transaction per notification

-
- - - -
Start Date: 2023-11-30
Description: Modify the transactions notifications protocol to always send only one transaction at a time
Authors: Pierre Krieger
-
-

Summary

-

When two peers connect to each other, they open (amongst other things) a so-called "notifications protocol" substream dedicated to gossiping transactions to each other.

-

Each notification on this substream currently consists of a SCALE-encoded Vec<Transaction> where Transaction is defined in the runtime.

-

This RFC proposes to modify the format of the notification to become (Compact(1), Transaction). This maintains backwards compatibility, as this new format decodes as a Vec of length equal to 1.

-

Motivation

-

There are three motivations behind this change:

-
    -
  • -

    It is technically impossible to decode a SCALE-encoded Vec<Transaction> into a list of SCALE-encoded transactions without knowing how to decode a Transaction. That's because a Vec<Transaction> consists of several Transactions one after the other in memory, without any delimiter that indicates the end of a transaction and the start of the next. Unfortunately, the format of a Transaction is runtime-specific. This means that the code that receives notifications is necessarily tied to a specific runtime, and it is not possible to write runtime-agnostic code.

    -
  • -
  • -

    Notification protocols are already designed to be optimized for sending many items. Currently, when it comes to transactions, each item is a Vec<Transaction> that consists of multiple sub-items of type Transaction. This two-step hierarchy is completely unnecessary, and was originally written at a time when the networking protocol of Substrate didn't have proper multiplexing.

    -
  • -
  • -

    It makes the implementation considerably more straightforward by not having to repeat code related to back-pressure. See explanations below.

    -
  • -
-

Stakeholders

-

Low-level developers.

-

Explanation

-

To give an example, if you send one notification with three transactions, the bytes that are sent on the wire are:

-
concat(
-    leb128(total-size-in-bytes-of-the-rest),
-    scale(compact(3)), scale(transaction1), scale(transaction2), scale(transaction3)
-)
-
-

But you can also send three notifications of one transaction each, in which case it is:

-
concat(
-    leb128(size(scale(transaction1)) + 1), scale(compact(1)), scale(transaction1),
-    leb128(size(scale(transaction2)) + 1), scale(compact(1)), scale(transaction2),
-    leb128(size(scale(transaction3)) + 1), scale(compact(1)), scale(transaction3)
-)
-
-

Right now the sender can choose which of the two encodings to use. This RFC proposes to make the second encoding mandatory.

-

The format of the notification would become a SCALE-encoded (Compact(1), Transaction). -A SCALE-compact encoded 1 is one byte of value 4. In other words, the format of the notification would become concat(&[4], scale_encoded_transaction). -This is equivalent to forcing the Vec<Transaction> to always have a length of 1, and I expect the Substrate implementation to simply modify the sending side to add a for loop that sends one notification per item in the Vec.

-
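A minimal sketch of the proposed sending side, assuming the parity-scale-codec crate and some Transaction type implementing Encode; send_notification stands in for whatever primitive the networking stack exposes:

```rust
use codec::{Compact, Encode};

fn send_transactions<T: Encode>(
    transactions: Vec<T>,
    mut send_notification: impl FnMut(Vec<u8>),
) {
    for transaction in transactions {
        // (Compact(1u32), transaction) encodes to the byte 0x04 followed by the
        // SCALE-encoded transaction, and still decodes as a Vec of length 1.
        send_notification((Compact(1u32), transaction).encode());
    }
}
```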

As explained in the motivation section, this allows extracting scale(transaction) items without having to know how to decode them.

-

By "flattening" the two-steps hierarchy, an implementation only needs to back-pressure individual notifications rather than back-pressure notifications and transactions within notifications.

-

Drawbacks

-

This RFC chooses to maintain backwards compatibility at the cost of introducing a very small wart (the Compact(1)).

-

An alternative could be to introduce a new version of the transactions notifications protocol that sends one Transaction per notification, but this is significantly more complicated to implement and can always be done later in case the Compact(1) is bothersome.

-

Testing, Security, and Privacy

-

Irrelevant.

-

Performance, Ergonomics, and Compatibility

-

Performance

-

Irrelevant.

-

Ergonomics

-

Irrelevant.

-

Compatibility

-

The change is backwards compatible if done in two steps: modify the sender to always send one transaction per notification, then, after a while, modify the receiver to enforce the new format.

-

Prior Art and References

-

Irrelevant.

-

Unresolved Questions

-

None.

Future Directions and Related Material

None. This is a simple isolated change.

diff --git a/text/0059-nodes-capabilities-discovery.html b/text/0059-nodes-capabilities-discovery.html
deleted file mode 100644
index 00d7ef126..000000000
--- a/text/0059-nodes-capabilities-discovery.html
+++ /dev/null
@@ -1,327 +0,0 @@

RFC-0059: Add a discovery mechanism for nodes based on their capabilities

-
- - - -
Start Date: 2023-12-18
Description: Nodes having certain capabilities register themselves in the DHT to be discoverable
Authors: Pierre Krieger
-
-

Summary

-

This RFC proposes to make the mechanism of RFC #8 more generic by introducing the concept of "capabilities".

-

Implementations can implement certain "capabilities", such as serving old block headers or being a parachain bootnode.

-

The discovery mechanism of RFC #8 is extended to be able to discover nodes of specific capabilities.

-

Motivation

-

The Polkadot peer-to-peer network is made of nodes. Not all these nodes are equal. Some nodes store only the headers of recent blocks, some nodes store all the block headers and bodies since the genesis, some nodes store the storage of all blocks since the genesis, and so on.

-

It is currently not possible to know ahead of time (without connecting to it and asking) which nodes have which data available, and it is not easily possible to build a list of nodes that have a specific piece of data available.

-

If you want to download for example the header of block 500, you have to connect to a randomly-chosen node, ask it for block 500, and if it says that it doesn't have the block, disconnect and try another randomly-chosen node. -In certain situations such as downloading the storage of old blocks, nodes that have the information are relatively rare, and finding through trial and error a node that has the data can take a long time.

-

This RFC attempts to solve this problem by giving the possibility to build a list of nodes that are capable of serving specific data.

-

Stakeholders

-

Low-level client developers. -People interested in accessing the archive of the chain.

-

Explanation

-

Reading RFC #8 first might help with comprehension, as this RFC is very similar.

-

Please keep in mind while reading that everything below applies for both relay chains and parachains, except mentioned otherwise.

-

Capabilities

-

This RFC defines a list of so-called capabilities:

-
    -
  • Head of chain provider. An implementation with this capability must be able to serve to other nodes block headers, block bodies, justifications, calls proofs, and storage proofs of "recent" (see below) blocks, and, for relay chains, to serve to other nodes warp sync proofs where the starting block is a session change block and must participate in Grandpa and Beefy gossip.
  • -
  • History provider. An implementation with this capability must be able to serve to other nodes block headers and block bodies of any block since the genesis, and must be able to serve to other nodes justifications of any session change block since the genesis up until and including their currently finalized block.
  • -
  • Archive provider. This capability is a superset of History provider. In addition to the requirements of History provider, an implementation with this capability must be able to serve call proofs and storage proof requests of any block since the genesis up until and including their currently finalized block.
  • -
  • Parachain bootnode (only for relay chains). An implementation with this capability must be able to serve the network request described in RFC 8.
  • -
-

More capabilities might be added in the future.

-

In the context of the head of chain provider, the word "recent" means: any not-yet-finalized block that is equal to or an ancestor of a block that it has announced through a block announce, and any finalized block whose height is greater than its current finalized block minus 16. This does not include blocks that have been pruned because they're not a descendant of its current finalized block. In other words, blocks that aren't a descendant of the current finalized block can be thrown away. A gap of blocks is required due to race conditions: when a node finalizes a block, it takes some time for its peers to be made aware of this, during which they might send requests concerning older blocks. The choice of the number of blocks in this gap is arbitrary.

-

Substrate is currently by default a head of chain provider. After it has finished warp syncing, it downloads the list of old blocks, after which it becomes a history provider. If Substrate is instead configured as an archive node, then it downloads all blocks since the genesis and builds their state, after which it becomes an archive provider, history provider, and head of chain provider. If block pruning is enabled and the chain is a relay chain, then Substrate unfortunately doesn't implement any of these capabilities, not even head of chain provider. This is considered a bug that should be fixed, see https://github.com/paritytech/polkadot-sdk/issues/2733.

-

DHT provider registration

-

This RFC heavily relies on the functionalities of the Kademlia DHT already in use by Polkadot. You can find a link to the specification here.

-

Implementations that have the history provider capability should register themselves as providers under the key sha256(concat("history", randomness)).

-

Implementations that have the archive provider capability should register themselves as providers under the key sha256(concat("archive", randomness)).

-

Implementations that have the parachain bootnode capability should register themselves as provider under the key sha256(concat(scale_compact(para_id), randomness)), as described in RFC 8.

-

"Register themselves as providers" consists in sending ADD_PROVIDER requests to nodes close to the key, as described in the Content provider advertisement section of the specification.

-

The value of randomness can be found in the randomness field when calling the BabeApi_currentEpoch function.

-

In order to avoid downtimes when the key changes, nodes should also register themselves as a secondary key that uses a value of randomness equal to the randomness field when calling BabeApi_nextEpoch.

-
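A small sketch of the provider-key derivation described above, assuming the sha2 crate; randomness is the value from BabeApi_currentEpoch (or from BabeApi_nextEpoch for the secondary registration):

```rust
use sha2::{Digest, Sha256};

/// Key under which "history" capability providers register themselves:
/// sha256(concat("history", randomness)).
fn history_provider_key(randomness: &[u8; 32]) -> [u8; 32] {
    let mut hasher = Sha256::new();
    hasher.update(b"history");
    hasher.update(randomness);
    let digest = hasher.finalize();
    let mut key = [0u8; 32];
    key.copy_from_slice(&digest[..]);
    key
}
```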

Implementers should be aware that their implementation of Kademlia might already hash the key before XOR'ing it. The key is not meant to be hashed twice.

-

Implementations must not register themselves if they don't fulfill the capability yet. For example, a node configured to be an archive node but that is still building its archive state in the background must register itself only after it has finished building its archive.

-

Secondary DHTs

-

Implementations that have the history provider capability must also participate in a secondary DHT that comprises only of nodes with that capability. The protocol name of that secondary DHT must be /<genesis-hash>/kad/history.

-

Similarly, implementations that have the archive provider capability must also participate in a secondary DHT that comprises only of nodes with that capability and whose protocol name is /<genesis-hash>/kad/archive.

-
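For illustration, the secondary-DHT protocol name could be built like this, assuming the genesis hash is rendered as lowercase hex (as for the existing /<genesis-hash>/kad protocol name) and using the hex crate:

```rust
/// Protocol name of the secondary DHT for the "history" capability.
fn history_dht_protocol_name(genesis_hash: &[u8; 32]) -> String {
    format!("/{}/kad/history", hex::encode(genesis_hash))
}
```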

Just like implementations must not register themselves if they don't fulfill their capability yet, they must also not participate in the secondary DHT if they don't fulfill their capability yet.

-

Head of the chain providers

-

Implementations that have the head of the chain provider capability do not register themselves as providers, but instead are the nodes that participate in the main DHT. In other words, they are the nodes that serve requests of the /<genesis_hash>/kad protocol.

-

Any implementation that isn't a head of the chain provider (read: light clients) must not participate in the main DHT. This is already presently the case.

-

Implementations must not participate in the main DHT if they don't fulfill the capability yet. For example, a node that is still in the process of warp syncing must not participate in the main DHT. However, assuming that warp syncing doesn't last more than a few seconds, it is acceptable to ignore this requirement in order to avoid complicating implementations too much.

-

Drawbacks

-

None that I can see.

-

Testing, Security, and Privacy

-

The content of this section is basically the same as the one in RFC 8.

-

This mechanism doesn't add or remove any security by itself, as it relies on existing mechanisms.

-

Due to the way Kademlia works, it would become the responsibility of the 20 Polkadot nodes whose sha256(peer_id) is closest to the key (described in the explanations section) to store the list of nodes that have specific capabilities. -Furthermore, when a large number of providers are registered, only the providers closest to the key are kept, up to a certain implementation-defined limit.

-

For this reason, an attacker can abuse this mechanism by randomly generating libp2p PeerIds until they find the 20 entries closest to the key representing the target capability. They are then in control of the list of nodes with that capability. While doing this can in no way be actually harmful, it could lead to eclipse attacks.

-

Because the key changes periodically and isn't predictable, and assuming that the Polkadot DHT is sufficiently large, it is not realistic for an attack like this to be maintained in the long term.

-

Performance, Ergonomics, and Compatibility

-

Performance

-

The DHT mechanism generally has a low overhead, especially given that publishing providers is done only every 24 hours.

-

Doing a Kademlia iterative query then sending a provider record shouldn't take more than around 50 kiB in total of bandwidth for the parachain bootnode.

-

Assuming 1000 nodes with a specific capability, the 20 Polkadot full nodes corresponding to that capability will each receive a sudden spike of a few megabytes of networking traffic when the key rotates. Again, this is relatively negligible. If this becomes a problem, one can add a random delay before a node registers itself to be the provider of the key corresponding to BabeApi_next_epoch.

-

Maybe the biggest uncertainty is the traffic that the 20 Polkadot full nodes will receive from light clients that want to discover the nodes with a given capability. If this ever becomes a problem, the value of 20 is an arbitrary constant that can be increased for more redundancy.

-

Ergonomics

-

Irrelevant.

-

Compatibility

-

Irrelevant.

-

Prior Art and References

-

Unknown.

-

Unresolved Questions

-

While it fundamentally doesn't change much to this RFC, using BabeApi_currentEpoch and BabeApi_nextEpoch might be inappropriate. I'm not familiar enough with good practices within the runtime to have an opinion here. Should it be an entirely new pallet?

Future Directions and Related Material

This RFC would make it possible to reliably discover archive nodes, which would make it possible to reliably send archive node requests, something that isn't currently possible. This could solve the problem of finding archive RPC node providers by migrating archive-related request to using the native peer-to-peer protocol rather than JSON-RPC.

-

If we ever decide to break backwards compatibility, we could divide the "history" and "archive" capabilities in two, between nodes capable of serving older blocks and nodes capable of serving newer blocks. -We could even add to the peer-to-peer network nodes that are only capable of serving older blocks (by reading from a database) but do not participate in the head of the chain, and that just exist for historical purposes.

diff --git a/text/0078-merkleized-metadata.html b/text/0078-merkleized-metadata.html
deleted file mode 100644
index 0fde7bfce..000000000
--- a/text/0078-merkleized-metadata.html
+++ /dev/null
@@ -1,580 +0,0 @@

RFC-0078: Merkleized Metadata

-
- - - -
Start Date: 22 February 2024
Description: Include merkleized metadata hash in extrinsic signature for trust-less metadata verification.
Authors: Zondax AG, Parity Technologies
-
-

Summary

-

To interact with chains in the Polkadot ecosystem it is required to know how transactions are encoded and how to read state. For doing this, Polkadot-SDK, the framework used by most of the chains in the Polkadot ecosystem, exposes metadata about the runtime to the outside. UIs, wallets, and others can use this metadata to interact with these chains. This makes the metadata a crucial piece of the transaction encoding as users are relying on the interacting software to encode the transactions in the correct format.

-

It gets even more important when the user signs the transaction in an offline wallet, as the device by its nature cannot get access to the metadata without relying on the online wallet to provide it. This means the offline wallet needs to trust an online party, rendering the security assumptions of offline devices moot.

-

This RFC proposes a way for offline wallets to leverage metadata, within the constraints of these. The design idea is that the metadata is chunked and these chunks are put into a merkle tree. The root hash of this merkle tree represents the metadata. The offline wallets can use the root hash to decode transactions by getting proofs for the individual chunks of the metadata. This root hash is also included in the signed data of the transaction (but not sent as part of the transaction). The runtime is then including its known metadata root hash when verifying the transaction. If the metadata root hash known by the runtime differs from the one that the offline wallet used, it very likely means that the online wallet provided some fake data and the verification of the transaction fails.

-

Users depend on offline wallets to correctly display decoded transactions before signing. With merkleized metadata, they can be assured of the transaction's legitimacy, as incorrect transactions will be rejected by the runtime.

-

Motivation

-

Polkadot's innovative design (both relay chain and parachains) gives developers the ability to upgrade their networks as frequently as they need. These systems keep integrations working after upgrades with the help of FRAME Metadata. This metadata, which is in the order of half a MiB for most Polkadot-SDK chains, completely describes chain interfaces and properties. Securing this metadata is key for users to be able to interact with the Polkadot-SDK chain in the expected way.

-

On the other hand, offline wallets provide a secure way for blockchain users to hold their own keys (some do a better job than others). These devices seldom get upgraded, usually target one particular network, and have very little internal memory. Currently, in the Polkadot ecosystem there is no secure way of having these offline devices know the latest metadata of the Polkadot-SDK chain they are interacting with. This results in a plethora of similar yet slightly different offline wallets for the different Polkadot-SDK chains, and in the difficulty of keeping these regularly updated, thus not fully leveraging Polkadot-SDK's unique forkless upgrade feature.

-

The two main reasons why this is not possible today are:

-
1. Metadata is too large for offline devices. Currently, Polkadot-SDK metadata is on average 500 KiB, which is more than most widely adopted offline devices can hold.

2. Metadata is not authenticated. Even if there were enough space on offline devices to hold the metadata, the user would be trusting the entity providing this metadata to the hardware wallet. In the Polkadot ecosystem, this is currently how Polkadot Vault works.
-

This RFC proposes a solution to make FRAME Metadata compatible with offline signers in a secure way. As it leverages FRAME Metadata, it does not only ensure that offline devices can always keep up to date with every FRAME based chain, but also that every offline wallet will be compatible with all FRAME based chains, avoiding the need of per-chain implementations.

-

Requirements

-
1. Metadata's integrity MUST be preserved. If any compromise were to happen, extrinsics sent with compromised metadata SHOULD fail.
2. Metadata information that could be used in signable extrinsic decoding MAY be included in the digest, yet its inclusion MUST be indicated in signed extensions.
3. The digest MUST be deterministic with respect to metadata.
4. The digest MUST be cryptographically strong against pre-image attacks, both first (finding an input that results in a given digest) and second (finding an input that results in the same digest as some other given input).
5. Extra-metadata information necessary for extrinsic decoding and constant within a runtime version MUST be included in the digest.
6. It SHOULD be possible to quickly withdraw the offline signing mechanism without access to cold signing devices.
7. The digest format SHOULD be versioned.
8. Work necessary for proving metadata authenticity MAY be omitted at the discretion of signer device design (to support automation tools).
-

Reduce metadata size

-

Metadata should be stripped from parts that are not necessary to parse a signable extrinsic, then it should be separated into a finite set of self-descriptive chunks. Thus, a subset of chunks necessary for signable extrinsic decoding and rendering could be sent, possibly in small portions (ultimately, one at a time), to cold devices together with the proof.

-
1. A single chunk with its proof SHOULD have a payload size of no more than a few kB;
2. The chunk-handling mechanism SHOULD support chunks being sent in any order without memory utilization overhead;
3. Unused enum variants MUST be stripped (this has great impact on transmitted metadata size; examples: the era enum, or an enum with all calls for call batching).
-

Stakeholders

-
    -
  • Runtime implementors
  • -
  • UI/wallet implementors
  • -
  • Offline wallet implementors
  • -
-

The idea for this RFC was brought up by runtime implementors and was extensively discussed with offline wallet implementors. It was designed in such a way that it can work easily with the existing offline wallet solutions in the Polkadot ecosystem.

-

Explanation

-

The FRAME metadata provides a wide range of information about a FRAME based runtime. It contains information about the pallets, the calls per pallet, the storage entries per pallet, runtime APIs, and type information about most of the types that are used in the runtime. For decoding extrinsics on an offline wallet, what is mainly required is type information. Most of the other information in the FRAME metadata is actually not required for decoding extrinsics and thus it can be removed. Therefore, the following is a proposal on a custom representation of the metadata and how this custom metadata is chunked, ensuring that only the needed chunks required for decoding a particular extrinsic are sent to the offline wallet. The necessary information to transform the FRAME metadata type information into the type information presented in this RFC will be provided. However, not every single detail on how to convert from FRAME metadata into the RFC type information is described.

-

First, the MetadataDigest is introduced. After that, ExtrinsicMetadata is covered and finally the actual format of the type information. Then pruning of unrelated type information is covered and how to generate the TypeRefs. In the latest step, merkle tree calculation is explained.

-

Metadata digest

-

The metadata digest is the compact representation of the metadata. The hash of this digest is the metadata hash. Below the type declaration of the Hash type and the MetadataDigest itself can be found:

-
#![allow(unused)]
-fn main() {
-type Hash = [u8; 32];
-
-enum MetadataDigest {
-    #[index = 1]
-    V1 {
-        type_information_tree_root: Hash,
-        extrinsic_metadata_hash: Hash,
-        spec_version: u32,
-        spec_name: String,
-        base58_prefix: u16,
-        decimals: u8,
-        token_symbol: String,
-    },
-}
-}
-

The Hash is 32 bytes long and blake3 is used for calculating it. The hash of the MetadataDigest is calculated by blake3(SCALE(MetadataDigest)). Therefore, MetadataDigest is at first SCALE encoded, and then those bytes are hashed.

-
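A minimal sketch of that calculation, assuming the blake3 and parity-scale-codec crates and that MetadataDigest derives Encode:

```rust
use codec::Encode;

/// blake3(SCALE(MetadataDigest)): SCALE-encode the digest, then hash the bytes.
fn metadata_hash(digest: &MetadataDigest) -> [u8; 32] {
    *blake3::hash(&digest.encode()).as_bytes()
}
```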

The MetadataDigest itself is represented as an enum. This is done to make it future proof, because a SCALE encoded enum is prefixed by the index of the variant. This index represents the version of the digest. As seen above, there is no index zero and it starts directly with one. Version one of the digest contains the following elements:

-
    -
  • type_information_tree_root: The root of the merkleized type information tree.
  • -
  • extrinsic_metadata_hash: The hash of the extrinsic metadata.
  • -
  • spec_version: The spec_version of the runtime as found in the RuntimeVersion when generating the metadata. While this information can also be found in the metadata, it is hidden in a big blob of data. To avoid transferring this big blob of data, we directly add this information here.
  • -
  • spec_name: Similar to spec_version, but being the spec_name found in the RuntimeVersion.
  • -
  • ss58_prefix: The SS58 prefix used for address encoding.
  • -
  • decimals: The number of decimals for the token.
  • -
  • token_symbol: The symbol of the token.
  • -
-

Extrinsic metadata

-

For decoding an extrinsic, more information on what types are being used is required. The actual format of the extrinsic is the format as described in the Polkadot specification. The metadata for an extrinsic is as follows:

-
#![allow(unused)]
-fn main() {
-struct ExtrinsicMetadata {
-    version: u8,
-    address_ty: TypeRef,
-    call_ty: TypeRef,
-    signature_ty: TypeRef,
-    signed_extensions: Vec<SignedExtensionMetadata>,
-}
-
-struct SignedExtensionMetadata {
-    identifier: String,
-    included_in_extrinsic: TypeRef,
-    included_in_signed_data: TypeRef,
-}
-}
-

To begin with, TypeRef. This is a unique identifier for a type as found in the type information. Using this TypeRef, it is possible to look up the type in the type information tree. More details on this process can be found in the section Generating TypeRef.

-

The actual ExtrinsicMetadata contains the following information:

-
    -
  • version: The version of the extrinsic format. As of writing this, the latest version is 4.
  • -
  • address_ty: The address type used by the chain.
  • -
  • call_ty: The call type used by the chain. The call in FRAME based runtimes represents the type of transaction being executed on chain. It references the actual function to execute and the parameters of this function.
  • -
  • signature_ty: The signature type used by the chain.
  • -
  • signed_extensions: FRAME based runtimes can extend the base extrinsic with extra information. This extra information that is put into an extrinsic is called "signed extensions". These extensions offer the runtime developer the possibility to include data directly in the extrinsic, such as the nonce or a tip, amongst others. This means that this data is sent alongside the extrinsic to the runtime. The other possibility these extensions offer is to include extra information only in the signed data that is signed by the sender. This means that this data needs to be known by both sides, the signing side and the verification side. An example of this kind of data is the genesis hash, which ensures that extrinsics are unique per chain. Another example is the metadata hash itself, which will also be included in the signed data. The offline wallets need to know which signed extensions are present in the chain, and this is communicated to them using this field.
  • -
-

The SignedExtensionMetadata provides information about a signed extension:

• identifier: The identifier of the signed extension. An identifier is required to be unique in the Polkadot ecosystem, as otherwise extrinsics may be built incorrectly.
• included_in_extrinsic: The type that will be included in the extrinsic by this signed extension.
• included_in_signed_data: The type that will be included in the signed data by this signed extension.

Type Information


As SCALE is not self-descriptive like JSON, a decoder always needs to know the format of a type to decode it properly. This is where the type information comes into play. The format of the extrinsic is fixed as described above, and ExtrinsicMetadata provides information on which type information is required for which part of the extrinsic. So, offline wallets only need access to the actual type information. It is a requirement that the type information can be chunked into logical pieces to reduce the amount of data that is sent to the offline wallets for decoding the extrinsics. So, the type information is structured in the following way:

struct Type {
    path: Vec<String>,
    type_def: TypeDef,
    type_id: Compact<u32>,
}

enum TypeDef {
    Composite(Vec<Field>),
    Enumeration(EnumerationVariant),
    Sequence(TypeRef),
    Array(Array),
    Tuple(Vec<TypeRef>),
    BitSequence(BitSequence),
}

struct Field {
    name: Option<String>,
    ty: TypeRef,
    type_name: Option<String>,
}

struct Array {
    len: u32,
    type_param: TypeRef,
}

struct BitSequence {
    num_bytes: u8,
    least_significant_bit_first: bool,
}

struct EnumerationVariant {
    name: String,
    fields: Vec<Field>,
    index: Compact<u32>,
}

enum TypeRef {
    Bool,
    Char,
    Str,
    U8,
    U16,
    U32,
    U64,
    U128,
    U256,
    I8,
    I16,
    I32,
    I64,
    I128,
    I256,
    CompactU8,
    CompactU16,
    CompactU32,
    CompactU64,
    CompactU128,
    CompactU256,
    Void,
    PerId(Compact<u32>),
}

The Type declares the structure of a type. The type has the following fields:

• path: A path declares the position of a type locally to the place where it is defined. The path is not globally unique, which means that there can be multiple types with the same path.
• type_def: The high-level type definition, e.g. the type is a composition of fields where each field has a type, or the type is a composition of different types as a tuple, etc.
• type_id: The unique identifier of this type.

Every Type is composed of multiple different types. Each of these "sub types" can reference either a full Type again or one of the primitive types. This is where TypeRef becomes relevant as the type-referencing information. To reference a Type in the type information, a unique identifier is used. As primitive types can be represented using a single byte, they are not put as separate types into the type information. Instead, the primitive types are directly part of TypeRef, to avoid the overhead of referencing them in an extra Type. The special primitive type Void represents a type that encodes to nothing and can be decoded from nothing. As FRAME doesn't support Compact as a primitive type, a more involved implementation is required to convert a FRAME type to a Compact primitive type. SCALE only supports u8, u16, u32, u64 and u128 as Compact, which maps onto the primitive type declaration in this RFC. One special case is a Compact that wraps an empty Tuple, which is expressed as the primitive type Void.


The TypeDef variants have the following meaning:

• Composite: A struct-like type that is composed of multiple different fields. Each Field can have its own type. The order of the fields is significant. A Composite with no fields is expressed as the primitive type Void.
• Enumeration: Stores an EnumerationVariant. An EnumerationVariant is a struct that is described by a name, an index and a vector of Fields, each of which can have its own type. Typically Enumerations have more than just one variant, and in those cases Enumeration will appear multiple times in the type information, each time with a different variant. Enumerations can become quite large, yet usually only one variant is required for decoding a type, so this design brings optimizations and helps reduce the size of the proof. An Enumeration with no variants is expressed as the primitive type Void.
• Sequence: A vector-like type wrapping the given type.
• BitSequence: A vector storing bits. num_bytes represents the size in bytes of the internal storage. If least_significant_bit_first is true, the least significant bit is first; otherwise the most significant bit is first.
• Array: A fixed-length array of a specific type.
• Tuple: A composition of multiple types. A Tuple that is composed of no types is expressed as the primitive type Void.

Using the type information together with the SCALE specification provides enough information on how to decode types.


Prune unrelated Types


The FRAME metadata contains not only the type information for decoding extrinsics, but also type information about storage types. The scope of this RFC is only about decoding transactions on offline wallets, so a lot of type information can be pruned. To know which type information is required to decode all possible extrinsics, ExtrinsicMetadata has been defined. The extrinsic metadata contains all the types that define the layout of an extrinsic. Therefore, all the types that are accessible from the types declared in the extrinsic metadata can be collected. To collect all accessible types, it is necessary to recursively iterate over all types, starting from the types in ExtrinsicMetadata. Note that some types are accessible but don't appear in the final type information and can thus be pruned as well; examples are the inner types of Compact or the types referenced by BitSequence. The result of collecting these accessible types is a list of all the types that are required to decode each possible extrinsic.
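As a purely illustrative sketch of this collection step (the adjacency map and helper names are assumptions for illustration, not part of the RFC), the pruning can be seen as a simple reachability walk over type ids:

```rust
use std::collections::{BTreeMap, BTreeSet};

// Reachability walk over type ids; `children` maps a type id to the ids it references.
fn collect_accessible(start: &[u32], children: &BTreeMap<u32, Vec<u32>>, keep: &mut BTreeSet<u32>) {
    let mut stack: Vec<u32> = start.to_vec();
    while let Some(id) = stack.pop() {
        if keep.insert(id) {
            if let Some(subs) = children.get(&id) {
                stack.extend(subs.iter().copied());
            }
        }
    }
}

fn main() {
    // Tiny example: 0 -> {1, 2}, 1 -> {2}; type 3 is unreachable and gets pruned.
    let children = BTreeMap::from([(0, vec![1, 2]), (1, vec![2]), (3, vec![4])]);
    let mut keep = BTreeSet::new();
    collect_accessible(&[0], &children, &mut keep);
    assert_eq!(keep, BTreeSet::from([0, 1, 2]));
}
```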


Generating TypeRef


Each TypeRef basically references one of the following types:

• One of the primitive types. All primitive types can be represented by 1 byte and thus, they are directly part of the TypeRef itself to remove an extra level of indirection.
• A Type using its unique identifier.

In FRAME metadata a primitive type is represented like any other type. So, the first step is to remove all the primitive-only types from the list of types generated in the previous section. The resulting list of types is sorted using the id provided by FRAME metadata. In the last step the TypeRefs are created. Each reference to a primitive type is replaced by the corresponding TypeRef primitive type variant, and every other reference is replaced by the type's unique identifier. The unique identifier of a type is the index of the type in our sorted list. For Enumerations, all variants have the same unique identifier, even though they are represented as multiple entries in the type information. All variants need to have the same unique identifier because the reference doesn't know which variant will appear in the actual encoded data.

let pruned_types = get_pruned_types();

for ty in pruned_types {
    if ty.is_primitive_type() {
        pruned_types.remove(ty);
    }
}

pruned_types.sort(|(left, right)|
    if left.frame_metadata_id() == right.frame_metadata_id() {
        left.variant_index() < right.variant_index()
    } else {
        left.frame_metadata_id() < right.frame_metadata_id()
    }
);

fn generate_type_ref(ty, ty_list) -> TypeRef {
    if ty.is_primitive_type() {
        return TypeRef::primitive_from_ty(ty);
    }

    TypeRef::from_id(
        // Determine the id by using the position of the type in the
        // list of unique frame metadata ids.
        ty_list.position_by_frame_metadata_id(ty.frame_metadata_id())
    )
}

fn replace_all_sub_types_with_type_refs(ty, ty_list) -> Type {
    for sub_ty in ty.sub_types() {
        replace_all_sub_types_with_type_refs(sub_ty, ty_list);
        sub_ty = generate_type_ref(sub_ty, ty_list)
    }

    ty
}

let final_ty_list = Vec::new();
for ty in pruned_types {
    final_ty_list.push(replace_all_sub_types_with_type_refs(ty, ty_list))
}

Building the Merkle Tree Root


A complete binary merkle tree with blake3 as the hashing function is proposed. To build the merkle tree root, the initial data has to be hashed as a first step. This initial data is referred to as the leaves of the merkle tree. The leaves need to be sorted to make the tree root deterministic: the type information is sorted using its unique identifiers, and for Enumerations the variants are sorted using their index. After sorting and hashing all leaves, two leaves are combined into one hash. The combination of these two hashes is referred to as a node.

let nodes = leaves;
while nodes.len() > 1 {
    let right = nodes.pop_back();
    let left = nodes.pop_back();
    nodes.push_front(blake3::hash(scale::encode((left, right))));
}

let merkle_tree_root = if nodes.is_empty() { [0u8; 32] } else { nodes.back() };

The merkle_tree_root in the end is the last node left in the list of nodes. If there are no nodes left in the list, it means that the initial data set was empty. In this case, the all-zeros hash is used to represent the empty tree.


Building a tree with 5 leaves (numbered 0 to 4):

nodes: 0 1 2 3 4
nodes: [3, 4] 0 1 2
nodes: [1, 2] [3, 4] 0
nodes: [[3, 4], 0] [1, 2]
nodes: [[[3, 4], 0], [1, 2]]

The resulting tree visualized:

     [root]
     /    \
    *      *
   / \    / \
  *   0  1   2
 / \
3   4

Building a tree with 6 leaves (numbered 0 to 5):

nodes: 0 1 2 3 4 5
nodes: [4, 5] 0 1 2 3
nodes: [2, 3] [4, 5] 0 1
nodes: [0, 1] [2, 3] [4, 5]
nodes: [[2, 3], [4, 5]] [0, 1]
nodes: [[[2, 3], [4, 5]], [0, 1]]

The resulting tree visualized:

       [root]
      /      \
     *        *
   /   \     / \
  *     *   0   1
 / \   / \
2   3 4   5

Inclusion in an Extrinsic


To ensure that the offline wallet used the correct metadata to show the extrinsic to the user, the metadata hash needs to be included in the extrinsic. The metadata hash is generated by hashing the SCALE-encoded MetadataDigest:

blake3::hash(SCALE::encode(MetadataDigest::V1 { .. }))

For the runtime the metadata hash is generated at compile time. Wallets will have to generate the hash using the FRAME metadata.


The signing side should control whether it wants to add the metadata hash or omit it. To accomplish this, one extra byte is added to the extrinsic itself. If this byte is 0, the metadata hash is not required; if the byte is 1, the metadata hash is added using V1 of the MetadataDigest. This leaves room for future versions of the MetadataDigest format. When the metadata hash should be included, it is only added to the data that is signed. This brings the advantage of not having to include 32 bytes in the extrinsic itself, because the runtime knows the metadata hash as well and can add it to the signed data if required. This is similar to the genesis hash, although the genesis hash is not added conditionally to the signed data.
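As an illustration of this mode byte (the helper name and payload layout here are hypothetical, not the actual FRAME implementation), the signing side could extend the to-be-signed data roughly as follows:

```rust
// Hypothetical sketch: build the payload to sign depending on the mode byte.
// mode = 0: metadata hash omitted; mode = 1: V1 metadata hash appended to the
// signed data only, never to the transmitted extrinsic itself.
fn payload_to_sign(call_and_extra: &[u8], metadata_hash: &[u8; 32], mode: u8) -> Vec<u8> {
    let mut payload = call_and_extra.to_vec();
    match mode {
        0 => {}
        1 => payload.extend_from_slice(metadata_hash),
        _ => panic!("unknown metadata hash mode"),
    }
    payload
}
```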


Drawbacks


The chunking may not be optimal for every kind of offline wallet.


Testing, Security, and Privacy


All implementations are required to strictly follow this RFC when generating the metadata hash. This includes which hash function to use and how to construct the metadata type tree, so all implementations follow the same security criteria. As chains will calculate the metadata hash at compile time, the build process needs to be trusted. However, this is already a solved problem in the Polkadot ecosystem thanks to reproducible builds, so anyone can rebuild a chain runtime to ensure that a proposal actually contains the changes as advertised.


Implementations can also be tested easily against each other by taking some metadata and ensuring that they all come to the same metadata hash.


The privacy of users should also not be impacted. This assumes that wallets generate the metadata hash locally and don't leak any information to third-party services about which chunks a user will send to their offline wallet. Besides that, there is no leak of private information, as getting the raw metadata from the chain is an operation that is done by almost everyone.


Performance, Ergonomics, and Compatibility


Performance


There should be no measurable impact on the performance of Polkadot or any other chain using this feature. The metadata root hash is calculated at compile time, and at runtime it is only optionally used when checking the signature of a transaction. This means that no performance-heavy operations are done at runtime.


Ergonomics & Compatibility


The proposal alters the way a transaction is built, signed, and verified, so it imposes some changes on any developer who wants to construct transactions for Polkadot or any chain using this feature. As the developer can pass 0 to disable verification of the metadata root hash, the feature can easily be ignored.


Prior Art and References


RFC 46, produced by the Alzymologist team, is previous work that goes in the same direction.


In other ecosystems, there are other solutions to the problem of trusted signing. Cosmos, for example, has a standardized way of transforming a transaction into a textual representation that is included in the signed data. This achieves basically the same as what this RFC proposes, but it requires that, for every transaction applied in a block, every node in the network generates this textual representation to ensure the transaction signature is valid.


Unresolved Questions


None.

Future Directions and Related Material

• Does it work with all kinds of offline wallets?
• Generic types currently appear multiple times in the metadata, once per instantiation. It may be useful to have a generic type appear only once in the metadata and declare the generic parameters at their instantiation.
• The metadata doesn't contain any kind of semantic information. This means that the offline wallet, for example, doesn't know what a balance is. The current solution for this problem is to match on the type name, but this isn't a sustainable solution.
• MetadataDigest only provides one token symbol and one decimals value. However, a lot of chains support multiple tokens for paying fees etc. This is probably more a question of having semantic information, as mentioned above.
diff --git a/text/0084-general-transaction-extrinsic-format.html b/text/0084-general-transaction-extrinsic-format.html
deleted file mode 100644
index 83d1d7732..000000000
--- a/text/0084-general-transaction-extrinsic-format.html
+++ /dev/null
@@ -1,287 +0,0 @@

RFC-0084: General transactions in extrinsic format

Start Date: 12 March 2024
Description: Support more extrinsic types by updating the extrinsic format
Authors: George Pisaltu

Summary


This RFC proposes a change to the extrinsic format to incorporate a new transaction type, the "general" transaction.


Motivation

"General" transactions, a new type of transaction that this RFC aims to support, are transactions which obey the runtime's extensions and carry the corresponding extension data, yet do not have hard-coded signatures. They were first described in Extrinsic Horizon and supported in 3685. They enable users to authorize origins in new, more flexible ways (e.g. ZK proofs, mutations over pre-authenticated origins). As of now, all transactions are limited to the account signing model for origin authorization, and any additional origin changes happen in extrinsic logic, which cannot leverage the validation process of extensions.


An example of a use case for such an extension would be sponsoring the transaction fee for some other user. A new extension would be put in place to verify that a part of the initial payload was signed by the author under whom the extrinsic should run, and to change the origin accordingly, while the payment for the whole transaction would be handled by a sponsor's account. A POC for this can be found in 3712.


The new "general" transaction type would coexist with both current transaction types for a while and, therefore, the current number of supported transaction types, capped at 2, is insufficient. A new extrinsic type must be introduced alongside the current signed and unsigned types. Currently, an encoded extrinsic's first byte indicate the type of extrinsic using the most significant bit - 0 for unsigned, 1 for signed - and the 7 following bits indicate the extrinsic format version, which has been equal to 4 for a long time.


By taking one bit from the extrinsic format version encoding, we can support 2 additional extrinsic types while also having a minimal impact on our capability to extend and change the extrinsic format in the future.


Stakeholders

• Runtime users
• Runtime devs
• Wallet devs

Explanation


An extrinsic is currently encoded as one byte to identify the extrinsic type and version. This RFC aims to change the interpretation of this byte regarding the reserved bits for the extrinsic type and version. In the following explanation, bits represented using T make up the extrinsic type and bits represented using V make up the extrinsic version.


Currently, the bit allocation within the leading encoded byte is 0bTVVV_VVVV. In practice in the Polkadot ecosystem, the leading byte would be 0bT000_0100 as the version has been equal to 4 for a long time.


This RFC proposes for the bit allocation to change to 0bTTVV_VVVV. As a result, the extrinsic format version will be bumped to 5 and the extrinsic type bit representation would change as follows:

| bits | type     |
| ---- | -------- |
| 00   | unsigned |
| 10   | signed   |
| 01   | reserved |
| 11   | reserved |
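To make the proposed bit allocation concrete, the following small Rust sketch (a hypothetical helper, not part of the RFC text) shows how a decoder might split the leading byte under the 0bTTVV_VVVV scheme:

```rust
// Split the leading extrinsic byte into (type bits, format version)
// under the proposed 0bTTVV_VVVV allocation.
fn split_leading_byte(b: u8) -> (u8, u8) {
    let ty = b >> 6;               // two most significant bits: extrinsic type
    let version = b & 0b0011_1111; // remaining six bits: extrinsic format version
    (ty, version)
}

fn main() {
    // 0b10_000101: "signed" type bits (10) with extrinsic format version 5.
    assert_eq!(split_leading_byte(0b1000_0101), (0b10, 5));
}
```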

Drawbacks


This change would reduce the maximum possible transaction version from the current 127 to 63. In order to bypass the new, lower limit, the extrinsic format would have to change again.


Testing, Security, and Privacy


There is no impact on testing, security or privacy.


Performance, Ergonomics, and Compatibility


This change would allow Polkadot to support new types of transactions, with the specific "general" transaction type in mind at the time of writing this proposal.


Performance


There is no performance impact.


Ergonomics


The impact on developers and end-users is minimal, as it would just be a bitmask update on their part for parsing the extrinsic type along with the version.


Compatibility


This change breaks backwards compatibility because any transaction that is neither signed nor unsigned, but of a new transaction type, would be interpreted as having a future extrinsic format version.


Prior Art and References


The design was originally proposed in the TransactionExtension PR, which is also the motivation behind this effort.


Unresolved Questions


None.

Future Directions and Related Material

Following this change, the "general" transaction type will be introduced as part of the Extrinsic Horizon effort, which will shape future work.

- - diff --git a/text/0145-remove-unnecessary-allocator-usage.md b/text/0145-remove-unnecessary-allocator-usage.md index 0775ec05d..5a2e7f09f 100644 --- a/text/0145-remove-unnecessary-allocator-usage.md +++ b/text/0145-remove-unnecessary-allocator-usage.md @@ -145,7 +145,8 @@ The function used to accept only a prefix and a limit and return a SCALE-encoded ```wat (func $ext_storage_clear_prefix_version_3 (param $maybe_prefix i64) (param $maybe_limit i64) (param $maybe_cursor_in i64) - (param $maybe_cursor_out i64) (param $counters i32) (result i32)) + (param $maybe_cursor_out i64) (param $backend i32) (param $unique i32) (param $loops i32) + (result i32)) ``` ##### Arguments @@ -153,11 +154,10 @@ The function used to accept only a prefix and a limit and return a SCALE-encoded * `maybe_prefix` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) containing a (possibly empty) storage prefix being cleared; * `maybe_limit` is an optional positive integer ([New Definition I](#new-def-i)) representing either the maximum number of backend deletions which may happen, or the _absence_ of such a limit. The number of backend iterations may surpass this limit by no more than one; * `maybe_cursor_in` is an optional pointer-size ([New Definition II](#new-def-ii)) representing the cursor returned by the previous (unfinished) call to this function. It should be _absent_ on the first call; -* `maybe_cursor_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the continuation cursor will optionally be written (see also the Result section). The value is actually stored only if the buffer is large enough. Whenever the value is not written into the buffer, the buffer contents are unmodified; -* `counters` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to a 12-byte buffer where three low-endian 32-bit integers will be stored one after another, representing the counters, respectively: - * Of items removed from the backend database will be written; - * Of unique keys removed, taking into account both the backend and the overlay; - * Of iterations (each requiring a storage seek/read) which were done. +* `maybe_cursor_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the continuation cursor will optionally be written (see also the Result section). The value is actually stored only if the buffer is large enough. Otherwise, the buffer contents are undefined; +* `backend` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to a 4-byte buffer where a 32-bit integer representing the number of items removed from the backend database will be written; +* `unique` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to a 4-byte buffer where a 32-bit integer representing the number of unique keys removed, taking into account both the backend and the overlay; +* `loops` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to a 4-byte buffer where a 32-bit integer representing the number of iterations (each requiring a storage seek/read) which were done will be written. 
##### Result @@ -285,7 +285,8 @@ The function used to accept only a child storage key and a limit and return a SC ```wat (func $ext_default_child_storage_storage_kill_version_4 (param $storage_key i64) (param $maybe_limit i64) (param $maybe_cursor_in i64) - (param $maybe_cursor_out i64) (param $counters i32) (result i32)) + (param $maybe_cursor_out i64) (param $backend i32) (param $unique i32) (param $loops i32) + (result i32)) ``` ##### Arguments @@ -293,11 +294,10 @@ The function used to accept only a child storage key and a limit and return a SC * `storage_key` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the child storage key ([Definition 219](https://spec.polkadot.network/chap-host-api#defn-child-storage-type)); * `maybe_limit` is an optional positive integer representing either the maximum number of backend deletions which may happen, or the absence of such a limit. The number of backend iterations may surpass this limit by no more than one; * `maybe_cursor_in` is an optional pointer-size representing the cursor returned by the previous (unfinished) call to this function. It should be _absent_ on the first call; -* `maybe_cursor_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the continuation cursor will optionally be written (see also the Result section). The value is actually stored only if the buffer is large enough. Whenever the value is not written into the buffer, the buffer contents are unmodified; -* `counters` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to a 12-byte buffer where three low-endian 32-bit integers will be stored one after another, representing the counters, respectively: - * Of items removed from the backend database will be written; - * Of unique keys removed, taking into account both the backend and the overlay; - * Of iterations (each requiring a storage seek/read) which were done. +* `maybe_cursor_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the continuation cursor will optionally be written (see also the Result section). The value is actually stored only if the buffer is large enough. Otherwise, the buffer contents are undefined; +* `backend` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to a 4-byte buffer where a 32-bit integer representing the number of items removed from the backend database will be written; +* `unique` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to a 4-byte buffer where a 32-bit integer representing the number of unique keys removed, taking into account both the backend and the overlay; +* `loops` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to a 4-byte buffer where a 32-bit integer representing the number of iterations (each requiring a storage seek/read) which were done will be written. 
##### Result @@ -322,8 +322,8 @@ The function used to accept (along with the child storage key) only a prefix and ```wat (func $ext_default_child_storage_clear_prefix_version_3 (param $storage_key i64) (param $prefix i64) (param $maybe_limit i64) - (param $maybe_cursor_in i64) (param $maybe_cursor_out i64) (param $counters i32) - (result i32)) + (param $maybe_cursor_in i64) (param $maybe_cursor_out i64) (param $backend i32) + (param $unique i32) (param $loops i32) (result i32)) ``` ##### Arguments @@ -332,11 +332,10 @@ The function used to accept (along with the child storage key) only a prefix and * `prefix` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) containing a storage prefix being cleared; * `maybe_limit` is an optional positive integer representing either the maximum number of backend deletions which may happen, or the absence of such a limit. The number of backend iterations may surpass this limit by no more than one; * `maybe_cursor_in` is an optional pointer-size representing the cursor returned by the previous (unfinished) call to this function. It should be _absent_ on the first call; -* `maybe_cursor_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the continuation cursor will optionally be written (see also the Result section). The value is actually stored only if the buffer is large enough. Whenever the value is not written into the buffer, the buffer contents are unmodified; -* `counters` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to a 12-byte buffer where three low-endian 32-bit integers will be stored one after another, representing the counters, respectively: - * Of items removed from the backend database will be written; - * Of unique keys removed, taking into account both the backend and the overlay; - * Of iterations (each requiring a storage seek/read) which were done. +* `maybe_cursor_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the continuation cursor will optionally be written (see also the Result section). The value is actually stored only if the buffer is large enough. Otherwise, the buffer contents are undefined; +* `backend` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to a 4-byte buffer where a 32-bit integer representing the number of items removed from the backend database will be written; +* `unique` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to a 4-byte buffer where a 32-bit integer representing the number of unique keys removed, taking into account both the backend and the overlay; +* `loops` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to a 4-byte buffer where a 32-bit integer representing the number of iterations (each requiring a storage seek/read) which were done will be written. 
##### Result From 1aee5e83126ff6e248e9d091c9075697da6646fb Mon Sep 17 00:00:00 2001 From: Dmitry Sinyavin Date: Tue, 18 Nov 2025 19:50:00 +0100 Subject: [PATCH 16/30] Re-apply meaningful changes --- ...0145-remove-unnecessary-allocator-usage.md | 37 ++++++++++--------- 1 file changed, 19 insertions(+), 18 deletions(-) diff --git a/text/0145-remove-unnecessary-allocator-usage.md b/text/0145-remove-unnecessary-allocator-usage.md index 5a2e7f09f..0775ec05d 100644 --- a/text/0145-remove-unnecessary-allocator-usage.md +++ b/text/0145-remove-unnecessary-allocator-usage.md @@ -145,8 +145,7 @@ The function used to accept only a prefix and a limit and return a SCALE-encoded ```wat (func $ext_storage_clear_prefix_version_3 (param $maybe_prefix i64) (param $maybe_limit i64) (param $maybe_cursor_in i64) - (param $maybe_cursor_out i64) (param $backend i32) (param $unique i32) (param $loops i32) - (result i32)) + (param $maybe_cursor_out i64) (param $counters i32) (result i32)) ``` ##### Arguments @@ -154,10 +153,11 @@ The function used to accept only a prefix and a limit and return a SCALE-encoded * `maybe_prefix` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) containing a (possibly empty) storage prefix being cleared; * `maybe_limit` is an optional positive integer ([New Definition I](#new-def-i)) representing either the maximum number of backend deletions which may happen, or the _absence_ of such a limit. The number of backend iterations may surpass this limit by no more than one; * `maybe_cursor_in` is an optional pointer-size ([New Definition II](#new-def-ii)) representing the cursor returned by the previous (unfinished) call to this function. It should be _absent_ on the first call; -* `maybe_cursor_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the continuation cursor will optionally be written (see also the Result section). The value is actually stored only if the buffer is large enough. Otherwise, the buffer contents are undefined; -* `backend` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to a 4-byte buffer where a 32-bit integer representing the number of items removed from the backend database will be written; -* `unique` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to a 4-byte buffer where a 32-bit integer representing the number of unique keys removed, taking into account both the backend and the overlay; -* `loops` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to a 4-byte buffer where a 32-bit integer representing the number of iterations (each requiring a storage seek/read) which were done will be written. +* `maybe_cursor_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the continuation cursor will optionally be written (see also the Result section). The value is actually stored only if the buffer is large enough. 
Whenever the value is not written into the buffer, the buffer contents are unmodified; +* `counters` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to a 12-byte buffer where three low-endian 32-bit integers will be stored one after another, representing the counters, respectively: + * Of items removed from the backend database will be written; + * Of unique keys removed, taking into account both the backend and the overlay; + * Of iterations (each requiring a storage seek/read) which were done. ##### Result @@ -285,8 +285,7 @@ The function used to accept only a child storage key and a limit and return a SC ```wat (func $ext_default_child_storage_storage_kill_version_4 (param $storage_key i64) (param $maybe_limit i64) (param $maybe_cursor_in i64) - (param $maybe_cursor_out i64) (param $backend i32) (param $unique i32) (param $loops i32) - (result i32)) + (param $maybe_cursor_out i64) (param $counters i32) (result i32)) ``` ##### Arguments @@ -294,10 +293,11 @@ The function used to accept only a child storage key and a limit and return a SC * `storage_key` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the child storage key ([Definition 219](https://spec.polkadot.network/chap-host-api#defn-child-storage-type)); * `maybe_limit` is an optional positive integer representing either the maximum number of backend deletions which may happen, or the absence of such a limit. The number of backend iterations may surpass this limit by no more than one; * `maybe_cursor_in` is an optional pointer-size representing the cursor returned by the previous (unfinished) call to this function. It should be _absent_ on the first call; -* `maybe_cursor_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the continuation cursor will optionally be written (see also the Result section). The value is actually stored only if the buffer is large enough. Otherwise, the buffer contents are undefined; -* `backend` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to a 4-byte buffer where a 32-bit integer representing the number of items removed from the backend database will be written; -* `unique` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to a 4-byte buffer where a 32-bit integer representing the number of unique keys removed, taking into account both the backend and the overlay; -* `loops` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to a 4-byte buffer where a 32-bit integer representing the number of iterations (each requiring a storage seek/read) which were done will be written. +* `maybe_cursor_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the continuation cursor will optionally be written (see also the Result section). The value is actually stored only if the buffer is large enough. 
Whenever the value is not written into the buffer, the buffer contents are unmodified; +* `counters` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to a 12-byte buffer where three low-endian 32-bit integers will be stored one after another, representing the counters, respectively: + * Of items removed from the backend database will be written; + * Of unique keys removed, taking into account both the backend and the overlay; + * Of iterations (each requiring a storage seek/read) which were done. ##### Result @@ -322,8 +322,8 @@ The function used to accept (along with the child storage key) only a prefix and ```wat (func $ext_default_child_storage_clear_prefix_version_3 (param $storage_key i64) (param $prefix i64) (param $maybe_limit i64) - (param $maybe_cursor_in i64) (param $maybe_cursor_out i64) (param $backend i32) - (param $unique i32) (param $loops i32) (result i32)) + (param $maybe_cursor_in i64) (param $maybe_cursor_out i64) (param $counters i32) + (result i32)) ``` ##### Arguments @@ -332,10 +332,11 @@ The function used to accept (along with the child storage key) only a prefix and * `prefix` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) containing a storage prefix being cleared; * `maybe_limit` is an optional positive integer representing either the maximum number of backend deletions which may happen, or the absence of such a limit. The number of backend iterations may surpass this limit by no more than one; * `maybe_cursor_in` is an optional pointer-size representing the cursor returned by the previous (unfinished) call to this function. It should be _absent_ on the first call; -* `maybe_cursor_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the continuation cursor will optionally be written (see also the Result section). The value is actually stored only if the buffer is large enough. Otherwise, the buffer contents are undefined; -* `backend` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to a 4-byte buffer where a 32-bit integer representing the number of items removed from the backend database will be written; -* `unique` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to a 4-byte buffer where a 32-bit integer representing the number of unique keys removed, taking into account both the backend and the overlay; -* `loops` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to a 4-byte buffer where a 32-bit integer representing the number of iterations (each requiring a storage seek/read) which were done will be written. +* `maybe_cursor_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the continuation cursor will optionally be written (see also the Result section). The value is actually stored only if the buffer is large enough. 
Whenever the value is not written into the buffer, the buffer contents are unmodified; +* `counters` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to a 12-byte buffer where three low-endian 32-bit integers will be stored one after another, representing the counters, respectively: + * Of items removed from the backend database will be written; + * Of unique keys removed, taking into account both the backend and the overlay; + * Of iterations (each requiring a storage seek/read) which were done. ##### Result From 83474f8c8b83f79c4f5f957abd7a3b36c1d45791 Mon Sep 17 00:00:00 2001 From: Dmitry Sinyavin Date: Wed, 19 Nov 2025 15:30:40 +0100 Subject: [PATCH 17/30] Add memory safety section --- ...0145-remove-unnecessary-allocator-usage.md | 30 +++++++++++-------- 1 file changed, 18 insertions(+), 12 deletions(-) diff --git a/text/0145-remove-unnecessary-allocator-usage.md b/text/0145-remove-unnecessary-allocator-usage.md index 0775ec05d..be18e9787 100644 --- a/text/0145-remove-unnecessary-allocator-usage.md +++ b/text/0145-remove-unnecessary-allocator-usage.md @@ -82,6 +82,12 @@ Conforming implementations must not produce invalid values when encoding. Receiv The Runtime optional pointer-size has exactly the same definition as Runtime pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) with the value of 2⁶⁴-1 representing a non-existing value (an _absent_ value). +### Memory safety + +Pointers to input parameters passed to the host must reference readable memory regions. The host must abort execution if the memory region referenced by the pointer cannot be read, unless explicitly stated otherwise. + +Pointers to output parameters passed to the host must reference writeable memory regions. The host must abort execution if the memory region referenced by the pointer cannot be written, in case it is performing the actual write, unless explicitly stated otherwise. + ### Changes to host functions #### ext_storage_get @@ -120,7 +126,7 @@ The function was returning a SCALE-encoded `Option`-wrapped 32-bit integer repre ##### Arguments * `key` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the storage key being read; -* `value_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the value read should be stored. The value is actually stored only if the buffer is large enough. Otherwise, the buffer contents are unchanged; +* `value_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the value read should be stored. The value is actually stored only if the buffer is large enough. Otherwise, the buffer is not written into and its contents are unchanged; * `value_offset` is an unsigned 32-bit offset from which the value reading should start. ##### Result @@ -185,7 +191,7 @@ The old version accepted the state version as an argument and returned a SCALE-e ##### Arguments -* `out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the SCALE-encoded storage root, calculated after committing all the existing operations, will be stored. The value is actually stored only if the buffer is large enough. Otherwise, the buffer contents are unchanged. 
+* `out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the SCALE-encoded storage root, calculated after committing all the existing operations, will be stored. The value is actually stored only if the buffer is large enough. Otherwise, the buffer is not written into and its contents are unchanged. ##### Results @@ -214,7 +220,7 @@ The old version accepted the key and returned the SCALE-encoded next key in a ho ##### Arguments * `key_in` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer containing a storage key; -* `key_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to an output buffer where the next key in the storage in the lexicographical order will be written. The value is actually stored only if the buffer is large enough. Otherwise, the buffer contents are unchanged. +* `key_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to an output buffer where the next key in the storage in the lexicographical order will be written. The value is actually stored only if the buffer is large enough. Otherwise, the buffer is not written into and its contents are unchanged. ##### Result @@ -259,7 +265,7 @@ The function was returning a SCALE-encoded `Option`-wrapped 32-bit integer repre * `storage_key` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the child storage key ([Definition 219](https://spec.polkadot.network/chap-host-api#defn-child-storage-type)); * `key` is the storage key being read; -* `value_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the value read should be stored. The value is actually stored only if the buffer is large enough. Otherwise, the buffer contents are unchanged; +* `value_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the value read should be stored. The value is actually stored only if the buffer is large enough. Otherwise, the buffer is not written into and its contents are unchanged; * `value_offset` is an unsigned 32-bit offset from which the value reading should start. ##### Result @@ -365,7 +371,7 @@ The old version accepted (along with the child storage key) the state version as ##### Arguments * `storage_key` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the child storage key ([Definition 219](https://spec.polkadot.network/chap-host-api#defn-child-storage-type)); -* `out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the SCALE-encoded storage root, calculated after committing all the existing operations, will be stored. The value is actually stored only if the buffer is large enough. Otherwise, the buffer contents are unchanged. +* `out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the SCALE-encoded storage root, calculated after committing all the existing operations, will be stored. The value is actually stored only if the buffer is large enough. Otherwise, the buffer is not written into and its contents are unchanged. 
##### Results @@ -395,7 +401,7 @@ The old version accepted (along with the child storage key) the key and returned * `storage_key` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the child storage key ([Definition 219](https://spec.polkadot.network/chap-host-api#defn-child-storage-type)); * `key_in` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer containing a storage key; -* `key_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to an output buffer where the next key in the storage in the lexicographical order will be written. The value is actually stored only if the buffer is large enough. Otherwise, the buffer contents are unchanged. +* `key_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to an output buffer where the next key in the storage in the lexicographical order will be written. The value is actually stored only if the buffer is large enough. Otherwise, the buffer is not written into and its contents are unchanged. ##### Result @@ -455,7 +461,7 @@ The function used to return the SCALE-encoded runtime version information in a h ##### Arguments * `wasm` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the Wasm blob from which the version information should be extracted; -* `out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the buffer where the SCALE-encoded extracted version information will be stored. The value is actually stored only if the buffer is large enough. Otherwise, the buffer contents are unchanged. +* `out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the buffer where the SCALE-encoded extracted version information will be stored. The value is actually stored only if the buffer is large enough. Otherwise, the buffer is not written into and its contents are unchanged. ##### Result @@ -475,7 +481,7 @@ A new function is introduced to make it possible to fetch a cursor produced by ` ``` ##### Arguments -* `out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the buffer where the last cached cursor will be stored, if one exists. The value is actually stored only if the buffer is large enough. Otherwise, the buffer contents are unchanged. +* `out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the buffer where the last cached cursor will be stored, if one exists. The value is actually stored only if the buffer is large enough. Otherwise, the buffer is not written into and its contents are unchanged. ##### Result @@ -808,7 +814,7 @@ A new function is introduced to replace `ext_offchain_local_storage_get`. 
The na * `kind` is an offchain storage kind, where `0` denotes the persistent storage ([Definition 222](https://spec.polkadot.network/chap-host-api#defn-offchain-persistent-storage)), and `1` denotes the local storage ([Definition 223](https://spec.polkadot.network/chap-host-api#defn-offchain-persistent-storage)); * `key` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the storage key being read; -* `value_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the value read should be stored. The value is actually stored only if the buffer is large enough. Otherwise, the buffer contents are unchanged; +* `value_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the value read should be stored. The value is actually stored only if the buffer is large enough. Otherwise, the buffer is not written into and its contents are unchanged; * `offset` is a 32-bit offset from which the value reading should start. ##### Result @@ -839,7 +845,7 @@ The function used to return a SCALE-encoded `Result` value in a host-allocated b `method` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the HTTP method. Possible values are “GET” and “POST”; `uri` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the URI; -`meta` is a future-reserved field containing additional, SCALE-encoded parameters. Currently, its value is ignored. +`meta` is a future-reserved field containing additional, SCALE-encoded parameters. Currently, its value is ignored. Passing a pointer to non-readable memory shall not result in execution abortion. ##### Result @@ -961,7 +967,7 @@ New function to replace functionality of `ext_offchain_http_response_headers` wi * `request_id` is an i32 integer indicating the ID of the started request, as returned by `ext_offchain_http_request_start`; * `header_index` is an i32 integer indicating the index of the header requested, starting from zero; -* `out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the buffer where the header name will be stored. The value is actually stored only if the buffer is large enough. Otherwise, the buffer contents are unchanged; +* `out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the buffer where the header name will be stored. The value is actually stored only if the buffer is large enough. Otherwise, the buffer is not written into and its contents are unchanged. ##### Result @@ -984,7 +990,7 @@ New function to replace functionality of `ext_offchain_http_response_headers` wi * `request_id` is an i32 integer indicating the ID of the started request, as returned by `ext_offchain_http_request_start`; * `header_index` is an i32 integer indicating the index of the header requested, starting from zero; -* `out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the buffer where the header value will be stored. The value is actually stored only if the buffer is large enough. 
Otherwise, the buffer contents are unchanged; +* `out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the buffer where the header value will be stored. The value is actually stored only if the buffer is large enough. Otherwise, the buffer is not written into and its contents are unchanged. ##### Result From 7e9394dd989750c35623209aa74872ee1b4b6fc4 Mon Sep 17 00:00:00 2001 From: Dmitry Sinyavin Date: Wed, 19 Nov 2025 15:41:29 +0100 Subject: [PATCH 18/30] Tiny fix --- text/0145-remove-unnecessary-allocator-usage.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/text/0145-remove-unnecessary-allocator-usage.md b/text/0145-remove-unnecessary-allocator-usage.md index be18e9787..2992a15a3 100644 --- a/text/0145-remove-unnecessary-allocator-usage.md +++ b/text/0145-remove-unnecessary-allocator-usage.md @@ -49,7 +49,7 @@ Runtime developers, who will benefit from the improved performance and more dete By a Runtime Optional Positive Integer we refer to an abstract value $r \in \mathcal{R}$ where $\mathcal{R} := \{\bot\} \cup \{0, 1, \dots, 2^{32} - 1\},$ and where $\bot$ denotes the _absent_ value. -At the Host-Runtime interface this type is represented by a signed 64-bit integer $x \in \mathbb{Z}$ (thus $\mathbb{Z} \in \{-2^{63}, \dots, 2^{63} - 1\}$). +At the Host-Runtime interface this type is represented by a signed 64-bit integer $x \in \mathbb{Z}$ (thus $\mathbb{Z} := \{-2^{63}, \dots, 2^{63} - 1\}$). We define the encoding function $\mathrm{Enc}_{\mathrm{ROP}} : \mathcal{R} \to \mathbb{Z}$ and decoding function $\mathrm{Dec}_{\mathrm{ROP}} : \mathbb{Z} \to \mathcal{R} \cup \{\mathrm{error}\}$ as follows. From eae557041701faf16392748a76f6a8aa11e2735d Mon Sep 17 00:00:00 2001 From: s0me0ne-unkn0wn <48632512+s0me0ne-unkn0wn@users.noreply.github.com> Date: Fri, 26 Dec 2025 11:20:53 +0100 Subject: [PATCH 19/30] Apply suggestions from code review MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Co-authored-by: Alexander Theißen --- text/0145-remove-unnecessary-allocator-usage.md | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/text/0145-remove-unnecessary-allocator-usage.md b/text/0145-remove-unnecessary-allocator-usage.md index 2992a15a3..82a3af09c 100644 --- a/text/0145-remove-unnecessary-allocator-usage.md +++ b/text/0145-remove-unnecessary-allocator-usage.md @@ -151,7 +151,7 @@ The function used to accept only a prefix and a limit and return a SCALE-encoded ```wat (func $ext_storage_clear_prefix_version_3 (param $maybe_prefix i64) (param $maybe_limit i64) (param $maybe_cursor_in i64) - (param $maybe_cursor_out i64) (param $counters i32) (result i32)) + (param $maybe_cursor_out i64) (param $counters_out i32) (result i32)) ``` ##### Arguments @@ -291,7 +291,7 @@ The function used to accept only a child storage key and a limit and return a SC ```wat (func $ext_default_child_storage_storage_kill_version_4 (param $storage_key i64) (param $maybe_limit i64) (param $maybe_cursor_in i64) - (param $maybe_cursor_out i64) (param $counters i32) (result i32)) + (param $maybe_cursor_out i64) (param $counters_out i32) (result i32)) ``` ##### Arguments @@ -328,7 +328,7 @@ The function used to accept (along with the child storage key) only a prefix and ```wat (func $ext_default_child_storage_clear_prefix_version_3 (param $storage_key i64) (param $prefix i64) (param $maybe_limit i64) - (param $maybe_cursor_in i64) (param $maybe_cursor_out i64) (param $counters i32) 
+ (param $maybe_cursor_in i64) (param $maybe_cursor_out i64) (param $counters_out i32) (result i32)) ``` @@ -1013,7 +1013,7 @@ The function has already been using a runtime-allocated buffer to return its val ```wat (func $ext_offchain_http_response_read_body_version_2 - (param $request_id i32) (param $buffer i64) (param $deadline i64) (result i64)) + (param $request_id i32) (param $buffer_out i64) (param $deadline i64) (result i64)) ``` ##### Arguments @@ -1047,7 +1047,7 @@ A new function providing means of passing input data from the host to the runtim ```wat (func $ext_input_read_version_1 - (param $buffer i64)) + (param $buffer_out i64)) ``` ##### Arguments From f2cf36762653e3d084b81fbe8c725310ac048f88 Mon Sep 17 00:00:00 2001 From: Dmitry Sinyavin Date: Sun, 28 Dec 2025 15:59:06 +0100 Subject: [PATCH 20/30] Addewss discussions --- ...0145-remove-unnecessary-allocator-usage.md | 52 ++++++++++--------- 1 file changed, 28 insertions(+), 24 deletions(-) diff --git a/text/0145-remove-unnecessary-allocator-usage.md b/text/0145-remove-unnecessary-allocator-usage.md index 82a3af09c..3765a3be2 100644 --- a/text/0145-remove-unnecessary-allocator-usage.md +++ b/text/0145-remove-unnecessary-allocator-usage.md @@ -88,6 +88,28 @@ Pointers to input parameters passed to the host must reference readable memory r Pointers to output parameters passed to the host must reference writeable memory regions. The host must abort execution if the memory region referenced by the pointer cannot be written, in case it is performing the actual write, unless explicitly stated otherwise. +### Runtime entry points + +Currently, all runtime entry points have the following identical Wasm function signatures: + +```wat +(func $runtime_entrypoint (param $data i32) (param $len i32) (result i64)) +``` + +After this RFC is implemented, such entry points are only supported for the legacy runtimes using the host-side allocator. All the new runtimes, using runtime-side allocator, must use new entry point signature: + +```wat +(func $runtime_entrypoint (param $len i32) (result i64)) +``` + +A runtime function called through such an entry point gets the length of SCALE-encoded input data as its only argument. After that, the function must allocate exactly the amount of bytes it is requested, and call the `ext_input_read` host function to obtain the encoded input data. + +The new entry point and the legacy entry point styles must not be used in a single runtime. + +If a runtime has the new-style entry point defined in this RFC, but happens to import functions that allocate on the host side, the host must not proceed with execution of such a runtime, aborting before the execution takes place. + +If a runtime has the legacy-style entry point, but happens to import functions that allocate on the runtime side, which are defined in this RFC, the host must not proceed with execution of such a runtime, aborting before the execution takes place. + ### Changes to host functions #### ext_storage_get @@ -160,7 +182,7 @@ The function used to accept only a prefix and a limit and return a SCALE-encoded * `maybe_limit` is an optional positive integer ([New Definition I](#new-def-i)) representing either the maximum number of backend deletions which may happen, or the _absence_ of such a limit. The number of backend iterations may surpass this limit by no more than one; * `maybe_cursor_in` is an optional pointer-size ([New Definition II](#new-def-ii)) representing the cursor returned by the previous (unfinished) call to this function. 
It should be _absent_ on the first call; * `maybe_cursor_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the continuation cursor will optionally be written (see also the Result section). The value is actually stored only if the buffer is large enough. Whenever the value is not written into the buffer, the buffer contents are unmodified; -* `counters` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to a 12-byte buffer where three low-endian 32-bit integers will be stored one after another, representing the counters, respectively: +* `counters_out` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to a 12-byte buffer where three low-endian 32-bit integers will be stored one after another, representing the counters, respectively: * Of items removed from the backend database will be written; * Of unique keys removed, taking into account both the backend and the overlay; * Of iterations (each requiring a storage seek/read) which were done. @@ -300,7 +322,7 @@ The function used to accept only a child storage key and a limit and return a SC * `maybe_limit` is an optional positive integer representing either the maximum number of backend deletions which may happen, or the absence of such a limit. The number of backend iterations may surpass this limit by no more than one; * `maybe_cursor_in` is an optional pointer-size representing the cursor returned by the previous (unfinished) call to this function. It should be _absent_ on the first call; * `maybe_cursor_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the continuation cursor will optionally be written (see also the Result section). The value is actually stored only if the buffer is large enough. Whenever the value is not written into the buffer, the buffer contents are unmodified; -* `counters` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to a 12-byte buffer where three low-endian 32-bit integers will be stored one after another, representing the counters, respectively: +* `counters_out` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to a 12-byte buffer where three low-endian 32-bit integers will be stored one after another, representing the counters, respectively: * Of items removed from the backend database will be written; * Of unique keys removed, taking into account both the backend and the overlay; * Of iterations (each requiring a storage seek/read) which were done. @@ -339,7 +361,7 @@ The function used to accept (along with the child storage key) only a prefix and * `maybe_limit` is an optional positive integer representing either the maximum number of backend deletions which may happen, or the absence of such a limit. The number of backend iterations may surpass this limit by no more than one; * `maybe_cursor_in` is an optional pointer-size representing the cursor returned by the previous (unfinished) call to this function. It should be _absent_ on the first call; * `maybe_cursor_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the continuation cursor will optionally be written (see also the Result section). The value is actually stored only if the buffer is large enough. 
Whenever the value is not written into the buffer, the buffer contents are unmodified; -* `counters` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to a 12-byte buffer where three low-endian 32-bit integers will be stored one after another, representing the counters, respectively: +* `counters_out` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to a 12-byte buffer where three low-endian 32-bit integers will be stored one after another, representing the counters, respectively: * Of items removed from the backend database will be written; * Of unique keys removed, taking into account both the backend and the overlay; * Of iterations (each requiring a storage seek/read) which were done. @@ -845,7 +867,7 @@ The function used to return a SCALE-encoded `Result` value in a host-allocated b `method` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the HTTP method. Possible values are “GET” and “POST”; `uri` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the URI; -`meta` is a future-reserved field containing additional, SCALE-encoded parameters. Currently, its value is ignored. Passing a pointer to non-readable memory shall not result in execution abortion. +`meta` is a future-reserved field containing additional, SCALE-encoded parameters. Currently, its value is ignored. Passing a pointer to non-readable memory shall not result in execution abortion. However, an empty array must be passed by the caller. Failure to do so may result in consensus breakage later when the spec is updated with this field's handling logic. ##### Result @@ -1019,7 +1041,7 @@ The function has already been using a runtime-allocated buffer to return its val ##### Arguments * `request_id` is an i32 integer indicating the ID of the started request, as returned by `ext_offchain_http_request_start`; -* `buffer` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the buffer where the body is written; +* `buffer_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the buffer where the body is written; * `deadline` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the SCALE-encoded Option value ([Definition 200](https://spec.polkadot.network/id-cryptography-encoding#defn-option-type)) containing the UNIX timestamp ([Definition 191](https://spec.polkadot.network/id-cryptography-encoding#defn-unix-time)). Passing `None` blocks indefinitely. ##### Result @@ -1052,22 +1074,4 @@ A new function providing means of passing input data from the host to the runtim ##### Arguments -* `buffer` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the buffer where the input data will be written. If the buffer is not large enough to accommodate the input data, the function will panic. - -### Other changes - -Currently, all runtime entrypoints have the following identical Wasm function signatures: - -```wat -(func $runtime_entrypoint (param $data i32) (param $len i32) (result i64)) -``` - -After this RFC is implemented, such entrypoints are only supported for the legacy runtimes using the host-side allocator. 
All the new runtimes, using runtime-side allocator, must use new entry point signature: - -```wat -(func $runtime_entrypoint (param $len i32) (result i64)) -``` - -A runtime function called through such an entrypoint gets the length of SCALE-encoded input data as its only argument. After that, the function must allocate exactly the amount of bytes it is requested, and call the `ext_input_read` host function to obtain the encoded input data. - -If a runtime happens to import both functions that allocate on the host side and functions that allocate on the runtime side, the host must not proceed with execution of such a runtime, aborting before the execution takes place. +* `buffer_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the buffer where the input data will be written. If the buffer is not large enough to accommodate the input data, the function will panic. From e03c3619c7817e2a3690304240049fc6327ade61 Mon Sep 17 00:00:00 2001 From: s0me0ne-unkn0wn <48632512+s0me0ne-unkn0wn@users.noreply.github.com> Date: Mon, 5 Jan 2026 18:32:40 +0100 Subject: [PATCH 21/30] Update text/0145-remove-unnecessary-allocator-usage.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Co-authored-by: Alexander Theißen --- text/0145-remove-unnecessary-allocator-usage.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/text/0145-remove-unnecessary-allocator-usage.md b/text/0145-remove-unnecessary-allocator-usage.md index 3765a3be2..eac87f377 100644 --- a/text/0145-remove-unnecessary-allocator-usage.md +++ b/text/0145-remove-unnecessary-allocator-usage.md @@ -867,7 +867,7 @@ The function used to return a SCALE-encoded `Result` value in a host-allocated b `method` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the HTTP method. Possible values are “GET” and “POST”; `uri` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the URI; -`meta` is a future-reserved field containing additional, SCALE-encoded parameters. Currently, its value is ignored. Passing a pointer to non-readable memory shall not result in execution abortion. However, an empty array must be passed by the caller. Failure to do so may result in consensus breakage later when the spec is updated with this field's handling logic. +`meta` is a future-reserved field containing a SCALE-encoded array with additional parameters. Currently, passing anything but a readable pointer to an empty array shall result in execution abort. This is to ensure backwards compatibility in case future versions start interpreting the contents of the array. 
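Putting the new entry point convention and `ext_input_read` together, a runtime-side wrapper could be sketched as below. This is a minimal, hedged illustration: the function and helper names are placeholders, linking details are omitted, and only the `(param $len i32) (result i64)` shape, the "allocate exactly the requested number of bytes" rule, and the `ext_input_read` call come from the text above.

```rust
// Illustrative runtime-side wrapper for the new-style entry point.
// Only the entry point shape and the ext_input_read call are taken from the RFC text;
// everything else (names, return value) is a placeholder.

/// Packs a runtime pointer-size: address in the low 32 bits, length in the high 32 bits.
fn pack_ptr_size(ptr: *const u8, len: usize) -> i64 {
    ((len as u64) << 32 | ptr as usize as u64) as i64
}

extern "C" {
    // Wasm import of the new host function; it fills the runtime-provided
    // buffer with the SCALE-encoded call input. Import/linking details omitted.
    fn ext_input_read_version_1(buffer_out: i64);
}

/// New-style entry point: the host passes only the input length.
#[no_mangle]
pub extern "C" fn illustrative_entry_point(len: i32) -> i64 {
    // Allocate exactly `len` bytes with the runtime-side allocator ...
    let mut input = vec![0u8; len as usize];
    // ... and ask the host to copy the SCALE-encoded input into it.
    unsafe { ext_input_read_version_1(pack_ptr_size(input.as_mut_ptr(), input.len())) };

    // `input` now holds the encoded arguments and is decoded as before;
    // the meaning of the i64 return value is unchanged by this RFC.
    let _ = input;
    0 // placeholder return value
}
```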
##### Result From 2ec2eaf871192beeee0b58b6775f94c69e74159b Mon Sep 17 00:00:00 2001 From: Dmitry Sinyavin Date: Tue, 6 Jan 2026 12:54:49 +0100 Subject: [PATCH 22/30] Add `allow_partial` to storage reads --- text/0145-remove-unnecessary-allocator-usage.md | 16 +++++++++------- 1 file changed, 9 insertions(+), 7 deletions(-) diff --git a/text/0145-remove-unnecessary-allocator-usage.md b/text/0145-remove-unnecessary-allocator-usage.md index eac87f377..45a03adc3 100644 --- a/text/0145-remove-unnecessary-allocator-usage.md +++ b/text/0145-remove-unnecessary-allocator-usage.md @@ -142,14 +142,15 @@ The function was returning a SCALE-encoded `Option`-wrapped 32-bit integer repre ```wat (func $ext_storage_read_version_2 - (param $key i64) (param $value_out i64) (param $value_offset i32) (result i64)) + (param $key i64) (param $value_out i64) (param $value_offset i32) (param $allow_partial i32) (result i64)) ``` ##### Arguments * `key` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the storage key being read; -* `value_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the value read should be stored. The value is actually stored only if the buffer is large enough. Otherwise, the buffer is not written into and its contents are unchanged; -* `value_offset` is an unsigned 32-bit offset from which the value reading should start. +* `value_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the value read should be stored. The implementation must write $\mathrm{min}(\mathcal{value\_len}, \mathcal{out\_len})$ bytes of the value to `value_out` only if $(\mathcal{out\_len} \geq \mathcal{value\_len}) \lor (\mathcal{allow\_partial} = \mathrm{true})$. If $(\mathcal{out\_len} < \mathcal{value\_len}) \land (\mathcal{allow\_partial} = false)$, the implementation must not write any bytes to value_out and must leave the buffer unchanged; +* `value_offset` is an unsigned 32-bit offset from which the value reading should start; +* `allow_partial` is a boolean value, where `0` represents `false` and any other value represents `true`, denoting if the output buffer must be partially written even if its length is not enough to accommodate the whole value. ##### Result @@ -280,15 +281,16 @@ The function was returning a SCALE-encoded `Option`-wrapped 32-bit integer repre ```wat (func $ext_default_child_storage_read_version_2 (param $storage_key i64) (param $key i64) (param $value_out i64) (param $value_offset i32) - (result i64)) + (param $allow_partial i32) (result i64)) ``` ##### Arguments * `storage_key` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the child storage key ([Definition 219](https://spec.polkadot.network/chap-host-api#defn-child-storage-type)); -* `key` is the storage key being read; -* `value_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the value read should be stored. The value is actually stored only if the buffer is large enough. Otherwise, the buffer is not written into and its contents are unchanged; -* `value_offset` is an unsigned 32-bit offset from which the value reading should start. 
+* `key` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the storage key being read; +* `value_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the value read should be stored. The implementation must write $\mathrm{min}(\mathcal{value\_len}, \mathcal{out\_len})$ bytes of the value to `value_out` only if $(\mathcal{out\_len} \geq \mathcal{value\_len}) \lor (\mathcal{allow\_partial} = \mathrm{true})$. If $(\mathcal{out\_len} < \mathcal{value\_len}) \land (\mathcal{allow\_partial} = false)$, the implementation must not write any bytes to value_out and must leave the buffer unchanged; +* `value_offset` is an unsigned 32-bit offset from which the value reading should start; +* `allow_partial` is a boolean value, where `0` represents `false` and any other value represents `true`, denoting if the output buffer must be partially written even if its length is not enough to accommodate the whole value. ##### Result From 50bba5d43479330456825ee8c6bde739fd4552a8 Mon Sep 17 00:00:00 2001 From: s0me0ne-unkn0wn <48632512+s0me0ne-unkn0wn@users.noreply.github.com> Date: Thu, 22 Jan 2026 17:05:27 +0100 Subject: [PATCH 23/30] Fix KaTeX rendering --- text/0145-remove-unnecessary-allocator-usage.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/text/0145-remove-unnecessary-allocator-usage.md b/text/0145-remove-unnecessary-allocator-usage.md index 45a03adc3..120478a58 100644 --- a/text/0145-remove-unnecessary-allocator-usage.md +++ b/text/0145-remove-unnecessary-allocator-usage.md @@ -51,7 +51,8 @@ By a Runtime Optional Positive Integer we refer to an abstract value $r \in \mat At the Host-Runtime interface this type is represented by a signed 64-bit integer $x \in \mathbb{Z}$ (thus $\mathbb{Z} := \{-2^{63}, \dots, 2^{63} - 1\}$). -We define the encoding function $\mathrm{Enc}_{\mathrm{ROP}} : \mathcal{R} \to \mathbb{Z}$ and decoding function $\mathrm{Dec}_{\mathrm{ROP}} : \mathbb{Z} \to \mathcal{R} \cup \{\mathrm{error}\}$ as follows. +We define the encoding function $`\mathrm{Enc}_{\mathrm{ROP}} : \mathcal{R} \to \mathbb{Z}`$ +and decoding function $`\mathrm{Dec}_{\mathrm{ROP}} : \mathbb{Z} \to \mathcal{R} \cup \{\mathrm{error}\}`$ as follows. For $r \in \mathcal{R}$, From e4e6f16265e2d80ff5794d563277364545fe4238 Mon Sep 17 00:00:00 2001 From: Dmitry Sinyavin Date: Sun, 25 Jan 2026 18:55:26 +0100 Subject: [PATCH 24/30] Revamp public keys retrieval API --- ...0145-remove-unnecessary-allocator-usage.md | 58 +++++-------------- 1 file changed, 14 insertions(+), 44 deletions(-) diff --git a/text/0145-remove-unnecessary-allocator-usage.md b/text/0145-remove-unnecessary-allocator-usage.md index 120478a58..b47557f3d 100644 --- a/text/0145-remove-unnecessary-allocator-usage.md +++ b/text/0145-remove-unnecessary-allocator-usage.md @@ -529,62 +529,32 @@ If the buffer had enough capacity and the cursor was stored successfully, the cu ##### Changes -The following functions are considered obsolete in favor of the new `*_num_public_keys` and `*_public_key` counterparts: -* `ext_crypto_ed25519_public_keys_version_1` -* `ext_crypto_sr25519_public_keys_version_1` -* `ext_crypto_ecdsa_public_keys_version_1` - -They cannot be used in a runtime using the new-style of entry-point. 
- -#### ext_crypto_{ed25519|sr25519|ecdsa}_num_public_keys - -##### Changes - -New functions, all sharing the same signature and logic, are introduced: -* `ext_crypto_ed25519_num_public_keys_version_1` -* `ext_crypto_sr25519_num_public_keys_version_1` -* `ext_crypto_ecdsa_num_public_keys_version_1` +The following functions share the same signatures and set of changes: +* `ext_crypto_ed25519_public_keys` +* `ext_crypto_sr25519_public_keys` +* `ext_crypto_ecdsa_public_keys` -They are intended to replace the obsolete `ext_crypto_{ed25519|sr25519|ecdsa}_public_keys` with a new iterative approach. +The functions used to return a SCALE-encoded array of public keys in a host-allocated buffer. They are changed to accept a runtime-allocated output buffer as an argument and to return the total size in bytes required to store all public keys. The keys are written consecutively without any encoding. The value is only written to the buffer if it is large enough to accommodate the entire result. ##### New prototypes ```wat -(func $ext_crypto_{ed25519|sr25519|ecdsa}_num_public_keys - (param $id i32) (result i32)) +(func $ext_crypto_ed25519_public_keys_version_2 + (param $id i32) (param $out i64) (result i32)) +(func $ext_crypto_sr25519_public_keys_version_2 + (param $id i32) (param $out i64) (result i32)) +(func $ext_crypto_ecdsa_public_keys_version_2 + (param $id i32) (param $out i64) (result i32)) ``` ##### Arguments -* `id` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to the key type identifier ([Definition 220](https://spec.polkadot.network/chap-host-api#defn-key-type-id)). +* `id` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to the key type identifier ([Definition 220](https://spec.polkadot.network/chap-host-api#defn-key-type-id)); +* `out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the public keys of the given type known to the keystore will be stored consecutively. The value is actually stored only if the buffer is large enough. Otherwise, the buffer is not written into and its contents are unchanged. ##### Result -The result represents a (possibly zero) number of keys of the given type known to the keystore. - -#### ext_crypto_{ed25519|sr25519|ecdsa}_public_key - -##### Changes - -New functions, all sharing the same signature and logic, are introduced: -* `ext_crypto_ed25519_public_key_version_1` -* `ext_crypto_sr25519_public_key_version_1` -* `ext_crypto_ecdsa_public_key_version_1` - -They are intended to replace the obsolete `ext_crypto_{ed25519|sr25519|ecdsa}_public_keys` with a new iterative approach. - -##### New prototypes - -```wat -(func $ext_crypto_{ed25519|sr25519|ecdsa}_public_key - (param $id i32) (param $index i32) (param $out)) -``` - -##### Arguments - -* `id` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to the key type identifier ([Definition 220](https://spec.polkadot.network/chap-host-api#defn-key-type-id)). -* `index` is the index of the key in the keystore. If the index is out of bounds (determined by the value returned by the respective `_num_public_keys` function) the function will panic; -* `out` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to the output buffer of the respective size (depending on key type) where the key will be written. 
+The result is an unsigned 32-bit integer representing the total size in bytes required to store all public keys of the given type. The number of keys can be determined by dividing this value by the known key size for the respective key type. A value of `0` indicates that no keys of the given type are known to the keystore. #### ext_crypto_{ed25519|sr25519|ecdsa}_generate From 77a0bfb526d74605b37ac08745906c5a95a8bb9c Mon Sep 17 00:00:00 2001 From: Dmitry Sinyavin Date: Sun, 25 Jan 2026 19:04:44 +0100 Subject: [PATCH 25/30] Explicitly define buffer/value lengths in storage reads --- text/0145-remove-unnecessary-allocator-usage.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/text/0145-remove-unnecessary-allocator-usage.md b/text/0145-remove-unnecessary-allocator-usage.md index b47557f3d..1f47a22be 100644 --- a/text/0145-remove-unnecessary-allocator-usage.md +++ b/text/0145-remove-unnecessary-allocator-usage.md @@ -149,7 +149,7 @@ The function was returning a SCALE-encoded `Option`-wrapped 32-bit integer repre ##### Arguments * `key` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the storage key being read; -* `value_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the value read should be stored. The implementation must write $\mathrm{min}(\mathcal{value\_len}, \mathcal{out\_len})$ bytes of the value to `value_out` only if $(\mathcal{out\_len} \geq \mathcal{value\_len}) \lor (\mathcal{allow\_partial} = \mathrm{true})$. If $(\mathcal{out\_len} < \mathcal{value\_len}) \land (\mathcal{allow\_partial} = false)$, the implementation must not write any bytes to value_out and must leave the buffer unchanged; +* `value_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the value read should be stored. Let $\mathcal{out\_len}$ denote the length component of this pointer-size (i.e., the size of the output buffer), and let $\mathcal{value\_len}$ denote the actual length of the value in storage starting from `value_offset`. The implementation must write $\mathrm{min}(\mathcal{value\_len}, \mathcal{out\_len})$ bytes of the value to `value_out` only if $(\mathcal{out\_len} \geq \mathcal{value\_len}) \lor (\mathcal{allow\_partial} = \mathrm{true})$. If $(\mathcal{out\_len} < \mathcal{value\_len}) \land (\mathcal{allow\_partial} = \mathrm{false})$, the implementation must not write any bytes to `value_out` and must leave the buffer unchanged; * `value_offset` is an unsigned 32-bit offset from which the value reading should start; * `allow_partial` is a boolean value, where `0` represents `false` and any other value represents `true`, denoting if the output buffer must be partially written even if its length is not enough to accommodate the whole value. 
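The `*_public_keys_version_2` behaviour described above lends itself to a simple two-call pattern: probe with an empty buffer to learn the total size, then retry with a buffer of exactly that size. The sketch below assumes this pattern for the sr25519 variant; the import declaration, helper names, and linking details are illustrative, while the behaviour (total byte size returned, keys written back to back, nothing written when the buffer is too small) comes from the text above.

```rust
// Illustrative two-call pattern for the reworked public key retrieval.

/// Packs a runtime pointer-size: address in the low 32 bits, length in the high 32 bits.
fn pack_ptr_size(ptr: *const u8, len: usize) -> i64 {
    ((len as u64) << 32 | ptr as usize as u64) as i64
}

extern "C" {
    // Wasm import; linking details omitted.
    fn ext_crypto_sr25519_public_keys_version_2(id: i32, out: i64) -> i32;
}

const SR25519_PUBLIC_KEY_LEN: usize = 32;

/// Fetches all sr25519 public keys of a key type from the keystore.
fn sr25519_public_keys(key_type_id_ptr: i32) -> Vec<[u8; SR25519_PUBLIC_KEY_LEN]> {
    // First call with an empty buffer: nothing is written, but the total size
    // in bytes needed for all keys is returned.
    let mut buf: Vec<u8> = Vec::new();
    let needed = unsafe {
        ext_crypto_sr25519_public_keys_version_2(key_type_id_ptr, pack_ptr_size(buf.as_ptr(), 0))
    } as usize;

    // Second call with a buffer of exactly that size: keys are written consecutively.
    buf.resize(needed, 0);
    unsafe {
        ext_crypto_sr25519_public_keys_version_2(
            key_type_id_ptr,
            pack_ptr_size(buf.as_mut_ptr(), buf.len()),
        );
    }

    // The number of keys is the total size divided by the fixed key length.
    buf.chunks_exact(SR25519_PUBLIC_KEY_LEN)
        .map(|chunk| chunk.try_into().expect("chunks have the exact key length"))
        .collect()
}
```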
@@ -289,7 +289,7 @@ The function was returning a SCALE-encoded `Option`-wrapped 32-bit integer repre * `storage_key` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the child storage key ([Definition 219](https://spec.polkadot.network/chap-host-api#defn-child-storage-type)); * `key` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the storage key being read; -* `value_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the value read should be stored. The implementation must write $\mathrm{min}(\mathcal{value\_len}, \mathcal{out\_len})$ bytes of the value to `value_out` only if $(\mathcal{out\_len} \geq \mathcal{value\_len}) \lor (\mathcal{allow\_partial} = \mathrm{true})$. If $(\mathcal{out\_len} < \mathcal{value\_len}) \land (\mathcal{allow\_partial} = false)$, the implementation must not write any bytes to value_out and must leave the buffer unchanged; +* `value_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the value read should be stored. Let $\mathcal{out\_len}$ denote the length component of this pointer-size (i.e., the size of the output buffer), and let $\mathcal{value\_len}$ denote the actual length of the value in storage starting from `value_offset`. The implementation must write $\mathrm{min}(\mathcal{value\_len}, \mathcal{out\_len})$ bytes of the value to `value_out` only if $(\mathcal{out\_len} \geq \mathcal{value\_len}) \lor (\mathcal{allow\_partial} = \mathrm{true})$. If $(\mathcal{out\_len} < \mathcal{value\_len}) \land (\mathcal{allow\_partial} = \mathrm{false})$, the implementation must not write any bytes to `value_out` and must leave the buffer unchanged; * `value_offset` is an unsigned 32-bit offset from which the value reading should start; * `allow_partial` is a boolean value, where `0` represents `false` and any other value represents `true`, denoting if the output buffer must be partially written even if its length is not enough to accommodate the whole value. From e1308fb0ddc08eafb3e7d575877f70a744e3f829 Mon Sep 17 00:00:00 2001 From: s0me0ne-unkn0wn <48632512+s0me0ne-unkn0wn@users.noreply.github.com> Date: Mon, 26 Jan 2026 13:08:41 +0200 Subject: [PATCH 26/30] Typo Co-authored-by: Oliver Tale-Yazdi --- text/0145-remove-unnecessary-allocator-usage.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/text/0145-remove-unnecessary-allocator-usage.md b/text/0145-remove-unnecessary-allocator-usage.md index 1f47a22be..76c3a9cf5 100644 --- a/text/0145-remove-unnecessary-allocator-usage.md +++ b/text/0145-remove-unnecessary-allocator-usage.md @@ -400,7 +400,7 @@ The old version accepted (along with the child storage key) the state version as ##### Results -The result is the length of the output that mught have been stored in the buffer provided in `out`. +The result is the length of the output that might have been stored in the buffer provided in `out`. 
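The partial-write rule spelled out above amounts to a small decision procedure over the two lengths. The following plain Rust restatement is illustrative only; the function name and the standalone test are not part of the interface, and the returned count here is just the number of bytes written by the rule, not the host function's result value.

```rust
/// Host-side sketch of the `allow_partial` rule for the new storage reads:
/// write min(value_len, out_len) bytes only if the whole value fits or partial
/// writes were explicitly allowed; otherwise leave the buffer untouched.
fn write_read_result(value_at_offset: &[u8], out: &mut [u8], allow_partial: bool) -> usize {
    let value_len = value_at_offset.len();
    let out_len = out.len();
    if out_len >= value_len || allow_partial {
        let n = value_len.min(out_len);
        out[..n].copy_from_slice(&value_at_offset[..n]);
        n // number of bytes actually written
    } else {
        0 // buffer too small and partial writes not allowed: buffer unchanged
    }
}

fn main() {
    let value = [1u8, 2, 3, 4, 5];
    let mut small = [0u8; 3];

    // Partial writes disallowed: nothing is written.
    assert_eq!(write_read_result(&value, &mut small, false), 0);
    assert_eq!(small, [0, 0, 0]);

    // Partial writes allowed: only the first 3 bytes are written.
    assert_eq!(write_read_result(&value, &mut small, true), 3);
    assert_eq!(small, [1, 2, 3]);
}
```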
#### ext_default_child_storage_next_key From cbc0b0a76b76f00a9989e486cc0215063cc73b20 Mon Sep 17 00:00:00 2001 From: Dmitry Sinyavin Date: Wed, 28 Jan 2026 19:25:26 +0100 Subject: [PATCH 27/30] Address discussions --- ...0145-remove-unnecessary-allocator-usage.md | 26 +++++++++---------- 1 file changed, 12 insertions(+), 14 deletions(-) diff --git a/text/0145-remove-unnecessary-allocator-usage.md b/text/0145-remove-unnecessary-allocator-usage.md index 1f47a22be..d66372899 100644 --- a/text/0145-remove-unnecessary-allocator-usage.md +++ b/text/0145-remove-unnecessary-allocator-usage.md @@ -47,7 +47,7 @@ Runtime developers, who will benefit from the improved performance and more dete #### New Definition I: Runtime Optional Positive Integer -By a Runtime Optional Positive Integer we refer to an abstract value $r \in \mathcal{R}$ where $\mathcal{R} := \{\bot\} \cup \{0, 1, \dots, 2^{32} - 1\},$ and where $\bot$ denotes the _absent_ value. +By a Runtime Optional Positive Integer we refer to a value of type $R \equiv \\{\bot\\} \cup \\{0, 1, \dots, 2^{32} - 1\\}$, where $\bot$ denotes the _absent_ value. At the Host-Runtime interface this type is represented by a signed 64-bit integer $x \in \mathbb{Z}$ (thus $\mathbb{Z} := \{-2^{63}, \dots, 2^{63} - 1\}$). @@ -60,7 +60,7 @@ $$ \mathrm{Enc}_{\mathrm{ROP}}(r) := \begin{cases} -1 & \text{if } r = \bot, \\ -r & \text{if } r \in \{0, 1, \dots, 2^{32} - 1\}. +r & \text{otherwise} \end{cases} $$ @@ -191,7 +191,9 @@ The function used to accept only a prefix and a limit and return a SCALE-encoded ##### Result -The result represents the length of the continuation cursor which might have been written to the buffer provided in `maybe_cursor_out`. A zero value represents the absence of such a cursor and no need for continuation (the prefix has been completely cleared). +The result represents the length of the continuation cursor, which might have been written to the buffer provided in `maybe_cursor_out`. A zero value represents the absence of such a cursor and no need for continuation (the prefix has been completely cleared). + +The runtime must only pass an obtained continuation cursor to a directly successive call of this function. It is not permitted to use cursors more than once. All cursors must be deemed invalid as soon as another storage-modifying function has been called. Different usage may result in remaining storage keys or undefined behaviour. #### ext_storage_root @@ -244,7 +246,7 @@ The old version accepted the key and returned the SCALE-encoded next key in a ho ##### Arguments * `key_in` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer containing a storage key; -* `key_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to an output buffer where the next key in the storage in the lexicographical order will be written. The value is actually stored only if the buffer is large enough. Otherwise, the buffer is not written into and its contents are unchanged. +* `key_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to an output buffer where the next key in the storage in the lexicographical order will be written. The value is actually stored only if the next key exists and the buffer is large enough. Otherwise, the buffer is not written into, and its contents are unchanged. 
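The continuation-cursor rules above suggest a simple runtime-side loop: resend the last cursor to the directly following call and stop when the result is zero. The sketch below is heavily hedged: the import declaration, the `-1` encoding assumed for an absent optional value, and the fixed cursor buffer size are assumptions for illustration, while the loop shape and the 12-byte counters layout come from the text above.

```rust
/// Packs a runtime pointer-size: address in the low 32 bits, length in the high 32 bits.
fn pack_ptr_size(ptr: *const u8, len: usize) -> i64 {
    ((len as u64) << 32 | ptr as usize as u64) as i64
}

extern "C" {
    // Wasm import; linking details omitted.
    fn ext_storage_clear_prefix_version_3(
        maybe_prefix: i64,
        maybe_limit: i64,
        maybe_cursor_in: i64,
        maybe_cursor_out: i64,
        counters_out: i32,
    ) -> i32;
}

/// Clears every key under `prefix`, continuing across calls via the cursor.
fn clear_prefix_completely(prefix: &[u8]) {
    const ABSENT: i64 = -1; // assumed encoding of the absent optional values
    let mut cursor_in_buf = vec![0u8; 1024]; // assumed large enough for any cursor
    let mut cursor_out_buf = vec![0u8; 1024];
    let mut cursor_len = 0usize; // 0: no cursor yet (first call)
    let mut counters = [0u8; 12];

    loop {
        let maybe_cursor_in = if cursor_len == 0 {
            ABSENT
        } else {
            pack_ptr_size(cursor_in_buf.as_ptr(), cursor_len)
        };
        let out_len = unsafe {
            ext_storage_clear_prefix_version_3(
                pack_ptr_size(prefix.as_ptr(), prefix.len()),
                ABSENT, // no limit on backend deletions
                maybe_cursor_in,
                pack_ptr_size(cursor_out_buf.as_mut_ptr(), cursor_out_buf.len()),
                counters.as_mut_ptr() as i32,
            )
        };
        if out_len == 0 {
            break; // no continuation cursor: the prefix is fully cleared
        }
        // A cursor may only be passed to the directly following call, so it is
        // handed straight back on the next iteration and never stored elsewhere.
        cursor_len = out_len as usize;
        std::mem::swap(&mut cursor_in_buf, &mut cursor_out_buf);
    }

    // Counters reported by the last call: three little-endian u32 values.
    let backend_deletions = u32::from_le_bytes(counters[0..4].try_into().unwrap());
    let unique_keys_removed = u32::from_le_bytes(counters[4..8].try_into().unwrap());
    let iterations = u32::from_le_bytes(counters[8..12].try_into().unwrap());
    let _ = (backend_deletions, unique_keys_removed, iterations);
}
```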
##### Result @@ -502,17 +504,13 @@ A new function is introduced to make it possible to fetch a cursor produced by ` ```wat (func $ext_misc_last_cursor_version_1 - (param $out i64) (result i64)) + (param $out i32)) ``` ##### Arguments -* `out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the buffer where the last cached cursor will be stored, if one exists. The value is actually stored only if the buffer is large enough. Otherwise, the buffer is not written into and its contents are unchanged. +* `out` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to the buffer where the last cached cursor will be stored, if one exists. The caller must provide a buffer large enough to accommodate the entire cursor; the exact length of the cursor is known to the caller from the result of the preceding call to one of the storage prefix clearing functions. If the buffer provided is not large enough, execution is aborted. -##### Result - -The result is an optional positive integer ([New Definition I](#new-def-i)) representing the length of the cursor that might have been stored in `out`. An _absent_ value represents the absence of the cached cursor. - -If the buffer had enough capacity and the cursor was stored successfully, the cursor cache is cleared and the same cursor cannot be retrieved once again using this function. +After this function is called, the cursor cache is cleared, and the same cursor cannot be retrieved again using this function. #### ext_crypto_{ed25519|sr25519|ecdsa}_public_keys @@ -578,13 +576,13 @@ The functions used to return a host-allocated buffer containing the key of the c ```wat (func $ext_crypto_{ed25519|sr25519|ecdsa}_generate_version_2 - (param $id i32) (param $seed i64) (param $out i32)) + (param $id i32) (param $seed i32) (param $out i32)) ``` ##### Arguments -* `id` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to the key type identifier ([Definition 220](https://spec.polkadot.network/chap-host-api#defn-key-type-id)). The function will panic if the identifier is invalid; -* `seed` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the SCALE-encoded Option value ([Definition 200](https://spec.polkadot.network/id-cryptography-encoding#defn-option-type)) containing the BIP-39 seed which must be valid UTF-8. The function will panic if the seed is not valid UTF-8; +* `id` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to the key type identifier ([Definition 220](https://spec.polkadot.network/chap-host-api#defn-key-type-id)). Execution will be aborted if the identifier is invalid; +* `seed` is an optional pointer-size ([New Definition II](#new-def-ii)) to the BIP-39 seed which must be valid UTF-8. Execution will be aborted if the seed is not a valid UTF-8 string; * `out` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to the output buffer of the respective size (depending on key type) where the generated key will be written. 
#### ext_crypto_{ed25519|sr25519|ecdsa}_sign\[_prehashed] From 3bcba0d552251e40aec66ee7a7f372241894f227 Mon Sep 17 00:00:00 2001 From: Dmitry Sinyavin Date: Wed, 28 Jan 2026 19:27:49 +0100 Subject: [PATCH 28/30] Improbe formatting and grammar --- ...0145-remove-unnecessary-allocator-usage.md | 94 +++++++++---------- 1 file changed, 47 insertions(+), 47 deletions(-) diff --git a/text/0145-remove-unnecessary-allocator-usage.md b/text/0145-remove-unnecessary-allocator-usage.md index d66372899..2e4908f73 100644 --- a/text/0145-remove-unnecessary-allocator-usage.md +++ b/text/0145-remove-unnecessary-allocator-usage.md @@ -2,10 +2,10 @@ | | | | --------------- | ------------------------------------------------------------------------------------------- | -| **Start Date** | 2025-05-16 | +| **Start Date** | 2025-05-16 | | **Description** | Update the runtime-host interface to no longer make use of a host-side allocator | -| **Authors** | Pierre Krieger, Someone Unknown - | +| **Authors** | Pierre Krieger, Someone Unknown | + ## Summary Update the runtime-host interface so that it no longer uses the host-side allocator. @@ -18,7 +18,7 @@ This RFC is mainly based on [RFC-4](https://github.com/polkadot-fellows/RFCs/pul ### Changes from RFC-4 -* The original RFC required checking if an output buffer address provided to a host function is inside the VM address space range and to stop the runtime execution if that's not the case. That requirement has been removed in this version of the RFC, as in the general case, the host doesn't have exhaustive information about the VM's memory organization. Thus, attempting to write to an out-of-bounds region will result in a "normal" runtime panic. +* The original RFC required checking if an output buffer address provided to a host function is inside the VM address space range, and to stop the runtime execution if that's not the case. That requirement has been removed in this version of the RFC, as in the general case, the host doesn't have exhaustive information about the VM's memory organization. Thus, attempting to write to an out-of-bounds region will result in a "normal" runtime panic. * Function signatures introduced by [PPP#7](https://github.com/w3f/PPPs/pull/7) have been used in this RFC, as the PPP has already been [properly implemented](https://github.com/paritytech/substrate/pull/11490) and [documented](https://github.com/w3f/polkadot-spec/pull/592/files). However, it has never been officially adopted, nor have its functions been in use. * Return values were harmonized to `i64` everywhere where they represent either a positive outcome as a positive integer or a negative outcome as a negative error code. * `ext_offchain_network_peer_id_version_1` now returns a result code instead of silently failing if the network status is unavailable. @@ -29,11 +29,11 @@ This RFC is mainly based on [RFC-4](https://github.com/polkadot-fellows/RFCs/pul The heap allocation of the runtime is currently controlled by the host using a memory allocator on the host side. -The API of many host functions contains buffer allocations. For example, when calling `ext_hashing_twox_256_version_1`, the host allocates a 32-byte buffer using the host allocator, and returns a pointer to this buffer to the runtime. The runtime later has to call `ext_allocator_free_version_1` on this pointer to free the buffer. +The API of many host functions contains buffer allocations. 
For example, when calling `ext_hashing_twox_256_version_1`, the host allocates a 32-byte buffer using the host allocator and returns a pointer to this buffer to the runtime. The runtime later has to call `ext_allocator_free_version_1` on this pointer to free the buffer. Even though no benchmark has been done, it is pretty obvious that this design is very inefficient. To continue with the example of `ext_hashing_twox_256_version_1`, it would be more efficient to instead write the output hash to a buffer allocated by the runtime on its stack and passed by pointer to the function. Allocating a buffer on the stack, in the worst case, consists simply of decreasing a number; in the best case, it is free. Doing so would save many VM memory reads and writes by the allocator, and would save a function call to `ext_allocator_free_version_1`. -Furthermore, the existence of the host-side allocator has become questionable over time. It is implemented in a very naive way: every allocation is rounded up to the next power of two, and once a piece of memory is allocated it can only be reused for allocations which also round up to the exactly the same size. So in theory it's possible to end up in a situation where we still technically have plenty of free memory, but our allocations will fail because all of that memory is reserved for differently sized buckets. That behavior is de-facto hardcoded into the current protocol and for determinism and backwards compatibility reasons, it needs to be implemented exactly identically in every client implementation. +Furthermore, the existence of the host-side allocator has become questionable over time. It is implemented in a very naive way: every allocation is rounded up to the next power of two, and once a piece of memory is allocated, it can only be reused for allocations that also round up to exactly the same size. So in theory it's possible to end up in a situation where we still technically have plenty of free memory, but our allocations will fail because all of that memory is reserved for differently sized buckets. That behavior is de facto hardcoded into the current protocol, and for determinism and backwards compatibility reasons, it needs to be implemented identically in every client implementation. In addition to that, runtimes make substantial use of heap memory allocations, and each allocation needs to go through the runtime <-> host boundary twice (once for allocating and once for freeing). Moving the allocator to the runtime side would be a good idea, although it would increase the runtime size. But before the host-side allocator can be deprecated, all the host functions that use it must be updated to avoid using it. @@ -49,7 +49,7 @@ Runtime developers, who will benefit from the improved performance and more dete By a Runtime Optional Positive Integer we refer to a value of type $R \equiv \\{\bot\\} \cup \\{0, 1, \dots, 2^{32} - 1\\}$, where $\bot$ denotes the _absent_ value. -At the Host-Runtime interface this type is represented by a signed 64-bit integer $x \in \mathbb{Z}$ (thus $\mathbb{Z} := \{-2^{63}, \dots, 2^{63} - 1\}$). +At the Host-Runtime interface, this type is represented by a signed 64-bit integer $x \in \mathbb{Z}$ (thus $\mathbb{Z} := \{-2^{63}, \dots, 2^{63} - 1\}$). We define the encoding function $`\mathrm{Enc}_{\mathrm{ROP}} : \mathcal{R} \to \mathbb{Z}`$ and decoding function $`\mathrm{Dec}_{\mathrm{ROP}} : \mathbb{Z} \to \mathcal{R} \cup \{\mathrm{error}\}`$ as follows. 
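Concretely, this mapping takes an optional 32-bit value to and from a signed 64-bit one. A minimal Rust restatement of the two definitions might look as follows (the function names and error type are illustrative, not part of the interface):

```rust
/// Enc_ROP: the absent value is encoded as -1, a present value is passed through.
fn enc_rop(r: Option<u32>) -> i64 {
    match r {
        None => -1,
        Some(v) => v as i64,
    }
}

/// Dec_ROP: -1 decodes to the absent value, values in [0, 2^32) decode to
/// themselves, and anything else is an error.
fn dec_rop(x: i64) -> Result<Option<u32>, ()> {
    match x {
        -1 => Ok(None),
        0..=0xFFFF_FFFF => Ok(Some(x as u32)),
        _ => Err(()),
    }
}

fn main() {
    assert_eq!(enc_rop(None), -1);
    assert_eq!(enc_rop(Some(7)), 7);
    assert_eq!(dec_rop(7), Ok(Some(7)));
    assert_eq!(dec_rop(-1), Ok(None));
    assert_eq!(dec_rop(-2), Err(()));
    assert_eq!(dec_rop(1 << 32), Err(()));
}
```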
@@ -70,7 +70,7 @@ $$ \mathrm{Dec}_{\mathrm{ROP}}(x) := \begin{cases} \bot & \text{if } x = -1, \\ -x & \text{if } 0 \le x < 2^{32}, \\ +x & \text{if } 0 \le x < 2^{32}, \\ \mathrm{error} & \text{otherwise.} \end{cases} $$ @@ -87,7 +87,7 @@ The Runtime optional pointer-size has exactly the same definition as Runtime poi Pointers to input parameters passed to the host must reference readable memory regions. The host must abort execution if the memory region referenced by the pointer cannot be read, unless explicitly stated otherwise. -Pointers to output parameters passed to the host must reference writeable memory regions. The host must abort execution if the memory region referenced by the pointer cannot be written, in case it is performing the actual write, unless explicitly stated otherwise. +Pointers to output parameters passed to the host must reference writeable memory regions. With Runtime pointer-sizes ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)), the `size` part must represent the size of the continuously writable memory region pointer by the `pointer` part. With Runtime pointers ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)), which imply that the data size is known to the caller, the pointer must point to a continuously writable memory region of a size at least the size of the data in question. The host must abort execution if the memory region referenced by the pointer cannot be written, in case it is performing the actual write, unless explicitly stated otherwise. ### Runtime entry points @@ -97,13 +97,13 @@ Currently, all runtime entry points have the following identical Wasm function s (func $runtime_entrypoint (param $data i32) (param $len i32) (result i64)) ``` -After this RFC is implemented, such entry points are only supported for the legacy runtimes using the host-side allocator. All the new runtimes, using runtime-side allocator, must use new entry point signature: +After this RFC is implemented, such entry points are only supported for the legacy runtimes using the host-side allocator. All the new runtimes, using runtime-side allocator, must use the new entry point signature: ```wat (func $runtime_entrypoint (param $len i32) (result i64)) ``` -A runtime function called through such an entry point gets the length of SCALE-encoded input data as its only argument. After that, the function must allocate exactly the amount of bytes it is requested, and call the `ext_input_read` host function to obtain the encoded input data. +A runtime function called through such an entry point gets the length of SCALE-encoded input data as its only argument. After that, the function must allocate exactly the number of bytes it is requested, and call the `ext_input_read` host function to obtain the encoded input data. The new entry point and the legacy entry point styles must not be used in a single runtime. @@ -137,7 +137,7 @@ Considered obsolete in favor of `ext_storage_read_version_2`. Cannot be used in ##### Changes -The function was returning a SCALE-encoded `Option`-wrapped 32-bit integer representing the number of bytes left at supplied `offset`. It was using a host-allocated buffer to return it. It is changed to always return the full length of the value directly as a primitive value. +The function was returning a SCALE-encoded `Option`-wrapped 32-bit integer representing the number of bytes left at the supplied `offset`. It was using a host-allocated buffer to return it. 
It is changed to always return the full length of the value directly as a primitive value. ##### New prototype @@ -187,7 +187,7 @@ The function used to accept only a prefix and a limit and return a SCALE-encoded * `counters_out` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to a 12-byte buffer where three low-endian 32-bit integers will be stored one after another, representing the counters, respectively: * Of items removed from the backend database will be written; * Of unique keys removed, taking into account both the backend and the overlay; - * Of iterations (each requiring a storage seek/read) which were done. + * Of iterations (each requiring a storage seek/read) that were done. ##### Result @@ -206,7 +206,7 @@ The runtime must only pass an obtained continuation cursor to a directly success ##### Changes -The old version accepted the state version as an argument and returned a SCALE-encoded trie root hash through a host-allocated buffer. The new version adopts [PPP#6](https://github.com/w3f/PPPs/pull/6) getting rid of the argument that used to represent the state version. It accepts a pointer to a runtime-allocated buffer and fills it with the output value. The length of the encoded result is returned. +The old version accepted the state version as an argument and returned a SCALE-encoded trie root hash through a host-allocated buffer. The new version adopts [PPP#6](https://github.com/w3f/PPPs/pull/6), getting rid of the argument that used to represent the state version. It accepts a pointer to a runtime-allocated buffer and fills it with the output value. The length of the encoded result is returned. ##### New prototype @@ -217,7 +217,7 @@ The old version accepted the state version as an argument and returned a SCALE-e ##### Arguments -* `out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the SCALE-encoded storage root, calculated after committing all the existing operations, will be stored. The value is actually stored only if the buffer is large enough. Otherwise, the buffer is not written into and its contents are unchanged. +* `out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the SCALE-encoded storage root, calculated after committing all the existing operations, will be stored. The value is actually stored only if the buffer is large enough. Otherwise, the buffer is not written into, and its contents are unchanged. ##### Results @@ -234,7 +234,7 @@ The result is the full length of the output that might have been stored in the b ##### Changes -The old version accepted the key and returned the SCALE-encoded next key in a host-allocated buffer. The new version additionally accepts a runtime-allocated output buffer and returns full next key length. +The old version accepted the key and returned the SCALE-encoded next key in a host-allocated buffer. The new version additionally accepts a runtime-allocated output buffer and returns the full next key length. ##### New prototype @@ -277,7 +277,7 @@ Considered obsolete in favor of `ext_default_child_storage_read_version_2`. Cann ##### Changes -The function was returning a SCALE-encoded `Option`-wrapped 32-bit integer representing the number of bytes left at supplied `offset`. It was using a host-allocated buffer to return it. It is changed to always return the full length of the value directly as a primitive value. 
+The function was returning a SCALE-encoded `Option`-wrapped 32-bit integer representing the number of bytes left at the supplied `offset`. It was using a host-allocated buffer to return it. It is changed to always return the full length of the value directly as a primitive value. ##### New prototype @@ -324,17 +324,17 @@ The function used to accept only a child storage key and a limit and return a SC ##### Arguments * `storage_key` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the child storage key ([Definition 219](https://spec.polkadot.network/chap-host-api#defn-child-storage-type)); -* `maybe_limit` is an optional positive integer representing either the maximum number of backend deletions which may happen, or the absence of such a limit. The number of backend iterations may surpass this limit by no more than one; +* `maybe_limit` is an optional positive integer representing either the maximum number of backend deletions that may happen, or the absence of such a limit. The number of backend iterations may surpass this limit by no more than one; * `maybe_cursor_in` is an optional pointer-size representing the cursor returned by the previous (unfinished) call to this function. It should be _absent_ on the first call; * `maybe_cursor_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the continuation cursor will optionally be written (see also the Result section). The value is actually stored only if the buffer is large enough. Whenever the value is not written into the buffer, the buffer contents are unmodified; * `counters_out` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to a 12-byte buffer where three low-endian 32-bit integers will be stored one after another, representing the counters, respectively: * Of items removed from the backend database will be written; * Of unique keys removed, taking into account both the backend and the overlay; - * Of iterations (each requiring a storage seek/read) which were done. + * Of iterations (each requiring a storage seek/read) that were done. ##### Result -The result represents the length of the continuation cursor which might have been written to the buffer provided in `maybe_cursor_out`. A zero value represents the absence of such a cursor and no need for continuation (the prefix has been completely cleared). +The result represents the length of the continuation cursor, which might have been written to the buffer provided in `maybe_cursor_out`. A zero value represents the absence of such a cursor and no need for continuation (the prefix has been completely cleared). #### ext_default_child_storage_clear_prefix @@ -363,17 +363,17 @@ The function used to accept (along with the child storage key) only a prefix and * `storage_key` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the child storage key ([Definition 219](https://spec.polkadot.network/chap-host-api#defn-child-storage-type)); * `prefix` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) containing a storage prefix being cleared; -* `maybe_limit` is an optional positive integer representing either the maximum number of backend deletions which may happen, or the absence of such a limit. 
The number of backend iterations may surpass this limit by no more than one; +* `maybe_limit` is an optional positive integer representing either the maximum number of backend deletions that may happen, or the absence of such a limit. The number of backend iterations may surpass this limit by no more than one; * `maybe_cursor_in` is an optional pointer-size representing the cursor returned by the previous (unfinished) call to this function. It should be _absent_ on the first call; * `maybe_cursor_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the continuation cursor will optionally be written (see also the Result section). The value is actually stored only if the buffer is large enough. Whenever the value is not written into the buffer, the buffer contents are unmodified; * `counters_out` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to a 12-byte buffer where three low-endian 32-bit integers will be stored one after another, representing the counters, respectively: * Of items removed from the backend database will be written; * Of unique keys removed, taking into account both the backend and the overlay; - * Of iterations (each requiring a storage seek/read) which were done. + * Of iterations (each requiring a storage seek/read) that were done. ##### Result -The result represents the length of the continuation cursor which might have been written to the buffer provided in `maybe_cursor_out`. A zero value represents the absence of such a cursor and no need for continuation (the prefix has been completely cleared). +The result represents the length of the continuation cursor, which might have been written to the buffer provided in `maybe_cursor_out`. A zero value represents the absence of such a cursor and no need for continuation (the prefix has been completely cleared). #### ext_default_child_storage_root @@ -386,7 +386,7 @@ The result represents the length of the continuation cursor which might have bee ##### Changes -The old version accepted (along with the child storage key) the state version as an argument and returned a SCALE-encoded trie root hash through a host-allocated buffer. The new version adopts [PPP#6](https://github.com/w3f/PPPs/pull/6) getting rid of the argument that used to represent the state version. It accepts a pointer to a runtime-allocated buffer and fills it with the output value. The length of the encoded result is returned. +The old version accepted (along with the child storage key) the state version as an argument and returned a SCALE-encoded trie root hash through a host-allocated buffer. The new version adopts [PPP#6](https://github.com/w3f/PPPs/pull/6), getting rid of the argument that used to represent the state version. It accepts a pointer to a runtime-allocated buffer and fills it with the output value. The length of the encoded result is returned. 
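Several of the reworked calls above, such as the storage root and runtime version functions, share the same convention: the host writes the value only when the runtime-provided buffer is large enough, and always returns the full length. A generic grow-and-retry helper on the runtime side could follow the sketch below; it is purely illustrative, with the closure standing in for any such host call, and it assumes that repeating the call is acceptable for the function in question.

```rust
/// Generic retry helper for host calls that return the full output length and
/// only fill the buffer when it is large enough. `call` receives the output
/// buffer and returns the full length of the value.
fn read_growing(mut call: impl FnMut(&mut [u8]) -> usize, initial_capacity: usize) -> Vec<u8> {
    let mut buf = vec![0u8; initial_capacity];
    loop {
        let full_len = call(&mut buf);
        if full_len <= buf.len() {
            buf.truncate(full_len);
            return buf;
        }
        // Too small: nothing was written, grow to the reported length and retry.
        buf.resize(full_len, 0);
    }
}

fn main() {
    // Stand-in for a host function producing a 40-byte value.
    let value = vec![7u8; 40];
    let host_call = |out: &mut [u8]| {
        if out.len() >= value.len() {
            out[..value.len()].copy_from_slice(&value);
        }
        value.len()
    };
    assert_eq!(read_growing(host_call, 32), value);
}
```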
##### New prototype @@ -398,11 +398,11 @@ The old version accepted (along with the child storage key) the state version as ##### Arguments * `storage_key` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the child storage key ([Definition 219](https://spec.polkadot.network/chap-host-api#defn-child-storage-type)); -* `out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the SCALE-encoded storage root, calculated after committing all the existing operations, will be stored. The value is actually stored only if the buffer is large enough. Otherwise, the buffer is not written into and its contents are unchanged. +* `out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the SCALE-encoded storage root, calculated after committing all the existing operations, will be stored. The value is actually stored only if the buffer is large enough. Otherwise, the buffer is not written into, and its contents are unchanged. ##### Results -The result is the length of the output that mught have been stored in the buffer provided in `out`. +The result is the length of the output that might have been stored in the buffer provided in `out`. #### ext_default_child_storage_next_key @@ -415,7 +415,7 @@ The result is the length of the output that mught have been stored in the buffer ##### Changes -The old version accepted (along with the child storage key) the key and returned the SCALE-encoded next key in a host-allocated buffer. The new version additionally accepts a runtime-allocated output buffer and returns full next key length. +The old version accepted (along with the child storage key) the key and returned the SCALE-encoded next key in a host-allocated buffer. The new version additionally accepts a runtime-allocated output buffer and returns the full next key length. ##### New prototype @@ -428,7 +428,7 @@ The old version accepted (along with the child storage key) the key and returned * `storage_key` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the child storage key ([Definition 219](https://spec.polkadot.network/chap-host-api#defn-child-storage-type)); * `key_in` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer containing a storage key; -* `key_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to an output buffer where the next key in the storage in the lexicographical order will be written. The value is actually stored only if the buffer is large enough. Otherwise, the buffer is not written into and its contents are unchanged. +* `key_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to an output buffer where the next key in the storage in the lexicographical order will be written. The value is actually stored only if the next key exists and the buffer is large enough. Otherwise, the buffer is not written into, and its contents are unchanged. ##### Result @@ -451,7 +451,7 @@ The following functions share the same signatures and set of changes: * `ext_trie_keccak_256_root` * `ext_trie_keccak_256_ordered_root` -The functions used to return the root in a 32-byte host-allocated buffer. 
They now accept a runtime-allocated output buffer as an argument, and doesn't return anything. +The functions used to return the root in a 32-byte host-allocated buffer. They now accept a runtime-allocated output buffer as an argument, and don't return anything. ##### New prototypes @@ -477,7 +477,7 @@ The functions used to return the root in a 32-byte host-allocated buffer. They n ##### Changes -The function used to return the SCALE-encoded runtime version information in a host-allocated buffer. It is changed to accept a runtime-allocated buffer as an arguments and to return the length of the SCALE-encoded result. +The function used to return the SCALE-encoded runtime version information in a host-allocated buffer. It is changed to accept a runtime-allocated buffer as an argument and to return the length of the SCALE-encoded result. ##### New prototype @@ -488,7 +488,7 @@ The function used to return the SCALE-encoded runtime version information in a h ##### Arguments * `wasm` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the Wasm blob from which the version information should be extracted; -* `out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the buffer where the SCALE-encoded extracted version information will be stored. The value is actually stored only if the buffer is large enough. Otherwise, the buffer is not written into and its contents are unchanged. +* `out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the buffer where the SCALE-encoded extracted version information will be stored. The value is actually stored only if the buffer is large enough. Otherwise, the buffer is not written into, and its contents are unchanged. ##### Result @@ -548,7 +548,7 @@ The functions used to return a SCALE-encoded array of public keys in a host-allo ##### Arguments * `id` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to the key type identifier ([Definition 220](https://spec.polkadot.network/chap-host-api#defn-key-type-id)); -* `out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the public keys of the given type known to the keystore will be stored consecutively. The value is actually stored only if the buffer is large enough. Otherwise, the buffer is not written into and its contents are unchanged. +* `out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the public keys of the given type known to the keystore will be stored consecutively. The value is actually stored only if the buffer is large enough. Otherwise, the buffer is not written into, and its contents are unchanged. ##### Result @@ -613,14 +613,14 @@ The functions used to return a host-allocated SCALE-encoded value representing t ##### Arguments -* `id` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to the key type identifier ([Definition 220](https://spec.polkadot.network/chap-host-api#defn-key-type-id)). The function will panic if the identifier is invalid; +* `id` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to the key type identifier ([Definition 220](https://spec.polkadot.network/chap-host-api#defn-key-type-id)). 
Execution will be aborted if the identifier is invalid; * `pub_key` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to the public key bytes (as returned by the respective `_public_key` function); * `msg` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the message that is to be signed; * `out` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to the output buffer of the respective size (depending on key type) where the signature will be written. ##### Result -The function returns `0` on success. On error, `-1` is returned and the output buffer should be considered uninitialized. +The function returns `0` on success. On error, `-1` is returned, and the output buffer should be considered uninitialized. #### ext_crypto_secp256k1_ecdsa_recover\[_compressed] @@ -677,7 +677,7 @@ The following functions share the same signatures and set of changes: * `ext_hashing_twox_128` * `ext_hashing_twox_256` -The functions used to return a host-allocated buffer containing the hash. They are changed to accept a runtime-allocated buffer of a known size (depedent on the hash type) and to return no value, as the operation cannot fail. +The functions used to return a host-allocated buffer containing the hash. They are changed to accept a runtime-allocated buffer of a known size (dependent on the hash type) and to return no value, as the operation cannot fail. ##### New prototypes @@ -794,7 +794,7 @@ Considered obsolete in favor of `ext_offchain_local_storage_read_version_1`. Can ##### Changes -A new function is introduced to replace `ext_offchain_local_storage_get`. The name has been changed to better correspond to the family of the same-functionality functions in `ext_storage_*` group. +A new function is introduced to replace `ext_offchain_local_storage_get`. The name has been changed to better correspond to the family of the same-functionality functions in the `ext_storage_*` group. ##### New prototype @@ -807,7 +807,7 @@ A new function is introduced to replace `ext_offchain_local_storage_get`. The na * `kind` is an offchain storage kind, where `0` denotes the persistent storage ([Definition 222](https://spec.polkadot.network/chap-host-api#defn-offchain-persistent-storage)), and `1` denotes the local storage ([Definition 223](https://spec.polkadot.network/chap-host-api#defn-offchain-persistent-storage)); * `key` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the storage key being read; -* `value_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the value read should be stored. The value is actually stored only if the buffer is large enough. Otherwise, the buffer is not written into and its contents are unchanged; +* `value_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the value read should be stored. The value is actually stored only if the buffer is large enough. Otherwise, the buffer is not written into, and its contents are unchanged; * `offset` is a 32-bit offset from which the value reading should start. 
##### Result @@ -836,9 +836,9 @@ The function used to return a SCALE-encoded `Result` value in a host-allocated b ##### Arguments -`method` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the HTTP method. Possible values are “GET” and “POST”; +`method` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the HTTP method. Possible values are "GET" and "POST"; `uri` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the URI; -`meta` is a future-reserved field containing a SCALE-encoded array with additional parameters. Currently, passing anything but a readable pointer to an empty array shall result in execution abort. This is to ensure backwards compatibility in case future versions start interpreting the contents of the array. +`meta` is a future-reserved field containing a SCALE-encoded array with additional parameters. Currently, passing anything but a readable pointer to an empty array shall result in an execution abort. This is to ensure backwards compatibility in case future versions start interpreting the contents of the array. ##### Result @@ -928,7 +928,7 @@ The function used to return a SCALE-encoded array of request statuses in a host- * `ids` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the SCALE-encoded array of started request IDs, as returned by `ext_offchain_http_request_start`; * `deadline` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the SCALE-encoded Option value ([Definition 200](https://spec.polkadot.network/id-cryptography-encoding#defn-option-type)) containing the UNIX timestamp ([Definition 191](https://spec.polkadot.network/id-cryptography-encoding#defn-unix-time)). Passing `None` blocks indefinitely; -* `out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the buffer of `i32` integers where the request statuses will be stored. The number of elements of the buffer must be strictly equal to the number of elements in the `ids` array; otherwise, the function panics. +* `out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the buffer of `i32` integers where the request statuses will be stored. The number of elements of the buffer must be strictly equal to the number of elements in the `ids` array; otherwise, execution will be aborted. #### ext_offchain_http_response_headers @@ -947,7 +947,7 @@ Considered obsolete in favor of `ext_offchain_http_response_header_name` and `ex ##### Changes -New function to replace functionality of `ext_offchain_http_response_headers` with iterative approach. Reads a header name at a given index into a runtime-allocated buffer provided. +New function to replace the functionality of `ext_offchain_http_response_headers` with an iterative approach. Reads a header name at a given index into a runtime-allocated buffer provided. 
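As a non-normative illustration, the only `meta` payload that is currently accepted (an empty SCALE-encoded array) can be prepared as in the Rust sketch below; the packing layout and the helper name are assumptions made only for this sketch.

```rust
// Non-normative sketch only: the `meta` argument must currently point to an
// empty SCALE-encoded array, which encodes to the single byte 0x00
// (a compact-encoded length of zero).
fn empty_meta() -> u64 {
    static META: [u8; 1] = [0x00];
    // Pack into a pointer-size (assumed layout: address low, length high).
    (META.as_ptr() as usize as u32 as u64) | ((META.len() as u32 as u64) << 32)
}
```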
##### New prototype @@ -960,7 +960,7 @@ New function to replace functionality of `ext_offchain_http_response_headers` wi * `request_id` is an i32 integer indicating the ID of the started request, as returned by `ext_offchain_http_request_start`; * `header_index` is an i32 integer indicating the index of the header requested, starting from zero; -* `out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the buffer where the header name will be stored. The value is actually stored only if the buffer is large enough. Otherwise, the buffer is not written into and its contents are unchanged. +* `out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the buffer where the header name will be stored. The value is actually stored only if the buffer is large enough. Otherwise, the buffer is not written into, and its contents are unchanged. ##### Result @@ -970,7 +970,7 @@ The result is an optional positive integer ([New Definition I](#new-def-i)), rep ##### Changes -New function to replace functionality of `ext_offchain_http_response_headers` with iterative approach. Reads a header value at a given index into a runtime-allocated buffer provided. +New function to replace the functionality of `ext_offchain_http_response_headers` with an iterative approach. Reads a header value at a given index into a runtime-allocated buffer provided. ##### New prototype @@ -983,7 +983,7 @@ New function to replace functionality of `ext_offchain_http_response_headers` wi * `request_id` is an i32 integer indicating the ID of the started request, as returned by `ext_offchain_http_request_start`; * `header_index` is an i32 integer indicating the index of the header requested, starting from zero; -* `out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the buffer where the header value will be stored. The value is actually stored only if the buffer is large enough. Otherwise, the buffer is not written into and its contents are unchanged. +* `out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the buffer where the header value will be stored. The value is actually stored only if the buffer is large enough. Otherwise, the buffer is not written into, and its contents are unchanged. ##### Result @@ -1028,13 +1028,13 @@ On success, the number of bytes written to the buffer is returned. A value of `0 (func $ext_allocator_free_version_1 (param $ptr i32)) ``` -The functions are considered obsolete and cannot be used in a runtime using the new-style of entry-point. +The functions are considered obsolete and cannot be used in a runtime using the new-style entry point. #### ext_input_read ##### Changes -A new function providing means of passing input data from the host to the runtime. Previously, the host allocated a buffer and passed a pointer to it to the runtime. With the runtime allocator, it's not possible anymore, so the input data passing protocol changed (see "Other changes" section below). This function is required to support that change. +A new function providing a means of passing input data from the host to the runtime. Previously, the host allocated a buffer and passed a pointer to it to the runtime. With the runtime allocator, it's not possible anymore, so the input data passing protocol changed (see "Other changes" section below). 
This function is required to support that change. ##### New prototype @@ -1045,4 +1045,4 @@ A new function providing means of passing input data from the host to the runtim ##### Arguments -* `buffer_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the buffer where the input data will be written. If the buffer is not large enough to accommodate the input data, the function will panic. +* `buffer_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the buffer where the input data will be written. If the buffer is not large enough to accommodate the input data, execution will be aborted. From b2dd2e586dcb31a42af8ad880a2b3cc9fe959730 Mon Sep 17 00:00:00 2001 From: Dmitry Sinyavin Date: Thu, 29 Jan 2026 12:41:15 +0100 Subject: [PATCH 29/30] Address discussions --- ...0145-remove-unnecessary-allocator-usage.md | 24 ++++++++----------- 1 file changed, 10 insertions(+), 14 deletions(-) diff --git a/text/0145-remove-unnecessary-allocator-usage.md b/text/0145-remove-unnecessary-allocator-usage.md index 2e4908f73..d7dbcf11b 100644 --- a/text/0145-remove-unnecessary-allocator-usage.md +++ b/text/0145-remove-unnecessary-allocator-usage.md @@ -206,22 +206,18 @@ The runtime must only pass an obtained continuation cursor to a directly success ##### Changes -The old version accepted the state version as an argument and returned a SCALE-encoded trie root hash through a host-allocated buffer. The new version adopts [PPP#6](https://github.com/w3f/PPPs/pull/6), getting rid of the argument that used to represent the state version. It accepts a pointer to a runtime-allocated buffer and fills it with the output value. The length of the encoded result is returned. +The old version accepted the state version as an argument and returned a SCALE-encoded trie root hash through a host-allocated buffer. The new version adopts [PPP#6](https://github.com/w3f/PPPs/pull/6), getting rid of the argument that used to represent the state version. It accepts a pointer to a runtime-allocated buffer and fills it with the output value. ##### New prototype ```wat (func $ext_storage_root_version_3 - (param $out i64) (result i32)) + (param $out i64)) ``` ##### Arguments -* `out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the SCALE-encoded storage root, calculated after committing all the existing operations, will be stored. The value is actually stored only if the buffer is large enough. Otherwise, the buffer is not written into, and its contents are unchanged. - -##### Results - -The result is the full length of the output that might have been stored in the buffer provided in `out`. +* `out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the SCALE-encoded storage root, calculated after committing all the existing operations, will be stored. Since the size of the resulting value is known to the caller, this function requires the provided buffer to be large enough to store the entire value; providing a buffer that is too small will result in execution being aborted. 
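As a non-normative illustration, a runtime might drive the new prototype roughly as sketched below in Rust. The extern declaration mirrors the prototype above; the pointer-size packing, the `storage_root` wrapper name, and the 33-byte buffer figure are assumptions made only for this sketch.

```rust
// Non-normative sketch only. The import matches the prototype above
// (a single pointer-size parameter, no return value); everything else
// is an assumption for illustration.
extern "C" {
    fn ext_storage_root_version_3(out: u64);
}

// Illustrative figure only; the runtime knows the real encoded size of the root.
const ENCODED_ROOT_LEN: usize = 33;

fn storage_root() -> Vec<u8> {
    let mut out = vec![0u8; ENCODED_ROOT_LEN];
    // Pack the buffer address and length into a pointer-size
    // (assumed layout: address in the low 32 bits, length in the high 32 bits).
    let out_ptr_size =
        (out.as_mut_ptr() as usize as u32 as u64) | ((out.len() as u32 as u64) << 32);
    // The host fills `out` with the SCALE-encoded storage root; a buffer that
    // is too small aborts execution, so the runtime must size it correctly.
    unsafe { ext_storage_root_version_3(out_ptr_size) };
    out
}
```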
#### ext_storage_next_key @@ -386,19 +382,19 @@ The result represents the length of the continuation cursor, which might have be ##### Changes -The old version accepted (along with the child storage key) the state version as an argument and returned a SCALE-encoded trie root hash through a host-allocated buffer. The new version adopts [PPP#6](https://github.com/w3f/PPPs/pull/6), getting rid of the argument that used to represent the state version. It accepts a pointer to a runtime-allocated buffer and fills it with the output value. The length of the encoded result is returned. +The old version accepted (along with the child storage key) the state version as an argument and returned a SCALE-encoded trie root hash through a host-allocated buffer. The new version adopts [PPP#6](https://github.com/w3f/PPPs/pull/6), getting rid of the argument that used to represent the state version. It accepts a pointer to a runtime-allocated buffer and fills it with the output value. ##### New prototype ```wat (func $ext_default_child_storage_root_version_3 - (param $storage_key i64) (param $out i64) (result i32)) + (param $storage_key i64) (param $out i64)) ``` ##### Arguments * `storage_key` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the child storage key ([Definition 219](https://spec.polkadot.network/chap-host-api#defn-child-storage-type)); -* `out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the SCALE-encoded storage root, calculated after committing all the existing operations, will be stored. The value is actually stored only if the buffer is large enough. Otherwise, the buffer is not written into, and its contents are unchanged. +* `out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the SCALE-encoded storage root, calculated after committing all the existing operations, will be stored. Since the size of the resulting value is known to the caller, this function requires the provided buffer to be large enough to store the entire value; providing a buffer that is too small will result in execution being aborted. ##### Results @@ -608,7 +604,7 @@ The functions used to return a host-allocated SCALE-encoded value representing t ```wat (func $ext_crypto_{ed25519|sr25519|ecdsa}_sign{_prehashed|}_version_2 - (param $id i32) (param $pub_key i32) (param $msg i64) (param $out i64) (result i64)) + (param $id i32) (param $pub_key i32) (param $msg i64) (param $out i64) (result i32)) ``` ##### Arguments @@ -643,7 +639,7 @@ The functions used to return a host-allocated SCALE-encoded value representing t ```wat (func $ext_crypto_secp256k1_ecdsa_recover\[_compressed]_version_3 - (param $sig i32) (param $msg i32) (param $out i32) (result i64)) + (param $sig i32) (param $msg i32) (param $out i32) (result i32)) ``` ##### Arguments @@ -708,7 +704,7 @@ The old version returned a SCALE-encoded result in a host-allocated buffer. That ```wat (func $ext_offchain_submit_transaction_version_2 - (param $data i64) (result i64)) + (param $data i64) (result i32)) ``` ##### Arguments @@ -742,7 +738,7 @@ A new function is introduced to replace `ext_offchain_network_state`. 
It fills t ```wat (func $ext_offchain_submit_transaction_version_2 - (param $out i32) (result i64)) + (param $out i32) (result i32)) ``` ##### Arguments From 54d0435716c11bb9402b27bfad4f9bad50d342a8 Mon Sep 17 00:00:00 2001 From: Dmitry Sinyavin Date: Thu, 29 Jan 2026 12:46:38 +0100 Subject: [PATCH 30/30] Fix Polkadot spec dead links --- ...0145-remove-unnecessary-allocator-usage.md | 128 +++++++++--------- 1 file changed, 64 insertions(+), 64 deletions(-) diff --git a/text/0145-remove-unnecessary-allocator-usage.md b/text/0145-remove-unnecessary-allocator-usage.md index d7dbcf11b..7f045c5d8 100644 --- a/text/0145-remove-unnecessary-allocator-usage.md +++ b/text/0145-remove-unnecessary-allocator-usage.md @@ -81,13 +81,13 @@ Conforming implementations must not produce invalid values when encoding. Receiv #### New Definition II: Runtime Optional Pointer-Size -The Runtime optional pointer-size has exactly the same definition as Runtime pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) with the value of 2⁶⁴-1 representing a non-existing value (an _absent_ value). +The Runtime optional pointer-size has exactly the same definition as Runtime pointer-size ([Definition 216](https://polkadotspec.dev/chap-host-api#defn-runtime-pointer-size)) with the value of 2⁶⁴-1 representing a non-existing value (an _absent_ value). ### Memory safety Pointers to input parameters passed to the host must reference readable memory regions. The host must abort execution if the memory region referenced by the pointer cannot be read, unless explicitly stated otherwise. -Pointers to output parameters passed to the host must reference writeable memory regions. With Runtime pointer-sizes ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)), the `size` part must represent the size of the continuously writable memory region pointer by the `pointer` part. With Runtime pointers ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)), which imply that the data size is known to the caller, the pointer must point to a continuously writable memory region of a size at least the size of the data in question. The host must abort execution if the memory region referenced by the pointer cannot be written, in case it is performing the actual write, unless explicitly stated otherwise. +Pointers to output parameters passed to the host must reference writeable memory regions. With Runtime pointer-sizes ([Definition 216](https://polkadotspec.dev/chap-host-api#defn-runtime-pointer-size)), the `size` part must represent the size of the continuously writable memory region pointed to by the `pointer` part. With Runtime pointers ([Definition 215](https://polkadotspec.dev/chap-host-api#defn-runtime-pointer)), which imply that the data size is known to the caller, the pointer must point to a continuously writable memory region of a size at least the size of the data in question. The host must abort execution if the memory region referenced by the pointer cannot be written, in case it is performing the actual write, unless explicitly stated otherwise. 
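As a non-normative illustration of the two value shapes used by the functions in this RFC, the Rust sketch below packs a pointer-size and an optional pointer-size. The bit layout (address in the lower 32 bits, length in the upper 32 bits) is assumed to follow Definition 216; the helper names are illustrative only.

```rust
// Non-normative helpers for the value shapes used throughout this section.
// The bit layout is assumed from Definition 216: address in the lower
// 32 bits, length in the upper 32 bits.

/// Sentinel for an _absent_ optional pointer-size (2^64 - 1), per New Definition II.
const ABSENT: u64 = u64::MAX;

/// Packs a buffer address and length into a 64-bit pointer-size.
fn pack_ptr_size(ptr: u32, len: u32) -> u64 {
    (ptr as u64) | ((len as u64) << 32)
}

/// Packs an optional buffer into an optional pointer-size.
fn pack_optional_ptr_size(buf: Option<(u32, u32)>) -> u64 {
    match buf {
        Some((ptr, len)) => pack_ptr_size(ptr, len),
        None => ABSENT,
    }
}
```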
### Runtime entry points @@ -148,8 +148,8 @@ The function was returning a SCALE-encoded `Option`-wrapped 32-bit integer repre ##### Arguments -* `key` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the storage key being read; -* `value_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the value read should be stored. Let $\mathcal{out\_len}$ denote the length component of this pointer-size (i.e., the size of the output buffer), and let $\mathcal{value\_len}$ denote the actual length of the value in storage starting from `value_offset`. The implementation must write $\mathrm{min}(\mathcal{value\_len}, \mathcal{out\_len})$ bytes of the value to `value_out` only if $(\mathcal{out\_len} \geq \mathcal{value\_len}) \lor (\mathcal{allow\_partial} = \mathrm{true})$. If $(\mathcal{out\_len} < \mathcal{value\_len}) \land (\mathcal{allow\_partial} = \mathrm{false})$, the implementation must not write any bytes to `value_out` and must leave the buffer unchanged; +* `key` is a pointer-size ([Definition 216](https://polkadotspec.dev/chap-host-api#defn-runtime-pointer-size)) to the storage key being read; +* `value_out` is a pointer-size ([Definition 216](https://polkadotspec.dev/chap-host-api#defn-runtime-pointer-size)) to a buffer where the value read should be stored. Let $\mathcal{out\_len}$ denote the length component of this pointer-size (i.e., the size of the output buffer), and let $\mathcal{value\_len}$ denote the actual length of the value in storage starting from `value_offset`. The implementation must write $\mathrm{min}(\mathcal{value\_len}, \mathcal{out\_len})$ bytes of the value to `value_out` only if $(\mathcal{out\_len} \geq \mathcal{value\_len}) \lor (\mathcal{allow\_partial} = \mathrm{true})$. If $(\mathcal{out\_len} < \mathcal{value\_len}) \land (\mathcal{allow\_partial} = \mathrm{false})$, the implementation must not write any bytes to `value_out` and must leave the buffer unchanged; * `value_offset` is an unsigned 32-bit offset from which the value reading should start; * `allow_partial` is a boolean value, where `0` represents `false` and any other value represents `true`, denoting if the output buffer must be partially written even if its length is not enough to accommodate the whole value. @@ -180,11 +180,11 @@ The function used to accept only a prefix and a limit and return a SCALE-encoded ##### Arguments -* `maybe_prefix` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) containing a (possibly empty) storage prefix being cleared; +* `maybe_prefix` is a pointer-size ([Definition 216](https://polkadotspec.dev/chap-host-api#defn-runtime-pointer-size)) containing a (possibly empty) storage prefix being cleared; * `maybe_limit` is an optional positive integer ([New Definition I](#new-def-i)) representing either the maximum number of backend deletions which may happen, or the _absence_ of such a limit. The number of backend iterations may surpass this limit by no more than one; * `maybe_cursor_in` is an optional pointer-size ([New Definition II](#new-def-ii)) representing the cursor returned by the previous (unfinished) call to this function. 
It should be _absent_ on the first call; -* `maybe_cursor_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the continuation cursor will optionally be written (see also the Result section). The value is actually stored only if the buffer is large enough. Whenever the value is not written into the buffer, the buffer contents are unmodified; -* `counters_out` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to a 12-byte buffer where three low-endian 32-bit integers will be stored one after another, representing the counters, respectively: +* `maybe_cursor_out` is a pointer-size ([Definition 216](https://polkadotspec.dev/chap-host-api#defn-runtime-pointer-size)) to a buffer where the continuation cursor will optionally be written (see also the Result section). The value is actually stored only if the buffer is large enough. Whenever the value is not written into the buffer, the buffer contents are unmodified; +* `counters_out` is a pointer ([Definition 215](https://polkadotspec.dev/chap-host-api#defn-runtime-pointer)) to a 12-byte buffer where three low-endian 32-bit integers will be stored one after another, representing the counters, respectively: * Of items removed from the backend database will be written; * Of unique keys removed, taking into account both the backend and the overlay; * Of iterations (each requiring a storage seek/read) that were done. @@ -217,7 +217,7 @@ The old version accepted the state version as an argument and returned a SCALE-e ##### Arguments -* `out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the SCALE-encoded storage root, calculated after committing all the existing operations, will be stored. Since the size of the resulting value is known to the caller, this function requires the provided buffer to be large enough to store the entire value; providing a buffer that is too small will result in execution being aborted. +* `out` is a pointer-size ([Definition 216](https://polkadotspec.dev/chap-host-api#defn-runtime-pointer-size)) to a buffer where the SCALE-encoded storage root, calculated after committing all the existing operations, will be stored. Since the size of the resulting value is known to the caller, this function requires the provided buffer to be large enough to store the entire value; providing a buffer that is too small will result in execution being aborted. #### ext_storage_next_key @@ -241,8 +241,8 @@ The old version accepted the key and returned the SCALE-encoded next key in a ho ##### Arguments -* `key_in` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer containing a storage key; -* `key_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to an output buffer where the next key in the storage in the lexicographical order will be written. The value is actually stored only if the next key exists and the buffer is large enough. Otherwise, the buffer is not written into, and its contents are unchanged. 
+* `key_in` is a pointer-size ([Definition 216](https://polkadotspec.dev/chap-host-api#defn-runtime-pointer-size)) to a buffer containing a storage key; +* `key_out` is a pointer-size ([Definition 216](https://polkadotspec.dev/chap-host-api#defn-runtime-pointer-size)) to an output buffer where the next key in the storage in the lexicographical order will be written. The value is actually stored only if the next key exists and the buffer is large enough. Otherwise, the buffer is not written into, and its contents are unchanged. ##### Result @@ -285,9 +285,9 @@ The function was returning a SCALE-encoded `Option`-wrapped 32-bit integer repre ##### Arguments -* `storage_key` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the child storage key ([Definition 219](https://spec.polkadot.network/chap-host-api#defn-child-storage-type)); -* `key` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the storage key being read; -* `value_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the value read should be stored. Let $\mathcal{out\_len}$ denote the length component of this pointer-size (i.e., the size of the output buffer), and let $\mathcal{value\_len}$ denote the actual length of the value in storage starting from `value_offset`. The implementation must write $\mathrm{min}(\mathcal{value\_len}, \mathcal{out\_len})$ bytes of the value to `value_out` only if $(\mathcal{out\_len} \geq \mathcal{value\_len}) \lor (\mathcal{allow\_partial} = \mathrm{true})$. If $(\mathcal{out\_len} < \mathcal{value\_len}) \land (\mathcal{allow\_partial} = \mathrm{false})$, the implementation must not write any bytes to `value_out` and must leave the buffer unchanged; +* `storage_key` is a pointer-size ([Definition 216](https://polkadotspec.dev/chap-host-api#defn-runtime-pointer-size)) to the child storage key ([Definition 219](https://polkadotspec.dev/chap-host-api#defn-child-storage-type)); +* `key` is a pointer-size ([Definition 216](https://polkadotspec.dev/chap-host-api#defn-runtime-pointer-size)) to the storage key being read; +* `value_out` is a pointer-size ([Definition 216](https://polkadotspec.dev/chap-host-api#defn-runtime-pointer-size)) to a buffer where the value read should be stored. Let $\mathcal{out\_len}$ denote the length component of this pointer-size (i.e., the size of the output buffer), and let $\mathcal{value\_len}$ denote the actual length of the value in storage starting from `value_offset`. The implementation must write $\mathrm{min}(\mathcal{value\_len}, \mathcal{out\_len})$ bytes of the value to `value_out` only if $(\mathcal{out\_len} \geq \mathcal{value\_len}) \lor (\mathcal{allow\_partial} = \mathrm{true})$. If $(\mathcal{out\_len} < \mathcal{value\_len}) \land (\mathcal{allow\_partial} = \mathrm{false})$, the implementation must not write any bytes to `value_out` and must leave the buffer unchanged; * `value_offset` is an unsigned 32-bit offset from which the value reading should start; * `allow_partial` is a boolean value, where `0` represents `false` and any other value represents `true`, denoting if the output buffer must be partially written even if its length is not enough to accommodate the whole value. 
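The buffer-write rule shared by `ext_storage_read_version_2` and `ext_default_child_storage_read_version_2` above can be restated as a small predicate. The Rust sketch below is purely illustrative and only mirrors the condition already given in the argument descriptions.

```rust
// Non-normative restatement of the write rule described above.
// `value_len` is the length of the stored value starting at `value_offset`;
// `out_len` is the size of the runtime-provided output buffer.
fn bytes_to_write(value_len: usize, out_len: usize, allow_partial: bool) -> Option<usize> {
    if out_len >= value_len || allow_partial {
        // min(value_len, out_len) bytes are written into `value_out`.
        Some(value_len.min(out_len))
    } else {
        // The buffer is not written into and its contents stay unchanged.
        None
    }
}
```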
@@ -319,11 +319,11 @@ The function used to accept only a child storage key and a limit and return a SC ##### Arguments -* `storage_key` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the child storage key ([Definition 219](https://spec.polkadot.network/chap-host-api#defn-child-storage-type)); +* `storage_key` is a pointer-size ([Definition 216](https://polkadotspec.dev/chap-host-api#defn-runtime-pointer-size)) to the child storage key ([Definition 219](https://polkadotspec.dev/chap-host-api#defn-child-storage-type)); * `maybe_limit` is an optional positive integer representing either the maximum number of backend deletions that may happen, or the absence of such a limit. The number of backend iterations may surpass this limit by no more than one; * `maybe_cursor_in` is an optional pointer-size representing the cursor returned by the previous (unfinished) call to this function. It should be _absent_ on the first call; -* `maybe_cursor_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the continuation cursor will optionally be written (see also the Result section). The value is actually stored only if the buffer is large enough. Whenever the value is not written into the buffer, the buffer contents are unmodified; -* `counters_out` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to a 12-byte buffer where three low-endian 32-bit integers will be stored one after another, representing the counters, respectively: +* `maybe_cursor_out` is a pointer-size ([Definition 216](https://polkadotspec.dev/chap-host-api#defn-runtime-pointer-size)) to a buffer where the continuation cursor will optionally be written (see also the Result section). The value is actually stored only if the buffer is large enough. Whenever the value is not written into the buffer, the buffer contents are unmodified; +* `counters_out` is a pointer ([Definition 215](https://polkadotspec.dev/chap-host-api#defn-runtime-pointer)) to a 12-byte buffer where three low-endian 32-bit integers will be stored one after another, representing the counters, respectively: * Of items removed from the backend database will be written; * Of unique keys removed, taking into account both the backend and the overlay; * Of iterations (each requiring a storage seek/read) that were done. @@ -357,12 +357,12 @@ The function used to accept (along with the child storage key) only a prefix and ##### Arguments -* `storage_key` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the child storage key ([Definition 219](https://spec.polkadot.network/chap-host-api#defn-child-storage-type)); -* `prefix` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) containing a storage prefix being cleared; +* `storage_key` is a pointer-size ([Definition 216](https://polkadotspec.dev/chap-host-api#defn-runtime-pointer-size)) to the child storage key ([Definition 219](https://polkadotspec.dev/chap-host-api#defn-child-storage-type)); +* `prefix` is a pointer-size ([Definition 216](https://polkadotspec.dev/chap-host-api#defn-runtime-pointer-size)) containing a storage prefix being cleared; * `maybe_limit` is an optional positive integer representing either the maximum number of backend deletions that may happen, or the absence of such a limit. 
The number of backend iterations may surpass this limit by no more than one; * `maybe_cursor_in` is an optional pointer-size representing the cursor returned by the previous (unfinished) call to this function. It should be _absent_ on the first call; -* `maybe_cursor_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the continuation cursor will optionally be written (see also the Result section). The value is actually stored only if the buffer is large enough. Whenever the value is not written into the buffer, the buffer contents are unmodified; -* `counters_out` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to a 12-byte buffer where three low-endian 32-bit integers will be stored one after another, representing the counters, respectively: +* `maybe_cursor_out` is a pointer-size ([Definition 216](https://polkadotspec.dev/chap-host-api#defn-runtime-pointer-size)) to a buffer where the continuation cursor will optionally be written (see also the Result section). The value is actually stored only if the buffer is large enough. Whenever the value is not written into the buffer, the buffer contents are unmodified; +* `counters_out` is a pointer ([Definition 215](https://polkadotspec.dev/chap-host-api#defn-runtime-pointer)) to a 12-byte buffer where three low-endian 32-bit integers will be stored one after another, representing the counters, respectively: * Of items removed from the backend database will be written; * Of unique keys removed, taking into account both the backend and the overlay; * Of iterations (each requiring a storage seek/read) that were done. @@ -393,8 +393,8 @@ The old version accepted (along with the child storage key) the state version as ##### Arguments -* `storage_key` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the child storage key ([Definition 219](https://spec.polkadot.network/chap-host-api#defn-child-storage-type)); -* `out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the SCALE-encoded storage root, calculated after committing all the existing operations, will be stored. Since the size of the resulting value is known to the caller, this function requires the provided buffer to be large enough to store the entire value; providing a buffer that is too small will result in execution being aborted. +* `storage_key` is a pointer-size ([Definition 216](https://polkadotspec.dev/chap-host-api#defn-runtime-pointer-size)) to the child storage key ([Definition 219](https://polkadotspec.dev/chap-host-api#defn-child-storage-type)); +* `out` is a pointer-size ([Definition 216](https://polkadotspec.dev/chap-host-api#defn-runtime-pointer-size)) to a buffer where the SCALE-encoded storage root, calculated after committing all the existing operations, will be stored. Since the size of the resulting value is known to the caller, this function requires the provided buffer to be large enough to store the entire value; providing a buffer that is too small will result in execution being aborted. 
##### Results @@ -422,9 +422,9 @@ The old version accepted (along with the child storage key) the key and returned ##### Arguments -* `storage_key` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the child storage key ([Definition 219](https://spec.polkadot.network/chap-host-api#defn-child-storage-type)); -* `key_in` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer containing a storage key; -* `key_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to an output buffer where the next key in the storage in the lexicographical order will be written. The value is actually stored only if the next key exists and the buffer is large enough. Otherwise, the buffer is not written into, and its contents are unchanged. +* `storage_key` is a pointer-size ([Definition 216](https://polkadotspec.dev/chap-host-api#defn-runtime-pointer-size)) to the child storage key ([Definition 219](https://polkadotspec.dev/chap-host-api#defn-child-storage-type)); +* `key_in` is a pointer-size ([Definition 216](https://polkadotspec.dev/chap-host-api#defn-runtime-pointer-size)) to a buffer containing a storage key; +* `key_out` is a pointer-size ([Definition 216](https://polkadotspec.dev/chap-host-api#defn-runtime-pointer-size)) to an output buffer where the next key in the storage in the lexicographical order will be written. The value is actually stored only if the next key exists and the buffer is large enough. Otherwise, the buffer is not written into, and its contents are unchanged. ##### Result @@ -458,9 +458,9 @@ The functions used to return the root in a 32-byte host-allocated buffer. They n ##### Arguments -* `input` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the SCALE-encoded vector of the trie key-value pairs; +* `input` is a pointer-size ([Definition 216](https://polkadotspec.dev/chap-host-api#defn-runtime-pointer-size)) to the SCALE-encoded vector of the trie key-value pairs; * `version` is the state version, where `0` denotes V0 and `1` denotes V1 state version. Other state versions may be introduced in the future; -* `out` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to a 32-byte buffer, where the calculated trie root will be stored. +* `out` is a pointer ([Definition 215](https://polkadotspec.dev/chap-host-api#defn-runtime-pointer)) to a 32-byte buffer, where the calculated trie root will be stored. #### ext_misc_runtime_version @@ -483,8 +483,8 @@ The function used to return the SCALE-encoded runtime version information in a h ``` ##### Arguments -* `wasm` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the Wasm blob from which the version information should be extracted; -* `out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the buffer where the SCALE-encoded extracted version information will be stored. The value is actually stored only if the buffer is large enough. Otherwise, the buffer is not written into, and its contents are unchanged. 
+* `wasm` is a pointer-size ([Definition 216](https://polkadotspec.dev/chap-host-api#defn-runtime-pointer-size)) to the Wasm blob from which the version information should be extracted; +* `out` is a pointer-size ([Definition 216](https://polkadotspec.dev/chap-host-api#defn-runtime-pointer-size)) to the buffer where the SCALE-encoded extracted version information will be stored. The value is actually stored only if the buffer is large enough. Otherwise, the buffer is not written into, and its contents are unchanged. ##### Result @@ -504,7 +504,7 @@ A new function is introduced to make it possible to fetch a cursor produced by ` ``` ##### Arguments -* `out` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to the buffer where the last cached cursor will be stored, if one exists. The caller must provide a buffer large enough to accommodate the entire cursor; the exact length of the cursor is known to the caller from the result of the preceding call to one of the storage prefix clearing functions. If the buffer provided is not large enough, execution is aborted. +* `out` is a pointer ([Definition 215](https://polkadotspec.dev/chap-host-api#defn-runtime-pointer)) to the buffer where the last cached cursor will be stored, if one exists. The caller must provide a buffer large enough to accommodate the entire cursor; the exact length of the cursor is known to the caller from the result of the preceding call to one of the storage prefix clearing functions. If the buffer provided is not large enough, execution is aborted. After this function is called, the cursor cache is cleared, and the same cursor cannot be retrieved again using this function. @@ -543,8 +543,8 @@ The functions used to return a SCALE-encoded array of public keys in a host-allo ##### Arguments -* `id` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to the key type identifier ([Definition 220](https://spec.polkadot.network/chap-host-api#defn-key-type-id)); -* `out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the public keys of the given type known to the keystore will be stored consecutively. The value is actually stored only if the buffer is large enough. Otherwise, the buffer is not written into, and its contents are unchanged. +* `id` is a pointer ([Definition 215](https://polkadotspec.dev/chap-host-api#defn-runtime-pointer)) to the key type identifier ([Definition 220](https://polkadotspec.dev/chap-host-api#defn-key-type-id)); +* `out` is a pointer-size ([Definition 216](https://polkadotspec.dev/chap-host-api#defn-runtime-pointer-size)) to a buffer where the public keys of the given type known to the keystore will be stored consecutively. The value is actually stored only if the buffer is large enough. Otherwise, the buffer is not written into, and its contents are unchanged. ##### Result @@ -577,9 +577,9 @@ The functions used to return a host-allocated buffer containing the key of the c ##### Arguments -* `id` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to the key type identifier ([Definition 220](https://spec.polkadot.network/chap-host-api#defn-key-type-id)). 
Execution will be aborted if the identifier is invalid; +* `id` is a pointer ([Definition 215](https://polkadotspec.dev/chap-host-api#defn-runtime-pointer)) to the key type identifier ([Definition 220](https://polkadotspec.dev/chap-host-api#defn-key-type-id)). Execution will be aborted if the identifier is invalid; * `seed` is an optional pointer-size ([New Definition II](#new-def-ii)) to the BIP-39 seed which must be valid UTF-8. Execution will be aborted if the seed is not a valid UTF-8 string; -* `out` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to the output buffer of the respective size (depending on key type) where the generated key will be written. +* `out` is a pointer ([Definition 215](https://polkadotspec.dev/chap-host-api#defn-runtime-pointer)) to the output buffer of the respective size (depending on key type) where the generated key will be written. #### ext_crypto_{ed25519|sr25519|ecdsa}_sign\[_prehashed] @@ -609,10 +609,10 @@ The functions used to return a host-allocated SCALE-encoded value representing t ##### Arguments -* `id` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to the key type identifier ([Definition 220](https://spec.polkadot.network/chap-host-api#defn-key-type-id)). Execution will be aborted if the identifier is invalid; -* `pub_key` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to the public key bytes (as returned by the respective `_public_key` function); -* `msg` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the message that is to be signed; -* `out` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to the output buffer of the respective size (depending on key type) where the signature will be written. +* `id` is a pointer ([Definition 215](https://polkadotspec.dev/chap-host-api#defn-runtime-pointer)) to the key type identifier ([Definition 220](https://polkadotspec.dev/chap-host-api#defn-key-type-id)). Execution will be aborted if the identifier is invalid; +* `pub_key` is a pointer ([Definition 215](https://polkadotspec.dev/chap-host-api#defn-runtime-pointer)) to the public key bytes (as returned by the respective `_public_key` function); +* `msg` is a pointer-size ([Definition 216](https://polkadotspec.dev/chap-host-api#defn-runtime-pointer-size)) to the message that is to be signed; +* `out` is a pointer ([Definition 215](https://polkadotspec.dev/chap-host-api#defn-runtime-pointer)) to the output buffer of the respective size (depending on key type) where the signature will be written. ##### Result @@ -633,7 +633,7 @@ The following functions share the same signatures and set of changes: * `ext_crypto_secp256k1_ecdsa_recover` * `ext_crypto_secp256k1_ecdsa_recover_compressed` -The functions used to return a host-allocated SCALE-encoded value representing the result of the key recovery. They are changed to accept a pointer to a runtime-allocated buffer of a known size and to return a result code. The return error encoding, defined under [Definition 221](https://spec.polkadot.network/chap-host-api#defn-ecdsa-verify-error), is changed to promote the unification of host function result reporting (zero and positive values are for success, and the negative values are for failure codes). 
+The functions used to return a host-allocated SCALE-encoded value representing the result of the key recovery. They are changed to accept a pointer to a runtime-allocated buffer of a known size and to return a result code. The return error encoding, defined under [Definition 221](https://polkadotspec.dev/chap-host-api#defn-ecdsa-verify-error), is changed to promote the unification of host function result reporting (zero and positive values are for success, and the negative values are for failure codes). ##### New prototypes @@ -644,9 +644,9 @@ The functions used to return a host-allocated SCALE-encoded value representing t ##### Arguments -* `sig` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to the buffer containing the 65-byte signature in RSV format. V must be either 0/1 or 27/28; -* `msg` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to the buffer containing the 256-bit Blake2 hash of the message; -* `out` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to the output buffer of the respective size (depending on key type) where the recovered public key will be written. +* `sig` is a pointer ([Definition 215](https://polkadotspec.dev/chap-host-api#defn-runtime-pointer)) to the buffer containing the 65-byte signature in RSV format. V must be either 0/1 or 27/28; +* `msg` is a pointer ([Definition 215](https://polkadotspec.dev/chap-host-api#defn-runtime-pointer)) to the buffer containing the 256-bit Blake2 hash of the message; +* `out` is a pointer ([Definition 215](https://polkadotspec.dev/chap-host-api#defn-runtime-pointer)) to the output buffer of the respective size (depending on key type) where the recovered public key will be written. ##### Result @@ -684,8 +684,8 @@ The functions used to return a host-allocated buffer containing the hash. They a ##### Arguments -* `data` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the data to be hashed. -* `out` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to the output buffer of the respective size (depending on hash type) where the calculated hash will be written. +* `data` is a pointer-size ([Definition 216](https://polkadotspec.dev/chap-host-api#defn-runtime-pointer-size)) to the data to be hashed. +* `out` is a pointer ([Definition 215](https://polkadotspec.dev/chap-host-api#defn-runtime-pointer)) to the output buffer of the respective size (depending on hash type) where the calculated hash will be written. #### ext_offchain_submit_transaction @@ -709,7 +709,7 @@ The old version returned a SCALE-encoded result in a host-allocated buffer. That ##### Arguments -* `data` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the byte array storing the encoded extrinsic. +* `data` is a pointer-size ([Definition 216](https://polkadotspec.dev/chap-host-api#defn-runtime-pointer-size)) to the byte array storing the encoded extrinsic. ##### Result @@ -743,7 +743,7 @@ A new function is introduced to replace `ext_offchain_network_state`. It fills t ##### Arguments -* `out` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to the output buffer, 38 bytes long, where the network peer ID will be written. 
+* `out` is a pointer ([Definition 215](https://polkadotspec.dev/chap-host-api#defn-runtime-pointer)) to the output buffer, 38 bytes long, where the network peer ID will be written. ##### Result @@ -771,7 +771,7 @@ The function used to return a host-allocated buffer containing the random seed. ##### Arguments -* `out` is a pointer ([Definition 215](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer)) to the output buffer, 32 bytes long, where the random seed will be written. +* `out` is a pointer ([Definition 215](https://polkadotspec.dev/chap-host-api#defn-runtime-pointer)) to the output buffer, 32 bytes long, where the random seed will be written. #### ext_offchain_local_storage_get @@ -801,9 +801,9 @@ A new function is introduced to replace `ext_offchain_local_storage_get`. The na ##### Arguments -* `kind` is an offchain storage kind, where `0` denotes the persistent storage ([Definition 222](https://spec.polkadot.network/chap-host-api#defn-offchain-persistent-storage)), and `1` denotes the local storage ([Definition 223](https://spec.polkadot.network/chap-host-api#defn-offchain-persistent-storage)); -* `key` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the storage key being read; -* `value_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to a buffer where the value read should be stored. The value is actually stored only if the buffer is large enough. Otherwise, the buffer is not written into, and its contents are unchanged; +* `kind` is an offchain storage kind, where `0` denotes the persistent storage ([Definition 222](https://polkadotspec.dev/chap-host-api#defn-offchain-persistent-storage)), and `1` denotes the local storage ([Definition 223](https://polkadotspec.dev/chap-host-api#defn-offchain-persistent-storage)); +* `key` is a pointer-size ([Definition 216](https://polkadotspec.dev/chap-host-api#defn-runtime-pointer-size)) to the storage key being read; +* `value_out` is a pointer-size ([Definition 216](https://polkadotspec.dev/chap-host-api#defn-runtime-pointer-size)) to a buffer where the value read should be stored. The value is actually stored only if the buffer is large enough. Otherwise, the buffer is not written into, and its contents are unchanged; * `offset` is a 32-bit offset from which the value reading should start. ##### Result @@ -832,8 +832,8 @@ The function used to return a SCALE-encoded `Result` value in a host-allocated b ##### Arguments -`method` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the HTTP method. Possible values are "GET" and "POST"; -`uri` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the URI; +`method` is a pointer-size ([Definition 216](https://polkadotspec.dev/chap-host-api#defn-runtime-pointer-size)) to the HTTP method. Possible values are "GET" and "POST"; +`uri` is a pointer-size ([Definition 216](https://polkadotspec.dev/chap-host-api#defn-runtime-pointer-size)) to the URI; `meta` is a future-reserved field containing a SCALE-encoded array with additional parameters. Currently, passing anything but a readable pointer to an empty array shall result in an execution abort. This is to ensure backwards compatibility in case future versions start interpreting the contents of the array. 
##### Result @@ -863,8 +863,8 @@ The function used to return a SCALE-encoded `Result` value in a host-allocated b ##### Arguments * `request_id` is an i32 integer indicating the ID of the started request, as returned by `ext_offchain_http_request_start`; -* `name` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the HTTP header name; -* `value` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the HTTP header value. +* `name` is a pointer-size ([Definition 216](https://polkadotspec.dev/chap-host-api#defn-runtime-pointer-size)) to the HTTP header name; +* `value` is a pointer-size ([Definition 216](https://polkadotspec.dev/chap-host-api#defn-runtime-pointer-size)) to the HTTP header value. ##### Result @@ -893,8 +893,8 @@ The function used to return a SCALE-encoded `Result` value in a host-allocated b ##### Arguments * `request_id` is an i32 integer indicating the ID of the started request, as returned by `ext_offchain_http_request_start`; -* `chunk` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the chunk of bytes. Writing an empty chunk finalizes the request; -* `deadline` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the SCALE-encoded Option value ([Definition 200](https://spec.polkadot.network/id-cryptography-encoding#defn-option-type)) containing the UNIX timestamp ([Definition 191](https://spec.polkadot.network/id-cryptography-encoding#defn-unix-time)). Passing `None` blocks indefinitely. +* `chunk` is a pointer-size ([Definition 216](https://polkadotspec.dev/chap-host-api#defn-runtime-pointer-size)) to the chunk of bytes. Writing an empty chunk finalizes the request; +* `deadline` is a pointer-size ([Definition 216](https://polkadotspec.dev/chap-host-api#defn-runtime-pointer-size)) to the SCALE-encoded Option value ([Definition 200](https://polkadotspec.dev/id-cryptography-encoding#defn-option-type)) containing the UNIX timestamp ([Definition 191](https://polkadotspec.dev/id-cryptography-encoding#defn-unix-time)). Passing `None` blocks indefinitely. ##### Result @@ -922,9 +922,9 @@ The function used to return a SCALE-encoded array of request statuses in a host- ##### Arguments -* `ids` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the SCALE-encoded array of started request IDs, as returned by `ext_offchain_http_request_start`; -* `deadline` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the SCALE-encoded Option value ([Definition 200](https://spec.polkadot.network/id-cryptography-encoding#defn-option-type)) containing the UNIX timestamp ([Definition 191](https://spec.polkadot.network/id-cryptography-encoding#defn-unix-time)). Passing `None` blocks indefinitely; -* `out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the buffer of `i32` integers where the request statuses will be stored. The number of elements of the buffer must be strictly equal to the number of elements in the `ids` array; otherwise, execution will be aborted. 
+* `ids` is a pointer-size ([Definition 216](https://polkadotspec.dev/chap-host-api#defn-runtime-pointer-size)) to the SCALE-encoded array of started request IDs, as returned by `ext_offchain_http_request_start`; +* `deadline` is a pointer-size ([Definition 216](https://polkadotspec.dev/chap-host-api#defn-runtime-pointer-size)) to the SCALE-encoded Option value ([Definition 200](https://polkadotspec.dev/id-cryptography-encoding#defn-option-type)) containing the UNIX timestamp ([Definition 191](https://polkadotspec.dev/id-cryptography-encoding#defn-unix-time)). Passing `None` blocks indefinitely; +* `out` is a pointer-size ([Definition 216](https://polkadotspec.dev/chap-host-api#defn-runtime-pointer-size)) to the buffer of `i32` integers where the request statuses will be stored. The number of elements of the buffer must be strictly equal to the number of elements in the `ids` array; otherwise, execution will be aborted. #### ext_offchain_http_response_headers @@ -956,7 +956,7 @@ New function to replace the functionality of `ext_offchain_http_response_headers * `request_id` is an i32 integer indicating the ID of the started request, as returned by `ext_offchain_http_request_start`; * `header_index` is an i32 integer indicating the index of the header requested, starting from zero; -* `out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the buffer where the header name will be stored. The value is actually stored only if the buffer is large enough. Otherwise, the buffer is not written into, and its contents are unchanged. +* `out` is a pointer-size ([Definition 216](https://polkadotspec.dev/chap-host-api#defn-runtime-pointer-size)) to the buffer where the header name will be stored. The value is actually stored only if the buffer is large enough. Otherwise, the buffer is not written into, and its contents are unchanged. ##### Result @@ -979,7 +979,7 @@ New function to replace the functionality of `ext_offchain_http_response_headers * `request_id` is an i32 integer indicating the ID of the started request, as returned by `ext_offchain_http_request_start`; * `header_index` is an i32 integer indicating the index of the header requested, starting from zero; -* `out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the buffer where the header value will be stored. The value is actually stored only if the buffer is large enough. Otherwise, the buffer is not written into, and its contents are unchanged. +* `out` is a pointer-size ([Definition 216](https://polkadotspec.dev/chap-host-api#defn-runtime-pointer-size)) to the buffer where the header value will be stored. The value is actually stored only if the buffer is large enough. Otherwise, the buffer is not written into, and its contents are unchanged. 
##### Result @@ -1008,8 +1008,8 @@ The function has already been using a runtime-allocated buffer to return its val ##### Arguments * `request_id` is an i32 integer indicating the ID of the started request, as returned by `ext_offchain_http_request_start`; -* `buffer_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the buffer where the body is written; -* `deadline` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the SCALE-encoded Option value ([Definition 200](https://spec.polkadot.network/id-cryptography-encoding#defn-option-type)) containing the UNIX timestamp ([Definition 191](https://spec.polkadot.network/id-cryptography-encoding#defn-unix-time)). Passing `None` blocks indefinitely. +* `buffer_out` is a pointer-size ([Definition 216](https://polkadotspec.dev/chap-host-api#defn-runtime-pointer-size)) to the buffer where the body is written; +* `deadline` is a pointer-size ([Definition 216](https://polkadotspec.dev/chap-host-api#defn-runtime-pointer-size)) to the SCALE-encoded Option value ([Definition 200](https://polkadotspec.dev/id-cryptography-encoding#defn-option-type)) containing the UNIX timestamp ([Definition 191](https://polkadotspec.dev/id-cryptography-encoding#defn-unix-time)). Passing `None` blocks indefinitely. ##### Result @@ -1041,4 +1041,4 @@ A new function providing a means of passing input data from the host to the runt ##### Arguments -* `buffer_out` is a pointer-size ([Definition 216](https://spec.polkadot.network/chap-host-api#defn-runtime-pointer-size)) to the buffer where the input data will be written. If the buffer is not large enough to accommodate the input data, execution will be aborted. +* `buffer_out` is a pointer-size ([Definition 216](https://polkadotspec.dev/chap-host-api#defn-runtime-pointer-size)) to the buffer where the input data will be written. If the buffer is not large enough to accommodate the input data, execution will be aborted.
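As a non-normative illustration, the runtime side of the new input-passing protocol might look like the Rust sketch below. The `_version_1` suffix, the exact import signature, and the way the input length becomes known to the runtime (defined in the "Other changes" section) are assumptions made only for this sketch.

```rust
// Non-normative sketch only. The import name and signature are assumed from
// the Arguments section above; how `input_len` reaches the runtime is defined
// by the "Other changes" section and is taken as given here.
extern "C" {
    fn ext_input_read_version_1(buffer_out: u64);
}

fn read_input(input_len: usize) -> Vec<u8> {
    // The buffer must accommodate the whole input, otherwise execution is aborted.
    let mut buf = vec![0u8; input_len];
    let ptr_size =
        (buf.as_mut_ptr() as usize as u32 as u64) | ((buf.len() as u32 as u64) << 32);
    unsafe { ext_input_read_version_1(ptr_size) };
    buf
}
```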