
Conversation

@bernardo-rf

This RFC proposes the addition of a Constrained Application Protocol (CoAP) gateway to Hyperledger Fabric. The goal is to enable IoT and resource-constrained devices to interact with Fabric networks using a lightweight, UDP-based protocol. The CoAP gateway will provide a simplified interface for submitting transactions, evaluating chaincode, and receiving events, maintaining security through DTLS encryption and certificate-based authentication.

The current Hyperledger Fabric gRPC gateway is a poor fit for IoT and resource-constrained devices, which have limited memory, processing power, and network bandwidth. Many IoT deployments use UDP-based networks or have unreliable connectivity, making gRPC/HTTP2 impractical. The CoAP gateway addresses these limitations by providing lightweight, UDP-based communication with DTLS security for encrypted and authenticated connections.

Key features include:

  • Lightweight, UDP-based communication using CoAP protocol
  • DTLS security for encrypted and authenticated connections
  • Support for resource-constrained devices
  • Event streaming capabilities
  • Integration with existing Fabric MSP for device identity management
  • Four core endpoints: /evaluate, /endorse, /submit, and /events
  • Reuse of existing gRPC gateway logic and protobuf message structures

The CoAP gateway will be implemented as an optional peer service that acts as a protocol converter, translating CoAP requests into appropriate gRPC calls while maintaining the same security model and transaction lifecycle as the existing Fabric gateway.

This enhancement enables new use cases such as supply chain tracking with IoT sensors, smart city applications, and other scenarios requiring direct integration of resource-constrained devices with Fabric networks.

Signed-off-by: Bernardo Figueiredo bernardo.figueiredo@voidsoftware.com
Co-authored-by: Marco Ferreira marco.ferreira@voidsoftware.com
Co-authored-by: Marco Cova marco.cova@voidsoftware.com

- Added a new RFC proposing a CoAP gateway to enable IoT and resource-constrained devices to interact with Fabric networks.
- Defined lightweight, UDP-based communication using CoAP protocol with DTLS security for encrypted and authenticated connections.
- Outlined the gateway architecture as an optional peer service that acts as a protocol converter.
- Included four core endpoints: /evaluate, /endorse, /submit, and /events for transaction operations.
- Provided integration with existing Fabric MSP for device identity management and X.509 certificate authentication.
- Ensured backward compatibility by making the CoAP gateway an optional feature that doesn't affect existing gRPC gateway functionality.
- Added comprehensive error handling with appropriate CoAP response codes and security model integration.
- Included example use cases for supply chain tracking and smart city applications with IoT sensors.

Signed-off-by: bernardo.figueiredo <bernardo.figueiredo@voidsoftware.com>
@bernardo-rf bernardo-rf marked this pull request as ready for review August 18, 2025 09:04
@bernardo-rf
Author

@yacovm @denyeart @manish-sethi @tock-ibm @ale-linux @C0rWin @andrew-coleman @bestbeforetoday @satota2 @pfi79

This is the PR for the RFC proposing the addition of a CoAP gateway to Fabric for IoT device integration, with all checks passing. Please review and provide feedback.

@pfi79

pfi79 commented Aug 18, 2025

Thank you for your proposal.
I have a few questions.

  1. Will you reuse the proto files from the fabric-protos project, or will you create your own message types?

  2. I am interested in how the transaction will be processed by the (IoT) client. It is on the client that the transaction ID, the transaction timestamp, and the nonce are generated, and the transaction is signed. Will you leave this as it is now or change it?

  3. You may have already tried something in the form of Go code. Could you please share some parts of that code as an example?

  4. In your proposal, you refer to the Fabric SDK. That repository is archived and is not recommended for use. You should use fabric-gateway and link to it.

@bestbeforetoday
Member

The peer Gateway service provides a CommitStatus method in addition to the Evaluate, Endorse, Submit and ChaincodeEvents methods that you have modelled. This is used to check the eventual status of a previously submitted transaction. See the protocol buffer definition and this overview of the Gateway transaction flow. From the RFC description, it sounds like your submit is doing both the Submit and CommitStatus operations.

@bernardo-rf
Author

bernardo-rf commented Aug 19, 2025

@pfi79 Thank you for your questions. Here are the answers.

Thank you for your proposal. I have a few questions.

  1. Will you reuse the proto files from the fabric-protos project, or will you create your own message types?

Yes, the CoAP gateway will reuse the existing proto files from the fabric-protos project. This ensures that the CoAP gateway maintains full compatibility with the existing Fabric gateway ecosystem.

This is stated in the RFC:
"Requests and responses use the same protobuf message types as the gRPC service, marshaled into a binary byte array for the CoAP payload."

The RFC specifically mentions using:

  • gateway.EvaluateRequest and gateway.EvaluateResponse
  • gateway.EndorseRequest and gateway.EndorseResponse
  • gateway.SubmitRequest
  • gateway.ChaincodeEventsRequest and gateway.ChaincodeEventsResponse

The CoAP gateway will unmarshal incoming CoAP payloads into these standard protobuf messages, process them through the existing gateway logic, and marshal responses back to CoAP format.
This approach ensures consistency with the existing gRPC gateway and avoids creating duplicate message definitions. The CoAP gateway acts as a protocol converter, translating CoAP requests into the same protobuf structures used by the gRPC gateway.

  2. I am interested in how the transaction will be processed by the (IoT) client. It is on the client that the transaction ID, the transaction timestamp, and the nonce are generated, and the transaction is signed. Will you leave this as it is now or change it?

The current transaction processing model will be maintained. The RFC states:
"The client is responsible for packaging all necessary information, such as the channel, chaincode name, and function arguments, within the Protobuf payload, just as they would when using the gRPC gateway directly."

And in the cold chain example:
"It extracts the standard Fabric transaction proposal, signed with the device's identity."

This means:

  • Transaction ID generation: Remains on the client side
  • Transaction timestamp: Handled by the client
  • Nonce generation: Client responsibility
  • Transaction signing: The device signs the transaction proposal with its identity

The IoT device will use the same transaction preparation logic as any other Fabric client, creating properly formatted protobuf messages that the CoAP gateway can directly forward to the existing Fabric infrastructure.

The CoAP gateway simply forwards the already-prepared and signed protobuf messages to the existing Fabric infrastructure. This maintains the security model in which IoT devices are treated as first-class Fabric identities with their own certificates and signing capabilities, ensuring no degradation of Fabric's security properties.
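For illustration, here is a minimal sketch of the client-side preparation that stays on the device (nonce, transaction ID, signing), using only the Go standard library; the key-loading and proposal-marshaling details are assumed to happen elsewhere:

package main

import (
    "crypto/ecdsa"
    "crypto/rand"
    "crypto/sha256"
    "encoding/hex"
)

// prepareTransaction sketches the client-side steps that remain on the IoT device:
// generating a nonce, deriving the transaction ID, and signing the proposal bytes.
// creator is the device's serialized MSP identity; key is its signing key.
func prepareTransaction(creator []byte, key *ecdsa.PrivateKey, proposalBytes []byte) (txID string, signature, nonce []byte, err error) {
    // 1. Nonce generation: client responsibility
    nonce = make([]byte, 24)
    if _, err = rand.Read(nonce); err != nil {
        return "", nil, nil, err
    }

    // 2. Transaction ID derivation: hash over nonce and creator identity,
    //    mirroring how Fabric clients compute it
    h := sha256.Sum256(append(append([]byte{}, nonce...), creator...))
    txID = hex.EncodeToString(h[:])

    // 3. Transaction signing: the device signs the proposal with its own key
    digest := sha256.Sum256(proposalBytes)
    signature, err = ecdsa.SignASN1(rand.Reader, key, digest[:])
    return txID, signature, nonce, err
}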

  3. You may have already tried something in the form of Go code. Could you please share some parts of that code as an example?

While the RFC doesn't include specific code examples, I can provide a conceptual example based on the design described. This represents the planned implementation approach rather than existing code.

Here's what the implementation could look like using the existing Fabric protobuf definitions:

// Example CoAP handler for endorsement.
// Note: the coap.ResponseWriter / coap.Request types and response codes used
// here are illustrative; the concrete CoAP library API is an implementation detail.
func (h *EndorseHandler) HandleEndorse(w coap.ResponseWriter, req *coap.Request) {
    // Extract protobuf payload from CoAP request
    payload := req.Body()

    // Unmarshal the standard Fabric gateway.EndorseRequest
    var endorseReq gateway.EndorseRequest
    if err := proto.Unmarshal(payload, &endorseReq); err != nil {
        w.SetCode(coap.BadRequest)
        w.Write([]byte("Invalid protobuf payload"))
        return
    }

    // Authenticate the client using its DTLS certificate
    clientCert := req.ClientCertificate()
    identity, err := h.authenticator.ValidateCertificate(clientCert)
    if err != nil {
        w.SetCode(coap.Unauthorized)
        return
    }

    // Forward to the existing gRPC gateway logic, carrying the authenticated
    // identity in the request context (withIdentity is a hypothetical helper)
    ctx := withIdentity(context.Background(), identity)
    response, err := h.gatewayService.Endorse(ctx, &endorseReq)
    if err != nil {
        w.SetCode(coap.InternalServerError)
        return
    }

    // Marshal the response back to protobuf
    responseBytes, err := proto.Marshal(response)
    if err != nil {
        w.SetCode(coap.InternalServerError)
        return
    }
    w.SetCode(coap.Content)
    w.Write(responseBytes)
}


// Example server setup
func NewCoAPGateway(config *Config) *CoAPGateway {
    server := coap.NewServer()
    
    // Register handlers for each endpoint
    server.Handle("/endorse", &EndorseHandler{gatewayService: config.GatewayService})
    server.Handle("/submit", &SubmitHandler{gatewayService: config.GatewayService})
    server.Handle("/commit-status", &CommitStatusHandler{gatewayService: config.GatewayService})
    server.Handle("/evaluate", &EvaluateHandler{gatewayService: config.GatewayService})
    server.Handle("/events", &EventsHandler{gatewayService: config.GatewayService})
    
    return &CoAPGateway{server: server}
}

This implementation demonstrates how the CoAP gateway would integrate with existing Fabric infrastructure while providing the lightweight protocol interface needed for IoT devices.

  4. In your proposal, you refer to the Fabric SDK. That repository is archived and is not recommended for use. You should use fabric-gateway and link to it.

You raise an excellent point about the Fabric SDK being archived. Let me clarify the context of these references in the RFC.

The RFC uses "Fabric SDK" in two specific contexts, neither of which recommends its use:
Historical Problem Description (Line 51):
"Historically, this required... use a Fabric SDK to submit a transaction"
This just describes a previous approach before the CoAP gateway solution

Conceptual Analogy (Line 57):
"Think of it as a specialized application client, like one built with the Fabric SDK, but one that speaks CoAP instead of gRPC"
This uses Fabric SDK as a familiar reference point to help developers understand the new concept

The RFC correctly focuses on the current fabric-gateway in all technical sections:

  • Uses gateway.EvaluateRequest, gateway.EndorseRequest, etc. (current protobuf definitions)
  • Integrates with existing gRPC gateway infrastructure
  • References the current gateway architecture

If you prefer, we could update these references to use more generic terms like "Fabric client library" or "existing Fabric client" or "Fabric Gateway" to avoid any confusion about the deprecated SDK. However, the current references serve their intended purpose of explaining historical context and providing conceptual clarity without recommending the use of deprecated libraries.

@bernardo-rf
Author

@bestbeforetoday Thank you for your question. Here is the answer.

The peer Gateway service provides a CommitStatus method in addition to the Evaluate, Endorse, Submit and ChaincodeEvents methods that you have modelled. This is used to check the eventual status of a previously submitted transaction. See the protocol buffer definition and this overview of the Gateway transaction flow. From the RFC description, it sounds like your submit is doing both the Submit and CommitStatus operations.

You're correct! The current Fabric Gateway service includes a CommitStatus method that is not yet addressed in the CoAP gateway design.

To ensure the CoAP gateway provides the same functionality as the existing gRPC gateway, I propose adding a fifth endpoint to the RFC.

The current CoAP gateway RFC models four endpoints:

  • /evaluate - for evaluation requests
  • /endorse - for managing endorsements
  • /submit - for processing transaction submissions
  • /events - for listening to events

My proposal is to update the RFC with the following changes:

  1. Add a /commit-status endpoint that accepts a gateway.CommitStatusRequest and returns a gateway.CommitStatusResponse.
  2. Clarify the behavior of the /submit endpoint. The RFC should explicitly state whether /submit returns a result upon successful submission to the orderer or waits for the transaction to be committed to the ledger.
  3. Update the API documentation to reflect the complete set of gateway methods, ensuring parity with the current gRPC gateway.

This would enable IoT devices to both submit transactions and check their status.
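To make the device-side flow concrete, here is a minimal sketch of submitting a transaction and then polling /commit-status. The doCoAPPost transport helper is hypothetical; only the protobuf types come from fabric-protos-go-apiv2, and SignedCommitStatusRequest is the type the gRPC CommitStatus method actually accepts.

package main

import (
    "context"
    "fmt"

    "github.com/hyperledger/fabric-protos-go-apiv2/gateway"
    "google.golang.org/protobuf/proto"
)

// doCoAPPost is a placeholder for the device's CoAP transport: it POSTs the
// payload to the given path over DTLS and returns the response body.
func doCoAPPost(ctx context.Context, path string, payload []byte) ([]byte, error) {
    return nil, fmt.Errorf("not implemented: illustrative only")
}

// submitAndCheck submits an already-endorsed transaction, then asks for its commit status.
func submitAndCheck(ctx context.Context, submit *gateway.SubmitRequest, status *gateway.SignedCommitStatusRequest) error {
    // POST the SubmitRequest to /submit
    payload, err := proto.Marshal(submit)
    if err != nil {
        return err
    }
    if _, err := doCoAPPost(ctx, "/submit", payload); err != nil {
        return err
    }

    // Later, POST a SignedCommitStatusRequest to /commit-status and decode the result
    payload, err = proto.Marshal(status)
    if err != nil {
        return err
    }
    body, err := doCoAPPost(ctx, "/commit-status", payload)
    if err != nil {
        return err
    }
    resp := &gateway.CommitStatusResponse{}
    if err := proto.Unmarshal(body, resp); err != nil {
        return err
    }
    fmt.Printf("transaction validation code: %v\n", resp.GetResult())
    return nil
}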

- Standardizes the document structure to improve readability
- Standardizes heading levels and adds consistent spacing between sections
- The technical content remains unchanged, but the overall document is now more consistent and easier to read.

Signed-off-by: bernardo.figueiredo <bernardo.figueiredo@voidsoftware.com>
- Add /commit-status endpoint with SignedCommitStatusRequest/CommitStatusResponse
- Clarify submit behavior: waits for orderer submission
- Update endpoint count and API documentation for complete gateway parity
- Add CommitStatusHandler to core components

Enables IoT devices to check transaction status asynchronously after submission.

Signed-off-by: bernardo.figueiredo <bernardo.figueiredo@voidsoftware.com>
@bernardo-rf
Author

Following discussions, I have created a presentation that summarizes the goals and design of this RFC.

I am sharing the presentation material on the Hyperledger Fabric mailing list to kick off wider community discussion.

Please find the mailing list thread here: Link to the Mailing List Announcement

@pfi79

pfi79 commented Dec 10, 2025

Following discussions, I have created a presentation that summarizes the goals and design of this RFC.

I am sharing the presentation material on the Hyperledger Fabric mailing list to kick off wider community discussion.

Please find the mailing list thread here: Link to the Mailing List Announcement

Thank you for your presentation.
But let me make a few comments.

  1. It seems there is some confusion on the "Security Model: DTLS + Fabric Identity" slide.
    The fact is that the "client" has two private keys and, correspondingly, two certificates: the first is for its Identity, the second for TLS. If you try to derive an Identity from a certificate intended for TLS, you will most likely get an error. In short: the certificate used for the DTLS handshake is needed only for that handshake and nothing more. The Identity lies inside the SignedProposal, and it is through that Identity that access to resources is granted.
  2. If you're going to reuse the proto files, it's probably better to go further and develop a protoc plugin, protoc-gen-grpc-coap, that would generate client and server code from the proto files.
    Take a look at protoc-gen-grpc-gateway as an example. That plugin makes it easy to add a REST API endpoint (if needed).
    That is, one of the important components is the ability to regenerate the CoAP code whenever a proto file changes.

@bernardo-rf
Author

bernardo-rf commented Dec 16, 2025

Hi @pfi79 ,

Thank you for your review. Your comments regarding the separation of security identities and the long-term maintenance strategy significantly strengthen this proposal.

I agree that the best path forward involves adopting two key architectural changes to ensure the CoAP Gateway adheres to Hyperledger Fabric's security and maintenance standards.

I have drafted the specific changes below and plan to commit them to the RFC. Could you confirm if this proposed resolution addresses your concerns and if you are aligned with this direction before I finalize the updated RFC text?


1. Resolution for Security Model Confusion (DTLS vs. Fabric Identity)

You have a point: separating the transport identity from the signing identity is the most secure and robust approach. We will be moving from a unified certificate model to a Dual-Certificate Model.

This addresses the security separation and clarifies the two-stage authentication process within the gateway:

Proposed New Flow:

  • Device Provisioning: The device will be provisioned with two distinct X.509 certificate/key pairs (both issued by a Fabric CA):

    • Transport Certificate (DTLS): Used only for the mutual DTLS handshake (connection authentication/confidentiality).
    • Fabric Identity Certificate (Signing): Used exclusively to sign the SignedProposal payload (transaction authorization).
  • Two-Stage Validation in the Gateway:

    • Stage 1: Transport Authentication: The DTLS certificate is validated for connection access.
    • Stage 2: Transaction Authorization (Crucial): The gateway extracts and verifies the signature of the Fabric Signing Certificate found within the SignedProposal payload. All Fabric ACLs and endorsement policies will be enforced against this identity.
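As a rough sketch of how the gateway could perform these two stages (the identityVerifier interface and the surrounding wiring are assumptions; only peer.SignedProposal comes from fabric-protos-go-apiv2):

package main

import (
    "context"
    "crypto/x509"
    "fmt"

    "github.com/hyperledger/fabric-protos-go-apiv2/peer"
    "google.golang.org/protobuf/proto"
)

// identityVerifier stands in for the peer's MSP-backed verification of the
// Fabric signing identity carried inside the SignedProposal.
type identityVerifier interface {
    VerifyProposalSignature(ctx context.Context, signed *peer.SignedProposal) error
}

// authorize sketches the two validation stages described above.
func authorize(ctx context.Context, transportCerts []*x509.Certificate, payload []byte, msp identityVerifier) (*peer.SignedProposal, error) {
    // Stage 1: transport authentication.
    // The DTLS handshake has already validated the transport certificate chain;
    // here we only confirm that a client certificate was presented at all.
    if len(transportCerts) == 0 {
        return nil, fmt.Errorf("DTLS client certificate required")
    }

    // Stage 2: transaction authorization.
    // The Fabric identity is taken from inside the SignedProposal, never from the
    // transport certificate, and its signature is verified against the MSP.
    signed := &peer.SignedProposal{}
    if err := proto.Unmarshal(payload, signed); err != nil {
        return nil, fmt.Errorf("invalid SignedProposal payload: %w", err)
    }
    if err := msp.VerifyProposalSignature(ctx, signed); err != nil {
        return nil, fmt.Errorf("proposal signature rejected: %w", err)
    }
    return signed, nil
}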

Key Updates to RFC Text:

The "Guide-level explanation," "Protocol and Security," and "Client Authentication" sections will be updated to explicitly describe this two-stage process and the requirement for two certificates.


2. Resolution for Protocol Generation and Maintainability

I agree that a manual implementation for Protobuf message handling creates an unacceptable maintenance burden.

Proposed Solution: protoc Generation Utility

We will introduce a code generation utility to ensure the CoAP API remains synchronized with the upstream Gateway Protobuf definitions.

  • Implementation Plan: We will prioritize the development of this utility in Phase 1: Core Implementation.

    Develop the protoc generation utility to establish the API structure before implementing the core handlers.

  • Detailed Design: The RFC will be updated to include the Protocol Generator Utility as a core component, ensuring that CoAP request/response marshalling/unmarshalling boilerplate is automatically generated.


These changes are designed to make the CoAP Gateway proposal both secure and sustainable for the project. Please let me know if these resolutions address your concerns. I will hold off on committing the final RFC text until I receive your confirmation.

Thank you again for your time.

@pfi79

pfi79 commented Dec 16, 2025

Yes, I agree with what you wrote.

1. Resolution for Security Model Confusion (DTLS vs. Fabric Identity)

You have a point: separating the transport identity from the signing identity is the most secure and robust approach. We will be moving from a unified certificate model to a Dual-Certificate Model.

This addresses the security separation and clarifies the two-stage authentication process within the gateway:

Proposed New Flow:

  • Device Provisioning: The device will be provisioned with two distinct X.509 certificate/key pairs (both issued by a Fabric CA):

    • Transport Certificate (DTLS): Used only for the mutual DTLS handshake (connection authentication/confidentiality).
    • Fabric Identity Certificate (Signing): Used exclusively to sign the SignedProposal payload (transaction authorization).
  • Two-Stage Validation in the Gateway:

    • Stage 1: Transport Authentication: The DTLS certificate is validated for connection access.
    • Stage 2: Transaction Authorization (Crucial): The gateway extracts and verifies the signature of the Fabric Signing Certificate found within the SignedProposal payload. All Fabric ACLs and endorsement policies will be enforced against this identity.

Key Updates to RFC Text:

The "Guide-level explanation," "Protocol and Security," and "Client Authentication" sections will be updated to explicitly describe this two-stage process and the requirement for two certificates.

In fact, what you came up with brings this into line with what already exists in Fabric, and therefore it minimizes changes.

2. Resolution for Protocol Generation and Maintainability

I agree that a manual implementation for Protobuf message handling creates an unacceptable maintenance burden.

Proposed Solution: protoc Generation Utility

We will introduce a code generation utility to ensure the CoAP API remains synchronized with the upstream Gateway Protobuf definitions.

  • Implementation Plan: We will prioritize the development of this utility in Phase 1: Core Implementation.

    Develop the protoc generation utility to establish the API structure before implementing the core handlers.

  • Detailed Design: The RFC will be updated to include the Protocol Generator Utility as a core component, ensuring that CoAP request/response marshalling/unmarshalling boilerplate is automatically generated.

I agree with almost everything, but most likely you will not have a choice about the order of file generation, because the files are generated in the fabric-protos repository, from which the changes are transferred to fabric-protos-go-apiv2, and only then do they appear in fabric.

I understand that creating a utility that works with protoc is long and expensive, but you could come up with something simpler, as long as it can be run from the fabric-protos repository.

@bernardo-rf
Author

I agree with almost everything, but most likely you will not have a choice about the order of file generation, because the files are generated in the fabric-protos repository, from which the changes are transferred to fabric-protos-go-apiv2, and only then do they appear in fabric.

I understand that creating a utility that works with protoc is long and expensive, but you could come up with something simpler, as long as it can be run from the fabric-protos repository.

@pfi79 I completely agree with the concern regarding the file generation order and the maintenance overhead of a custom protoc plugin. After reviewing the repository lifecycle (fabric-protos -> fabric-protos-go-apiv2 -> fabric), I've moved away from the plugin-based generation approach.

Instead, I am proposing a Descriptor-Driven Approach. This approach leverages the protobuf descriptors that are already compiled into the fabric-protos-go-apiv2 module used by the Peer.

How it solves the maintenance issue:

  • We don't need to add new generation steps to fabric-protos. The CoAP server lives entirely within the Peer and consumes the official Go bindings.
  • The CoAP server uses Go’s protoreflect API to discover the Gateway service's methods and types at runtime from the embedded descriptors.
  • If a new RPC is added to the Gateway service, the CoAP server will automatically support it once the dependency is updated. We avoid the utility creation while ensuring the CoAP server is always in perfect sync with the proto definitions.

Why this approach?

[Diagram: descriptor-driven approach flow]

Minimal Maintenance Burden

  • Schema coupling is explicit: we track the same fabric-protos-go-apiv2 version Fabric uses
  • Changes to Gateway proto are caught at compile-time, not runtime

Generic Implementation

  • Uses dynamicpb for generic message handling
  • Same code handles all Gateway RPCs without hard-coding types
  • Easy to extend to new methods without code changes

Compile-Time Safety

  • Breaking changes in Gateway proto break the build, not production
  • Clear dependency on fabric-protos-go-apiv2 version
  • Can fail fast at startup if schema mismatches are detected

This satisfies the need for a 'simpler' solution that doesn't break the existing build flow while being essentially maintenance-free as the API evolves.
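For illustration, a minimal sketch of the descriptor-driven discovery, assuming the Gateway service's full protobuf name is gateway.Gateway and using only the standard protoreflect/dynamicpb APIs plus the blank import that registers the fabric-protos-go-apiv2 descriptors:

package main

import (
    "fmt"

    // Blank import registers the Gateway file descriptor in protoregistry.GlobalFiles.
    _ "github.com/hyperledger/fabric-protos-go-apiv2/gateway"
    "google.golang.org/protobuf/reflect/protoreflect"
    "google.golang.org/protobuf/reflect/protoregistry"
    "google.golang.org/protobuf/types/dynamicpb"
)

// discoverGatewayMethods lists the Gateway RPCs at runtime from the embedded
// descriptors, so new methods are picked up when the dependency is updated.
func discoverGatewayMethods() error {
    desc, err := protoregistry.GlobalFiles.FindDescriptorByName("gateway.Gateway")
    if err != nil {
        return err
    }
    svc, ok := desc.(protoreflect.ServiceDescriptor)
    if !ok {
        return fmt.Errorf("gateway.Gateway is not a service descriptor")
    }
    methods := svc.Methods()
    for i := 0; i < methods.Len(); i++ {
        m := methods.Get(i)
        // dynamicpb lets the CoAP server allocate request/response messages
        // generically, without hard-coding each type.
        req := dynamicpb.NewMessage(m.Input())
        fmt.Printf("/rpc/%s/%s expects %s\n", svc.FullName(), m.Name(), req.Descriptor().FullName())
    }
    return nil
}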

@pfi79

pfi79 commented Dec 19, 2025

(quoting the previous exchange on the descriptor-driven approach in full)

Thank you; by and large, I support your initiative.

There are a few minor nuances left for me.

When I suggested considering generation at the fabric-protos level, I was also thinking about generating client functions: at a minimum for tests, and at best to make it easier to create clients for Fabric over CoAP.

@bernardo-rf
Author

bernardo-rf commented Jan 6, 2026

Thank you; by and large, I support your initiative.

There are a few minor nuances left for me.

When I suggested considering generation at the fabric-protos level, I was also thinking about generating client functions: at a minimum for tests, and at best to make it easier to create clients for Fabric over CoAP.

@pfi79 Thank you for the feedback. To address the concerns regarding the need for easy client/test creation, I am proposing a Descriptor-Driven Approach as the primary solution, complemented by a Two-Layer Client Architecture that lives entirely within the fabric repository.

This bypasses the fabric-protos generation bottleneck while delivering the "helper functions" you suggested.

Generic Descriptor-Driven Client

The core client uses the FileDescriptor already embedded in fabric-protos-go-apiv2. It discovers methods and types at runtime. It uses the descriptors to handle URI paths and marshaling generically. This is what ensures the system never breaks when protos change.

// coapgateway/inner_client.go
func (c *GenericClient) InvokeUnary(ctx context.Context, methodName string, req, resp proto.Message) error {
    // 1. Look up the method in the embedded descriptors (not hardcoded)
    if _, err := c.registry.GetMethod(methodName); err != nil {
        return err
    }

    // 2. The path is built dynamically (e.g., /rpc/gateway.Gateway/Endorse)
    path := fmt.Sprintf("/rpc/%s/%s", c.registry.ServiceName, methodName)

    // 3. Marshal/send/unmarshal using the standard proto library
    payload, err := proto.Marshal(req)
    if err != nil {
        return err
    }
    coapResp, err := c.conn.Post(ctx, path, message.AppOctetStream, payload)
    if err != nil {
        return err
    }
    return proto.Unmarshal(coapResp.Body(), resp)
}

High-Level Type-Safe Wrapper

To fulfill the goal of making it easy for clients and tests to use CoAP, I’ve added a thin wrapper that provides the specific functions you mentioned (e.g., Endorse, Submit).

// coapgateway/client.go - The high-level wrapper for users/tests
type GatewayClient struct {
    inner *GenericClient // The descriptor-driven engine
}

// Named functions provide the "generated" feel without the generator complexity
func (c *GatewayClient) Endorse(ctx context.Context, req *gw.EndorseRequest) (*gw.EndorseResponse, error) {
    resp := &gw.EndorseResponse{}
    // inner.InvokeUnary handles the URI pathing and marshaling generically
    err := c.inner.InvokeUnary(ctx, "Endorse", req, resp)
    return resp, err
}

func (c *GatewayClient) Submit(ctx context.Context, req *gw.SubmitRequest) (*gw.SubmitResponse, error) {
    resp := &gw.SubmitResponse{}
    err := c.inner.InvokeUnary(ctx, "Submit", req, resp)
    return resp, err
}

Because this wrapper is 'logic-less', it only needs to be updated if a brand-new RPC method is added to the Gateway. Given the stability of the Gateway API, this is a much smaller burden than maintaining a custom protoc plugin and modifying the fabric-protos build pipeline.

  • For Tests: Developers get a standard client.Endorse(ctx, req) experience. It is type-safe, supports autocomplete, and feels like the existing SDK.

  • For Easier Client Creation: External Go developers can simply import this client package to interact with the Peer via CoAP without ever seeing a 'raw' CoAP packet or URI string.
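For example, a test against the high-level wrapper might look like the snippet below; NewGatewayClient comes from the sketch above, and dialCoAP is an assumed helper that opens the DTLS session:

// coapgateway/client_test.go - illustrative only
func TestEndorseOverCoAP(t *testing.T) {
    // dialCoAP is an assumed helper that establishes the DTLS session.
    conn, err := dialCoAP("peer0.org1.example.com:5684")
    if err != nil {
        t.Fatal(err)
    }
    client := NewGatewayClient(conn)

    resp, err := client.Endorse(context.Background(), &gw.EndorseRequest{
        TransactionId: "tx123",
        ChannelId:     "mychannel",
    })
    if err != nil {
        t.Fatal(err)
    }
    if resp.GetPreparedTransaction() == nil {
        t.Fatal("expected a prepared transaction envelope")
    }
}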

The primary focus remains the Descriptor-Driven Approach, providing a maintenance-free server. The two-layer client is a complementary addition to ensure a quality developer experience.

Does this approach satisfy both the architectural maintenance requirements and the need for a high-quality client/test DX?

@pfi79

pfi79 commented Jan 6, 2026

Because this wrapper is 'logic-less', it only needs to be updated if a brand-new RPC method is added to the Gateway. Given the stability of the Gateway API, this is a much smaller burden than maintaining a custom protoc plugin and modifying the fabric-protos build pipeline.

  • For Tests: Developers get a standard client.Endorse(ctx, req) experience. It is type-safe, supports autocomplete, and feels like the existing SDK.
  • For Easier Client Creation: External Go developers can simply import this client package to interact with the Peer via CoAP without ever seeing a 'raw' CoAP packet or URI string.

The primary focus remains the Descriptor-Driven Approach, providing a maintenance-free server. The two-layer client is a complementary addition to ensure a quality developer experience.

Does this approach satisfy both the architectural maintenance requirements and the need for a high-quality client/test DX?

The fact is that the client should not import packages from the fabric repository.
The fabric-lib-go and fabric-protos-go-apiv2 repositories exist for that.

If there is even a small chance that a new method will be added to the Gateway sometime in the future, I kindly ask you to account for that possibility in your solution.

@bernardo-rf
Author

@pfi79 After reflecting on your feedback and the broader adoption goals, it's clear that providing an easy, proto-based client generation option is essential for the success of this feature.

Without automated client generation, we face several risks:

  • Developers won't embrace a protocol requiring manual client implementation
  • Gateway API changes would require coordinated manual updates across multiple repositories
  • Manual implementations may diverge in subtle ways across different clients
  • Compared to gRPC's auto-generated clients, a manual CoAP approach would be a significant step backward

Given these considerations, I propose revisiting the protoc Generation Utility approach, specifically: a code generation plugin that integrates directly into the fabric-protos build pipeline, similar to how protoc-gen-go-grpc currently works.

Proposed Solution: protoc-gen-go-coap Plugin

Rather than creating a complex standalone tool, I propose developing a protoc plugin (protoc-gen-go-coap) that slots seamlessly into the existing fabric-protos generation workflow.

1. Works Within Existing Infrastructure
The fabric-protos repository already orchestrates multiple protoc plugins via buf.gen.yaml. Adding one more entry is straightforward:

  plugins:
    - local: protoc-gen-go-grpc
      out: bindings/go-apiv2
      opt:
        - paths=source_relative
        - require_unimplemented_servers=false
    - local: protoc-gen-go-coap      # <-- New plugin
      out: bindings/go-apiv2
      opt:
        - paths=source_relative

2. Minimal Development Complexity
Leveraging Google's protogen library means we don't need to parse proto files manually; the heavy lifting is already done. The plugin focuses solely on generating Go client code and path constants (a minimal plugin skeleton is sketched after point 4 below).

3. Automatic Integration
Once configured, make genprotos automatically generates gateway_coap.pb.go alongside gateway_grpc.pb.go. No additional manual steps required.

4. Natural Pipeline Flow
Generated files appear in bindings/go-apiv2/gateway/ during the fabric-protos build, then flow naturally to fabric-protos-go-apiv2 through your existing release process.
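For illustration, a rough skeleton of such a plugin's entry point using the protogen library; the emitted client interface is elided and the output file naming is an assumption:

// protoc-gen-go-coap/main.go - sketch of the plugin entry point
package main

import (
    "google.golang.org/protobuf/compiler/protogen"
)

func main() {
    protogen.Options{}.Run(func(gen *protogen.Plugin) error {
        for _, f := range gen.Files {
            // Only emit code for files requested on the command line; this could
            // be narrowed further to gateway/gateway.proto alone if desired.
            if !f.Generate || len(f.Services) == 0 {
                continue
            }
            g := gen.NewGeneratedFile(f.GeneratedFilenamePrefix+"_coap.pb.go", f.GoImportPath)
            g.P("// Code generated by protoc-gen-go-coap. DO NOT EDIT.")
            g.P("package ", f.GoPackageName)
            g.P()
            for _, svc := range f.Services {
                for _, m := range svc.Methods {
                    // Path constants mirror the gRPC full method names.
                    g.P("const ", svc.GoName, "_", m.GoName, "_CoAPPath = ",
                        `"/rpc/`, svc.Desc.FullName(), "/", m.Desc.Name(), `"`)
                }
                // The client interface and its implementation would be emitted here.
            }
        }
        return nil
    })
}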

Generated Output Example

For gateway.proto, the plugin generates gateway_coap.pb.go:

package gateway

// Path constants for CoAP URIs
const (
    Gateway_Endorse_CoAPPath      = "/rpc/gateway.Gateway/Endorse"
    Gateway_Submit_CoAPPath       = "/rpc/gateway.Gateway/Submit"
    Gateway_CommitStatus_CoAPPath = "/rpc/gateway.Gateway/CommitStatus"
    Gateway_Evaluate_CoAPPath     = "/rpc/gateway.Gateway/Evaluate"
)

// GatewayCoapClient is the CoAP client API for Gateway service
type GatewayCoapClient interface {
    Endorse(ctx context.Context, in *EndorseRequest, opts ...CoapOption) (*EndorseResponse, error)
    Submit(ctx context.Context, in *SubmitRequest, opts ...CoapOption) (*SubmitResponse, error)
    CommitStatus(ctx context.Context, in *SignedCommitStatusRequest, opts ...CoapOption) (*CommitStatusResponse, error)
    Evaluate(ctx context.Context, in *EvaluateRequest, opts ...CoapOption) (*EvaluateResponse, error)
}

// Constructor and implementation
func NewGatewayCoapClient(conn CoapConnection) GatewayCoapClient { /* ... */ }

// Internal implementation struct and methods...

Client-Side Usage and Benefits

External client usage becomes trivial:

import "github.com/hyperledger/fabric-protos-go-apiv2/gateway"

client := gateway.NewGatewayCoapClient(coapConn)

resp, err := client.Endorse(ctx, &gateway.EndorseRequest{
    TransactionId: "tx123",
    ChannelId:     "mychannel",
    // ...
})

This approach:

  • Follows the same pattern as the gRPC, Java, and Node.js bindings already present in fabric-protos
  • Gives external clients that import fabric-protos-go-apiv2/gateway full type-safe client methods with IDE autocomplete
  • Makes regeneration automatic when the Gateway service adds new methods
  • Matches the ergonomics developers expect from gRPC clients

Server-Side Implementation and Benefits

With the protoc generation approach, the server-side implementation becomes simple: a lightweight CoAP adapter that lives within the peer process and proxies to the existing gRPC Gateway implementation.

Where the code lives:

fabric/
├── internal/peer/gateway/
│   ├── gateway.go              # Existing gRPC GatewayServer (UNCHANGED)
│   ├── endorser.go             # Existing business logic (UNCHANGED)
│   └── coap/
│       ├── adapter.go          # New: CoAP-to-gRPC adapter
│       └── server.go           # New: CoAP server initialization
└── core/peer/
    └── peer.go                 # Peer startup: add CoAP listener

fabric-protos/
└── bindings/go-apiv2/gateway/
    ├── gateway.pb.go           # Existing
    ├── gateway_grpc.pb.go      # Existing  
    └── gateway_coap.pb.go      # New: Generated constants & client

All existing endorsement, ACL enforcement, and transaction validation logic remains untouched in the current GatewayServer implementation.

Each handler follows the same straightforward pattern:

// Every handler follows this simple pattern ({Method} is a placeholder, not real Go syntax)
func (a *CoAPAdapter) handle{Method}(w coap.ResponseWriter, req *coap.Request) {
    // 1. Unmarshal the CoAP request payload
    request := &gateway.{Method}Request{}
    if err := proto.Unmarshal(req.Body(), request); err != nil {
        w.SetCode(coap.BadRequest)
        return
    }

    // 2. Call the existing gRPC implementation (all business logic lives there)
    response, err := a.grpcGateway.{Method}(req.Context(), request)
    if err != nil {
        w.SetCode(coap.InternalServerError) // see the status-code mapping sketch below
        return
    }

    // 3. Marshal the CoAP response
    payload, err := proto.Marshal(response)
    if err != nil {
        w.SetCode(coap.InternalServerError)
        return
    }
    w.SetCode(coap.Content)
    w.Write(payload)
}
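One detail the adapter still needs is a mapping from the gRPC status codes returned by the embedded Gateway onto CoAP response codes (the RFC's "appropriate CoAP response codes"). A possible mapping, sketched with the numeric codes from RFC 7252 so as not to tie it to a particular CoAP library:

package main

import (
    "google.golang.org/grpc/codes"
    "google.golang.org/grpc/status"
)

// CoAP response codes per RFC 7252, encoded as class<<5 | detail.
const (
    coapContent             = 2<<5 | 5 // 2.05 Content
    coapBadRequest          = 4<<5 | 0 // 4.00 Bad Request
    coapUnauthorized        = 4<<5 | 1 // 4.01 Unauthorized
    coapForbidden           = 4<<5 | 3 // 4.03 Forbidden
    coapNotFound            = 4<<5 | 4 // 4.04 Not Found
    coapInternalServerError = 5<<5 | 0 // 5.00 Internal Server Error
    coapServiceUnavailable  = 5<<5 | 3 // 5.03 Service Unavailable
)

// coapCodeForError converts the gRPC status returned by the embedded Gateway
// implementation into a CoAP response code for the adapter to send back.
func coapCodeForError(err error) int {
    switch status.Code(err) {
    case codes.OK:
        return coapContent
    case codes.InvalidArgument:
        return coapBadRequest
    case codes.Unauthenticated:
        return coapUnauthorized
    case codes.PermissionDenied:
        return coapForbidden
    case codes.NotFound:
        return coapNotFound
    case codes.Unavailable, codes.Aborted:
        return coapServiceUnavailable
    default:
        return coapInternalServerError
    }
}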

Summary

By generating both client methods AND path constants in fabric-protos-go-apiv2, we eliminate the manual maintenance burden on both sides while keeping the fabric peer implementation trivial. The adapter living inside the peer is actually an advantage: it's a thin translation layer with zero network overhead, direct access to the GatewayServer instance, and seamless authentication/authorization context sharing.

@pfi79

pfi79 commented Jan 8, 2026

(quoting the protoc-gen-go-coap plugin proposal above in full)

I fully support you.
Please make appropriate changes to the document.

Key changes:
- Automated client generation via protoc plugin
- Clarify dual-certificate model follows standard Fabric pattern
- Update endpoints to RPC-style (/rpc/gateway.Gateway/{Method})
- Add implementation details (peer lifecycle, context flow, error mapping)
- Add developer experience section with code examples
- Remove duplicate MSP configuration (uses peer-level config)
- Streamline security explanations to emphasize consistency with gRPC

Signed-off-by: bernardo.figueiredo <bernardo.figueiredo@voidsoftware.com>
@bernardo-rf
Author

Technical Update: Standardized Generation & Security Model

Based on consensus with @pfi79, I have pivoted the RFC to a static generation model and clarified the security handshake. These changes ensure the CoAP Gateway follows the existing Fabric toolchain without manual maintenance.

1. Toolchain Integration (fabric-protos)

  • Replaced the previous approach with a lightweight protoc-gen-go-coap plugin.
  • Bindings are generated in fabric-protos and flow through apiv2 to fabric, matching the gRPC pattern.
  • Zero manual updates required when gateway.proto changes.

2. Dual-Certificate Security

  • Explicitly separated Transport Identity (DTLS) from Signing Identity (MSP).
  • This mirrors the standard pattern used in the gRPC gateway, ensuring no security regressions.

3. Implementation Details

  • The gateway acts as a thin adapter with no business logic duplication; it delegates directly to the internal GatewayServer.

Summary

  • Automated sync with Protobuf definitions.
  • Follows established Fabric patterns.
  • Auto-generated client interfaces for IoT developers.

The RFC is updated and ready for review.

@jt-nti
Member

jt-nti commented Jan 9, 2026

Hi, I've just been catching up on the discussions related to fabric-protos. I like the idea of adding support for resource-constrained devices, and extending the fabric-protos build pipeline to generate a CoAP client implementation seems really interesting, but I don't think the CoAP client code should be added to the existing fabric-protos-go-apiv2 Go bindings.

The generated go, java and node bindings currently only contain what's defined in the protobuf definitions, which is a good thing and in my opinion should not change. I think it would be surprising for a project that only requires the go bindings to get a coap client implementation as well. (All the actual client implementations are in separate projects.)

Using a new protoc plugin in the fabric-protos build to generate a client implementation definitely sounds interesting but it probably only makes sense if the code can be 100% generated, which I think is the case based on...

Zero manual updates required when gateway.proto changes.

If not, you're swapping having to manually update a client project for maintaining a protoc plugin somewhere and having to manually update the client implementation anyway. (Mixing generated and hand-written code can cause its own problems.)

@pfi79

pfi79 commented Jan 9, 2026

(quoting jt-nti's comment above in full)

I may have misunderstood you.

fabric-protos-go-apiv2 is used both by fabric and by the clients and libraries that implement chaincode.
That is, there is a single point at which changes in the proto definitions are immediately reflected in the code, in Go for example.

If you're worried that there won't be such an implementation for other languages (Node and Java): there is already precedent for that, and I am primarily interested in the Go implementation. The question of implementing this in other languages can be raised later.

If you are worried that the generation will take place directly in fabric-protos-go-apiv2: it won't. All changes are only in fabric-protos.

How should the plugin be built? There are options here, and they can be discussed. Of course, I would like it to be official, like protoc-gen-go, but I think it won't be a big deal if the plugin starts out as a custom one.

@bestbeforetoday
Member

Similar to James, I am nervous of adding CoAP bindings to fabric-protos-go-apiv2. This would force an unnecessary dependency on existing (gRPC-based) consumers: chaincode, admin SDK, client API and all the client applications that use those APIs. Could CoAP bindings (generated in fabric-protos from the same protobuf definitions) not be published to a sister repository for client consumption? For example, a fabric-coap-go repository.

@pfi79

pfi79 commented Jan 10, 2026

Similar to James, I am nervous of adding CoAP bindings to fabric-protos-go-apiv2. This would force an unnecessary dependency on existing (gRPC-based) consumers: chaincode, admin SDK, client API and all the client applications that use those APIs. Could CoAP bindings (generated in fabric-protos from the same protobuf definitions) not be published to a sister repository for client consumption? For example, a fabric-coap-go repository.

I don't understand why you're afraid of this.

Could you explain in more detail?

@pfi79

pfi79 commented Jan 10, 2026

Similar to James, I am nervous of adding CoAP bindings to fabric-protos-go-apiv2. This would force an unnecessary dependency on existing (gRPC-based) consumers: chaincode, admin SDK, client API and all the client applications that use those APIs. Could CoAP bindings (generated in fabric-protos from the same protobuf definitions) not be published to a sister repository for client consumption? For example, a fabric-coap-go repository.

The thing is that the CoAP implementation depends directly on the proto files.
Do you want changes in fabric-protos to be propagated to the fabric-coap-go repository in the same way?
But the generated CoAP files will not work without the gRPC files.

Let me show you with a simpler example. I recently did an exercise: adding a REST API when we already have gRPC.
How do I add that implementation?
Like this (see the protoc-gen-grpc-gateway entry):

version: v2
plugins:
  - local: protoc-gen-go
    out: bindings/go-apiv2
    opt: paths=source_relative
  - local: protoc-gen-go-grpc
    out: bindings/go-apiv2
    opt:
      - paths=source_relative
      - require_unimplemented_servers=false
  - local: protoc-gen-grpc-gateway
    out: bindings/go-apiv2
    opt:
      - paths=source_relative
      - generate_unbound_methods=true

Now let's look at the code that was generated, *.pb.gw.go. It directly calls functions from the *_grpc.pb.go file. As a rule, such files are located side by side.

For example, here is the code for peer/peer.proto (see c.cc.Invoke):

// Copyright the Hyperledger Fabric contributors. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0

// Code generated by protoc-gen-go-grpc. DO NOT EDIT.
// versions:
// - protoc-gen-go-grpc v1.5.1
// - protoc             (unknown)
// source: peer/peer.proto

package peer

import (
	context "context"
	grpc "google.golang.org/grpc"
	codes "google.golang.org/grpc/codes"
	status "google.golang.org/grpc/status"
)

// This is a compile-time assertion to ensure that this generated file
// is compatible with the grpc package it is being compiled against.
// Requires gRPC-Go v1.64.0 or later.
const _ = grpc.SupportPackageIsVersion9

const (
	Endorser_ProcessProposal_FullMethodName = "/protos.Endorser/ProcessProposal"
)

// EndorserClient is the client API for Endorser service.
//
// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://pkg.go.dev/google.golang.org/grpc/?tab=doc#ClientConn.NewStream.
type EndorserClient interface {
	ProcessProposal(ctx context.Context, in *SignedProposal, opts ...grpc.CallOption) (*ProposalResponse, error)
}

type endorserClient struct {
	cc grpc.ClientConnInterface
}

func NewEndorserClient(cc grpc.ClientConnInterface) EndorserClient {
	return &endorserClient{cc}
}

func (c *endorserClient) ProcessProposal(ctx context.Context, in *SignedProposal, opts ...grpc.CallOption) (*ProposalResponse, error) {
	cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
	out := new(ProposalResponse)
	err := c.cc.Invoke(ctx, Endorser_ProcessProposal_FullMethodName, in, out, cOpts...)
	if err != nil {
		return nil, err
	}
	return out, nil
}

// EndorserServer is the server API for Endorser service.
// All implementations should embed UnimplementedEndorserServer
// for forward compatibility.
type EndorserServer interface {
	ProcessProposal(context.Context, *SignedProposal) (*ProposalResponse, error)
}

// UnimplementedEndorserServer should be embedded to have
// forward compatible implementations.
//
// NOTE: this should be embedded by value instead of pointer to avoid a nil
// pointer dereference when methods are called.
type UnimplementedEndorserServer struct{}

func (UnimplementedEndorserServer) ProcessProposal(context.Context, *SignedProposal) (*ProposalResponse, error) {
	return nil, status.Errorf(codes.Unimplemented, "method ProcessProposal not implemented")
}
func (UnimplementedEndorserServer) testEmbeddedByValue() {}

// UnsafeEndorserServer may be embedded to opt out of forward compatibility for this service.
// Use of this interface is not recommended, as added methods to EndorserServer will
// result in compilation errors.
type UnsafeEndorserServer interface {
	mustEmbedUnimplementedEndorserServer()
}

func RegisterEndorserServer(s grpc.ServiceRegistrar, srv EndorserServer) {
	// If the following call panics, it indicates UnimplementedEndorserServer was
	// embedded by pointer and is nil.  This will cause panics if an
	// unimplemented method is ever invoked, so we test this at initialization
	// time to prevent it from happening at runtime later due to I/O.
	if t, ok := srv.(interface{ testEmbeddedByValue() }); ok {
		t.testEmbeddedByValue()
	}
	s.RegisterService(&Endorser_ServiceDesc, srv)
}

func _Endorser_ProcessProposal_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
	in := new(SignedProposal)
	if err := dec(in); err != nil {
		return nil, err
	}
	if interceptor == nil {
		return srv.(EndorserServer).ProcessProposal(ctx, in)
	}
	info := &grpc.UnaryServerInfo{
		Server:     srv,
		FullMethod: Endorser_ProcessProposal_FullMethodName,
	}
	handler := func(ctx context.Context, req interface{}) (interface{}, error) {
		return srv.(EndorserServer).ProcessProposal(ctx, req.(*SignedProposal))
	}
	return interceptor(ctx, in, info, handler)
}

// Endorser_ServiceDesc is the grpc.ServiceDesc for Endorser service.
// It's only intended for direct use with grpc.RegisterService,
// and not to be introspected or modified (even as a copy)
var Endorser_ServiceDesc = grpc.ServiceDesc{
	ServiceName: "protos.Endorser",
	HandlerType: (*EndorserServer)(nil),
	Methods: []grpc.MethodDesc{
		{
			MethodName: "ProcessProposal",
			Handler:    _Endorser_ProcessProposal_Handler,
		},
	},
	Streams:  []grpc.StreamDesc{},
	Metadata: "peer/peer.proto",
}

Yes, we don't need the REST API yet, but for the CoAP API I would like to see the same brevity and conciseness in the generated code. Therefore, it should be located next to the *_grpc.pb.go files.

You don't have to generate a file for every proto file; for now, only for the desired gateway.proto.

@pfi79

pfi79 commented Jan 10, 2026

In the case of fabric-protos, generation is divided by programming language (Go, Java, and Node).
CoAP is not a new programming language.

@jt-nti
Member

jt-nti commented Jan 12, 2026

The fabric-protos-go-apiv2 Go module, and the other artefacts published from the fabric-protos/bindings directory, are specifically intended to be generated language bindings for the Fabric protos, so that other projects don't need to worry about managing and generating bindings themselves. They are not intended to be a collection of different possible clients or other code that dependent projects may not need.

The language bindings can be used for all kinds of things, such as extracting information from blocks received via mqtt for example. If the current bindings start pulling in unrelated and unwanted dependencies, it's likely that people would stop using them and go back to generating bindings themselves, which wouldn't be great.

Using the fabric-protos repo to generate other modules like a coap client sounds good to me if that's possible, and may even provide a blueprint that other use cases could follow, but the output should be separate to the currently published bindings. @bestbeforetoday's suggestion of a fabric-coap-go repository makes sense to me. The new coap module would just include a dependency on the existing bindings the same way as the fabric gateway does.

Perhaps moving bindings into a new more generic templates folder, along with new directories for coap and any other future extensions, would be a good way to broaden the current scope.

If you're worried that there won't be such an implementation for other languages (Node and Java).

I'm not worried about coap not covering all the same languages as the existing bindings. (As an aside I'm hoping that fabric-protos will eventually expand to cover bindings for more languages, such as C# and Rust.)

@pfi79

pfi79 commented Jan 12, 2026

I will ask one more clarifying question.
Let's forget about CoAP for a minute.

How do you feel about the protoc-gen-grpc-gateway plugin? If we need to create a REST API based on the proto files in fabric, how do you see it? Would you put it in a separate repository?

The protoc-gen-grpc-gateway plugin requires direct access to the proto files and the presence of the *_grpc.pb.go files nearby. And it pulls in additional dependencies: "github.com/grpc-ecosystem/grpc-gateway/v2/runtime" and "github.com/grpc-ecosystem/grpc-gateway/v2/utilities".

Try doing for that gateway what you suggest doing for CoAP.

The fabric-protos-go-apiv2 Go module, and the other artefacts published from the fabric-protos/bindings directory, are specifically intended to be generated language bindings for the Fabric protos, so that other projects don't need to worry about managing and generating bindings themselves. They are not intended to be a collection of different possible clients or other code that dependent projects may not need.

Are you afraid that a new kind of dependency, "github.com/...../CoaP", will appear in the go.mod of fabric-protos-go-apiv2?

Using the fabric-protos repo to generate other modules like a coap client sounds good to me if that's possible, and may even provide a blueprint that other use cases could follow, but the output should be separate to the currently published bindings. @bestbeforetoday's suggestion of a fabric-coap-go repository makes sense to me. The new coap module would just include a dependency on the existing bindings the same way as the fabric gateway does.

I consider your example with fabric-gateway incorrect: it uses the client functions from gateway/gateway_grpc.pb.go.

Here are the dependencies it pulls in:

package gateway

import (
	context "context"
	grpc "google.golang.org/grpc"
	codes "google.golang.org/grpc/codes"
	status "google.golang.org/grpc/status"
)

............


type gatewayClient struct {
	cc grpc.ClientConnInterface
}

func NewGatewayClient(cc grpc.ClientConnInterface) GatewayClient {
	return &gatewayClient{cc}
}

func (c *gatewayClient) Endorse(ctx context.Context, in *EndorseRequest, opts ...grpc.CallOption) (*EndorseResponse, error) {
	cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
	out := new(EndorseResponse)
	err := c.cc.Invoke(ctx, Gateway_Endorse_FullMethodName, in, out, cOpts...)
	if err != nil {
		return nil, err
	}
	return out, nil
}

func (c *gatewayClient) Submit(ctx context.Context, in *SubmitRequest, opts ...grpc.CallOption) (*SubmitResponse, error) {
	cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
	out := new(SubmitResponse)
	err := c.cc.Invoke(ctx, Gateway_Submit_FullMethodName, in, out, cOpts...)
	if err != nil {
		return nil, err
	}
	return out, nil
}

That is, I could now redirect your objections to entities that already exist.
The only difference is that one has already been implemented and the community is used to it, while the other has not been implemented yet.

@jt-nti
Member

jt-nti commented Jan 16, 2026

How do you feel about the protoc-gen-grpc-gateway plugin? If we need to create a REST API based on the proto files in fabric, how do you see it? Would you put it in a separate repository?

For the output, yes, definitely: a RESTful Gateway API should be published as a new module, not included in the existing fabric-protos-go-apiv2 module.

For the build, no, it makes sense for it to go in fabric-protos. Since protoc-gen-grpc-gateway already exists, I'll create an example PR when I get the chance.

@pfi79

pfi79 commented Jan 16, 2026

How do you feel about the protoc-gen-grpc-gateway plugin? If we need to create a REST API based on the proto files in fabric, how do you see it? Would you put it in a separate repository?

For the output, yes, definitely: a RESTful Gateway API should be published as a new module, not included in the existing fabric-protos-go-apiv2 module.

For the build, no, it makes sense for it to go in fabric-protos. Since protoc-gen-grpc-gateway already exists, I'll create an example PR when I get the chance.

Thank you. Could you provide a link to your version when it's ready? Perhaps then I'll have no objections to your approach.
