docs: update protobuf links (backport #22182) (#22186)
Co-authored-by: Wukingbow <[email protected]>
mergify[bot] and Wukingbow authored Oct 9, 2024
1 parent c3a6f35 commit 601a75a
Showing 12 changed files with 32 additions and 32 deletions.
4 changes: 2 additions & 2 deletions codec/unknownproto/unknown_fields.go
@@ -222,7 +222,7 @@ var checks = [...]map[descriptorpb.FieldDescriptorProto_Type]bool{
descriptorpb.FieldDescriptorProto_TYPE_MESSAGE: true,
// The following types can be packed repeated.
// ref: "Only repeated fields of primitive numeric types (types which use the varint, 32-bit, or 64-bit wire types) can be declared "packed"."
-// ref: https://developers.google.com/protocol-buffers/docs/encoding#packed
+// ref: https://protobuf.dev/programming-guides/encoding/#packed
descriptorpb.FieldDescriptorProto_TYPE_INT32: true,
descriptorpb.FieldDescriptorProto_TYPE_INT64: true,
descriptorpb.FieldDescriptorProto_TYPE_UINT32: true,
@@ -255,7 +255,7 @@ var checks = [...]map[descriptorpb.FieldDescriptorProto_Type]bool{
}

// canEncodeType returns true if the wireType is suitable for encoding the descriptor type.
-// See https://developers.google.com/protocol-buffers/docs/encoding#structure.
+// See https://protobuf.dev/programming-guides/encoding/#structure.
func canEncodeType(wireType protowire.Type, descType descriptorpb.FieldDescriptorProto_Type) bool {
if iwt := int(wireType); iwt < 0 || iwt >= len(checks) {
return false
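As an aside on the packed-encoding comment in the first hunk above, here is a minimal Go sketch (assuming the `google.golang.org/protobuf/encoding/protowire` package, which this commit does not touch) showing that a packed repeated int32 field is written with the length-delimited (bytes) wire type, which is why the packed-able numeric types sit in the same `checks` row as `TYPE_MESSAGE`.

```go
// Sketch only: encode field 1 as a packed repeated int32 {3, 270} and confirm
// the tag carries the length-delimited (bytes) wire type.
package main

import (
	"fmt"

	"google.golang.org/protobuf/encoding/protowire"
)

func main() {
	// The packed payload is just the concatenation of varint-encoded elements.
	var packed []byte
	packed = protowire.AppendVarint(packed, 3)
	packed = protowire.AppendVarint(packed, 270)

	// The field itself is a single length-delimited record.
	var buf []byte
	buf = protowire.AppendTag(buf, 1, protowire.BytesType)
	buf = protowire.AppendBytes(buf, packed)

	num, typ, _ := protowire.ConsumeTag(buf)
	fmt.Println(num, typ) // 1 2 (wire type 2 = length-delimited)
}
```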
4 changes: 2 additions & 2 deletions docs/architecture/adr-019-protobuf-state-encoding.md
@@ -21,7 +21,7 @@ and JSON object encoding over the wire bringing parity between logical objects a
From the Amino docs:

> Amino is an object encoding specification. It is a subset of Proto3 with an extension for interface
-> support. See the [Proto3 spec](https://developers.google.com/protocol-buffers/docs/proto3) for more
+> support. See the [Proto3 spec](https://protobuf.dev/programming-guides/proto3/) for more
> information on Proto3, which Amino is largely compatible with (but not with Proto2).
>
> The goal of the Amino encoding protocol is to bring parity into logic objects and persistence objects.
@@ -56,7 +56,7 @@ made to address client-side encoding.

## Decision

-We will adopt [Protocol Buffers](https://developers.google.com/protocol-buffers) for serializing
+We will adopt [Protocol Buffers](https://protobuf.dev) for serializing
persisted structured data in the Cosmos SDK while providing a clean mechanism and developer UX for
applications wishing to continue to use Amino. We will provide this mechanism by updating modules to
accept a codec interface, `Marshaler`, instead of a concrete Amino codec. Furthermore, the Cosmos SDK
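As a hedged illustration of the codec-interface approach described in the hunk above, the sketch below shows the general shape; the interface and method names are placeholders, not ADR-019's actual `Marshaler` definition.

```go
// Illustrative only: modules depend on a narrow codec interface rather than a
// concrete Amino codec, so the app decides which implementation to wire in.
package codec

import "github.com/cosmos/gogoproto/proto"

// Marshaler stands in for the interface the ADR describes; the real method
// set differs.
type Marshaler interface {
	Marshal(o proto.Message) ([]byte, error)
	Unmarshal(bz []byte, ptr proto.Message) error
}

// A module keeper accepts the interface instead of a concrete codec.
type Keeper struct {
	cdc Marshaler
}

func NewKeeper(cdc Marshaler) Keeper { return Keeper{cdc: cdc} }
```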
2 changes: 1 addition & 1 deletion docs/architecture/adr-023-protobuf-naming.md
@@ -11,7 +11,7 @@ Accepted

## Context

-Protocol Buffers provide a basic [style guide](https://developers.google.com/protocol-buffers/docs/style)
+Protocol Buffers provide a basic [style guide](https://protobuf.dev/programming-guides/style/)
and [Buf](https://buf.build/docs/style-guide) builds upon that. To the
extent possible, we want to follow industry accepted guidelines and wisdom for
the effective usage of protobuf, deviating from those only when there is clear
24 changes: 12 additions & 12 deletions docs/architecture/adr-027-deterministic-protobuf-serialization.md
@@ -15,7 +15,7 @@ Fully deterministic structure serialization, which works across many languages a
is needed when signing messages. We need to be sure that whenever we serialize
a data structure, no matter in which supported language, the raw bytes
will stay the same.
-[Protobuf](https://developers.google.com/protocol-buffers/docs/proto3)
+[Protobuf](https://protobuf.dev/programming-guides/proto3/)
serialization is not bijective (i.e. there exist a practically unlimited number of
valid binary representations for a given protobuf document)<sup>1</sup>.

@@ -55,7 +55,7 @@ reject documents containing maps as invalid input.
### Background - Protobuf3 Encoding

Most numeric types in protobuf3 are encoded as
-[varints](https://developers.google.com/protocol-buffers/docs/encoding#varints).
+[varints](https://protobuf.dev/programming-guides/encoding/#varints).
Varints are at most 10 bytes, and since each varint byte has 7 bits of data,
varints are a representation of `uint70` (70-bit unsigned integer). When
encoding, numeric values are casted from their base type to `uint70`, and when
@@ -74,15 +74,15 @@ encoding malleability.
### Serialization rules

The serialization is based on the
-[protobuf3 encoding](https://developers.google.com/protocol-buffers/docs/encoding)
+[protobuf3 encoding](https://protobuf.dev/programming-guides/encoding/)
with the following additions:

1. Fields must be serialized only once in ascending order
2. Extra fields or any extra data must not be added
-3. [Default values](https://developers.google.com/protocol-buffers/docs/proto3#default)
+3. [Default values](https://protobuf.dev/programming-guides/proto3/#default)
must be omitted
4. `repeated` fields of scalar numeric types must use
-[packed encoding](https://developers.google.com/protocol-buffers/docs/encoding#packed)
+[packed encoding](https://protobuf.dev/programming-guides/encoding/#packed)
5. Varint encoding must not be longer than needed:
* No trailing zero bytes (in little endian, i.e. no leading zeroes in big
endian). Per rule 3 above, the default value of `0` must be omitted, so
@@ -288,27 +288,27 @@ the need of implementing a custom serializer that adheres to this standard (and
implementation detail and the details of any particular implementation may
change in the future. Therefore, protocol buffer parsers must be able to parse
fields in any order._ from
-https://developers.google.com/protocol-buffers/docs/encoding#order
-* <sup>2</sup> https://developers.google.com/protocol-buffers/docs/encoding#signed_integers
+https://protobuf.dev/programming-guides/encoding/#order
+* <sup>2</sup> https://protobuf.dev/programming-guides/encoding/#signed_integers
* <sup>3</sup> _Note that for scalar message fields, once a message is parsed
there's no way of telling whether a field was explicitly set to the default
value (for example whether a boolean was set to false) or just not set at all:
you should bear this in mind when defining your message types. For example,
don't have a boolean that switches on some behavior when set to false if you
don't want that behavior to also happen by default._ from
-https://developers.google.com/protocol-buffers/docs/proto3#default
+https://protobuf.dev/programming-guides/proto3/#default
* <sup>4</sup> _When a message is parsed, if the encoded message does not
contain a particular singular element, the corresponding field in the parsed
object is set to the default value for that field._ from
-https://developers.google.com/protocol-buffers/docs/proto3#default
+https://protobuf.dev/programming-guides/proto3/#default
* <sup>5</sup> _Also note that if a scalar message field is set to its default,
the value will not be serialized on the wire._ from
-https://developers.google.com/protocol-buffers/docs/proto3#default
+https://protobuf.dev/programming-guides/proto3/#default
* <sup>6</sup> _For enums, the default value is the first defined enum value,
which must be 0._ from
-https://developers.google.com/protocol-buffers/docs/proto3#default
+https://protobuf.dev/programming-guides/proto3/#default
* <sup>7</sup> _For message fields, the field is not set. Its exact value is
language-dependent._ from
-https://developers.google.com/protocol-buffers/docs/proto3#default
+https://protobuf.dev/programming-guides/proto3/#default
* Encoding rules and parts of the reasoning taken from
[canonical-proto3 Aaron Craelius](https://github.com/regen-network/canonical-proto3)
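As a side note on rule 5 above (minimal-length varints), the following sketch, assuming `google.golang.org/protobuf/encoding/protowire` and its permissive varint parsing, shows why non-minimal encodings are a malleability concern:

```go
// Sketch only: two different byte strings decode to the same logical value,
// which is exactly the malleability rule 5 rules out.
package main

import (
	"fmt"

	"google.golang.org/protobuf/encoding/protowire"
)

func main() {
	minimal := protowire.AppendVarint(nil, 300) // 0xAC 0x02, the canonical form
	padded := []byte{0xAC, 0x82, 0x00}          // same value with a trailing zero byte

	v1, _ := protowire.ConsumeVarint(minimal)
	v2, _ := protowire.ConsumeVarint(padded)
	fmt.Println(v1, v2) // 300 300
}
```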
6 changes: 3 additions & 3 deletions docs/architecture/adr-031-msg-service.md
@@ -77,7 +77,7 @@ message MsgSubmitProposalResponse {
```

While this is most commonly used for gRPC, overloading protobuf `service` definitions like this does not violate
-the intent of the [protobuf spec](https://developers.google.com/protocol-buffers/docs/proto3#services) which says:
+the intent of the [protobuf spec](https://protobuf.dev/programming-guides/proto3/#services) which says:
> If you don’t want to use gRPC, it’s also possible to use protocol buffers with your own RPC implementation.
With this approach, we would get an auto-generated `MsgServer` interface:

@@ -175,7 +175,7 @@ Separate handler definition is no longer needed with this approach.

## Consequences

-This design changes how a module functionality is exposed and accessed. It deprecates the existing `Handler` interface and `AppModule.Route` in favor of [Protocol Buffer Services](https://developers.google.com/protocol-buffers/docs/proto3#services) and Service Routing described above. This dramatically simplifies the code. We don't need to create handlers and keepers any more. Use of Protocol Buffer auto-generated clients clearly separates the communication interfaces between the module and a modules user. The control logic (aka handlers and keepers) is not exposed any more. A module interface can be seen as a black box accessible through a client API. It's worth to note that the client interfaces are also generated by Protocol Buffers.
+This design changes how a module functionality is exposed and accessed. It deprecates the existing `Handler` interface and `AppModule.Route` in favor of [Protocol Buffer Services](https://protobuf.dev/programming-guides/proto3/#services) and Service Routing described above. This dramatically simplifies the code. We don't need to create handlers and keepers any more. Use of Protocol Buffer auto-generated clients clearly separates the communication interfaces between the module and a modules user. The control logic (aka handlers and keepers) is not exposed any more. A module interface can be seen as a black box accessible through a client API. It's worth to note that the client interfaces are also generated by Protocol Buffers.

This also allows us to change how we perform functional tests. Instead of mocking AppModules and Router, we will mock a client (server will stay hidden). More specifically: we will never mock `moduleA.MsgServer` in `moduleB`, but rather `moduleA.MsgClient`. One can think about it as working with external services (eg DBs, or online servers...). We assume that the transmission between clients and servers is correctly handled by generated Protocol Buffers.

@@ -196,6 +196,6 @@ Finally, closing a module to client API opens desirable OCAP patterns discussed
## References

* [Initial Github Issue \#7122](https://github.com/cosmos/cosmos-sdk/issues/7122)
-* [proto 3 Language Guide: Defining Services](https://developers.google.com/protocol-buffers/docs/proto3#services)
+* [proto 3 Language Guide: Defining Services](https://protobuf.dev/programming-guides/proto3/#services)
* [ADR 020](./adr-020-protobuf-transaction-encoding.md)
* [ADR 021](./adr-021-protobuf-query-encoding.md)
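For readers skimming this diff, a hypothetical Go sketch of the server/client interface pair that such a `Msg` service definition generates follows (shaped after `MsgSubmitProposal` above, not the SDK's actual generated code):

```go
// Hypothetical shape of the generated code; request/response fields and
// options are simplified.
package types

import (
	"context"

	"google.golang.org/grpc"
)

type MsgSubmitProposal struct{ /* fields omitted */ }
type MsgSubmitProposalResponse struct{ /* fields omitted */ }

// MsgServer is what the module implements: each method is a state transition.
type MsgServer interface {
	SubmitProposal(context.Context, *MsgSubmitProposal) (*MsgSubmitProposalResponse, error)
}

// MsgClient is what callers (and, per the testing note above, mocks) use.
type MsgClient interface {
	SubmitProposal(ctx context.Context, in *MsgSubmitProposal, opts ...grpc.CallOption) (*MsgSubmitProposalResponse, error)
}
```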
2 changes: 1 addition & 1 deletion docs/architecture/adr-044-protobuf-updates-guidelines.md
@@ -73,7 +73,7 @@ and the following ones are NOT valid:

#### 2. Fields MAY be marked as `deprecated`, and nodes MAY implement a protocol-breaking change for handling these fields

-Protobuf supports the [`deprecated` field option](https://developers.google.com/protocol-buffers/docs/proto#options), and this option MAY be used on any field, including `Msg` fields. If a node handles a Protobuf message with a non-empty deprecated field, the node MAY change its behavior upon processing it, even in a protocol-breaking way. When possible, the node MUST handle backwards compatibility without breaking the consensus (unless we increment the proto version).
+Protobuf supports the [`deprecated` field option](https://protobuf.dev/programming-guides/proto2/), and this option MAY be used on any field, including `Msg` fields. If a node handles a Protobuf message with a non-empty deprecated field, the node MAY change its behavior upon processing it, even in a protocol-breaking way. When possible, the node MUST handle backwards compatibility without breaking the consensus (unless we increment the proto version).

As an example, the Cosmos SDK v0.42 to v0.43 update contained two Protobuf-breaking changes, listed below. Instead of bumping the package versions from `v1beta1` to `v1`, the SDK team decided to follow this guideline, by reverting the breaking changes, marking those changes as deprecated, and modifying the node implementation when processing messages with deprecated fields. More specifically:

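To illustrate the deprecated-field guideline above, here is a hedged Go sketch of a handler that keeps honoring a deprecated field instead of rejecting it; every type, field, and method name is hypothetical:

```go
// Hypothetical message whose old field is deprecated but must stay
// wire-compatible for older clients.
package keeper

import "context"

type MsgUpdateParams struct {
	LegacyAmount string // deprecated: older clients may still set this
	Amount       string
}

type MsgUpdateParamsResponse struct{}

type msgServer struct{}

// UpdateParams translates the deprecated field rather than rejecting the
// message, keeping behavior backwards compatible.
func (k msgServer) UpdateParams(_ context.Context, msg *MsgUpdateParams) (*MsgUpdateParamsResponse, error) {
	if msg.Amount == "" && msg.LegacyAmount != "" {
		msg.Amount = msg.LegacyAmount
	}
	// ... apply msg.Amount to state ...
	return &MsgUpdateParamsResponse{}, nil
}
```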
4 changes: 2 additions & 2 deletions docs/architecture/adr-054-semver-compatible-modules.md
@@ -36,7 +36,7 @@ In order to achieve this, we need to solve the following problems:
2. circular dependencies between modules need to be broken to actually release
many modules in the SDK independently
3. pernicious minor version incompatibilities introduced through correctly
-[evolving protobuf schemas](https://developers.google.com/protocol-buffers/docs/proto3#updating)
+[evolving protobuf schemas](https://protobuf.dev/programming-guides/proto3/#updating)
without correct [unknown field filtering](./adr-020-protobuf-transaction-encoding.md#unknown-field-filtering)

Note that all the following discussion assumes that the proto file versioning and state machine versioning of a module
@@ -297,7 +297,7 @@ generate its own version of `MsgDoSomething` as `bar/internal/foo/v1.MsgDoSometh
inter-module router which would somehow convert it to the version which foo needs (ex. `foo/internal.MsgDoSomething`).

Currently, two generated structs for the same protobuf type cannot exist in the same go binary without special
-build flags (see https://developers.google.com/protocol-buffers/docs/reference/go/faq#fix-namespace-conflict).
+build flags (see https://protobuf.dev/reference/go/faq/#fix-namespace-conflict).
A relatively simple mitigation to this issue would be to set up the protobuf code to not register protobuf types
globally if they are generated in an `internal/` package. This will require modules to register their types manually
with the app-level level protobuf registry, this is similar to what modules already do with the `InterfaceRegistry`
2 changes: 1 addition & 1 deletion docs/build/building-modules/02-messages-and-queries.md
@@ -97,7 +97,7 @@ A `query` is a request for information made by end-users of applications through

### gRPC Queries

-Queries should be defined using [Protobuf services](https://developers.google.com/protocol-buffers/docs/proto#services). A `Query` service should be created per module in `query.proto`. This service lists endpoints starting with `rpc`.
+Queries should be defined using [Protobuf services](https://protobuf.dev/programming-guides/proto2/). A `Query` service should be created per module in `query.proto`. This service lists endpoints starting with `rpc`.

Here's an example of such a `Query` service definition:

2 changes: 1 addition & 1 deletion docs/build/building-modules/15-depinject.md
@@ -52,7 +52,7 @@ https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/api/cosmos/group/modul
```

:::note
-Pulsar is optional. The official [`protoc-gen-go`](https://developers.google.com/protocol-buffers/docs/reference/go-generated) can be used as well.
+Pulsar is optional. The official [`protoc-gen-go`](https://protobuf.dev/reference/go/go-generated/) can be used as well.
:::

## Dependency Definition
8 changes: 4 additions & 4 deletions docs/learn/advanced/05-encoding.md
@@ -17,9 +17,9 @@ While encoding in the Cosmos SDK used to be mainly handled by `go-amino` codec,
## Encoding

The Cosmos SDK supports two wire encoding protocols. Binary encoding is fulfilled by [Protocol
-Buffers](https://developers.google.com/protocol-buffers), specifically the
+Buffers](https://protobuf.dev/), specifically the
[gogoprotobuf](https://github.com/cosmos/gogoproto/) implementation, which is a subset of
-[Proto3](https://developers.google.com/protocol-buffers/docs/proto3) with an extension for
+[Proto3](https://protobuf.dev/programming-guides/proto3/) with an extension for
interface support. Text encoding is fulfilled by [Amino](https://github.com/tendermint/go-amino).

Due to Amino having significant performance drawbacks, being reflection-based, and not having
@@ -52,7 +52,7 @@ Modules are encouraged to utilize Protobuf encoding for their respective types.

### Guidelines for protobuf message definitions

-In addition to [following official Protocol Buffer guidelines](https://developers.google.com/protocol-buffers/docs/proto3#simple), we recommend using these annotations in .proto files when dealing with interfaces:
+In addition to [following official Protocol Buffer guidelines](https://protobuf.dev/programming-guides/proto3/#simple), we recommend using these annotations in .proto files when dealing with interfaces:

* use `cosmos_proto.accepts_interface` to annotate `Any` fields that accept interfaces
* pass the same fully qualified name as `protoName` to `InterfaceRegistry.RegisterInterface`
@@ -240,5 +240,5 @@ Protobuf types can be defined to encode:

#### Naming and conventions

-We encourage developers to follow industry guidelines: [Protocol Buffers style guide](https://developers.google.com/protocol-buffers/docs/style)
+We encourage developers to follow industry guidelines: [Protocol Buffers style guide](https://protobuf.dev/programming-guides/style/)
and [Buf](https://buf.build/docs/style-guide), see more details in [ADR 023](../../architecture/adr-023-protobuf-naming.md)
4 changes: 2 additions & 2 deletions docs/learn/beginner/00-app-anatomy.md
@@ -181,7 +181,7 @@ Modules must implement [interfaces](../../build/building-modules/01-module-manag

### `Msg` Services

-Each application module defines two [Protobuf services](https://developers.google.com/protocol-buffers/docs/proto#services): one `Msg` service to handle messages, and one gRPC `Query` service to handle queries. If we consider the module as a state-machine, then a `Msg` service is a set of state transition RPC methods.
+Each application module defines two [Protobuf services](https://protobuf.dev/programming-guides/proto2/): one `Msg` service to handle messages, and one gRPC `Query` service to handle queries. If we consider the module as a state-machine, then a `Msg` service is a set of state transition RPC methods.
Each Protobuf `Msg` service method is 1:1 related to a Protobuf request type, which must implement `sdk.Msg` interface.
Note that `sdk.Msg`s are bundled in [transactions](../advanced/01-transactions.md), and each transaction contains one or multiple messages.

@@ -208,7 +208,7 @@ Each module should also implement the `RegisterServices` method as part of the [

gRPC `Query` services allow users to query the state using [gRPC](https://grpc.io). They are enabled by default, and can be configured under the `grpc.enable` and `grpc.address` fields inside <!-- markdown-link-check-disable-line -->[`app.toml`](../../user/run-node/01-run-node.md#configuring-the-node-using-apptoml-and-configtoml).

-gRPC `Query` services are defined in the module's Protobuf definition files, specifically inside `query.proto`. The `query.proto` definition file exposes a single `Query` [Protobuf service](https://developers.google.com/protocol-buffers/docs/proto#services). Each gRPC query endpoint corresponds to a service method, starting with the `rpc` keyword, inside the `Query` service.
+gRPC `Query` services are defined in the module's Protobuf definition files, specifically inside `query.proto`. The `query.proto` definition file exposes a single `Query` [Protobuf service](https://protobuf.dev/programming-guides/proto2/). Each gRPC query endpoint corresponds to a service method, starting with the `rpc` keyword, inside the `Query` service.

Protobuf generates a `QueryServer` interface for each module, containing all the service methods. A module's [`keeper`](#keeper) then needs to implement this `QueryServer` interface, by providing the concrete implementation of each service method. This concrete implementation is the handler of the corresponding gRPC query endpoint.

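To ground the `QueryServer` paragraph above, here is a minimal hypothetical sketch of a keeper implementing a one-method generated interface (names are illustrative, not a specific SDK module):

```go
// Hypothetical single-method Query service and the keeper that serves it.
package keeper

import "context"

// What codegen would produce for a Query service with one rpc.
type QueryServer interface {
	Params(context.Context, *QueryParamsRequest) (*QueryParamsResponse, error)
}

type QueryParamsRequest struct{}
type QueryParamsResponse struct{ MaxValidators uint32 }

// Keeper provides the concrete handler behind the gRPC query endpoint.
type Keeper struct{ maxValidators uint32 }

func (k Keeper) Params(_ context.Context, _ *QueryParamsRequest) (*QueryParamsResponse, error) {
	return &QueryParamsResponse{MaxValidators: k.maxValidators}, nil
}

// Compile-time check that Keeper satisfies the generated interface.
var _ QueryServer = Keeper{}
```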