Editorial: Various style and wording tweaks #797

Merged (13 commits) on Jan 31, 2025
Changes from 1 commit
1 change: 1 addition & 0 deletions docs/SpecCodingConventions.md
@@ -148,4 +148,5 @@ Example:

* Dictionary members are referenced using dotted property syntax. e.g. _options.padding_
* Note that this is contrary to Web IDL + Infra; formally, a JavaScript object has been mapped to a Web IDL [dictionary](https://webidl.spec.whatwg.org/#idl-dictionaries) and then processed into an Infra [map](ordered) by the time a spec is using it. So formally the syntax _options["padding"]_ should be used.
* Dictionary members should be linked to, both in algorithms and in other text. e.g. `|options|.{{MLOptionsDict/member}}` (in the steps for an algorithm) or `*options*.{{MLOptionsDict/member}}` (outside an algorithm).
* Dictionary members should be given definitions somewhere in the text. This is usually done with a `<dl dfn-type=dict-member dfn-for=...>` for the dictionary as a whole, containing a `<dfn>` for each member.
22 changes: 11 additions & 11 deletions index.bs
@@ -2495,7 +2495,7 @@ partial dictionary MLOpSupportLimits {
</dl>

<div class="note">
A *depthwise* conv2d operation is a variant of grouped convolution, used in models like the MobileNet, where the *options.groups* = inputChannels = outputChannels and the shape of filter tensor is *[options.groups, 1, height, width]*
A *depthwise* conv2d operation is a variant of grouped convolution, used in models like the MobileNet, where the *options*.{{MLConv2dOptions/groups}} = inputChannels = outputChannels and the shape of filter tensor is *[options.groups, 1, height, width]*
Contributor:
Should inputChannels and outputChannels be formatted? In *[options.groups, 1, height, width]*, should we use [*options*.{{MLConv2dOptions/groups}}, 1, *height*, *width*]?

I found that the prose for MLConv2dOptions.inputLayout and MLConv2dOptions.filterLayout also refers to variables including inputChannels and outputChannels, etc. Should we also format them separately?

Member Author:
Formatted *inputChannels* and *outputChannels* in that instance (in f714fc1). As noted above, we need to decide more generally on how to format shapes.

for {{MLConv2dFilterOperandLayout/"oihw"}} layout, *[height, width, 1, options.groups]* for {{MLConv2dFilterOperandLayout/"hwio"}} layout, *[options.groups, height, width, 1]* for {{MLConv2dFilterOperandLayout/"ohwi"}} layout and *[1, height, width, options.groups]* for {{MLConv2dFilterOperandLayout/"ihwo"}} layout.
</div>
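
A minimal sketch of the filter shapes the note above implies for each filter layout; the helper name `depthwiseFilterShape` is hypothetical and only restates the note, it is not part of the spec or this diff.

```js
// Hypothetical helper: for a depthwise conv2d (groups = inputChannels = outputChannels),
// return the filter tensor shape implied by each filter layout described in the note.
function depthwiseFilterShape(layout, groups, height, width) {
  switch (layout) {
    case 'oihw': return [groups, 1, height, width];
    case 'hwio': return [height, width, 1, groups];
    case 'ohwi': return [groups, height, width, 1];
    case 'ihwo': return [1, height, width, groups];
    default: throw new TypeError(`unknown filter layout: ${layout}`);
  }
}

depthwiseFilterShape('oihw', 32, 3, 3); // [32, 1, 3, 3] for a 3x3 depthwise filter over 32 channels
```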

@@ -3587,7 +3587,7 @@ partial dictionary MLOpSupportLimits {
<div dfn-for="MLGraphBuilder/gather(input, indices, options)" dfn-type=argument>
**Arguments:**
- <dfn>input</dfn>: an {{MLOperand}}. The input N-D tensor from which the values are gathered.
- <dfn>indices</dfn>: an {{MLOperand}}. The indices N-D tensor of the input values to gather. The values must be of type {{MLOperandDataType/"int32"}}, {{MLOperandDataType/"uint32"}} or {{MLOperandDataType/"int64"}}, and must be in the range -N (inclusive) to N (exclusive) where N is the size of the input dimension indexed by *options.axis*, and a negative index means indexing from the end of the dimension.
- <dfn>indices</dfn>: an {{MLOperand}}. The indices N-D tensor of the input values to gather. The values must be of type {{MLOperandDataType/"int32"}}, {{MLOperandDataType/"uint32"}} or {{MLOperandDataType/"int64"}}, and must be in the range -N (inclusive) to N (exclusive) where N is the size of the input dimension indexed by *options*.{{MLGatherOptions/axis}}, and a negative index means indexing from the end of the dimension.
- <dfn>options</dfn>: an optional {{MLGatherOptions}}. The optional parameters of the operation.

**Returns:** an {{MLOperand}}. The output N-D tensor of [=MLOperand/rank=] equal to the [=MLOperand/rank=] of *input* + the [=MLOperand/rank=] of *indices* - 1.
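
A small sketch of the index range and output rank described above; `normalizeGatherIndex` and `gatherOutputRank` are hypothetical helpers, not spec algorithms.

```js
// Hypothetical helpers restating the gather() prose: indices must lie in [-N, N),
// where N is the size of the input dimension selected by options.axis, and a
// negative index counts back from the end of that dimension.
function normalizeGatherIndex(index, axisSize) {
  if (index < -axisSize || index >= axisSize) {
    throw new RangeError(`index ${index} is outside [-${axisSize}, ${axisSize})`);
  }
  return index < 0 ? index + axisSize : index;
}

// rank(output) = rank(input) + rank(indices) - 1
function gatherOutputRank(inputRank, indicesRank) {
  return inputRank + indicesRank - 1;
}

normalizeGatherIndex(-1, 4); // 3 (the last element along the gathered axis)
gatherOutputRank(3, 2);      // 4
```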
@@ -3849,7 +3849,7 @@ partial dictionary MLOpSupportLimits {
<dl dfn-type=dict-member dfn-for=MLGemmOptions>
: <dfn>c</dfn>
::
The third input tensor. It is either a scalar, or of the shape that is [=unidirectionally broadcastable=] to the shape *[M, N]*. When it is not specified, the computation is done as if *c* is a scalar 0.0.
The third input tensor. It is either a scalar, or of the shape that is [=unidirectionally broadcastable=] to the shape *[M, N]*. When it is not specified, the computation is done as if {{MLGemmOptions/c}} is a scalar 0.0.

: <dfn>alpha</dfn>
::
@@ -3870,8 +3870,8 @@ partial dictionary MLOpSupportLimits {

<div dfn-for="MLGraphBuilder/gemm(a, b, options)" dfn-type=argument>
**Arguments:**
- <dfn>a</dfn>: an {{MLOperand}}. The first input 2-D tensor with shape *[M, K]* if *aTranspose* is false, or *[K, M]* if *aTranspose* is true.
- <dfn>b</dfn>: an {{MLOperand}}. The second input 2-D tensor with shape *[K, N]* if *bTranspose* is false, or *[N, K]* if *bTranspose* is true.
- <dfn>a</dfn>: an {{MLOperand}}. The first input 2-D tensor with shape *[M, K]* if {{MLGemmOptions/aTranspose}} is false, or *[K, M]* if {{MLGemmOptions/aTranspose}} is true.
- <dfn>b</dfn>: an {{MLOperand}}. The second input 2-D tensor with shape *[K, N]* if {{MLGemmOptions/bTranspose}} is false, or *[N, K]* if {{MLGemmOptions/bTranspose}} is true.
- <dfn>options</dfn>: an optional {{MLGemmOptions}}. The optional parameters of the operation.

**Returns:** an {{MLOperand}}. The output 2-D tensor of shape *[M, N]* that contains the calculated product of all the inputs.
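
A shape-only sketch of how the transpose options above determine M, K, and N; `gemmOutputShape` is a hypothetical helper, not the spec's algorithm.

```js
// Hypothetical helper: derive the [M, N] output shape of gemm(a, b, options) from the
// logical shapes of a and b and the aTranspose/bTranspose flags described above.
function gemmOutputShape(aShape, bShape, { aTranspose = false, bTranspose = false } = {}) {
  const [M, K]  = aTranspose ? [aShape[1], aShape[0]] : aShape;
  const [K2, N] = bTranspose ? [bShape[1], bShape[0]] : bShape;
  if (K !== K2) throw new TypeError(`inner dimensions disagree: ${K} vs ${K2}`);
  // options.c, when given, must be a scalar or unidirectionally broadcastable to [M, N].
  return [M, N];
}

gemmOutputShape([3, 4], [4, 5]);                       // [3, 5]
gemmOutputShape([4, 3], [4, 5], { aTranspose: true }); // [3, 5]
```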
@@ -4979,7 +4979,7 @@ partial dictionary MLOpSupportLimits {

: <dfn>axes</dfn>
::
The indices to the input dimensions to reduce. When this member is not present, it is treated as if all dimensions except the first were given (e.g. for a 4-D input tensor, axes = [1,2,3]). That is, the reduction for the mean and variance values are calculated across all the input features for each independent batch. If empty, no dimensions are reduced.
The indices to the input dimensions to reduce. When this member is not present, it is treated as if all dimensions except the first were given (e.g. for a 4-D input tensor, {{MLLayerNormalizationOptions/axes}} = [1,2,3]). That is, the reduction for the mean and variance values are calculated across all the input features for each independent batch. If empty, no dimensions are reduced.
: <dfn>epsilon</dfn>
::
A small value to prevent computational error due to divide-by-zero.
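
A one-line sketch of the default for axes described above (a hypothetical helper, not spec text): every dimension except the first is reduced.

```js
// Hypothetical helper: when options.axes is absent, reduce all dimensions except the first.
function defaultLayerNormAxes(inputRank) {
  return Array.from({ length: inputRank - 1 }, (_, i) => i + 1);
}

defaultLayerNormAxes(4); // [1, 2, 3]
```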
@@ -6346,16 +6346,16 @@ partial dictionary MLOpSupportLimits {
<div dfn-for="MLGraphBuilder/averagePool2d(input, options), MLGraphBuilder/l2Pool2d(input, options), MLGraphBuilder/maxPool2d(input, options)" dfn-type=argument>
**Arguments:**
- <dfn>input</dfn>: an {{MLOperand}}. The input 4-D tensor. The logical shape
is interpreted according to the value of *options.layout*.
is interpreted according to the value of *options*.{{MLPool2dOptions/layout}}.
- <dfn>options</dfn>: an optional {{MLPool2dOptions}}. The optional parameters of the operation.

**Returns:** an {{MLOperand}}. The output 4-D tensor that contains the
result of the reduction. The logical shape is interpreted according to the
value of *layout*. More specifically, if the *options.roundingType* is {{MLRoundingType/"floor"}}, the spatial dimensions of the output tensor can be calculated as follows:
value of {{MLPool2dOptions/layout}}. More specifically, if the *options*.{{MLPool2dOptions/roundingType}} is {{MLRoundingType/"floor"}}, the spatial dimensions of the output tensor can be calculated as follows:

`output size = floor(1 + (input size - filter size + beginning padding + ending padding) / stride)`

or if *options.roundingType* is {{MLRoundingType/"ceil"}}:
or if *options*.{{MLPool2dOptions/roundingType}} is {{MLRoundingType/"ceil"}}:

`output size = ceil(1 + (input size - filter size + beginning padding + ending padding) / stride)`
</div>
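
The two formulas above, restated as a small helper for concreteness; `pool2dOutputSize` is hypothetical and only mirrors the prose.

```js
// Hypothetical helper restating the spatial output-size formulas above.
function pool2dOutputSize(inputSize, filterSize, padBegin, padEnd, stride, roundingType = 'floor') {
  const size = 1 + (inputSize - filterSize + padBegin + padEnd) / stride;
  return roundingType === 'ceil' ? Math.ceil(size) : Math.floor(size);
}

pool2dOutputSize(7, 2, 0, 0, 2);         // floor(1 + 5/2) = 3
pool2dOutputSize(7, 2, 0, 0, 2, 'ceil'); // ceil(1 + 5/2) = 4
```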
@@ -6994,7 +6994,7 @@ partial dictionary MLOpSupportLimits {
: <dfn>sizes</dfn>
::
A list of length 2.
Specifies the target sizes for each input dimension from {{MLResample2dOptions/axes}} : *[sizeForFirstAxis, sizeForSecondAxis]*. When the target sizes are specified, {{MLResample2dOptions/scales}} is ignored, since the scaling factor values are derived from the target sizes of the input.
Specifies the target sizes for each input dimension from {{MLResample2dOptions/axes}}: *[sizeForFirstAxis, sizeForSecondAxis]*. When {{MLResample2dOptions/sizes}} is specified, {{MLResample2dOptions/scales}} is ignored, since the scaling factor values are derived from the target sizes of the input.

: <dfn>axes</dfn>
::
@@ -7582,7 +7582,7 @@ partial dictionary MLOpSupportLimits {
<div dfn-for="MLGraphBuilder/split(input, splits, options)" dfn-type=argument>
**Arguments:**
- <dfn>input</dfn>: an {{MLOperand}}. The input tensor.
- <dfn>splits</dfn>: an {{unsigned long}} or [=sequence=]<{{unsigned long}}>. If an {{unsigned long}}, it specifies the number of output tensors along the axis. The number must evenly divide the dimension size of *input* along *options.axis*. If a [=sequence=]<{{unsigned long}}>, it specifies the sizes of each output tensor along the *options.axis*. The sum of sizes must equal to the dimension size of *input* along *options.axis*.
- <dfn>splits</dfn>: an {{unsigned long}} or [=sequence=]<{{unsigned long}}>. If an {{unsigned long}}, it specifies the number of output tensors along the axis. The number must evenly divide the dimension size of *input* along *options*.{{MLSplitOptions/axis}}. If a [=sequence=]<{{unsigned long}}>, it specifies the sizes of each output tensor along the *options*.{{MLSplitOptions/axis}}. The sum of sizes must equal to the dimension size of *input* along *options*.{{MLSplitOptions/axis}}.
- <dfn>options</dfn>: an optional {{MLSplitOptions}}. The optional parameters of the operation.

**Returns:** [=sequence=]<{{MLOperand}}>. The split output tensors. If *splits* is an {{unsigned long}}, the [=list/size=] of the output is equal to *splits*. The shape of each output tensor is the same as *input* except the dimension size of *axis* equals to the quotient of dividing the dimension size of *input* along *axis* by *splits*. If *splits* is a [=sequence=]<{{unsigned long}}>, the [=list/size=] of the output equals the [=list/size=] of *splits*. The shape of the i-th output tensor is the same as *input* except along *axis* where the dimension size is *splits[i]*.
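
A sketch of the two accepted forms of *splits* described above; `splitSizes` is a hypothetical helper, not the spec's algorithm.

```js
// Hypothetical helper: splits is either a count that must evenly divide the dimension
// size along options.axis, or an explicit list of sizes that must sum to it.
function splitSizes(dimSize, splits) {
  if (typeof splits === 'number') {
    if (dimSize % splits !== 0) throw new TypeError(`${splits} does not evenly divide ${dimSize}`);
    return Array(splits).fill(dimSize / splits);
  }
  const sum = splits.reduce((a, b) => a + b, 0);
  if (sum !== dimSize) throw new TypeError(`sizes sum to ${sum}, expected ${dimSize}`);
  return [...splits];
}

splitSizes(12, 3);         // [4, 4, 4]
splitSizes(12, [2, 4, 6]); // [2, 4, 6]
```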