Push docs and doxygen updates into gh-pages branch
Doxygen Bot (GitHub Action) committed Nov 11, 2024
1 parent 3e633e6 commit 0ad35e8
Showing 4 changed files with 47 additions and 16 deletions.
Dialects/krnl.md: 2 changes (1 addition, 1 deletion)
@@ -453,7 +453,7 @@ in the `value` dense element attribute.

Traits: `AlwaysSpeculatableImplTrait`, `MemRefsNormalizable`

Interfaces: `ConditionallySpeculatable`, `NoMemoryEffect (MemoryEffectOpInterface)`
Interfaces: `ConditionallySpeculatable`, `KrnlGlobalOpInterface`, `NoMemoryEffect (MemoryEffectOpInterface)`

Effects: `MemoryEffects::Effect{}`

Dialects/onnx.md: 30 changes (15 additions, 15 deletions)
@@ -529,7 +529,7 @@ AveragePool consumes an input tensor X and applies average pooling across
```
output_spatial_shape[i] = ceil((input_spatial_shape[i] + pad_shape[i] - dilation[i] * (kernel_shape[i] - 1) - 1) / strides_spatial_shape[i] + 1)
```
if ceil_mode is enabled. `pad_shape[i]` is the sum of pads along axis `i`. Sliding windows that would start in the right padded region are ignored.
if ceil_mode is enabled. `pad_shape[i]` is the sum of pads along axis `i`.
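
To make the ceil_mode formula above concrete, here is a minimal Python sketch (the helper name and the example values are illustrative, not part of the generated docs):

```python
import math

def pooled_output_dim(input_dim, pad_sum, kernel, stride, dilation=1, ceil_mode=True):
    # output_spatial_shape[i] per the formula above; pad_sum is pad_shape[i],
    # i.e. the sum of the pads along axis i.
    effective_kernel = dilation * (kernel - 1) + 1
    quotient = (input_dim + pad_sum - effective_kernel) / stride
    return (math.ceil(quotient) if ceil_mode else math.floor(quotient)) + 1

# Example: a length-7 input, kernel 3, stride 2, pads summing to 2 -> output length 4.
print(pooled_output_dim(7, 2, 3, 2))  # 4
```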

`auto_pad` is a DEPRECATED attribute. If it is still used, the output spatial shape will be as follows when ceil_mode is enabled:
```
@@ -1701,15 +1701,15 @@ Effects: `MemoryEffects::Effect{}`

| Operand | Description |
| :-----: | ----------- |
| `X` | tensor of 16-bit float values or tensor of 32-bit float values or tensor of 64-bit float values
| `W` | tensor of 16-bit float values or tensor of 32-bit float values or tensor of 64-bit float values
| `B` | tensor of 16-bit float values or tensor of 32-bit float values or tensor of 64-bit float values or none type
| `X` | tensor of 16-bit float values or tensor of 16-bit float values or tensor of 32-bit float values or tensor of 64-bit float values
| `W` | tensor of 16-bit float values or tensor of 16-bit float values or tensor of 32-bit float values or tensor of 64-bit float values
| `B` | tensor of 16-bit float values or tensor of 16-bit float values or tensor of 32-bit float values or tensor of 64-bit float values or none type

#### Results:

| Result | Description |
| :----: | ----------- |
| `Y` | tensor of 16-bit float values or tensor of 32-bit float values or tensor of 64-bit float values
| `Y` | tensor of 16-bit float values or tensor of 16-bit float values or tensor of 32-bit float values or tensor of 64-bit float values

### `onnx.ConvTranspose` (ONNXConvTransposeOp)

@@ -2610,13 +2610,13 @@ Effects: `MemoryEffects::Effect{}`

| Operand | Description |
| :-----: | ----------- |
| `input` | tensor of 16-bit float values or tensor of 32-bit float values or tensor of 64-bit float values or tensor of bfloat16 type values
| `input` | tensor of bfloat16 type values or tensor of 16-bit float values or tensor of 32-bit float values or tensor of 64-bit float values

#### Results:

| Result | Description |
| :----: | ----------- |
| `output` | tensor of 16-bit float values or tensor of 32-bit float values or tensor of 64-bit float values or tensor of bfloat16 type values
| `output` | tensor of bfloat16 type values or tensor of 16-bit float values or tensor of 32-bit float values or tensor of 64-bit float values

### `onnx.Expand` (ONNXExpandOp)

@@ -3282,13 +3282,13 @@ Effects: `MemoryEffects::Effect{}`

| Operand | Description |
| :-----: | ----------- |
| `X` | tensor of 16-bit float values or tensor of 32-bit float values or tensor of 64-bit float values
| `X` | tensor of bfloat16 type values or tensor of 16-bit float values or tensor of 32-bit float values or tensor of 64-bit float values

#### Results:

| Result | Description |
| :----: | ----------- |
| `Y` | tensor of 16-bit float values or tensor of 32-bit float values or tensor of 64-bit float values
| `Y` | tensor of bfloat16 type values or tensor of 16-bit float values or tensor of 32-bit float values or tensor of 64-bit float values

### `onnx.GlobalMaxPool` (ONNXGlobalMaxPoolOp)

@@ -4817,7 +4817,7 @@ Effects: `MemoryEffects::Effect{}`

_ONNX MatMulInteger operation_

Matrix product that behaves like numpy.matmul: https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.matmul.html.
Matrix product that behaves like [numpy.matmul](https://numpy.org/doc/stable/reference/generated/numpy.matmul.html).
The product MUST never overflow. The accumulation may overflow if and only if it is done in 32 bits.
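
As a rough illustration of these integer-width rules, here is a small NumPy sketch (zero-point handling is omitted and the values are made up for the example):

```python
import numpy as np

# int8 operands: each individual product fits easily in 32 bits, and the
# accumulation of products is carried out in 32-bit integers.
A = np.array([[1, -2], [3, 4]], dtype=np.int8)
B = np.array([[5, 6], [-7, 8]], dtype=np.int8)
Y = A.astype(np.int32) @ B.astype(np.int32)  # accumulate in int32
print(Y)  # [[ 19 -10]
          #  [-13  50]]
```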

Traits: `AlwaysSpeculatableImplTrait`
@@ -4845,7 +4845,7 @@ Effects: `MemoryEffects::Effect{}`

_ONNX MatMul operation_

Matrix product that behaves like numpy.matmul: https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.matmul.html
Matrix product that behaves like [numpy.matmul](https://numpy.org/doc/stable/reference/generated/numpy.matmul.html).

Traits: `AlwaysSpeculatableImplTrait`

@@ -4910,7 +4910,7 @@ MaxPool consumes an input tensor X and applies max pooling across
```
output_spatial_shape[i] = ceil((input_spatial_shape[i] + pad_shape[i] - dilation[i] * (kernel_shape[i] - 1) - 1) / strides_spatial_shape[i] + 1)
```
if ceil_mode is enabled. `pad_shape[i]` is the sum of pads along axis `i`. Sliding windows that would start in the right padded region are ignored.
if ceil_mode is enabled. `pad_shape[i]` is the sum of pads along axis `i`.

`auto_pad` is a DEPRECATED attribute. If it is still used, the output spatial shape will be as follows when ceil_mode is enabled:
```
@@ -6611,7 +6611,7 @@ Effects: `MemoryEffects::Effect{}`

_ONNX QLinearMatMul operation_

Matrix product that behaves like numpy.matmul: https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.matmul.html.
Matrix product that behaves like [numpy.matmul](https://numpy.org/doc/stable/reference/generated/numpy.matmul.html).
It consumes two quantized input tensors, their scales and zero points, and the scale and zero point of the output,
and computes the quantized output. The quantization formula is y = saturate((x / y_scale) + y_zero_point).
For (x / y_scale), it rounds to the nearest value, with ties rounding to even. Refer to https://en.wikipedia.org/wiki/Rounding for details.
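
As a sanity check on the quantization formula, the following NumPy snippet applies y = saturate((x / y_scale) + y_zero_point) with round-half-to-even for an assumed uint8 output type (the values are illustrative only):

```python
import numpy as np

def quantize_uint8(x, y_scale, y_zero_point):
    # Round (x / y_scale) to the nearest integer, ties to even, then shift and saturate.
    y = np.rint(x / y_scale) + y_zero_point
    return np.clip(y, 0, 255).astype(np.uint8)

x = np.array([0.25, 0.75, 100.0])
print(quantize_uint8(x, y_scale=0.5, y_zero_point=128))  # [128 130 255]
# 0.5 rounds to 0 (ties to even) -> 128; 1.5 rounds to 2 -> 130; 200 + 128 saturates to 255.
```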
@@ -10215,13 +10215,13 @@ Effects: `MemoryEffects::Effect{}`

| Operand | Description |
| :-----: | ----------- |
| `input` | tensor of 16-bit float values or tensor of 32-bit float values or tensor of 64-bit float values or tensor of bfloat16 type values
| `input` | tensor of bfloat16 type values or tensor of 16-bit float values or tensor of 32-bit float values or tensor of 64-bit float values

#### Results:

| Result | Description |
| :----: | ----------- |
| `output` | tensor of 16-bit float values or tensor of 32-bit float values or tensor of 64-bit float values or tensor of bfloat16 type values
| `output` | tensor of bfloat16 type values or tensor of 16-bit float values or tensor of 32-bit float values or tensor of 64-bit float values

### `onnx.TfIdfVectorizer` (ONNXTfIdfVectorizerOp)

Dialects/zhigh.md: 3 changes (3 additions, 0 deletions)
@@ -793,6 +793,8 @@ Effects: `MemoryEffects::Effect{}`
_ZHigh Stickified Constant operation_

This operator produces a constant tensor to store stickified data.
The `value` attribute holds either the original constant or the stickified constant.
The `stickified` attribute indicates whether `value` is already stickified.
Stickified data is opaque and must be 4K-aligned. Whoever produces
the stickified data must make sure its size in bytes is consistent with
the output tensor's size.
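
A minimal sketch of the producer-side checks implied by this description (the helper and its signature are hypothetical, not an onnx-mlir API):

```python
def check_stickified_payload(payload_addr: int, payload: bytes, output_tensor_bytes: int) -> None:
    # The stickified data must be 4K-aligned ...
    if payload_addr % 4096 != 0:
        raise ValueError("stickified data must be 4K-aligned")
    # ... and its size in bytes must be consistent with the output tensor's size.
    if len(payload) != output_tensor_bytes:
        raise ValueError(
            f"payload is {len(payload)} bytes, expected {output_tensor_bytes}")
```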
@@ -807,6 +809,7 @@ Effects: `MemoryEffects::Effect{}`

<table>
<tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr>
<tr><td><code>stickified</code></td><td>::mlir::BoolAttr</td><td>bool attribute</td></tr>
<tr><td><code>value</code></td><td>::mlir::Attribute</td><td>any attribute</td></tr>
<tr><td><code>alignment</code></td><td>::mlir::IntegerAttr</td><td>64-bit signless integer attribute</td></tr>
</table>
Dialects/zlow.md: 28 changes (28 additions, 0 deletions)
@@ -752,6 +752,34 @@ Interfaces: `MemoryEffectOpInterface`
| `X` | memref of 16-bit float or 32-bit float values
| `Out` | memref of dlfloat16 type values

### `zlow.stickifiedConstant` (::onnx_mlir::zlow::ZLowStickifiedConstantOp)

_ZLow Stickified Constant operation._


Traits: `MemRefsNormalizable`

Interfaces: `KrnlGlobalOpInterface`

#### Attributes:

<table>
<tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr>
<tr><td><code>shape</code></td><td>::mlir::Attribute</td><td>any attribute</td></tr>
<tr><td><code>name</code></td><td>::mlir::StringAttr</td><td>string attribute</td></tr>
<tr><td><code>stickified</code></td><td>::mlir::BoolAttr</td><td>bool attribute</td></tr>
<tr><td><code>value</code></td><td>::mlir::Attribute</td><td>any attribute</td></tr>
<tr><td><code>layout</code></td><td>::mlir::StringAttr</td><td>string attribute</td></tr>
<tr><td><code>offset</code></td><td>::mlir::IntegerAttr</td><td>64-bit signless integer attribute</td></tr>
<tr><td><code>alignment</code></td><td>::mlir::IntegerAttr</td><td>64-bit signless integer attribute</td></tr>
</table>

#### Results:

| Result | Description |
| :----: | ----------- |
| `output` | memref of dlfloat16 type values

### `zlow.sub` (::onnx_mlir::zlow::ZLowSubOp)

_ZLow sub operation_