Commit c1a9979: Update doc
majianjia committed Jun 7, 2019 (1 parent: 44fbd4b)

Showing 4 changed files with 82 additions and 21 deletions.
README.md: 6 changes (3 additions, 3 deletions)
@@ -81,6 +81,9 @@ Please check [examples](https://github.com/majianjia/nnom/tree/master/examples)
| SoftMax|Beta | SoftMax()| Softmax only has layer API|
| Activation|Beta| Activation()|A layer instance for activation|
| Input/Output |Beta | Input()/Output()| |
+ | Up Sampling | Beta|UpSample()||
+ | Zero Padding | Beta |ZeroPadding()||
+ | Cropping | Beta |Cropping()||

**RNN Layers**

@@ -90,7 +93,6 @@ Please check [examples](https://github.com/majianjia/nnom/tree/master/examples)
| Simple RNN | Under Dev. | SimpleCell()| Under Development |
| Gated Recurrent Network (GRU)| Under Dev. | GRUCell()| Under Development |


**Activations**

Activation can be used by itself as a layer, or can be attached to the previous layer as an ["actail"](docs/A_Temporary_Guide_to_NNoM.md#addictionlly-activation-apis) to reduce memory cost.
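
For example, with the functional model API an activation can be attached as an actail of the previous layer. A minimal sketch following the pattern of the NNoM examples; `c1_w`/`c1_b` are hypothetical weight/bias constants from a generated weights header, and the exact `Conv2D()`/`model.active()` signatures should be checked against your NNoM version:

~~~C
// ReLU attached as an "actail" of the convolution: it runs on the conv
// output in place, so no separate activation layer buffer is needed.
x = model.hook(Conv2D(16, kernel(3, 3), stride(1, 1), PADDING_SAME, &c1_w, &c1_b), x);
x = model.active(act_relu(), x);
~~~
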
@@ -111,7 +113,6 @@ Activation can be used by itself as layer, or can be attached to the previous la…
| Global Max Pooling | Beta|GlobalMaxPool()||
| Global Average Pooling | Beta|GlobalAvgPool()||
| Global Sum Pooling | Beta|GlobalSumPool()|A better alternative to Global Average Pooling before Softmax on MCUs|
- | Up Sampling | Beta|UpSample()||

**Matrix Operations Layers**

@@ -122,7 +123,6 @@ Activation can be used by itself as layer, or can be attached to the previous la…
| Addition | Beta|Add()||
| Subtraction | Beta|Sub()||


## Dependencies

NNoM now uses its local pure C backend implementation by default, so no special dependencies are needed.
docs/api_layers.md: 49 changes (41 additions, 8 deletions)
@@ -104,7 +104,6 @@ This function is for 1D or 2D, multiple-channel depthwise convolution.

When it is used for 1D convolution, H should always be set to 1 in both kernel and stride.
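
As a sketch, a 1D depthwise convolution over a multi-channel signal could look like this (assuming the sequential `model.add()` API used in the NNoM examples; `dw1_w`/`dw1_b` are hypothetical weight/bias constants, and the argument order of `DW_Conv2D()` should be verified against your version):

~~~C
// 1D depthwise convolution: H is fixed to 1 in both kernel and stride,
// so the filter slides only along W (e.g. the time axis of a signal).
model.add(&model, DW_Conv2D(1, kernel(1, 5), stride(1, 2), PADDING_SAME, &dw1_w, &dw1_b));
~~~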


---

## Dense()
@@ -124,7 +123,6 @@ A fully connected layer. It will flatten the data if the last output is multiple-…
- The layer instance
---
## UpSample()
@@ -137,21 +135,55 @@ A basic up sampling, using nearest interpolation

**Arguments**

- - **kernel:** a shape subject return by `kernel()`, the interpolation size.
+ - **kernel:** a shape object returned by `kernel()`, the interpolation size.

**Return**

- The layer instance
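
A minimal usage sketch (assuming the sequential `model.add()` API used in the NNoM examples):

~~~C
// Nearest interpolation with a 2x2 interpolation size: H and W of the
// previous layer's output are doubled; the channel count is unchanged.
model.add(&model, UpSample(kernel(2, 2)));
~~~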

---

## ZeroPadding()

~~~C
nnom_layer_t *ZeroPadding(nnom_border_t pad);
~~~
Pads zeros around the image on each edge (top, bottom, left and right).

**Arguments**

- **pad:** a border object returned by `border()`, containing the top, bottom, left and right padding sizes.

**Return**

- The layer instance
---

## Cropping()

~~~C
nnom_layer_t *Cropping(nnom_border_t pad);
~~~

It crops the input along the spatial dimensions (H and W).

**Arguments**

- **pad:** a border object returned by `border()`, containing the top, bottom, left and right cropping sizes.

**Return**

- The layer instance
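
A short sketch of both border layers together (sequential `model.add()` API as in the NNoM examples; the values are arbitrary):

~~~C
// Pad 2 zero rows at the top/bottom and 1 zero column at the left/right,
// then cut 1 pixel from every edge again further down the model.
model.add(&model, ZeroPadding(border(2, 2, 1, 1)));
/* ... intermediate layers ... */
model.add(&model, Cropping(border(1, 1, 1, 1)));
~~~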

---

## Lambda()

~~~C
// Lambda Layers
- // layer.run() , required
+ // layer.run() , compulsory
// layer.oshape(), optional, call default_output_shape() if left NULL
// layer.free() , optional, called while model is deleting, to free private resources
// parameters , private parameters for run method, left NULL if not needed.
~~~
@@ -165,9 +197,9 @@ Lambda layer is an anonymous layer (interface), which allows user to do customiz…
**Arguments**

- - **(*run)(nnom_layer_t *):** or so called run method, is the method to do the customized operation.
- - **(*oshape)(nnom_layer_t *):** is to calculate the output shape according to the input shape during compiling. If this method is not presented, the input shape will be passed to the output shape.
- - **(*free)(nnom_layer_t *):** is to free the resources allocated by users. this will be called when deleting models. Leave it NULL if no resources need to be released.
+ - **`(*run)(nnom_layer_t *)`:** the so-called run method, which performs the customized operation.
+ - **`(*oshape)(nnom_layer_t *)`:** calculates the output shape from the input shape during compiling. If this method is not present, the input shape is passed through to the output shape.
+ - **`(*free)(nnom_layer_t *)`:** frees the resources allocated by the user; called when the model is being deleted. Leave it NULL if no resources need to be released.
- **parameters:** the pointer to user configurations, which the user can access in all three methods above.
**Return**
@@ -176,6 +208,7 @@ Lambda layer is an anonymous layer (interface), which allows user to do customiz…
**Notes**

- All methods with the return type `nnom_status_t` must return `NN_SUCCESS` for inference to proceed. Any other return value will stop the model's inference.
- When `oshape()` is present, please refer to the examples of other similar layers. The shape passing must be handled carefully.
- `oshape()` is called during compiling, so it can also do work beyond calculating the output shape. An example is `global_pooling_output_shape()`, which fills in the parameters left by `GlobalXXXPool()`.
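
A minimal sketch of a Lambda layer that negates its int8 input (the buffer and shape field paths below are assumptions based on other NNoM layer implementations; verify them against `nnom.h` in your version):

~~~C
// Run method: negate every int8 element, saturating -128 to 127.
static nnom_status_t negate_run(nnom_layer_t *layer)
{
	int8_t *in = (int8_t *)layer->in->mem->blk;   // input buffer (assumed field path)
	int8_t *out = (int8_t *)layer->out->mem->blk; // output buffer (assumed field path)
	uint32_t size = layer->in->shape.h * layer->in->shape.w * layer->in->shape.c;
	for (uint32_t i = 0; i < size; i++)
		out[i] = (in[i] == -128) ? 127 : -in[i];
	return NN_SUCCESS;
}

// oshape and free are left NULL: the output shape defaults to the input
// shape, and no private resources need to be released.
model.add(&model, Lambda(negate_run, NULL, NULL, NULL));
~~~
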
docs/api_properties.md: 37 changes (33 additions, 4 deletions)
@@ -5,8 +5,11 @@ Properties include some basic properties such as shape of the data buffer, Q-for…

---

- ## typesdef
+ ## Typedef

~~~C
#define nnom_shape_data_t uint16_t

typedef struct _nnom_shape
{
nnom_shape_data_t h, w, c;
@@ -29,6 +32,11 @@ typedef struct _nnom_qformat
int8_t n, m;
} nnom_qformat_t;

typedef struct _nnom_border_t
{
nnom_shape_data_t top, bottom, left, right;
} nnom_border_t;

~~~

---
@@ -46,7 +54,7 @@ nnom_shape_t shape(size_t h, size_t w, size_t c);
**Arguments**

- **h:** size of H, or the number of rows, or the y axis of an image.
- - ** w:** size of W, or number of row, or x axis in image.
+ - **w:** size of W, or the number of columns, or the x axis of an image.
- **c:** size of channel, i.e. the number of channels.

**Return**
@@ -66,7 +74,7 @@ Used in pooling or convolutional layers to specify the kernel size.
**Arguments**

- **h:** size of kernel in H, or the number of rows, or the y axis of an image.
- - ** w:** size of kernel in W, or number of row, or x axis in image.
+ - **w:** size of kernel in W, or the number of columns, or the x axis of an image.

**Return**

@@ -85,7 +93,28 @@ Used in pooling or convolutional layers to specify the stride size.
**Arguments**

- **h:** size of stride in H, or the number of rows, or the y axis of an image.
- - ** w:** size of stride in W, or number of row, or x axis in image.
+ - **w:** size of stride in W, or the number of columns, or the x axis of an image.

**Return**

- A shape instance.
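
For instance (a sketch assuming `MaxPool()`'s kernel/stride/padding argument order as used in the NNoM examples):

~~~C
// 2x2 max pooling with a stride of 2 in both H and W. kernel() and
// stride() both return a nnom_shape_t; the two names only signal intent.
model.add(&model, MaxPool(kernel(2, 2), stride(2, 2), PADDING_VALID));
~~~
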
---

## border()

~~~C
nnom_border_t border(size_t top, size_t bottom, size_t left, size_t right);
~~~

It packs the 4 padding/cropping values into a border object.

**Arguments**

- **top:** the padding/cropping at the top edge of the image.
- **bottom:** the padding/cropping at the bottom edge of the image.
- **left:** the padding/cropping at the left edge of the image.
- **right:** the padding/cropping at the right edge of the image.

**Return**

docs/index.md: 11 changes (5 additions, 6 deletions)
@@ -104,12 +104,15 @@ Check [Porting and optimising Guide](Porting_and_Optimisation_Guide.md) for deta…
| Fully-connected | Beta| Dense()| |
| Lambda |Alpha| Lambda() |single input / single output anonymous operation|
| Batch Normalization |Beta | N/A| This layer is merged into the last Conv by the script|
- | Input/Output |Beta | Input()/Output()| |
| Flatten|Beta | Flatten()| |
| SoftMax|Beta | SoftMax()| Softmax only has layer API|
| Activation|Beta| Activation()|A layer instance for activation|
+ | Input/Output |Beta | Input()/Output()| |
+ | Up Sampling | Beta|UpSample()||
+ | Zero Padding | Beta |ZeroPadding()||
+ | Cropping | Beta |Cropping()||

- ** RNN Layers **
+ **RNN Layers**

| Layers | Status |Layer API|Comments|
| ------ |-- |--|--|
@@ -137,7 +140,6 @@ Activation can be used by itself as layer, or can be attached to the previous la…
| Global Max Pooling | Beta|GlobalMaxPool()||
| Global Average Pooling | Beta|GlobalAvgPool()||
| Global Sum Pooling | Beta|GlobalSumPool()|A better alternative to Global Average Pooling before Softmax on MCUs|
- | Up Sampling | Beta|UpSample()||

**Matrix Operations Layers**

@@ -147,6 +149,3 @@ Activation can be used by itself as layer, or can be attached to the previous la…
| Multiplication |Beta |Mult()||
| Addition | Beta|Add()||
| Subtraction | Beta|Sub()||


