Tensor for PTQ #2058

Merged: 50 commits, Oct 9, 2023 (changes shown below are from 42 commits)

Commits
17e91ca
Add amax, amin, unstack functions
AlexanderDokuchaev Aug 5, 2023
d87a634
unstack
AlexanderDokuchaev Aug 5, 2023
20b75f6
unused func
AlexanderDokuchaev Aug 5, 2023
d82af3d
moveaxis
AlexanderDokuchaev Aug 7, 2023
427ffa3
mean
AlexanderDokuchaev Aug 8, 2023
4eefd0e
round
AlexanderDokuchaev Aug 16, 2023
5038dab
tensor for bc
AlexanderDokuchaev Aug 17, 2023
113a047
tensor fbc
AlexanderDokuchaev Aug 18, 2023
519a2e6
linter
AlexanderDokuchaev Aug 18, 2023
03b1bc1
fix pt
AlexanderDokuchaev Aug 18, 2023
82c699b
linter
AlexanderDokuchaev Aug 18, 2023
cc5bc75
Merge branch 'develop' into ad/ptq_tensor
AlexanderDokuchaev Aug 18, 2023
982f579
fix
AlexanderDokuchaev Aug 18, 2023
cf47cbd
fix
AlexanderDokuchaev Aug 18, 2023
9cc7582
fix
AlexanderDokuchaev Aug 18, 2023
840cb6a
fix
AlexanderDokuchaev Aug 19, 2023
c78668d
fix comments
AlexanderDokuchaev Sep 4, 2023
323fe0e
Merge branch 'develop' into ad/ptq_tensor
AlexanderDokuchaev Sep 4, 2023
5d3ebce
Disable warning on divide operator of numpy
AlexanderDokuchaev Sep 4, 2023
fe3dea1
fix
AlexanderDokuchaev Sep 5, 2023
2d5567b
fix name
AlexanderDokuchaev Sep 5, 2023
e29d568
statistical_functions.py
AlexanderDokuchaev Sep 5, 2023
4816448
fbc remote tensor processor
AlexanderDokuchaev Sep 7, 2023
9f1fe47
del __all__
AlexanderDokuchaev Sep 7, 2023
1fde239
allclose
AlexanderDokuchaev Sep 12, 2023
8a54875
Merge branch 'develop' into ad/ptq_tensor
AlexanderDokuchaev Sep 22, 2023
93eef06
use tensor.functions in tests
AlexanderDokuchaev Sep 22, 2023
55a0d86
-
AlexanderDokuchaev Sep 26, 2023
cc8d0ed
typehints
AlexanderDokuchaev Sep 26, 2023
5cd62cf
typehints
AlexanderDokuchaev Sep 26, 2023
29f80bd
Fix docstring
AlexanderDokuchaev Sep 26, 2023
6c21d17
hints
AlexanderDokuchaev Sep 26, 2023
7bd5265
float typehint
AlexanderDokuchaev Sep 26, 2023
33bb440
lint
AlexanderDokuchaev Sep 26, 2023
91df596
save device in unify_statistics
AlexanderDokuchaev Sep 27, 2023
cfd3f04
mean_per_channel
AlexanderDokuchaev Sep 27, 2023
11c6cba
_dispatch_list
AlexanderDokuchaev Sep 27, 2023
63d7a16
test_fn_mean_per_channel_incorrect_axis
AlexanderDokuchaev Sep 27, 2023
6d62e44
_dispatch_list in readme
AlexanderDokuchaev Sep 27, 2023
9767308
lint
AlexanderDokuchaev Sep 27, 2023
5393fa3
Merge branch 'develop' into ad/ptq_tensor
AlexanderDokuchaev Sep 27, 2023
35ee9a4
Add check to device in tests
AlexanderDokuchaev Sep 28, 2023
19640ad
functions to tensor namespace
AlexanderDokuchaev Oct 2, 2023
3ac8523
update import mean_per_channel
AlexanderDokuchaev Oct 2, 2023
94ed9a1
fix
AlexanderDokuchaev Oct 2, 2023
4090048
linter
AlexanderDokuchaev Oct 2, 2023
1feb2f5
-
AlexanderDokuchaev Oct 4, 2023
4988067
Merge branch 'develop' into ad/ptq_tensor
AlexanderDokuchaev Oct 4, 2023
43eeba0
dispath_list
AlexanderDokuchaev Oct 5, 2023
d5de2a4
typehint
AlexanderDokuchaev Oct 9, 2023
@@ -0,0 +1,30 @@
# Copyright (c) 2023 Intel Corporation
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from nncf.experimental.tensor import Tensor
from nncf.experimental.tensor import functions as fns


def mean_per_channel(x: Tensor, axis: int) -> Tensor:
    """
    Computes the per-channel mean of a Tensor: elements are averaged over every
    dimension except the given channel dimension.

    :param x: Tensor to reduce.
    :param axis: The channel dimension that is kept; all other dimensions are reduced.
    :return: Reduced Tensor.
    """
    if len(x.shape) < 3:
        return fns.mean(x, axis=0)
    pos_axis = axis + x.ndim if axis < 0 else axis
    if pos_axis < 0 or pos_axis >= x.ndim:
        raise ValueError(f"axis {axis} is out of bounds for array of dimension {x.ndim}")
    axis = tuple(i for i in range(x.ndim) if i != pos_axis)
    return fns.mean(x, axis=axis)
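For illustration, the helper can be used like this (a sketch: the import of `mean_per_channel` is a placeholder because the target module path is not shown in this hunk, and the numpy backend is assumed):

```python
import numpy as np

from nncf.experimental.tensor import Tensor
# placeholder import; the actual path is whichever module this diff adds
# from <new_module> import mean_per_channel

x = Tensor(np.arange(12, dtype=np.float32).reshape(2, 3, 2))  # (N, C, D) layout

# Reduce every dimension except the channel axis (axis=1),
# producing one mean value per channel.
mean_per_channel(x, axis=1)  # Tensor(array([3.5, 5.5, 7.5], dtype=float32))
```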
38 changes: 24 additions & 14 deletions nncf/experimental/tensor/README.md
@@ -6,7 +6,7 @@ making them more portable and reusable.

## Usage

The main idea is common algorithms should use wrapped tensors and provide to backend-specific function unwrapped tensor.
Common algorithms should use wrapped tensors and provide the unwrapped tensor to the backend-specific function.

### Initialization Tensor

@@ -32,6 +32,8 @@ tensor_b = Tensor(np.array([1,2]))
tensor_a + tensor_b # Tensor(array([2, 4]))
```

**NOTE** Division operations in the numpy backend are performed with numpy warnings disabled, so the behavior is the same across all backends.
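One way to get this behavior is to wrap the division in `np.errstate` (a sketch of the idea; whether the backend uses exactly this mechanism is an implementation detail):

```python
import numpy as np

a = np.array([1.0, 0.0])
b = np.array([0.0, 0.0])

# Silence numpy's divide-by-zero / invalid-value warnings so the numpy backend
# behaves like backends that stay quiet on such divisions.
with np.errstate(invalid="ignore", divide="ignore"):
    result = a / b  # array([inf, nan]), no RuntimeWarning emitted
```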

### Comparison operators

All comparison operators are overridden to operate on the wrapped object and return `Tensor`.
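A minimal illustration of this behavior, assuming the numpy backend (the results in the comments are the expected wrapped values):

```python
import numpy as np

from nncf.experimental.tensor import Tensor

a = Tensor(np.array([1, 2]))
b = Tensor(np.array([2, 2]))

a < b   # Tensor(array([ True, False]))
a == b  # Tensor(array([False,  True]))
```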
@@ -55,16 +57,16 @@ nncf_tensor.max() # Tensor(2)
All available functions can be found in [functions.py](functions.py).

```python
from nncf.experimental.tensor import functions
functions.max(nncf_tensor) # Tensor(2)
from nncf.experimental.tensor import functions as fns
fns.max(nncf_tensor) # Tensor(2)
```

**NOTE** A function requires at least one positional argument, which is used to dispatch the call
to the appropriate implementation depending on the type of that argument.

```python
functions.max(nncf_tensor) # Correct
functions.max(a=nncf_tensor) # TypeError: wrapper requires at least 1 positional argument
fns.max(nncf_tensor) # Correct
fns.max(a=nncf_tensor) # TypeError: wrapper requires at least 1 positional argument
```
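This constraint comes from `functools.singledispatch`, on which [functions.py](functions.py) is built: dispatch uses the type of the first positional argument. A standalone illustration, independent of NNCF:

```python
import functools


@functools.singledispatch
def describe(a):
    raise NotImplementedError(f"`describe` is not implemented for {type(a)}")


@describe.register(int)
def _(a):
    return "int"


describe(1)    # "int" -- dispatched on the type of the first positional argument
describe(a=1)  # TypeError: describe requires at least 1 positional argument
```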

### Loop over Tensor
@@ -100,7 +102,7 @@ tensor_a[0:2] # Tensor(array([[1],[2]]))
class Tensor:
    ...
    def foo(self, arg1: Type) -> "Tensor":
        return functions.foo(self, arg1)
        return fns.foo(self, arg1)
```

2. Add function to [functions.py](functions.py)
@@ -120,28 +122,36 @@ tensor_a[0:2] # Tensor(array([[1],[2]]))
    raise NotImplementedError(f"Function `foo` is not implemented for {type(a)}")
```

3. Add function name to `__all__` in [function.py](function.py)
**NOTE** When the first argument has type `List[Tensor]`, use the `_dispatch_list` function. It dispatches the call based on the type of the first element of the list.

```python
@functools.singledispatch
def foo(x: List[Tensor], axis: int = 0) -> Tensor:
    if isinstance(x, List):
        unwrapped_x = [i.data for i in x]
        return Tensor(_dispatch_list(foo, unwrapped_x, axis=axis))
    raise NotImplementedError(f"Function `foo` is not implemented for {type(x)}")
```
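A possible shape of such a helper, assuming the dispatched function was created with `functools.singledispatch` (a sketch for illustration; the actual `_dispatch_list` in [functions.py](functions.py) may differ):

```python
from typing import Any, Callable, List


def _dispatch_list(fn: Callable, tensor_list: List[Any], *args: Any, **kwargs: Any) -> Any:
    # Pick the implementation registered for the type of the first list element
    # and call it with the whole (unwrapped) list.
    dispatched_fn = fn.dispatch(type(tensor_list[0]))
    return dispatched_fn(tensor_list, *args, **kwargs)
```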

4. Add backend specific implementation of method to:
3. Add a backend-specific implementation of the method to:

- [numpy_function.py](numpy_function.py)
- [numpy_functions.py](numpy_functions.py)

```python
@functions.foo.register(np.ndarray)
@functions.foo.register(np.number)
@_register_numpy_types(fns.foo)
def _(a: TType, arg1: Type) -> np.ndarray:
    return np.foo(a, arg1)
```

- [torch_function.py](torch_function.py)
- [torch_functions.py](torch_functions.py)

```python
@functions.foo.register(torch.Tensor)
@fns.foo.register(torch.Tensor)
def _(a: torch.Tensor, arg1: Type) -> torch.Tensor:
    return torch.foo(a, arg1)
```
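For reference, the `_register_numpy_types` decorator used in the numpy example above could look roughly like this (an assumption for illustration; see [numpy_functions.py](numpy_functions.py) for the real helper). It registers one implementation for both plain arrays and numpy scalar types:

```python
from typing import Callable

import numpy as np

NUMPY_TYPES = (np.ndarray, np.generic)  # arrays and numpy scalar types


def _register_numpy_types(singledispatch_fn: Callable) -> Callable:
    """Register a single implementation for every type in NUMPY_TYPES."""

    def decorator(func: Callable) -> Callable:
        for numpy_type in NUMPY_TYPES:
            singledispatch_fn.register(numpy_type)(func)
        return func

    return decorator
```

Once both backends are registered, dispatch for a wrapped tensor is expected to work transparently, e.g.:

```python
import torch

from nncf.experimental.tensor import Tensor
from nncf.experimental.tensor import functions as fns

fns.max(Tensor(torch.tensor([1.0, 2.0])))  # Tensor(tensor(2.)) -- torch implementation is used
```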

5. Add test of method to [test template](tests/shared/test_templates/template_test_nncf_tensor.py) for Tensor class
4. Add test of method to [test template](../../../tests/shared/test_templates/template_test_nncf_tensor.py) for Tensor class
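A rough sketch of how such a templated test can exercise the new function for every backend (the names here are illustrative assumptions, not the actual template's API; each backend's test suite supplies the conversion to its native tensor type):

```python
from nncf.experimental.tensor import Tensor
from nncf.experimental.tensor import functions as fns


class TemplateTestTensorFoo:
    @staticmethod
    def to_tensor(x):
        """Overridden per backend to build a native (numpy/torch) tensor from a list."""
        raise NotImplementedError

    def test_fn_foo(self):
        tensor = Tensor(self.to_tensor([1, 2, 3]))
        result = fns.foo(tensor, 0)  # `foo` stands for the newly added function
        assert isinstance(result, Tensor)
```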

### Add new backend
