Tensor for PTQ #2058

Merged
merged 50 commits into develop from ad/ptq_tensor
Oct 9, 2023
Changes from 16 commits
Commits: 50
17e91ca
Add amax, amin, unstack functions
AlexanderDokuchaev Aug 5, 2023
d87a634
unstack
AlexanderDokuchaev Aug 5, 2023
20b75f6
unused func
AlexanderDokuchaev Aug 5, 2023
d82af3d
moveaxis
AlexanderDokuchaev Aug 7, 2023
427ffa3
mean
AlexanderDokuchaev Aug 8, 2023
4eefd0e
round
AlexanderDokuchaev Aug 16, 2023
5038dab
tensor for bc
AlexanderDokuchaev Aug 17, 2023
113a047
tensor fbc
AlexanderDokuchaev Aug 18, 2023
519a2e6
linter
AlexanderDokuchaev Aug 18, 2023
03b1bc1
fix pt
AlexanderDokuchaev Aug 18, 2023
82c699b
linter
AlexanderDokuchaev Aug 18, 2023
cc5bc75
Merge branch 'develop' into ad/ptq_tensor
AlexanderDokuchaev Aug 18, 2023
982f579
fix
AlexanderDokuchaev Aug 18, 2023
cf47cbd
fix
AlexanderDokuchaev Aug 18, 2023
9cc7582
fix
AlexanderDokuchaev Aug 18, 2023
840cb6a
fix
AlexanderDokuchaev Aug 19, 2023
c78668d
fix comments
AlexanderDokuchaev Sep 4, 2023
323fe0e
Merge branch 'develop' into ad/ptq_tensor
AlexanderDokuchaev Sep 4, 2023
5d3ebce
Disable warning on divide operator of numpy
AlexanderDokuchaev Sep 4, 2023
fe3dea1
fix
AlexanderDokuchaev Sep 5, 2023
2d5567b
fix name
AlexanderDokuchaev Sep 5, 2023
e29d568
statistical_functions.py
AlexanderDokuchaev Sep 5, 2023
4816448
fbc remote tensor processor
AlexanderDokuchaev Sep 7, 2023
9f1fe47
del __all__
AlexanderDokuchaev Sep 7, 2023
1fde239
allclose
AlexanderDokuchaev Sep 12, 2023
8a54875
Merge branch 'develop' into ad/ptq_tensor
AlexanderDokuchaev Sep 22, 2023
93eef06
use tensor.functions in tests
AlexanderDokuchaev Sep 22, 2023
55a0d86
-
AlexanderDokuchaev Sep 26, 2023
cc8d0ed
typehints
AlexanderDokuchaev Sep 26, 2023
5cd62cf
typehints
AlexanderDokuchaev Sep 26, 2023
29f80bd
Fix docstring
AlexanderDokuchaev Sep 26, 2023
6c21d17
hints
AlexanderDokuchaev Sep 26, 2023
7bd5265
float typehint
AlexanderDokuchaev Sep 26, 2023
33bb440
lint
AlexanderDokuchaev Sep 26, 2023
91df596
save device in unify_statistics
AlexanderDokuchaev Sep 27, 2023
cfd3f04
mean_per_channel
AlexanderDokuchaev Sep 27, 2023
11c6cba
_dispatch_list
AlexanderDokuchaev Sep 27, 2023
63d7a16
test_fn_mean_per_channel_incorrect_axis
AlexanderDokuchaev Sep 27, 2023
6d62e44
_dispatch_list in readme
AlexanderDokuchaev Sep 27, 2023
9767308
lint
AlexanderDokuchaev Sep 27, 2023
5393fa3
Merge branch 'develop' into ad/ptq_tensor
AlexanderDokuchaev Sep 27, 2023
35ee9a4
Add check to device in tests
AlexanderDokuchaev Sep 28, 2023
19640ad
functions to tensor namespace
AlexanderDokuchaev Oct 2, 2023
3ac8523
update import mean_per_channel
AlexanderDokuchaev Oct 2, 2023
94ed9a1
fix
AlexanderDokuchaev Oct 2, 2023
4090048
linter
AlexanderDokuchaev Oct 2, 2023
1feb2f5
-
AlexanderDokuchaev Oct 4, 2023
4988067
Merge branch 'develop' into ad/ptq_tensor
AlexanderDokuchaev Oct 4, 2023
43eeba0
dispath_list
AlexanderDokuchaev Oct 5, 2023
d5de2a4
typehint
AlexanderDokuchaev Oct 9, 2023
174 changes: 147 additions & 27 deletions nncf/experimental/tensor/functions.py
@@ -48,7 +48,7 @@ def device(a: TTensor) -> TensorDeviceType:

@functools.singledispatch
@_tensor_guard
def squeeze(a: TTensor, axis: Optional[Union[int, Tuple[int]]] = None) -> TTensor:
def squeeze(a: TTensor, axis: Optional[Union[int, Tuple[int]]] = None) -> Tensor:
Contributor:

Could you please explain why you changed this?
Collaborator (Author):

Functions in this file always return a Tensor or a list of Tensors.
Contributor:

I think this is not entirely correct: these functions return a Tensor object only if there is no registered version for the type of the passed argument.
Collaborator (Author):

Updated all return types to Tensor so that editor autocomplete (the pop-up suggestions offered as you type the first characters of a name) works correctly, and typed the first argument as Tensor as well. I'm not sure about the second argument, though; I added it as Union[torch.Tensor, float], because it can be used like torch.tensor(1) + 1 or fns.min(torch.tensor(1), 0). Do you have any suggestions about it?
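The dispatch pattern under discussion can be sketched in isolation. The wrapper class and function below are a hypothetical minimal reconstruction, not the actual NNCF classes: the generic singledispatch entry point unwraps a Tensor, dispatches on the raw data type, and re-wraps the result, which is why the generic functions can be annotated as returning Tensor.

```python
import functools
from typing import Any

class Tensor:
    """Hypothetical thin wrapper around a backend array (here: a plain list)."""
    def __init__(self, data: Any):
        self.data = data

@functools.singledispatch
def flatten(a: Any) -> "Tensor":
    # Generic entry point: unwrap the Tensor, dispatch on the raw type, re-wrap.
    if isinstance(a, Tensor):
        return Tensor(flatten(a.data))
    raise NotImplementedError(f"flatten is not implemented for {type(a)}")

@flatten.register(list)
def _(a: list) -> list:
    # Backend implementation returns the raw backend type;
    # the generic entry point wraps it back into a Tensor.
    return [item for row in a for item in row]

result = flatten(Tensor([[1, 2], [3, 4]]))
print(result.data)  # [1, 2, 3, 4]
```

Calling with a raw backend object (here a plain list) returns the raw type instead, matching the reviewer's observation that only the generic path wraps its result.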

"""
Remove axes of length one from a.

@@ -63,7 +63,7 @@ def squeeze(a: TTensor, axis: Optional[Union[int, Tuple[int]]] = None) -> TTenso

@functools.singledispatch
@_tensor_guard
def flatten(a: TTensor) -> TTensor:
def flatten(a: TTensor) -> Tensor:
"""
Return a copy of the tensor collapsed into one dimension.

@@ -75,7 +75,7 @@ def flatten(a: TTensor) -> TTensor:

@functools.singledispatch
@_tensor_guard
def max(a: TTensor, axis: Optional[Union[int, Tuple[int]]] = None) -> TTensor: # pylint: disable=redefined-builtin
def max(a: TTensor, axis: Optional[Union[int, Tuple[int]]] = None) -> Tensor: # pylint: disable=redefined-builtin
"""
Return the maximum of an array or maximum along an axis.

@@ -88,7 +88,20 @@ def max(a: TTensor, axis: Optional[Union[int, Tuple[int]]] = None) -> TTensor:

@functools.singledispatch
@_tensor_guard
def min(a: TTensor, axis: Optional[Union[int, Tuple[int]]] = None) -> TTensor: # pylint: disable=redefined-builtin
def amax(a: TTensor, axis: Optional[Union[int, Tuple[int]]] = None) -> Tensor: # pylint: disable=redefined-builtin
"""
Return the maximum of an array or maximum along an axis.

:param a: The input tensor.
:param axis: Axis or axes along which to operate. By default, flattened input is used.
:return: Maximum of a.
"""
return Tensor(amax(a.data, axis))


@functools.singledispatch
@_tensor_guard
def min(a: TTensor, axis: Optional[Union[int, Tuple[int]]] = None) -> Tensor: # pylint: disable=redefined-builtin
"""
Return the minimum of an array or minimum along an axis.

@@ -101,7 +114,20 @@ def min(a: TTensor, axis: Optional[Union[int, Tuple[int]]] = None) -> TTensor:

@functools.singledispatch
@_tensor_guard
def abs(a: TTensor) -> TTensor: # pylint: disable=redefined-builtin
def amin(a: TTensor, axis: Optional[Union[int, Tuple[int]]] = None) -> Tensor: # pylint: disable=redefined-builtin
"""
Return the minimum of an array or minimum along an axis.

:param a: The input tensor.
:param axis: Axis or axes along which to operate. By default, flattened input is used.
:return: Minimum of a.
"""
return Tensor(amin(a.data, axis))


@functools.singledispatch
@_tensor_guard
def abs(a: TTensor) -> Tensor: # pylint: disable=redefined-builtin
"""
Calculate the absolute value element-wise.

@@ -113,7 +139,7 @@ def abs(a: TTensor) -> TTensor:  # pylint: disable=redefined-builtin

@functools.singledispatch
@_tensor_guard
def astype(a: TTensor, data_type: TensorDataType) -> TTensor:
def astype(a: TTensor, data_type: TensorDataType) -> Tensor:
"""
Copy of the tensor, cast to a specified type.

@@ -139,7 +165,7 @@ def dtype(a: TTensor) -> TensorDataType:

@functools.singledispatch
@_tensor_guard
def reshape(a: TTensor, shape: List[int]) -> TTensor:
def reshape(a: TTensor, shape: List[int]) -> Tensor:
"""
Gives a new shape to a tensor without changing its data.

@@ -152,7 +178,7 @@ def reshape(a: TTensor, shape: List[int]) -> TTensor:

@functools.singledispatch
@_tensor_guard
def all(a: TTensor, axis: Optional[Union[int, Tuple[int]]] = None) -> TTensor: # pylint: disable=redefined-builtin
def all(a: TTensor, axis: Optional[Union[int, Tuple[int]]] = None) -> Tensor: # pylint: disable=redefined-builtin
"""
Test whether all tensor elements along a given axis evaluate to True.

@@ -165,7 +191,7 @@ def all(a: TTensor, axis: Optional[Union[int, Tuple[int]]] = None) -> TTensor:

@functools.singledispatch
@_tensor_guard
def allclose(a: TTensor, b: TTensor, rtol: float = 1e-05, atol: float = 1e-08, equal_nan: bool = False) -> TTensor:
def allclose(a: TTensor, b: TTensor, rtol: float = 1e-05, atol: float = 1e-08, equal_nan: bool = False) -> Tensor:
"""
Returns True if two arrays are element-wise equal within a tolerance.

@@ -191,7 +217,7 @@ def allclose(a: TTensor, b: TTensor, rtol: float = 1e-05, atol: float = 1e-08, e

@functools.singledispatch
@_tensor_guard
def any(a: TTensor, axis: Optional[Union[int, Tuple[int]]] = None) -> TTensor: # pylint: disable=redefined-builtin
def any(a: TTensor, axis: Optional[Union[int, Tuple[int]]] = None) -> Tensor: # pylint: disable=redefined-builtin
"""
Test whether any tensor elements along a given axis evaluate to True.

@@ -204,7 +230,7 @@ def any(a: TTensor, axis: Optional[Union[int, Tuple[int]]] = None) -> TTensor:

@functools.singledispatch
@_tensor_guard
def count_nonzero(a: TTensor, axis: Optional[Union[int, Tuple[int]]] = None) -> TTensor:
def count_nonzero(a: TTensor, axis: Optional[Union[int, Tuple[int]]] = None) -> Tensor:
"""
Counts the number of non-zero values in the tensor input.

@@ -218,7 +244,7 @@ def count_nonzero(a: TTensor, axis: Optional[Union[int, Tuple[int]]] = None) ->

@functools.singledispatch
@_tensor_guard
def isempty(a: TTensor) -> TTensor:
def isempty(a: TTensor) -> Tensor:
"""
Return True if input tensor is empty.

@@ -230,7 +256,7 @@ def isempty(a: TTensor) -> TTensor:

@functools.singledispatch
@_tensor_guard
def isclose(a: TTensor, b: TTensor, rtol: float = 1e-05, atol: float = 1e-08, equal_nan: bool = False) -> TTensor:
def isclose(a: TTensor, b: TTensor, rtol: float = 1e-05, atol: float = 1e-08, equal_nan: bool = False) -> Tensor:
"""
Returns a boolean array where two arrays are element-wise equal within a tolerance.

@@ -256,7 +282,7 @@ def isclose(a: TTensor, b: TTensor, rtol: float = 1e-05, atol: float = 1e-08, eq

@functools.singledispatch
@_tensor_guard
def maximum(x1: TTensor, x2: TTensor) -> TTensor:
def maximum(x1: TTensor, x2: TTensor) -> Tensor:
"""
Element-wise maximum of tensor elements.

@@ -269,7 +295,7 @@ def maximum(x1: TTensor, x2: TTensor) -> TTensor:

@functools.singledispatch
@_tensor_guard
def minimum(x1: TTensor, x2: TTensor) -> TTensor:
def minimum(x1: TTensor, x2: TTensor) -> Tensor:
"""
Element-wise minimum of tensor elements.

@@ -282,7 +308,7 @@ def minimum(x1: TTensor, x2: TTensor) -> TTensor:

@functools.singledispatch
@_tensor_guard
def ones_like(a: TTensor) -> TTensor:
def ones_like(a: TTensor) -> Tensor:
"""
Return a tensor of ones with the same shape and type as a given tensor.

@@ -294,7 +320,7 @@ def ones_like(a: TTensor) -> TTensor:

@functools.singledispatch
@_tensor_guard
def where(condition: TTensor, x: TTensor, y: TTensor) -> TTensor:
def where(condition: TTensor, x: TTensor, y: TTensor) -> Tensor:
"""
Return elements chosen from x or y depending on condition.

@@ -314,7 +340,7 @@ def where(condition: TTensor, x: TTensor, y: TTensor) -> TTensor:

@functools.singledispatch
@_tensor_guard
def zeros_like(a: TTensor) -> TTensor:
def zeros_like(a: TTensor) -> Tensor:
"""
Return a tensor of zeros with the same shape and type as a given tensor.

@@ -324,25 +350,119 @@ def zeros_like(a: TTensor) -> TTensor:
return Tensor(zeros_like(a.data))


@functools.singledispatch
def stack(x: List[TTensor], axis: int = 0) -> Tensor:
"""
Stacks a list or deque of rank-R Tensors into one rank-(R+1) Tensor.

:param x: List or deque of Tensors.
:param axis: The axis to stack along.
:return: Stacked Tensor.
"""
if isinstance(x, List):
unwrapped_x = [i.data for i in x]
# singledispatch cannot dispatch function by element in a list
res = stack.dispatch(type(unwrapped_x[0]))(unwrapped_x, axis=axis)
return Tensor(res)
raise NotImplementedError(f"Function `stack` is not implemented for {type(x)}")
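The code comment in `stack` notes that singledispatch cannot dispatch by the element of a list: it keys only on the type of the first argument itself. The `fn.dispatch(cls)` workaround can be illustrated with a small self-contained example (the `double` function here is hypothetical, not part of NNCF):

```python
import functools

@functools.singledispatch
def double(x):
    raise NotImplementedError(f"double is not implemented for {type(x)}")

@double.register(int)
def _(x: int) -> int:
    return 2 * x

# double([1, 2, 3]) would raise NotImplementedError, because dispatch keys
# on the type of the list itself.  `.dispatch` fetches the implementation
# registered for the *element* type explicitly, as `stack` does above.
values = [1, 2, 3]
impl = double.dispatch(type(values[0]))  # the implementation registered for int
print([impl(v) for v in values])  # [2, 4, 6]
```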


@functools.singledispatch
@_tensor_guard
def unstack(a: Tensor, axis: int = 0) -> List[Tensor]:
"""
Unstack a Tensor into a list of Tensors.

:param a: Tensor to unstack.
:param axis: The axis to unstack along.
:return: List of Tensors.
"""
res = unstack(a.data, axis=axis)
return [Tensor(i) for i in res]
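The unstack semantics can be sketched with plain NumPy (an assumed backend here; the real function dispatches to a registered per-backend implementation): split along `axis`, then drop that axis from every piece.

```python
import numpy as np

def unstack(a: np.ndarray, axis: int = 0) -> list:
    # Split into a.shape[axis] single-slice pieces, then squeeze out the axis.
    return [np.squeeze(p, axis=axis) for p in np.split(a, a.shape[axis], axis=axis)]

a = np.array([[1, 2], [3, 4]])
parts = unstack(a, axis=0)
print([p.tolist() for p in parts])  # [[1, 2], [3, 4]]

# stack is the inverse operation: re-stacking restores the original array.
assert np.array_equal(np.stack(parts, axis=0), a)
```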


@functools.singledispatch
@_tensor_guard
def moveaxis(a: Tensor, source: Union[int, List[int]], destination: Union[int, List[int]]) -> Tensor:
"""
Move axes of an array to new positions.

:param a: The array whose axes should be reordered.
:param source: Original positions of the axes to move. These must be unique.
:param destination: Destination positions for each of the original axes. These must also be unique.
:return: Array with moved axes.
"""
return Tensor(moveaxis(a.data, source, destination))
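For reference, the semantics being wrapped here (shown with NumPy, an assumed backend): `moveaxis` reorders dimensions without changing the underlying data.

```python
import numpy as np

x = np.zeros((3, 4, 5))
y = np.moveaxis(x, 0, -1)  # move axis 0 to the last position
print(y.shape)  # (4, 5, 3)
```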


@functools.singledispatch
@_tensor_guard
def mean(a: Tensor, axis: Union[int, List[int]] = None, keepdims: bool = False) -> Tensor:
"""
Compute the arithmetic mean along the specified axis.

:param a: Array containing numbers whose mean is desired.
:param axis: Axis or axes along which the means are computed.
:param keepdims: If True, the reduced axes are retained in the result as dimensions with size one.
:return: Array containing the mean values.
"""
return Tensor(mean(a.data, axis, keepdims))


@functools.singledispatch
@_tensor_guard
def round(a: Tensor, decimals=0) -> Tensor: # pylint: disable=redefined-builtin
"""
Evenly round to the given number of decimals.

:param a: Input data.
:param decimals: Number of decimal places to round to (default: 0). If decimals is negative,
it specifies the number of positions to the left of the decimal point.
:return: An array of the same type as a, containing the rounded values.
"""
return Tensor(round(a.data, decimals))


def mean_per_channel(x: Tensor, axis: int) -> Tensor:
"""
Computes the mean of elements across given channel dimension of Tensor.

:param x: Tensor to reduce.
:param axis: The channel dimension to reduce over.
:return: Reduced Tensor.
"""
if len(x.shape) < 3:
return Tensor(mean(x.data, axis=0))
x = moveaxis(x.data, axis, 1)
t = x.reshape([x.shape[0], x.shape[1], -1])
return Tensor(mean(t, axis=(0, 2)))
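The reshape-then-reduce trick in `mean_per_channel` can be checked with a standalone NumPy re-implementation (hypothetical, mirroring the logic above rather than calling the dispatched functions): the channel axis is moved to position 1, all remaining dimensions are flattened, and the mean is taken over everything except the channel axis.

```python
import numpy as np

def mean_per_channel(x: np.ndarray, axis: int) -> np.ndarray:
    if x.ndim < 3:
        return np.mean(x, axis=0)
    x = np.moveaxis(x, axis, 1)                 # channel dim to position 1
    t = x.reshape(x.shape[0], x.shape[1], -1)   # (batch, channel, rest)
    return np.mean(t, axis=(0, 2))              # reduce batch and spatial dims

x = np.arange(24, dtype=float).reshape(2, 3, 2, 2)  # (N=2, C=3, H=2, W=2)
print(mean_per_channel(x, axis=1))  # one mean per channel: [ 7.5 11.5 15.5]
```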


__all__ = [
"abs",
"all",
"allclose",
"amax",
"amin",
"any",
"astype",
"count_nonzero",
"device",
"flatten",
"isclose",
"isempty",
"max",
"maximum",
"mean",
"mean_per_channel",
"min",
"minimum",
"moveaxis",
"ones_like",
"reshape",
"round",
"squeeze",
"where",
"zeros_like",
]