Added TensorFlow support to nncf.Tensor
#3106
Conversation
Please update your branch from develop.
@numeric.squeeze.register(tf.Tensor)
def _(a: tf.Tensor, axis: Optional[Union[int, Tuple[int, ...]]] = None) -> tf.Tensor:
    with tf.device(a.device):
I do not see a reason to strictly preserve the device for the TF tensor.
@alexsu52, do you think this is necessary? Device management in TF seems to be largely automatic.
As far as I know, TF has a default device that is used to create any new tensor; the default is GPU if available. The function has to return a tensor on the same device as the input, so I think using with tf.device(a.device): is right.
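For illustration, the registration pattern discussed above can be sketched with functools.singledispatch, which is the mechanism behind @numeric.squeeze.register(tf.Tensor). NumPy stands in for the TF backend here so the snippet runs without TensorFlow; the function names are illustrative, not nncf's actual internals.

```python
from functools import singledispatch
from typing import Optional, Tuple, Union

import numpy as np


@singledispatch
def squeeze(a, axis: Optional[Union[int, Tuple[int, ...]]] = None):
    # Fallback when no backend-specific implementation is registered.
    raise NotImplementedError(f"squeeze is not implemented for {type(a)}")


@squeeze.register(np.ndarray)
def _(a: np.ndarray, axis: Optional[Union[int, Tuple[int, ...]]] = None) -> np.ndarray:
    # A TF implementation would additionally wrap the call in
    # `with tf.device(a.device):` to keep the result on the input's device.
    return np.squeeze(a, axis=axis)


print(squeeze(np.ones((1, 3, 1))).shape)
```

Dispatch happens on the type of the first argument, so each backend file only needs to register its own implementations against the shared generic function.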
nncf/tensor/functions/tf_numeric.py (outdated diff)
    axis: Union[int, Tuple[int, ...]] = None,
    keepdims: bool = False,
) -> tf.Tensor:
    numpy_a = np.array(a)
Can we add the tensorflow_probability package for calculating the median here? @alexsu52
I don't think this is a good idea because it adds a new dependency to NNCF.
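The dependency-free approach the diff takes (numpy_a = np.array(a)) can be sketched as follows. The function name tf_median_via_numpy is hypothetical, and plain Python lists stand in for eager tf.Tensor inputs (which np.array also accepts) so the snippet runs without TensorFlow; converting the result back to a TF tensor and placing it on a device is left to the caller.

```python
import numpy as np


def tf_median_via_numpy(a, axis=None, keepdims=False):
    # Round-trip through NumPy instead of pulling in tensorflow_probability.
    # np.array(...) accepts an eager tf.Tensor as well as array-likes.
    numpy_a = np.array(a)
    return np.median(numpy_a, axis=axis, keepdims=keepdims)


print(tf_median_via_numpy([[3.0, 1.0], [2.0, 4.0]], axis=1))
```

The trade-off is a device-to-host copy for GPU tensors, but it avoids adding a new package to NNCF's requirements.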
@@ -112,7 +112,8 @@ def test_operators_tensor(self, op_name):
         assert res.dtype == res_nncf.data.dtype
         assert all(res == res_nncf.data)
         assert isinstance(res_nncf, Tensor)
-        assert res_nncf.device == nncf_tensor_a.device
+        if not (self.backend() == TensorBackend.tf and self.device() == TensorDeviceType.CPU):
+            assert res_nncf.device == nncf_tensor_a.device
Could you briefly explain the reason for these changes, given that you return device-specific tensors for TF?
The result of a binary operation on two CPU TF tensors is a GPU TF tensor by default, which means the following check would fail on CPU.
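The relaxed assertion above can be condensed into a small predicate. The enums below are simplified stand-ins for nncf's actual TensorBackend and TensorDeviceType, and the helper name is illustrative; this is a sketch of the test's logic, not nncf code.

```python
from enum import Enum


class TensorBackend(Enum):  # simplified stand-in for nncf's enum
    numpy = "numpy"
    tf = "tf"


class TensorDeviceType(Enum):  # simplified stand-in for nncf's enum
    CPU = "cpu"
    GPU = "gpu"


def device_check_applies(backend: TensorBackend, device: TensorDeviceType) -> bool:
    # TF may place the result of a CPU-CPU binary op on the GPU when one is
    # available, so the strict device-equality assertion is skipped for that
    # one backend/device combination; all other combinations keep the check.
    return not (backend == TensorBackend.tf and device == TensorDeviceType.CPU)
```

In the template test, the device assertion then runs only when device_check_applies(...) is true.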
@@ -158,7 +164,7 @@ def test_comparison_tensor(self, op_name):
         res = fn(tensor_a, tensor_b)
         res_nncf = fn(nncf_tensor_a, nncf_tensor_b)

-        assert res == res_nncf
+        assert res_nncf == res
Could you clarify why you changed the order here?
With the old order, the comparison operator for tf.Tensor was called here; it tries to convert the nncf.Tensor to a tf.EagerTensor. That causes a problem because it calls the __len__ operator somewhere inside. I tried to implement a __len__ operator for nncf.Tensor as return len(self._data), but the comparison still fails for scalar tensors. Currently I have no idea how to resolve this, so I changed the order here as a quick workaround. I'd appreciate any other suggestions.
Thanks for contributing. Please address my comments.
PS: Sorry for the delay in reviewing.
Changes
- Added tf_numeric.py and tf_linalg.py files with implementations of the methods needed for nncf.Tensor support.
- The __ifloordiv__ operator for nncf.Tensor.

Reason for changes
Currently, TensorFlow tensors are not supported by nncf.Tensor. This prevents #3041 from being done.

Related tickets
#3041

Tests
TestTFNNCFTensorOperators and TestGPUTFNNCFTensorOperators classes were added to tests/tensorflow/test_tensor.py. Some changes were necessary in tests/cross_fw/test_templates/template_test_nncf_tensor.py, mostly related to the different device management in TensorFlow.