make [ 1-based for tensors #519

Closed
wants to merge 1 commit

Conversation

@t-kalinowski (Member) commented Mar 13, 2022

closes #516

@t-kalinowski (Member Author)

Still todo:

  • Update the [ tests so that each test is evaluated in three contexts:
    with R values, with eager tensors, and with graph-mode tensors inside a tf_function() (see the sketch after this list).
  • Update the warnings and errors thrown when a 0 index is encountered.
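
Roughly what such a test helper could look like (a sketch only; expect_same_slice is a hypothetical name, not the package’s actual test code):

```r
library(tensorflow)

# Hypothetical helper (illustrative only): evaluate one extraction expression
# against a plain R array, an eager tensor, and a graph-mode tensor traced by
# tf_function(), then check that all three results agree.
expect_same_slice <- function(x, slice_fn) {
  r_result     <- slice_fn(x)                          # plain R semantics
  eager_result <- as.array(slice_fn(tf$constant(x)))   # eager tensor
  graph_fn     <- tf_function(slice_fn)                # graph mode
  graph_result <- as.array(graph_fn(tf$constant(x)))
  stopifnot(isTRUE(all.equal(r_result, eager_result)),
            isTRUE(all.equal(r_result, graph_result)))
}

# e.g., check that a simple slice agrees in all three contexts
expect_same_slice(array(1:24, dim = c(4, 6)), function(x) x[2:3, ])
```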

@t-kalinowski (Member Author) commented Mar 14, 2022

This is slightly more complicated than I initially estimated.
The complication is a slice like x[6:1] in R. There is seemingly no way to translate that to Python without turning the slice end into a None. That’s easy enough with R values or eager tensors, but it is difficult to do in a tf.cond() with placeholder tensors, because tf.cond() requires that all branches return the same types.
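
For concreteness (illustrative values), the trouble with a decreasing slice that reaches the first element:

```r
x <- tf$constant(1:6)

# R semantics:         x[6:1]     selects elements 6, 5, 4, 3, 2, 1 (all, reversed)
# naive translation:   x[5:0:-1]  in Python stops *before* index 0, dropping
#                                 the first element
# needed translation:  x[5::-1]   i.e. the end must become None whenever the
#                                 translated 0-based end index would fall at 0
```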

You can get past the tf.cond() by constructing a potentially empty tf.experimental.Optional with a variant dtype, or by using tf.raw_ops.OptionalNone(), but then Tensor.__getitem__() won’t accept that as input.
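
Roughly what that workaround looks like (a sketch assuming end is a scalar int32 tensor; names are illustrative), and where it dead-ends:

```r
# Both branches now return the same variant dtype, so tf$cond() is satisfied...
maybe_end <- tf$cond(
  tf$equal(end, 0L),
  function() tf$experimental$Optional$empty(tf$TensorSpec(list(), tf$int32)),
  function() tf$experimental$Optional$from_value(end)
)
# ...but Tensor.__getitem__() has no way to consume an Optional, so maybe_end
# still cannot be passed through as the slice end.
```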

There are two ways around this, both with tradeoffs.
One is to wrap the call to Tensor.__getitem__() in a series of tf.cond() or tf.switch() calls, doing something like if end == 0: x[start::step] else: x[start:end:step] for each combination of axes. This doesn’t seem right, and it would likely have significant performance implications by introducing a barrier for async TF operations.
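
A rough sketch of that first option for a single axis (hypothetical helper; a real version would need one of these per combination of axes, and here the Python slice objects are built explicitly via reticulate):

```r
library(tensorflow)
bt <- reticulate::import_builtins()

# Hypothetical single-axis version of option one: choose between an open-ended
# and a bounded slice inside tf$cond(), so that it also works with graph-mode
# (placeholder) bounds.
slice_axis <- function(x, start, end, step) {
  tf$cond(
    tf$equal(end, 0L),
    # a translated end of 0 means "run through the first element": end = None
    function() reticulate::py_get_item(x, bt$slice(start, NULL, step)),
    function() reticulate::py_get_item(x, bt$slice(start, end, step))
  )
}
```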

The other alternative is to manually construct the end_mask bit mask as a tensor and then call tf.strided_slice() directly, bypassing Tensor.__getitem__(). This is a substantial refactoring and will have to wait for the time being.
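
For reference, this is roughly the call that approach would target, shown here with a static mask and illustrative values for the R slice x[6:1] on a 1-d tensor; the idea was to compute end_mask from graph values instead (see the follow-up below):

```r
x <- tf$constant(1:6)

# Translation of the R slice x[6:1]: begin at index 5, stride -1, and ignore
# `end` (mask bit set), so the slice runs as far as possible in the stride
# direction, i.e. through the first element.
res <- tf$strided_slice(
  x,
  begin    = list(5L),
  end      = list(0L),   # ignored because the end_mask bit for this axis is set
  strides  = list(-1L),
  end_mask = 1L          # bit 0 set -> the first (and only) axis is open-ended
)
```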

@t-kalinowski (Member Author)

Follow-up: tf.strided_slice() does not accept tensors for its begin_mask, end_mask, or ellipsis_mask arguments, only ints. This means these arguments must be fully known at tracing time and cannot depend on graph values. Consequently, building up a tf.strided_slice() call in the R [.tensorflow.tensor method is not a viable approach for us here.
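
In other words, the mask arguments behave like op attributes fixed at tracing time rather than op inputs (sketch with assumed names):

```r
# Fine: end_mask is a plain integer, known when the graph is traced.
res <- tf$strided_slice(x, list(5L), list(0L), list(-1L), end_mask = 1L)

# Not possible: a mask computed from graph values is a tensor, and
# tf.strided_slice() rejects tensors for begin_mask / end_mask / ellipsis_mask.
# dynamic_mask <- ...some int32 tensor derived from the translated slice ends...
# res <- tf$strided_slice(x, list(5L), list(0L), list(-1L), end_mask = dynamic_mask)
```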

Successfully merging this pull request may close these issues.

[ should treat Tensors indices as 1-based