WIP: Add tutorials about ragged tensors. #823
base: master
Conversation
A preview can be found at https://csukuangfj.github.io/k2/python_tutorials/ragged/basics.html#
Looks cool!
@csukuangfj Thanks for this tutorial!
TensorFlow has sparse matrices and ragged tensors (see its documentation). PyTorch also has sparse matrices and nested tensors (see its documentation). How do ragged tensors in k2 relate to these?
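For reference, this is roughly what those objects look like (a minimal sketch assuming TensorFlow 2.x and a recent PyTorch; it is not taken from the k2 tutorial):

```python
import tensorflow as tf
import torch

# TensorFlow ragged tensor: rows of different lengths.
rt = tf.ragged.constant([[1, 2], [3], [4, 5, 6]])
print(rt.values)          # flat values: [1 2 3 4 5 6]
print(rt.row_splits)      # [0 2 3 6]
print(rt.value_rowids())  # [0 0 1 2 2 2]

# PyTorch sparse matrix in COO format: explicit row and column indexes.
indices = torch.tensor([[0, 1, 1],   # row indexes
                        [2, 0, 2]])  # column indexes
values = torch.tensor([3.0, 4.0, 5.0])
sp = torch.sparse_coo_tensor(indices, values, (2, 3))
print(sp.to_dense())
```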
We use the same terminology, i.e., row splits, row ids, etc., as the one used in TensorFlow's ragged tensors. A ragged tensor with 2 axes looks similar to a sparse matrix in CSR format, but they are different. From https://en.wikipedia.org/wiki/Sparse_matrix#Compressed_sparse_row_(CSR,_CRS_or_Yale_format), a sparse matrix in CSR format has the following components: an array V holding the nonzero values, an array COL_INDEX holding the column index of each value, and an array ROW_INDEX recording where each row starts and ends within V.
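As a concrete illustration of those three arrays, here is a small sketch using scipy.sparse, which names them data, indices, and indptr:

```python
import numpy as np
from scipy.sparse import csr_matrix

m = csr_matrix(np.array([[0, 0, 3],
                         [4, 0, 5]]))
print(m.data)     # V:         [3 4 5]
print(m.indices)  # COL_INDEX: [2 0 2]
print(m.indptr)   # ROW_INDEX: [0 1 3]
```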
The ROW_INDEX array plays the same role as the row splits of a ragged tensor. However, there is no counterpart of COL_INDEX in a ragged tensor: the entries of a row are simply stored contiguously, without column positions. PyTorch's sparse matrices use the COO format, but they are still matrices with row indexes and column indexes. Also, ragged tensors in k2 are not designed for linear algebra operations, i.e., there are no matrix-vector or matrix-matrix multiplications. Instead, they are designed for efficiently manipulating irregular data structures on GPU.
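To make the terminology concrete, here is a small pure-Python sketch of how row splits and row ids describe a ragged tensor with 2 axes (conceptual only; the actual k2 API is shown in the linked tutorial):

```python
# A ragged "tensor" with 2 axes, i.e., a list of rows of different lengths.
rows = [[1, 2], [3], [4, 5, 6]]

# Flattened values: entries of each row are stored contiguously;
# no column index is kept, unlike CSR.
values = [x for row in rows for x in row]          # [1, 2, 3, 4, 5, 6]

# row_splits[i]..row_splits[i+1] is the range of row i inside `values`.
row_splits = [0]
for row in rows:
    row_splits.append(row_splits[-1] + len(row))   # [0, 2, 3, 6]

# row_ids[k] is the row to which values[k] belongs.
row_ids = [i for i, row in enumerate(rows) for _ in row]  # [0, 0, 1, 2, 2, 2]

print(values, row_splits, row_ids)
```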
Many thanks for the clarification! A humble suggestion: you might consider including this information in the tutorial because I am hardly the last person to ask questions like this.
Force-pushed from 2c20650 to 5fc2189
No description provided.