e5aae0c [Model] Llama 3B: Include tests with proper setups for main building blocks + minor fix. #123 #124 #125 #126 #165

Main Llama blocks:
- Embeddings
- LM Head
- MLP
- RMS Norm
- Rotary Embeddings
- Self Attention

8d88845 [Model] Llama 3B: Include tests with proper setups for main building blocks + minor fix. #123 #124 #125 #126 #165 (#167)
Some dynamic shapes appear at the TVM level when the pure block is extracted.
Updated the Llama rotary embedding test to avoid the generation of dynamic shapes during the TVM PyTorch frontend conversion (PR: Update the llama rotary embedding test cases #330).
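For context, here is a minimal sketch (not the actual test code from PR #330) of the kind of change that avoids dynamic shapes: cos/sin tables are precomputed from a fixed `max_seq_len` buffer instead of being derived from runtime tensor shapes, so the TVM PyTorch frontend sees only static dimensions. All module and parameter names below are illustrative.

```python
import torch

# Illustrative rotary-embedding wrapper. Computing frequencies from the
# runtime sequence length can surface as a dynamic (symbolic) shape when
# the block is extracted and converted through the TVM PyTorch frontend;
# baking in a static max_seq_len keeps every traced shape concrete.
class RotaryEmbedding(torch.nn.Module):
    def __init__(self, dim: int, max_seq_len: int = 128, base: float = 10000.0):
        super().__init__()
        inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
        # Precompute cos/sin for a fixed max_seq_len up front.
        t = torch.arange(max_seq_len).float()
        freqs = torch.outer(t, inv_freq)
        emb = torch.cat((freqs, freqs), dim=-1)
        self.register_buffer("cos_cached", emb.cos())
        self.register_buffer("sin_cached", emb.sin())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Under torch.jit.trace, x.size(-2) is a plain Python int, so this
        # slice is baked in as a static shape rather than a symbolic one.
        seq_len = x.size(-2)
        cos = self.cos_cached[:seq_len]
        sin = self.sin_cached[:seq_len]
        half = x.shape[-1] // 2
        rotated = torch.cat((-x[..., half:], x[..., :half]), dim=-1)
        return x * cos + rotated * sin
```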
The error `Unsupported operation for lowering from TTForge to TTIR: sparse_matmul` is thrown while lowering to TTIR. It can be resolved once the "Adding decompose for Indexing op using Transpose and Matmul" PR is merged.
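For reference, the decomposition named in that PR title can be sketched in plain PyTorch: an indexing/gather is rewritten as a dense one-hot matmul, so no sparse_matmul is emitted. This is a generic sketch of the technique under that assumption, not the actual TTForge decompose pass, and the helper name is hypothetical.

```python
import torch

def index_select_via_matmul(table: torch.Tensor, indices: torch.Tensor) -> torch.Tensor:
    """Select rows table[indices] without an indexing/gather op.

    Builds a dense one-hot selection matrix and contracts it with the
    table, so the graph contains only dense ops that lower cleanly.
    Depending on which side the one-hot matrix is built on, a transpose
    of it appears first, matching the "Transpose and Matmul" PR title.
    """
    num_rows = table.shape[0]
    # one_hot[i, j] == 1 exactly when indices[i] == j
    one_hot = torch.nn.functional.one_hot(indices, num_classes=num_rows).to(table.dtype)
    # (len(indices), num_rows) @ (num_rows, dim) -> (len(indices), dim)
    return one_hot @ table

# Quick check against plain indexing.
table = torch.randn(10, 4)
idx = torch.tensor([3, 3, 7, 0])
assert torch.allclose(index_select_via_matmul(table, idx), table[idx])
```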
Raised issues for the concat op:
- tt-mlir: ttnn.concat fails when the concat dimension of the input tensors are not TILE aligned (tt-mlir#795)
- tt-metal: [Bug Report] ttnn.concat - Tile padding along concatenated dim is not supported (tt-metal#13667)
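As a quick illustration of the constraint behind both reports, the helper below checks whether every input's size along the concat dimension is a multiple of tt-metal's 32x32 tile edge. The helper itself is illustrative and not part of either codebase.

```python
TILE = 32  # tt-metal tile edge length (tiles are 32x32)

def concat_is_tile_aligned(shapes, dim):
    """Return True when every input's size along `dim` is a multiple of
    the tile edge, i.e. the case ttnn.concat handles; per tt-mlir#795 and
    tt-metal#13667, non-tile-aligned concat dims currently fail."""
    return all(shape[dim] % TILE == 0 for shape in shapes)

# The second tensor's concat-dim size (17) is not a multiple of 32,
# so this concat would hit the reported failure.
print(concat_is_tile_aligned([(1, 32, 64), (1, 32, 17)], dim=2))  # False
print(concat_is_tile_aligned([(1, 32, 64), (1, 32, 32)], dim=2))  # True
```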