
[wip] unification of torchao.float8 with the rest of torchao #894

Open
vkuzo opened this issue Sep 16, 2024 · 1 comment
Comments

@vkuzo
Contributor

vkuzo commented Sep 16, 2024

context

Today, torchao.float8 has a separate API from the rest of torchao. This is for historical reasons:

  1. float8 development started in https://github.com/pytorch-labs/float8_experimental, which is now archived
  2. in early 2024, float8_experimental was migrated to torchao ("move float8_experimental to torchao/float8", #551), to set us up for eventually unifying float8 with the rest of torchao
  3. float8 inference originally started in torchao.float8, but recently moved to quantize_ to better align with the other inference APIs
  4. there are some requirements which are important for float8 today and are not yet easy to meet with other torchao APIs, such as: persistent state (for delayed scaling), distributed integrations, and extensibility to graphs larger than the ops surrounding a linear layer

next steps

We need to figure out the requirements for training, inference, and the known future use cases of each. Then, we should align on how to best structure the torchao APIs to meet these requirements. Stay tuned, we will get this going after PTC 2024.

@jerryzh168
Contributor

jerryzh168 commented Sep 17, 2024

I think we could separate the implementation for the model prepared for training from the model used for inference. Something like the following:

model = ...
# can be implemented with hooks, or module swaps etc. more friendly with training
prepare_for_training_(model, ...)

# training

# use tensor subclass for everything for better serialization/deserialization UX
convert_to_inference_(model)
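To make the proposed flow concrete, here is a minimal, torch-free sketch of the two-phase pattern above. The names prepare_for_training_ and convert_to_inference_ come from the comment's proposal and are not shipped torchao APIs; the classes below are plain-Python stand-ins for nn.Module, used only to show the module-swap and convert control flow.

```python
class Linear:
    """Stand-in for a plain nn.Linear-style module."""
    def __init__(self, weight):
        self.weight = weight


class Float8TrainingLinear(Linear):
    """Hypothetical module-swap wrapper carrying training-time float8 state."""
    def __init__(self, weight):
        super().__init__(weight)
        # Persistent state, e.g. an amax-derived scale for delayed scaling.
        self.scale = 1.0


class Model:
    """Stand-in for a model with a dict of named submodules."""
    def __init__(self):
        self.layers = {"fc": Linear([1.0, 2.0])}


def prepare_for_training_(model):
    # Module swap, in place: replace plain layers with training-aware ones.
    # (The real API could equally be implemented with hooks.)
    for name, mod in model.layers.items():
        if type(mod) is Linear:
            model.layers[name] = Float8TrainingLinear(mod.weight)


def convert_to_inference_(model):
    # Swap back to plain modules; in the real proposal the weights would
    # become tensor subclasses, so they serialize via the normal
    # state_dict path with no custom module classes needed at load time.
    for name, mod in model.layers.items():
        if isinstance(mod, Float8TrainingLinear):
            model.layers[name] = Linear(mod.weight)


m = Model()
prepare_for_training_(m)
assert isinstance(m.layers["fc"], Float8TrainingLinear)

# ... training loop would run here ...

convert_to_inference_(m)
assert type(m.layers["fc"]) is Linear
```

The key design point this illustrates: training-time concerns (mutable per-module state) live in swapped-in module classes, while the inference artifact is a plain model whose tensors carry the quantization, which is friendlier for save/load.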
