Community contribution - BetterTransformer integration for more models! #488
Comments
Hi @younesbelkada, I would love to contribute to this issue and can work on FSMT.
Hey @Sumanth077, thanks a bunch for your interest in this issue! 🚀 Would love to assist you with the integration, and let's try to make this happen!
Thank you for the reply @younesbelkada. I just opened a draft pull request; I haven't made any significant changes yet. In Step 1 (identifying the source layer to change), I couldn't find an entry in BETTER_TRANFORMER_LAYERS_MAPPING_DICT mapping the FSMT module to its BetterTransformer equivalent. Should I start creating that? Would love your assistance.
Hi @Sumanth077, I have just replied on your PR, let's continue the discussion there ;)
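For readers following along: the mapping dict discussed above implements a name-based layer swap. Below is a minimal, dependency-free sketch of that pattern; the class names and the FSMT entry are illustrative assumptions, not the actual optimum source.

```python
# Minimal sketch of the name-based layer-swap pattern behind
# BetterTransformer conversion. Class names here are stand-ins,
# not the real Transformers/optimum classes.

class FSMTEncoderLayer:
    """Stand-in for the Transformers source layer."""

class FSMTLayerBetterTransformer:
    """Stand-in for the optimized replacement layer."""
    def __init__(self, source_layer):
        self.source_layer = source_layer

# Step 1: register the source layer's class name -> replacement class.
BETTER_TRANFORMER_LAYERS_MAPPING_DICT = {
    "FSMTEncoderLayer": FSMTLayerBetterTransformer,
}

def convert_layer(layer):
    """Swap a layer for its BetterTransformer equivalent, if one is registered."""
    replacement = BETTER_TRANFORMER_LAYERS_MAPPING_DICT.get(type(layer).__name__)
    return replacement(layer) if replacement is not None else layer

converted = convert_layer(FSMTEncoderLayer())
print(type(converted).__name__)  # FSMTLayerBetterTransformer
```

Layers whose class name is not registered are returned unchanged, which is why a missing dict entry is the first thing to fix when adding a new architecture.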
Hi, I would like to contribute as well. This would be my first contribution to open source, so I might need some hand-holding 🤚 I followed the documentation and the progress made on FSMT in #494 to better understand the task. I looked into ViLT, and as I understand the documentation, this should be the source layer to make changes to, including its attributes. I could give the ViLTLayer a go, if it's OK with you @younesbelkada 🙂
Hi @ka00ri!
Hello, apologies for the delay, but I just opened a draft PR to start a discussion on how to add BetterTransformer support for the ProphetNet encoder layer. I had a couple of questions about how to do this, so I was wondering who would be the best person to reach out to. @michaelbenayoun @fxmarty @younesbelkada
Hi @adit299, thanks for adding the support for this architecture! Feel free to ask any question in the PR you opened.
Hi @younesbelkada, could I pick up RoFormer?
@younesbelkada doing Detr - DetrLayer
Hello @JanFidor
@younesbelkada Hi, thanks for responding. I'm not 100% certain, but I think RemBert, RoFormer and RocBert are already implemented, as they're already added to __init__.py, overview.mdx and the test file. If that's the case, the list of models left to implement would need to be updated; let me know if you agree!
I see, thanks for clarifying. I will double-check that and let you know
Thanks for letting me know! Indeed these are already implemented
Thanks for the suggestion, I'll get on it!
Hi @fxmarty and @younesbelkada! Thank you so much for your previous help and support on my implementation. I want to follow up on my PR. Specifically, I would like to check with you if it is still possible to work on this and have it reviewed and merged into the package. If it is, I would be happy to continue working on it. Thank you so much for your time and help, and I look forward to hearing back from you soon. Sincerely,
@younesbelkada I would like to work on FlavaLayer; can you confirm whether it is done or not?
Hi! @JanFidor, will you finish BLIP? If not, I can do it, with the permission of @younesbelkada @fxmarty
Hi @mszsorondo, looking into the PRs, BLIP has been implemented in #1125. I just ticked it in the first post.
@fxmarty any other model available for work?
@fxmarty same here, if there's still any model
@younesbelkada Can I work on ASTLayer?
Any plans to add support for MPT?
Please support Florence-2!
The BetterTransformer API provides faster inference on CPU & GPU through a simple interface! Models can benefit from very interesting speedups with a one-liner, making sure to install the latest version of PyTorch. A complete guideline on how to convert a new model has been created in the BetterTransformer documentation!

Here is a list of models that could potentially be supported; pick one of the architectures below and let's discuss the conversion!

Text models 🖊️ :
- BetterTransformer support for FSMT #494
- MobileBERT support for BetterTransformer #506
- MBart support for BetterTransformer #516 @ravenouse

Vision models 📷 :
- BetterTransformer support for ViLT architecture #508

Audio models 🔉 :

Let us also know if you think that some architectures we missed can be supported. Note that for the encoder-decoder based models below, we expect to convert the encoder only. Support for decoder-based models is coming soon!

cc @michaelbenayoun @fxmarty
huggingface/transformers#20372