
Is vllm-batch supported for LiT5 and FirstMistral? #167

Open
sahel-sh opened this issue Jan 19, 2025 · 1 comment
sahel-sh (Member):
--vllm-batched is set in the LiT5 and FirstMistral examples in the README. But later the README says:

vLLM, SGLang, and TensorRT-LLM backends are only supported for RankZephyr and RankVicuna models.

I think we should have a clear table showing which flags are supported for which rerankers. For example, I assume --use_logits and --use_alpha only make sense with listwise rerankers (or are they only supported with FirstMistral?).
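One way to make the proposed flag/reranker table concrete (and machine-checkable, so the README and the argument parser can't drift apart) is a small lookup structure. This is only a sketch of the shape such a table could take; the support entries below are placeholders, not verified against rank_llm:

```python
# Hypothetical sketch of the flag-compatibility table proposed above.
# The entries are PLACEHOLDERS for illustration, not the actual
# supported combinations in rank_llm.
FLAG_COMPAT: dict[str, set[str]] = {
    "--vllm-batched": {"RankZephyr", "RankVicuna"},  # placeholder
    "--use_logits": {"FirstMistral"},                # placeholder
    "--use_alpha": {"FirstMistral"},                 # placeholder
}

def flag_supported(flag: str, reranker: str) -> bool:
    """Return True if `flag` is listed as supported for `reranker`."""
    return reranker in FLAG_COMPAT.get(flag, set())
```

A table like this could be rendered into the README and also used at argument-parsing time to reject unsupported flag/model combinations with a clear error instead of failing later.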

sahel-sh added the documentation label (Improvements or additions to documentation) on Jan 19, 2025
sahel-sh (Member, Author):

@ronakice would you PTAL?
