`--vllm-batched` is passed in the LiT5 and FirstMistral examples in the README. But later on we say:

> vLLM, SGLang, and TensorRT-LLM backends are only supported for RankZephyr and RankVicuna models.

I think we should have a clear table showing which flags are supported for which rerankers. For example, I assume `--use_logits` and `--use_alpha` only make sense with the listwise rerankers (or are they only supported with FirstMistral?).
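As a starting point, here is a rough sketch of what such a table could look like. The `--vllm-batched` row reflects only the two statements above (the README examples vs. the backend note, which currently contradict each other); the `?` cells are unknowns that would need to be verified against the code, not claims about actual support:

| Flag | RankZephyr | RankVicuna | LiT5 | FirstMistral |
|------|------------|------------|------|--------------|
| `--vllm-batched` | ✓? (per backend note) | ✓? (per backend note) | ✓? (README example) | ✓? (README example) |
| `--use_logits` | ? | ? | ? | ? |
| `--use_alpha` | ? | ? | ? | ? |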