We truly appreciate your input. Our team will now conduct an internal review to assess the viability of integrating this new open-source product into our catalog. As we have limited resources, please understand that this evaluation process might take some time. To ensure that your idea receives the attention it deserves, we will apply the 'on-hold' label. This will prevent our automated system from prematurely closing the issue.
We will keep you updated on any developments regarding the addition of this product. Your patience and understanding are greatly appreciated.
carrodher added the new-product (Request new product to be added into the catalog) and on-hold (Issues or Pull Requests with this label will never be considered stale) labels on Nov 28, 2024
Hi, unfortunately, after an internal review and given other priorities, this solution was not selected for addition to the catalog in the short to mid term.
We apologize for the inconvenience and will reconsider it in the future.
Name and Version
bitnami/vllm - 0.6.2, 0.6.3 and 0.6.4
Is it possible to add images for the latest and previous versions?
Version 0.6.4 has some critical issues related to GPU memory usage: see "gpu_memory_utilization Behavior in vLLM 0.6.4" (vllm-project/vllm#10451). A related question is which default LLM should be saved in the Docker image; the vLLM project appears to use the facebook/opt-125m model.
What is the problem this feature will solve?
What is the feature you are proposing to solve the problem?
What alternatives have you considered?