I tried testing a PDF file, but when I queried it I got an error saying the context window limit was reached, so I believe you are adding the whole file content to the prompt. I can see this being fine with Claude's 100K context window, but for GPT-4 it is currently a limitation.

Would it be possible to add support for a vector DB such as Supabase, which would store the embeddings and then be queried to retrieve only the most relevant chunk(s) based on the user's query?
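To illustrate the flow I have in mind, here is a minimal, self-contained sketch of retrieval-based querying. The `embed` function below is a toy word-frequency stand-in for a real embedding model, and with Supabase/pgvector the similarity search would run server-side rather than in Python; this only shows the idea of sending the top-k chunks instead of the whole document.

```python
# Sketch of the proposed retrieval flow: embed chunks once, then at
# query time keep only the top-k most similar chunks for the prompt.
import math
from collections import Counter


def embed(text: str) -> Counter:
    # Toy embedding: a word-frequency vector (stand-in for a real
    # embedding model such as OpenAI's embedding endpoint).
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse frequency vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def top_chunks(query: str, chunks: list[str], k: int = 2) -> list[str]:
    # Rank document chunks by similarity to the query and return top k.
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]


chunks = [
    "Payment terms are net 30 days from invoice date.",
    "The warranty covers manufacturing defects for one year.",
    "Shipping is handled by a third-party logistics provider.",
]
print(top_chunks("what are the payment terms", chunks, k=1))
```

With a real vector DB, the chunk embeddings would be stored once at upload time, and each user query would trigger a single similarity search, keeping the prompt well within the model's context window.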