Currently, there is no way to send data to the indexing process without creating a `doc` column from the input.

The indexing error that needs fixing:

```
AttributeError: Table has no column with name doc.
```

It occurs here:

```
File: /home/bumurzokov/llm-app/src/prompt.py:14
Line: query_context = index.query(embedded_query, k=3).select(
```

When no `doc` column is defined, the pipeline always fails at the indexing stage:

```python
# Compute embeddings for each document using the OpenAI Embeddings API
embedded_data = contextful(context=documents, data_to_embed=documents.doc)
```
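As a workaround under the current constraint, the input can be given a `doc` column explicitly before it reaches the embedding step. A minimal, library-agnostic sketch (plain dicts stand in for the actual table rows; the helper name and the `text` field are assumptions, not part of the LLM App API):

```python
# Workaround sketch: derive a `doc` field from the raw input rows
# before they are handed to the embedding/indexing step.
def with_doc_column(rows, source_field="text"):
    """Copy each row, adding a `doc` key taken from the chosen source field."""
    return [{**row, "doc": row[source_field]} for row in rows]

rows = [
    {"text": "Pathway is a data processing framework."},
    {"text": "LLM App indexes documents for retrieval."},
]

documents = with_doc_column(rows)
assert all("doc" in row for row in documents)  # indexing precondition met
```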
If always having a `doc` column is a technical requirement for indexing, my suggestion is to abstract this step away in the library: the user specifies which fields to index, and LLM App automatically creates the `doc` column under the hood from the chosen fields.
If the user does not specify any fields to index, LLM App creates the `doc` column from all fields.
We already discussed this with @janchorowski last week.
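The proposed abstraction could look roughly like this (a sketch only; the `index_fields` parameter and helper name are hypothetical, not existing LLM App API):

```python
# Hypothetical helper: build the `doc` column automatically from the
# fields the user chose, or from all fields when none were chosen.
def build_doc_column(rows, index_fields=None):
    out = []
    for row in rows:
        # Fall back to every field in the row when no selection is given.
        fields = index_fields if index_fields is not None else list(row)
        doc = " ".join(str(row[f]) for f in fields)
        out.append({**row, "doc": doc})
    return out

rows = [{"title": "Indexing", "body": "doc column required"}]
build_doc_column(rows, index_fields=["body"])  # doc == "doc column required"
build_doc_column(rows)                          # doc built from all fields
```

With this in place, the user never has to create the `doc` column manually; the library guarantees the indexing precondition holds.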