
Llama Parse Slowness and Variation in Output Using Parsing instruction #589

Open
qasim-mobi opened this issue Jan 23, 2025 · 1 comment
Labels
bug Something isn't working

Comments

qasim-mobi commented Jan 23, 2025

Describe the bug
LlamaParse has become extremely slow: it takes several minutes to return output, even for cached responses. A second problem: when using accurate mode with the same parsing instructions as before, LlamaParse now returns different output for the same file than it did previously.

Client:

  • Python Library

Additional context
What options did you use?

  • parsing_instruction
  • donot_unroll_column
  • accurate mode, as well as vendor_multimodal_model
@qasim-mobi qasim-mobi added the bug Something isn't working label Jan 23, 2025
@hamzanouali42

We are facing the same issue
