
Releases: uptrain-ai/uptrain

v0.4.0

29 Sep 16:10
366c942

🚀 Release Notes - Version 0.4.0 🚀

We're thrilled to introduce the latest updates and enhancements in this release! 🌟

🆕 New Features and Improvements:

  1. Log_and_Evaluate: Experience seamless logging and evaluation with our new "log_and_evaluate" feature. 📊

  2. Evaluate_Experiments: Dive into deeper insights with "evaluate_experiments" to gain a better understanding of your model's performance. 📈

  3. Prebuilt Evaluations: We've added prebuilt evaluations for effortless and quick assessment. 📋

  4. EvalLLM Evaluator: Staying true to our open-source roots! 🎉 We've introduced the all-new EvalLLM evaluator, which lets you run evaluations for free, without an UpTrain account. 💼

  5. Expanded LLM Support: Now supporting 100+ Large Language Models (LLMs)! 🌐

  6. Token Limiter and Fallback: Ensure your models stay within token limits with the new token limiter, backed by a robust fallback mechanism. 🛡️

  7. New Evaluations:

    • 📜 Guideline Adherence: Evaluate your model's adherence to guidelines.
    • 🔄 Response Completeness wrt Context: Assess response completeness with respect to context.

  8. New JavaScript Client: Discover our all-new JavaScript client for a more seamless integration experience. 📦

  9. Enhanced Chart Descriptions: Charts just got more informative with added descriptions for better data comprehension. 📊📝

  10. Download Dataset and Results: Our client now allows you to conveniently download datasets and results with just a few clicks! 💾
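To picture how the token limiter with fallback (item 6) can work, here is a minimal sketch in plain Python. The word-level token counter, the model names, and the context limits below are all hypothetical stand-ins, not UpTrain's actual implementation:

```python
# Illustrative sketch of a token limiter with a fallback chain.
# The tokenizer, model names, and limits are hypothetical; a real
# limiter would count tokens against actual model context windows.

def count_tokens(text: str) -> int:
    """Crude stand-in for a real tokenizer: one token per whitespace word."""
    return len(text.split())

# Hypothetical models ordered from preferred to last-resort,
# each with a context-window budget in tokens.
FALLBACK_CHAIN = [
    ("model-large", 8),
    ("model-small", 16),
]

def pick_model(prompt: str) -> tuple[str, str]:
    """Return (model_name, prompt), truncating only as a last resort."""
    needed = count_tokens(prompt)
    for model, limit in FALLBACK_CHAIN:
        if needed <= limit:
            return model, prompt
    # Fallback: truncate the prompt to fit the largest window available.
    model, limit = FALLBACK_CHAIN[-1]
    return model, " ".join(prompt.split()[:limit])
```

Short prompts go to the preferred model; longer ones fall through the chain, and only when nothing fits is the prompt truncated.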

We're committed to making your experience with our platform even better. Thank you for your continued support and feedback! 🙌

For more details and bug fixes, please refer to the full release documentation. Feel free to reach out to us with any questions or feedback. 📧💬

Happy experimenting! 🧪🔬🚀

v0.3.8

22 Aug 12:54
3aa7c69

Minor Fixes and Improvements

  • Faster processing
  • Improved API functionality
  • Added a Pandas fallback for Polars

v0.3.7

09 Aug 10:49
e2496a6

We're thrilled to unveil the latest additions in UpTrain version 0.3.7, designed to elevate your evaluation and grading experience.

  • Enhanced Context Analysis 🔄
    • Say hello to the Context Relevance Operator, a powerful addition that allows you to assess model responses within their context, ensuring higher accuracy and coherence.
  • Fact-Based Scoring 📊
    • Introducing the Response Factual Score Operator, an invaluable tool for evaluating the accuracy of model-generated responses in terms of factual content.
  • Language and Tone Assessment 🗣️🎵
    • Dive deep into the nuances of language and communication style using the Language Critique and Tone Critique Operators. These features empower you to analyze the structure and tone of model outputs, ensuring they align perfectly with your requirements.
  • Comprehensive Response Analysis ✅
    • The Response Completeness Operator brings a new layer of evaluation, enabling you to gauge how thoroughly model responses address the given prompts. This ensures your outputs are not only accurate but also comprehensive.
  • Improved Documentation 📚
    • Updated user guides and documentation for seamless integration of new features.
  • Bug Fixes and Performance 🐞
    • In our commitment to providing you with a seamless experience, we've ironed out various bugs and optimized performance, making your journey with UpTrain even smoother.
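To give a feel for what a "response completeness" check measures, here is a toy heuristic: what fraction of the question's content words the response addresses. This keyword-coverage scoring is purely illustrative; UpTrain's actual operator grades responses with an LLM rather than anything this simple:

```python
# Toy illustration of a "response completeness" score: the fraction of
# content words in the question that appear in the response.
# This is NOT UpTrain's method; its operators use LLM-based grading.

STOPWORDS = {"what", "is", "the", "a", "an", "of", "and", "are", "how"}

def completeness_score(question: str, response: str) -> float:
    q_words = {w.lower().strip("?.,") for w in question.split()} - STOPWORDS
    r_words = {w.lower().strip("?.,") for w in response.split()}
    if not q_words:
        return 1.0
    return len(q_words & r_words) / len(q_words)
```

A response that touches every content word of the question scores 1.0; an off-topic one scores 0.0.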

v0.3.6

29 Jul 04:20
2ec5d01
  • Improved documentation 📑
  • Better Failure Handling 🛡️
    • The new failure handling significantly reduces failures due to API request errors, allowing you to perform robust model-grading evaluations.
  • Custom Operators 🧪
    • You can now define any custom Python function as an Operator and use it to perform your evaluations via CustomOperator.
  • New Readers for BigQuery and DuckDB 📖
    • Now, you can directly provide a query and UpTrain will read data from your BigQuery database using BigQueryReader or your DuckDB database using DuckDBReader.
  • New Validation Retry Logic 💡
    • You can now set your own retry logic, which dictates how to generate LLM responses in case of validation failures. This can be any Python function, such as modifying the prompt or temperature, triggering a tool, or returning a default response.
    • You can also check if the LLM response is empty or not for the given query and get back a default message instead of the LLM response.
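The validation-retry and custom-check ideas above can be sketched in plain Python. None of the names below (run_with_retry, non_empty, the prompt tweak) come from UpTrain's API; they only illustrate the flow: call, validate, adjust, retry, and fall back to a default response:

```python
# Hypothetical sketch of validation-with-retry around an LLM call.
# These names are illustrative stand-ins, not UpTrain's API.

def run_with_retry(llm, prompt, validate, retries=2,
                   default="Sorry, I could not produce a valid answer."):
    """Call llm(prompt); on validation failure, tweak the prompt and retry."""
    for attempt in range(retries + 1):
        response = llm(prompt)
        if validate(response):
            return response
        # Retry logic: modify the prompt before the next attempt.
        prompt = prompt + " Please answer in one short sentence."
    # All attempts failed validation: return the default message.
    return default

# A custom check is just a Python function, in the spirit of CustomOperator.
def non_empty(response: str) -> bool:
    return bool(response.strip())
```

An empty-response check like non_empty plugs in as the validator, and the default message is returned instead of the LLM response when every retry fails.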

v0.3.5

14 Jul 19:35
549531b
  • We added a new class called ExperimentArgs that allows you to easily compare the outputs of multiple prompts
  • We have also revamped our plotting features:
    • Directly use the plot you need by using the new LineChart, BarChart, Histogram, and ScatterPlot classes
    • Create your own custom Plotly charts using the CustomPlotlyChart class
    • Show multiple charts using the MultiPlot class

v0.3.4

07 Jul 09:32
694e3f7

Fix missing stub issue

v0.3.3

07 Jul 09:29
010b684

Fix imports

v0.3.2

06 Jul 12:16
1eef587

Fix dependencies

v0.3.1

06 Jul 12:10
fac5b40

Fix dependencies

v0.3

06 Jul 12:03
e4a0950

UpTrain v0.3 Release Notes

📢 We are thrilled to announce the release of UpTrain v0.3, our revamped LLM evaluation, prompt experimentation, and monitoring toolkit.

🎉 This version introduces exciting new capabilities and improvements to enhance all your LLM needs. Here's an overview of the key features and updates:

Expanded Support for LLM Use Cases:
UpTrain now provides extensive support for various LLM use cases, enabling you to:
🔍 Perform evaluations to check your LLM responses on aspects such as correctness, structural integrity, bias, hallucination, etc.
🔬 Validate language models to safeguard users against inappropriate responses and toxic content
🧪 Conduct A/B testing to compare and find the best prompts quantitatively

New Checks Added:
We have added several new checks to UpTrain, further expanding its capabilities. The following checks are now available:
🚀 We integrate with OpenAI Evals and allow you to run any of them reliably using UpTrain
📝 Create and run your own Custom Model Grading check suited to your needs
📊 Generate embeddings and run similarity and distribution checks
🔍 Perform both syntax and execution checks for LLM-generated SQL queries
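The embedding similarity check above boils down to cosine similarity between embedding vectors. A minimal, self-contained sketch, with tiny hand-made vectors standing in for real embeddings:

```python
import math

# Cosine similarity between two embedding vectors: the core of a
# similarity check. Real embeddings have hundreds of dimensions;
# the short vectors in the tests are toy stand-ins.

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)
```

A similarity check then compares this score against a threshold, e.g. flagging responses whose embedding drifts too far from a reference answer.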

Improved Configuration Interface:
UpTrain Checks now offer a neater interface for configuring checks to make it convenient to set up and manage. We made the following enhancements:
✨ All checks are now Pydantic models
🧩 Configure all your operators and plots in one place

Refactored Library with Performant Technologies:
To improve performance and provide a smoother experience, we have refined the UpTrain library by leveraging cutting-edge technologies:
🚀 Speedy data processing with Polars and Delta Tables
🐞 Easy debugging with Loguru logger
📊 Simplified plotting with Plotly charts

Enhanced Dashboard Experience:
With UpTrain v0.3, we have introduced a range of new functionalities to our dashboard interface, empowering you to gain valuable insights and visualize data effectively.
🧮 Sort and filter your tables
⚖️ Aggregate and compare using pivot tables

Upgrade to UpTrain v0.3 now to experience these exciting new features and improvements. We are confident they will enhance your LLM monitoring process and streamline your language model management. For more details, please look at the updated documentation and user guide. 🙌

Thank you for your continued support and feedback. We look forward to your valuable insights as we evolve and improve UpTrain.

Happy evaluation! 🚀🔍

The UpTrain Team