Agent Evaluation

Agent Evaluation is a generative AI-powered framework for testing virtual agents.

Internally, Agent Evaluation implements an LLM agent (the evaluator) that orchestrates conversations with your own agent (the target) and evaluates the responses throughout the conversation.
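For illustration, here is a minimal sketch of what a test plan might look like. The agenteval.yml layout, field names (evaluator, target, tests, steps, expected_results), and the bedrock-agent target type are assumptions drawn from typical usage of the framework; refer to the documentation for the authoritative schema.

```yaml
# Illustrative test plan (agenteval.yml); field names and values are
# assumptions, refer to the documentation for the exact schema.
evaluator:
  model: claude-3
target:
  type: bedrock-agent
  bedrock_agent_id: <your-agent-id>
  bedrock_agent_alias_id: <your-agent-alias-id>
tests:
  check_claim_status:
    steps:
      - Ask the agent for the status of claim-006.
    expected_results:
      - The agent returns the current status of the claim.
```

The evaluator drives each test conversation against the target and grades the responses against the expected results; running the suite is typically a single CLI invocation (for example, `agenteval run`, assuming the documented CLI).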

✨ Key features

  • Built-in support for popular AWS services including Amazon Bedrock, Amazon Q Business, and Amazon SageMaker. You can also bring your own agent to test using Agent Evaluation.
  • Orchestrate concurrent, multi-turn conversations with your agent while evaluating its responses.
  • Define hooks to perform additional tasks such as integration testing (see the sketch after this list).
  • Incorporate into CI/CD pipelines to expedite delivery while maintaining the stability of agents in production environments.
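As a rough sketch of the hooks feature mentioned above, the snippet below shows how a hook for integration testing might look. The Hook base class, the pre_evaluate/post_evaluate method signatures, and the trace API are assumptions about the interface, not confirmed by this README; consult the hooks documentation for the actual API.

```python
# Illustrative hook; the Hook base class, method signatures, and trace API
# shown here are assumptions, not confirmed from this README.
from agenteval import Hook


class IntegrationTestHook(Hook):
    def pre_evaluate(self, test, trace):
        # Runs before the evaluator starts the conversation for this test,
        # e.g. seed the records the target agent is expected to look up.
        trace.add_entry("seeded test data")

    def post_evaluate(self, test, test_result, trace):
        # Runs after the conversation ends, e.g. verify side effects the
        # agent should have produced, then clean up the seeded data.
        trace.add_entry("verified side effects and cleaned up")
```

The hook would then be referenced from the corresponding test in the test plan so the framework invokes it around that conversation.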

📚 Documentation

To get started, please visit the full documentation here. To contribute, please refer to CONTRIBUTING.md.

👏 Contributors

Shout out to these awesome contributors:
