Releases: spachava753/cpe

v0.16.2

03 Jan 22:52

v0.16.1

02 Jan 04:49

Full Changelog: v0.16.0...v0.16.1

v0.16.0

30 Dec 22:17

Full Changelog: v0.15.0...v0.16.0

CPE v0.16.0 Release Notes

New Features and Improvements

Enhanced Command Line Interface

  • You can now provide input directly as command line arguments in addition to using files or stdin
  • Multiple input sources can be combined (e.g., file input plus command line arguments), as sketched below
  • The -input flag is now optional; when it is omitted, only the command line arguments are used
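
The following is a minimal Go sketch, with a hypothetical helper name rather than cpe's actual code, of how an -input file, piped stdin, and trailing command line arguments could be merged into a single prompt:

    package main

    import (
        "fmt"
        "io"
        "os"
        "strings"
    )

    // buildPrompt is a hypothetical helper showing how an -input file, piped
    // stdin, and trailing command line arguments could be merged into a single
    // prompt. The real cpe implementation may differ.
    func buildPrompt(inputPath string, args []string) (string, error) {
        var parts []string

        // Optional -input file.
        if inputPath != "" {
            data, err := os.ReadFile(inputPath)
            if err != nil {
                return "", err
            }
            parts = append(parts, string(data))
        }

        // Piped stdin, if any.
        if stat, err := os.Stdin.Stat(); err == nil && (stat.Mode()&os.ModeCharDevice) == 0 {
            data, err := io.ReadAll(os.Stdin)
            if err != nil {
                return "", err
            }
            parts = append(parts, string(data))
        }

        // Remaining command line arguments.
        if len(args) > 0 {
            parts = append(parts, strings.Join(args, " "))
        }

        return strings.Join(parts, "\n\n"), nil
    }

    func main() {
        prompt, err := buildPrompt("", os.Args[1:])
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println(prompt)
    }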

Improved File Handling

  • Better detection of text-based source code files using MIME type detection
  • More accurate identification of file types without relying on file extensions
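
To illustrate the idea, here is a minimal Go sketch using the standard library's content sniffing; cpe's actual detection logic may use a different library:

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "os"
        "strings"
    )

    // isTextFile sniffs the first 512 bytes of a file and reports whether the
    // detected MIME type is text-based, regardless of the file extension.
    // cpe's actual detection logic may differ from this sketch.
    func isTextFile(path string) (bool, error) {
        f, err := os.Open(path)
        if err != nil {
            return false, err
        }
        defer f.Close()

        buf := make([]byte, 512) // DetectContentType considers at most 512 bytes
        n, err := f.Read(buf)
        if err != nil && err != io.EOF {
            return false, err
        }

        mimeType := http.DetectContentType(buf[:n])
        return strings.HasPrefix(mimeType, "text/"), nil
    }

    func main() {
        ok, err := isTextFile("main.go")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("text file:", ok)
    }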

Enhanced CLI Tool Capabilities

  • Modified the bash tool description to encourage using the internet and package installers such as pip and apt when necessary, rather than discouraging it

Experimental Features

  • Added support for experimental features through the CPE_EXPERIMENTAL environment variable
  • Introduced the disabled_related_files experimental flag for alternative file context gathering behavior
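
A minimal Go sketch of reading such a toggle; treating CPE_EXPERIMENTAL as a comma-separated list of flag names is an assumption for illustration only:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // experimentalEnabled reports whether a named flag appears in the
    // CPE_EXPERIMENTAL environment variable. Treating the variable as a
    // comma-separated list is an assumption; check the cpe docs for the
    // exact format it expects.
    func experimentalEnabled(name string) bool {
        for _, flag := range strings.Split(os.Getenv("CPE_EXPERIMENTAL"), ",") {
            if strings.TrimSpace(flag) == name {
                return true
            }
        }
        return false
    }

    func main() {
        if experimentalEnabled("disabled_related_files") {
            fmt.Println("related-file gathering disabled")
        }
    }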

Technical Notes

  • Updated the agent instructions to clarify that the tool is named cpe

v0.15.0

30 Dec 05:28

Full Changelog: v0.14.5...v0.15.0

CPE v0.15.0 Release Notes

⚙️ Configuration Improvements

  • Environment variable support for custom API endpoints
  • More flexible model configuration options
  • Improved error handling and retry logic for API calls

This release is primarily a significant internal refactor, with no major changes to user-facing behavior. The one breaking change is that model names no longer contain periods; for example, gemini-1.5-pro becomes gemini-1-5-pro when providing the model name as an argument.
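
For illustration, the new names can be derived from the old ones by replacing periods with hyphens, matching the example above:

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // The example from the notes above: periods in the old model name are
        // replaced with hyphens when passing the name as an argument.
        old := "gemini-1.5-pro"
        fmt.Println(strings.ReplaceAll(old, ".", "-")) // gemini-1-5-pro
    }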

v0.14.5

24 Dec 04:41

Full Changelog: v0.14.4...v0.14.5

v0.14.5 Release Notes

🔧 Fixes & Improvements

Gemini Provider Now Working

The Gemini provider is now fully functional! Previous versions had issues with the function calling implementation that prevented it from working correctly. Users can now use Gemini as an alternative to other LLM providers.

Improved Reliability

  • Increased the timeout for Gemini client initialization from 10 seconds to 5 minutes
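
A rough Go sketch of what such an initialization timeout looks like with the context package; the actual Gemini client setup in cpe is elided and the helper below is a placeholder:

    package main

    import (
        "context"
        "fmt"
        "time"
    )

    // initGeminiClient stands in for the real client constructor; the point is
    // only that initialization now gets a 5-minute budget instead of 10 seconds.
    func initGeminiClient(ctx context.Context) error {
        select {
        case <-time.After(100 * time.Millisecond): // pretend setup finishes quickly
            return nil
        case <-ctx.Done():
            return ctx.Err()
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
        defer cancel()

        if err := initGeminiClient(ctx); err != nil {
            fmt.Println("initialization failed:", err)
            return
        }
        fmt.Println("client ready")
    }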

v0.14.4

24 Dec 04:14

Full Changelog: v0.14.3...v0.14.4

Release Notes for v0.14.4

New Features

  • Added support for environment variable CPE_CUSTOM_URL as an alternative way to specify a custom API endpoint URL. This can be used instead of the -custom-url flag.
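
A minimal Go sketch of resolving the endpoint from either source; giving the -custom-url flag precedence over the environment variable is an assumption for illustration:

    package main

    import (
        "flag"
        "fmt"
        "os"
    )

    func main() {
        // The -custom-url flag and the CPE_CUSTOM_URL environment variable are
        // two ways to supply the endpoint; the flag winning over the variable
        // is an assumption here, not documented behavior.
        customURL := flag.String("custom-url", "", "custom API endpoint URL")
        flag.Parse()

        endpoint := *customURL
        if endpoint == "" {
            endpoint = os.Getenv("CPE_CUSTOM_URL")
        }

        fmt.Println("using endpoint:", endpoint)
    }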

Improvements

  • Improved error message when using an unknown model - now mentions both the -custom-url flag and the CPE_CUSTOM_URL environment variable as options for specifying the endpoint.

Internal Changes

  • Refactored OpenAI and Anthropic provider implementations to use their official SDKs, which should provide better reliability and maintainability.

v0.14.3

23 Dec 18:03

Full Changelog: v0.14.2...v0.14.3

v0.14.3

Improvements

  • Added retry limit when calling Anthropic API to prevent infinite retry loops. The tool will now attempt up to 5 retries with a 1-minute wait between attempts before failing with a clear error message. This improves error handling when encountering rate limits (429), bad requests (400), or server errors (5xx)
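
A minimal Go sketch of this kind of bounded retry loop (hypothetical helper names, not the actual cpe implementation):

    package main

    import (
        "errors"
        "fmt"
        "net/http"
        "time"
    )

    // retryable reports whether an HTTP status should trigger a retry:
    // rate limits (429), bad requests (400), and server errors (5xx).
    func retryable(status int) bool {
        return status == http.StatusTooManyRequests ||
            status == http.StatusBadRequest ||
            status >= 500
    }

    // callWithRetry makes an initial attempt plus up to maxRetries retries,
    // waiting one minute between attempts, then fails with a clear error.
    func callWithRetry(do func() (int, error), maxRetries int) error {
        for attempt := 0; attempt <= maxRetries; attempt++ {
            status, err := do()
            if err == nil && !retryable(status) {
                return nil
            }
            if attempt < maxRetries {
                time.Sleep(time.Minute)
            }
        }
        return errors.New("giving up after reaching the retry limit")
    }

    func main() {
        err := callWithRetry(func() (int, error) {
            // Replace with a real API call; a 200 here succeeds immediately.
            return http.StatusOK, nil
        }, 5)
        fmt.Println("result:", err)
    }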

v0.14.2

23 Dec 15:26

Full Changelog: v0.14.1...v0.14.2

v0.14.2 Release Notes

Improvements

  • Enhanced Error Recovery: Improved reliability when communicating with Claude (Anthropic) by automatically retrying on additional error conditions. The tool will now retry after a 1-minute delay when encountering HTTP 400 errors, in addition to the existing retry behavior for rate limits (429) and server errors (500-series).

This change makes the tool more resilient when communicating with Claude's API, potentially reducing interruptions during conversations due to transient API issues.

v0.14.1

23 Dec 05:17

Full Changelog: v0.14.0...v0.14.1

Release v0.14.1

  • Note: The Gemini provider is currently not working and will be fixed in a future release

Improvements

  • Added automatic retry logic for the Anthropic provider when encountering rate limits or server errors
    • The CLI will now automatically wait and retry after receiving a rate limit (429) or server error (5xx)
    • This makes the CLI more resilient to temporary API issues

v0.14.0

23 Dec 03:01

Full Changelog: v0.13.8...v0.14.0

Release Notes - v0.14.0

Important Notice

⚠️ The Gemini provider is currently not working due to an issue. Please use other providers (like OpenAI and Anthropic) for now.

Major Changes

  • Moved to a more flexible and powerful agentic architecture internally, replacing the previous constrained workflow. This change allows for more complex and longer edits to the codebase.

Features and Improvements

  • Removed unnecessary debug flags and simplified CLI options
  • Improved error handling and logging throughout the application

Breaking Changes

  • Removed the -debug flag as it's no longer needed with the new architecture
  • Removed the -include-files flag, as file selection is now managed automatically by the agent

Internal Architecture

The tool has been completely restructured to use an agentic approach, where the AI assistant:

  • Has more autonomy in handling tasks
  • Can better understand context and requirements
  • Makes more intelligent decisions about when and how to access the codebase
  • Provides more transparent reasoning about its actions
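
As a rough mental model (a generic agent-loop pattern, not cpe's actual internals), the loop below calls the model, executes any tool it requests, feeds the result back, and stops when the model returns a final answer:

    package main

    import "fmt"

    // step is a stand-in for one model turn: it carries either a tool request
    // or a final answer. This is a generic agent-loop pattern, not cpe's code.
    type step struct {
        toolName string // non-empty when the model wants to call a tool
        toolArgs string
        answer   string // non-empty when the model is done
    }

    // runTool would execute the requested tool (bash, file edits, ...) and
    // return its output for the next model turn.
    func runTool(name, args string) string {
        return fmt.Sprintf("output of %s(%s)", name, args)
    }

    // agentLoop keeps calling the model, running any tool it requests and
    // feeding the result back, until the model produces a final answer.
    func agentLoop(callModel func(history []string) step) string {
        var history []string
        for {
            s := callModel(history)
            if s.answer != "" {
                return s.answer
            }
            history = append(history, runTool(s.toolName, s.toolArgs))
        }
    }

    func main() {
        // Toy model: request one bash call, then answer using its output.
        calls := 0
        answer := agentLoop(func(history []string) step {
            calls++
            if calls == 1 {
                return step{toolName: "bash", toolArgs: "ls"}
            }
            return step{answer: "done after " + history[0]}
        })
        fmt.Println(answer)
    }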