perp_public_benchmarks

Public Benchmarks Used For Evaluating Python Environment Resolvers using perp

The benchmarks run from this repo download packages from public conda channels (defaults and conda-forge) and from public PyPI. To offer a broader comparison, we ran conda/mamba in two configurations: one using only PyPI packages and one using only Anaconda cloud packages.
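For illustration, here is a minimal sketch of what those two conda/mamba configurations could look like as environment files. The file names, package lists, and Python version below are placeholders for this sketch, not the actual benchmark definitions in this repo.

```yaml
# Hypothetical environment file for the "Anaconda cloud packages only" case:
# every dependency is resolved from the defaults / conda-forge channels.
name: benchmark-conda-only   # placeholder name
channels:
  - conda-forge
  - defaults
dependencies:
  - python=3.10
  - numpy
  - pandas
```

```yaml
# Hypothetical environment file for the "PyPI packages only" case:
# conda/mamba installs only Python and pip, and every other dependency
# is pulled from PyPI through the pip subsection.
name: benchmark-pypi-only    # placeholder name
channels:
  - defaults
dependencies:
  - python=3.10
  - pip
  - pip:
      - numpy
      - pandas
```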

The remaining tools work only with PyPI. All trials were run in our cloud-native CI/CD system as isolated Kubernetes jobs. Each job is a separate build with no shared disks, no Docker layer caching, and no build caching on disk volumes.
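As a rough illustration of that isolation, the sketch below shows what a single trial could look like as a Kubernetes Job. The job name, container image, and resolver command are hypothetical; the point is that the pod mounts only ephemeral emptyDir storage, so no layer or build cache survives between runs.

```yaml
# Hypothetical Job manifest for one benchmark trial.
apiVersion: batch/v1
kind: Job
metadata:
  name: perp-benchmark-trial          # placeholder name
spec:
  backoffLimit: 0                     # a failed resolve counts as a failed trial
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: build
          image: python:3.10-slim     # placeholder base image
          command: ["/bin/sh", "-c"]
          args:
            - pip install -r requirements.txt   # placeholder resolver command
          volumeMounts:
            - name: workdir
              mountPath: /work
      volumes:
        - name: workdir
          emptyDir: {}                # ephemeral; discarded when the pod ends
```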

Compute node hardware was consistent for all builds. The data and web benchmarks have a large number of dependencies; the utility benchmark is small.
