
Add a benchmark for typeshed-stats #268

Open · wants to merge 4 commits into main

Conversation

@AlexWaygood (Member) commented Mar 1, 2023

This PR adds a benchmark for typeshed-stats, a project of mine that provides a CLI tool to gather statistics and other information on typeshed. The benchmark takes around 5 seconds to complete. As discussed on the CPython core devs' Discord, it may make an interesting benchmark for several reasons, among them:

  • It makes significant use of several newish features of Python that are currently underrepresented in the benchmark suite, such as pattern-matching, pathlib, f-strings, enums, typing and dataclasses.
  • It's real-world code that hasn't been particularly optimised for performance.

Rather than list typeshed-stats as a dependency for this benchmark, I've instead forked my own project and committed a modified version into this PR. typeshed-stats as it exists on PyPI makes network requests using aiohttp, which would lead to indeterminacy in the benchmark; the version of typeshed-stats that I've included in this PR doesn't make network requests. I also made a few other changes to reduce the number of third-party dependencies, such as switching from attrs to dataclasses, and removing the use of rich. Other than that, however, I've tried to keep the changes to a minimum, so that it accurately reflects the "real" version of typeshed-stats. That means that there are a few unused functions in this PR that could potentially be removed, if we think that would be better.
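
To give a sense of the kind of change involved in the attrs-to-dataclasses swap, here's a minimal sketch; the class and field names are invented for illustration and aren't taken from typeshed-stats:

```python
from dataclasses import dataclass, field

# Hypothetical example of the attrs -> dataclasses swap described above.
# With attrs, the class might have been written as:
#
#     @attrs.define
#     class PackageInfo:
#         name: str
#         stub_lines: int = 0
#
# The stdlib-only equivalent needs no third-party dependency:
@dataclass
class PackageInfo:
    name: str
    stub_lines: int = 0
    notes: list[str] = field(default_factory=list)
```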

Two more decisions I made when making this PR:

  • typeshed-stats as it exists on PyPI uses tomli on 3.10, and the stdlib tomllib on 3.11. In this PR, I just use tomllib unconditionally (the usual version-conditional import is sketched just after this list). That means that this benchmark has zero external dependencies, but it also means that the benchmark only runs on 3.11+. If we wanted, I could add a dependency on tomli and use it unconditionally instead, which would make the benchmark 3.10-compatible.
  • I wanted to benchmark the use of typeshed-stats as a CLI tool, so I've added the whole project to a subdirectory of the benchmark and listed it as a local dependency in the benchmark's requirements.txt file. That was the only way I could see of getting the runner.bench_command() function to work (a rough sketch of that invocation is also below), but it doesn't look like any other benchmark currently does this. Let me know what you think!
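
For the first point, the usual version-conditional import looks something like this (a generic sketch of the pattern, not the exact code in this PR):

```python
import sys

if sys.version_info >= (3, 11):
    import tomllib  # standard library on 3.11+
else:
    import tomli as tomllib  # third-party backport with the same API on <= 3.10

def load_pyproject(path: str) -> dict:
    # Both modules require the file to be opened in binary mode.
    with open(path, "rb") as f:
        return tomllib.load(f)
```

For the second point, here's a minimal sketch of what driving the CLI through pyperf's bench_command() might look like; the module name and arguments are placeholders rather than the exact invocation in this PR:

```python
import sys
import pyperf

if __name__ == "__main__":
    runner = pyperf.Runner()
    runner.metadata["description"] = "Run the typeshed-stats CLI over a vendored copy of typeshed"
    # bench_command() re-runs the command in fresh worker processes, so the
    # package has to be installable in the benchmark's environment (hence
    # listing it as a local dependency in requirements.txt rather than
    # importing it directly from this script).
    runner.bench_command(
        "typeshed_stats",
        [sys.executable, "-m", "typeshed_stats"],  # placeholder arguments
    )
```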

@AlexWaygood (Member, Author) commented Mar 1, 2023

The CI is failing for the Python 3.7-3.10 workflows for the same reason as #200 (comment): the requires-python field in the benchmark's pyproject.toml has no effect. I've specified requires-python = ">= 3.11" there, but the benchmark still isn't skipped on <3.11.

The CI is failing on the 3.12 workflow for the same reason that CI is failing on main: greenlet is not yet compatible with Python 3.12, and greenlet is a dependency of some of the other benchmarks (#263).

@brandtbucher (Member) left a comment


That seems like a lot of data files to me... GitHub won't even let me review the actual benchmark.

These are just copied verbatim from typeshed, I assume? If so, maybe we could just install a pinned version in our requirements (not sure if typeshed is pip-installable?) or maybe clone the typeshed repo into a temporary location in the benchmark setup (we would need to be careful that this only happens once per pyperformance run, not once per pyperf run/process... not sure how hard that is). Or maybe we could do something clever with git (like making typeshed a submodule, pinned to a specific revision)?

I'd just like to avoid adding ~230k new lines of code (and ~4k files) if at all possible.
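
If the clone-into-a-temporary-location route were taken, the setup step might look roughly like this; the pinned revision is a placeholder, and the existence check is one simple way to make sure the clone happens once per pyperformance run rather than once per pyperf process:

```python
import subprocess
from pathlib import Path

TYPESHED_REPO = "https://github.com/python/typeshed"
PINNED_REVISION = "<pinned-commit-sha>"  # placeholder: pin to a specific commit

def ensure_typeshed(dest: Path) -> Path:
    """Clone typeshed at a pinned revision, skipping the work if it's already present."""
    if not (dest / ".git").exists():
        subprocess.run(["git", "clone", TYPESHED_REPO, str(dest)], check=True)
        subprocess.run(["git", "checkout", PINNED_REVISION], cwd=dest, check=True)
    return dest
```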

@AlexWaygood (Member, Author)

That seems like a lot of data files to me... GitHub won't even let me review the actual benchmark.

Yup, very much aware — I tried to keep a clean git history here to help out with reviewing (added the data files in the first commit, the project files in the second commit, and the actual code for running the benchmark in the third commit).

These are just copied verbatim from typeshed, I assume?

Correct

not sure if typeshed is pip-installable?) or maybe clone the typeshed repo into a temporary location in the benchmark setup

Typeshed isn't pip-installable, no, but I'm very much open to the second option. I floated this issue on Discord before filing the PR, though, and I think @mdboom might have a different opinion on this question.

Note that this is basically the same approach we took with the docutils benchmark (#216), where we just copied and pasted all of docutils' docs into this repo as data files.

@AlexWaygood (Member, Author)

Or maybe we could do something clever with git (like making typeshed a submodule, pinned to a specific revision)?

This is an interesting idea. I'll look into it.

@brandtbucher (Member)

I tried to keep a clean git history here to help out with reviewing (added the data files in the first commit, the project files in the second commit, and the actual code for running the benchmark in the third commit).

Ah, nice. Thanks.

Typeshed isn't pip-installable, no, but very much open to the second option. I floated this issue on discord before filing the PR, though, and I think @mdboom might have a different opinion on this question.

Note that this is basically the same approach we took with the docutils benchmark (#216), where we just copied-and-pasted all of docutils' docs as data files into this repo.

Well, that one seems like a lot less data, haha. We have lots of benchmarks that vendor a few dozen data files in this way... maybe, if we want to do this approach, we could just run against a small subset of typeshed? Not sure how that affects runtime.

But we could wait for @mdboom to weigh in, or see if git submodules are an acceptable solution.

(Side-note: why isn't typeshed pip-installable? Does each tool need to vendor its own huge copy? I imagine there's some good reason...)

@AlexWaygood (Member, Author) commented Mar 1, 2023

Well, that one seems like a lot less data, haha.

You have to understand that my secret ulterior motive is to become the number-one contributor to pyperformance in terms of lines contributed ;)

(Side-note: why isn't typeshed pip-installable? Does each tool need to vendor its own huge copy? I imagine there's some good reason...)

A bunch of reasons. The stubs for the stdlib are ~50k lines of code (so, just under a quarter of typeshed). These do need to be vendored by each tool, because updating typeshed's stdlib stubs often results in internal tests breaking for mypy/pyright/etc. Some tools also apply their own patches to typeshed's stdlib stubs (e.g. take a look at python/mypy#13987: quite a few tests failed at first, and two follow-up PRs were needed to revert specific typeshed changes in mypy's vendored copy of the stdlib stubs).

The other ~three-quarters of typeshed consists of stubs for various third-party packages that don't have inline types at runtime (yet). These parts of typeshed are pip-installable, but they're packaged into separate stubs packages (types-requests for our requests stubs; types-colorama for our colorama stubs, etc. etc.) and auto-uploaded to PyPI on a regular basis. The machinery for auto-uploading the stubs packages is located in a separate repository for security reasons: https://github.com/typeshed-internal/stub_uploader. This means that people who just need stubs for requests (~1000 lines of stubs) can install just the stubs for requests in isolation; they don't need to pip install ~179,000 lines of stubs that are completely irrelevant to whatever project they're working on.

@AlexWaygood (Member, Author) commented Mar 1, 2023

maybe, if we want to do this approach, we could just run against a small subset of typeshed?

Some of our third-party stubs in typeshed are much larger (in terms of lines of code) than others. If we deleted just typeshed's stubs for pywin32, SQLAlchemy, influxdb-client and openpyxl from the vendored copy of typeshed here, it would reduce the size of this PR by around 52,000 lines of code. I certainly wouldn't mind doing that; we'd still be running typeshed-stats on a version of typeshed with 144 packages and 180,000 lines of code.

But anyway, I'll look into git submodules.

@hugovk (Member) commented Mar 1, 2023

If you just need to get a big bunch of files from a repo, a couple of methods we've used for Pillow are git cloning with --depth=1, or downloading and unzipping the zipball for a tag or branch. Both are pretty fast, and avoid the fiddliness of submodules.
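
The zipball variant of that idea might look roughly like this; the URL uses GitHub's source-archive endpoint, and the revision is a placeholder that would need to be pinned to keep the benchmark's input stable:

```python
import io
import urllib.request
import zipfile
from pathlib import Path

ZIPBALL_URL = "https://github.com/python/typeshed/archive/<pinned-revision>.zip"  # placeholder revision

def fetch_typeshed(dest: Path) -> None:
    """Download a typeshed source archive and extract it into `dest`."""
    with urllib.request.urlopen(ZIPBALL_URL) as response:
        archive = zipfile.ZipFile(io.BytesIO(response.read()))
    archive.extractall(dest)
```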

@mdboom (Contributor) commented May 9, 2023

Sorry I missed this PR the first time around. I did say that usually we just include the data in the repo, but for something this large, maybe it's pushing it. If installing from PyPI works here, that seems like the best approach (since support for installing dependencies is already baked into pyperformance). If not on PyPI, would specifying a git dependency in requirements.txt work? Cloning directly from git may work, as long as it can be done at dependency install time so it doesn't happen repeatedly. Ultimately, I don't care too much about the method as long as the exact version can be pinned (so the benchmark's performance doesn't change as the type stubs change upstream).

@AlexWaygood (Member, Author) commented May 9, 2023

I said to @brandtbucher that I'd investigate git submodules but never did, so as far as I'm concerned this PR is currently waiting on me; don't worry 🙂

I'll try to get back to this PR soon, though I'm quite busy at the moment...

@AA-Turner (Member)

For comparison, #200 added ~1.5x the lines of this PR, so there's precedent there for just committing the data directly, if that would unblock this.

A
