Add simple memory allocation benchmark #132

Open · wants to merge 5 commits into base: main
Conversation

tiran (Member) commented Feb 8, 2022

No description provided.
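(The PR carries no description. For context, a minimal memory-allocation benchmark of the kind the title suggests might look like the sketch below. This is illustrative only, not the PR's actual code: the function name and loop body are assumptions, and a real pyperformance benchmark would be driven by `pyperf.Runner` rather than timed by hand.)

```python
import time

def bench_allocations(loops, size=64, batch=100):
    # Hypothetical inner loop: allocate and discard batches of small
    # objects so the measured time is dominated by the allocator.
    t0 = time.perf_counter()
    for _ in range(loops):
        data = [bytes(size) for _ in range(batch)]
        del data  # drop all references so the objects are freed
    return time.perf_counter() - t0

elapsed = bench_allocations(10)
```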

pablogsal (Member)

I'm happy landing this. Let's give some days if @vstinner wants to take a look.

vstinner (Member) commented Feb 8, 2022

Does it belong to pyperformance which is supposed to be a set of "macro benchmarks"?

IMO it's fine to have benchmarks outside pyperformance. @ericsnowcurrently wants to add the ability to plug your own benchmark suite into pyperformance using configuration files, but I haven't followed his PR recently.

pablogsal (Member)

> Does it belong to pyperformance which is supposed to be a set of "macro benchmarks"?

I would say yes: we have things that are quite small in what they do anyway. C'mon, we even have unpacking tests:

for _ in range_it:
    # 400 unpackings
    a, b, c, d, e, f, g, h, i, j = to_unpack
    a, b, c, d, e, f, g, h, i, j = to_unpack
    a, b, c, d, e, f, g, h, i, j = to_unpack
    a, b, c, d, e, f, g, h, i, j = to_unpack
    a, b, c, d, e, f, g, h, i, j = to_unpack
    a, b, c, d, e, f, g, h, i, j = to_unpack
    a, b, c, d, e, f, g, h, i, j = to_unpack
    a, b, c, d, e, f, g, h, i, j = to_unpack
    a, b, c, d, e, f, g, h, i, j = to_unpack

vstinner (Member) commented Feb 8, 2022

> C'mon, we even have unpacking tests:

Maybe these tests don't belong to pyperformance :-(

> I would say yes: we have things that are quite small in what they do anyway.

It's up to you.

ericsnowcurrently (Member)

> Does it belong to pyperformance which is supposed to be a set of "macro benchmarks"?

> @ericsnowcurrently wants to add the ability to plug your own benchmark suite in pyperformance

My PR landed a while back and is part of the most recent release.

That said, we already have a bunch of micro benchmarks so I'm not opposed to adding more. (We can separate the bunch later.) However, it would be worth marking them as "micro" sooner rather than later so they are easy to skip. That would involve adding/updating tags = "micro" in the [tool.pyperformance] section of each benchmark's pyproject.toml. For this PR it would probably be okay to do so just for the new benchmark.
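The tagging Eric describes would look roughly like this in a benchmark's pyproject.toml (a sketch based on his comment; the benchmark name here is illustrative, not from the PR):

```toml
[tool.pyperformance]
name = "memory_alloc"  # illustrative benchmark name
tags = "micro"         # lets runners select or skip micro benchmarks
```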

@tiran tiran marked this pull request as ready for review February 8, 2022 17:15
pablogsal (Member)

@tiran The benchmark fails on PyPy because sys.getsizeof doesn't work there. Check the CI output.
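One way to handle this (a sketch, not necessarily what the PR ended up doing) is to guard the call: on PyPy, sys.getsizeof raises TypeError for most objects unless a default is supplied, so the benchmark can fall back to a sentinel value there.

```python
import sys

def safe_getsizeof(obj, default=-1):
    # sys.getsizeof is a CPython implementation detail; PyPy raises
    # TypeError for most objects, so fall back to a sentinel there.
    try:
        return sys.getsizeof(obj)
    except TypeError:
        return default

size = safe_getsizeof(b"x" * 100)  # real size on CPython, -1 on PyPy
```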

4 participants