(Apologies if this is not the proper place or way for me to ask this question.)
I recently discovered that @benchmark does not work the way I thought it did, and hence is not benchmarking what I thought it was. I would be very grateful if one of the authors (perhaps @jrevels?) could take a few moments to clarify how it works and the rationale behind it.
Suppose we have defined a function f and run the following:
x = 456;
y = 789;
@btime f(123, $x, y)
I read somewhere that the aim of BenchmarkTools is to estimate the cost of executing a given expression in typical use; in Julia the scope is usually a function. The manual also gives the impression that interpolation replaces subexpressions with their values (as it does in string and macro interpolation). Thus I imagined that @btime f(123, $x, y) would expand to something like
wrapper(y_) = f(123, 456, y_);
wrapper(y); # get compilation out of the way
start_timer()
wrapper(y)
stop_timer()
That is, (1) the expression in question is wrapped in a function, (2) literal constants remain as they are, (3) interpolated variables are replaced by their (globally evaluated) values, and (4) uninterpolated arguments in the original expression become parameters of the wrapping function.
However, from looking at generate_benchmark_definition(), it seems that the expanded code is something more like
wrapper(x_) = f(123, x_, y);
wrapper(x); # get compilation out of the way
start_timer()
wrapper(x)
stop_timer()
That is, (3') interpolated arguments become parameters of the wrapping function, and (4') uninterpolated arguments remain as global-scope bindings (evaluated during the benchmark).
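To make the difference between the two readings concrete, here is a plain-Julia sketch (no BenchmarkTools involved; the `wrapper_*` names are mine, not library internals, and `f` is a stand-in definition): under the actual semantics the uninterpolated `y` is looked up in global scope at call time, so rebinding it changes what gets measured, whereas under my imagined semantics `$x` would have been frozen at expansion time.

```julia
# Stand-in definition for f; assumed only for illustration.
f(a, b, c) = a + b + c

x = 456
y = 789

# (3')/(4'): actual reading -- interpolated $x becomes a parameter,
# while y remains a global binding read on every call.
wrapper_actual(x_) = f(123, x_, y)

# (3)/(4): imagined reading -- $x spliced as the literal 456,
# and the uninterpolated y becomes the parameter.
wrapper_imagined(y_) = f(123, 456, y_)

@assert wrapper_actual(x) == 1368       # 123 + 456 + 789
@assert wrapper_imagined(y) == 1368     # same result initially

y = 0  # rebind the global after the wrappers are defined

@assert wrapper_actual(x) == 579        # global y is re-read: 123 + 456 + 0
@assert wrapper_imagined(789) == 1368   # unaffected; y was never captured
```

Note also that because `y` is a non-const global in the actual reading, its type is unknown to the compiler inside `wrapper_actual`, which is exactly the kind of overhead a benchmark of this expression would include.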
Is this second description accurate (modulo details like GC handling and sample repetition)?
If so, what are/were the reasons for implementing it that way instead of how I originally imagined?
Thank you for your insights.