
Add a script for generating the quick-tuning perfconfigs list #1689

Open · wants to merge 5 commits into develop from quicktuning_gen_script

Conversation

@djramic (Contributor) commented Oct 28, 2024

This PR includes a script for finding quick-tuning perfconfigs in the following way:

  • For each tuning problem, it finds the top-n perfconfigs based on a given threshold (sketched below).
  • Using a linear programming model, it identifies the smallest subset of perfconfigs that covers all problems.

Additionally, there is an option to automatically generate the QuickTuningPerfconfigs.inc file that contains the selected perfconfigs and to integrate it into the codebase.
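For illustration, a minimal sketch of the threshold-based selection step, assuming the tuning results arrive as a pandas DataFrame with hypothetical problem, perfconfig, and tflops columns (the script's actual schema and function signatures may differ):

```python
import pandas as pd

def top_perfconfigs_per_problem(df: pd.DataFrame, th: float = 0.93) -> dict:
    """For each problem, keep every perfconfig whose measured performance is
    at least `th` times that problem's best result."""
    problems_to_perfconfigs = {}
    for problem, group in df.groupby("problem"):
        best = group["tflops"].max()
        kept = group.loc[group["tflops"] >= th * best, "perfconfig"]
        problems_to_perfconfigs[problem] = kept.tolist()
    return problems_to_perfconfigs
```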

closes : ROCm/rocMLIR-internal#1641
closes : ROCm/rocMLIR-internal#1258
closes : ROCm/rocMLIR-internal#1518

@djramic djramic force-pushed the quicktuning_gen_script branch from bf24e45 to 588592a Compare November 18, 2024 10:20

codecov bot commented Nov 18, 2024

Codecov Report

Attention: Patch coverage is 74.00000% with 13 lines in your changes missing coverage. Please review.

Project coverage is 77.70%. Comparing base (5f51701) to head (738bbaf).
Report is 15 commits behind head on develop.

Files with missing lines | Patch % | Lines
...lir/lib/Dialect/Rock/Tuning/GridwiseGemmParams.cpp | 74.00% | 11 Missing and 2 partials ⚠️
Additional details and impacted files
@@             Coverage Diff             @@
##           develop    #1689      +/-   ##
===========================================
- Coverage    77.76%   77.70%   -0.07%     
===========================================
  Files          100      100              
  Lines        27866    27897      +31     
  Branches      4063     4072       +9     
===========================================
+ Hits         21671    21678       +7     
- Misses        4540     4552      +12     
- Partials      1655     1667      +12     
Flag | Coverage Δ
mfma | 77.70% <74.00%> (-0.07%) ⬇️

Flags with carried forward coverage won't be shown.



def get_top_n_perfconfigs_per_problems(self, df, targetColumns):
"""
Identifies the top perfcofnigs for each problem based on a threshold
Contributor:
perfcofnigs -> perfconfigs

"""
Finds the minimal set of perfconfigs that cover all
problems using set cover optimizaiton.
Returns : A dictionary containing data types as keys and thier
Contributor:
thier -> their

Contributor:
why does this PR change this? and other tests?

Contributor Author:
Since affix-params gathers perfconfigs from the list, the tests need to be adjusted after the list is updated.

params = {initParameters, nInitParameters};
if (opType == KernelType::Gemm) {
switch (dataTypeA.getIntOrFloatBitWidth()) {
case 8:
Contributor:
this would apply to both f8 and i8 here, is that the goal?

Contributor Author:
Here I've just added selecting the conv or gemm list based on opType. I think we're using non-accel configs for fp8.


// BEGIN_GEMM_Wmma_i8_DECS
static constexpr size_t nInitParametersForward8BitGemm = 15;
static const InitParamsAccel initParametersForward8BitGemm[nInitParametersForward8BitGemm];
Contributor:
is this only for forward? otherwise we can keep names consistent.

Contributor Author:
Forward is relevant for conv, but for gemm it doesn't seem quite right to keep it there. I'll correct that.

parser.add_argument("--th",
required=False,
type=float,
default=0.93)
Contributor:
Is 0.93 the number used here? From what I heard, 0.95 is what we consider noise; by using 0.93 we could be getting worse performance?

Contributor Author:
That's correct, but as I understood it, we needed faster quick-tuning, and it was acceptable to sacrifice a few percent of performance. So I settled on 0.93, since it noticeably reduced the number of configs.
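To make the trade-off concrete, a made-up numeric example (not measured data):

```python
# Suppose the best perfconfig for some problem reaches 100 TFLOPS.
best = 100.0
cutoff_strict = 0.95 * best  # 95.0: only near-best configs survive the filter
cutoff_loose = 0.93 * best   # 93.0: a few slightly slower configs also qualify,
# giving the set-cover step more freedom to reuse one config across many
# problems, which shrinks the final quick-tuning list.
```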


# Create coverage matrix
A = np.zeros((n, m), dtype=int)
for problem, perfconfig_list in problems_to_perfconfigs.items():
Contributor:
nit: just enumerate here and don't use problem_to_index

A = np.zeros((n, m), dtype=int)
for problem, perfconfig_list in problems_to_perfconfigs.items():
i = problem_to_index[problem]
for perfconfig in perfconfig_list:
Contributor:
same here

Contributor Author:
An indexing map is required here to provide a global index for each perfconfig, rather than using an index local to the given problem.
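A small, self-contained sketch of what such a global indexing scheme could look like (toy data and names chosen to mirror the snippet above, not the PR's exact code):

```python
import numpy as np

# Toy input: two problems that share one perfconfig.
problems_to_perfconfigs = {
    "gemm_1024": ["cfgA", "cfgB"],
    "gemm_2048": ["cfgB", "cfgC"],
}

# Global indices: each problem gets a row, each distinct perfconfig a column,
# so the same perfconfig always maps to the same column no matter which
# problem it was found under (a per-problem enumerate would not ensure that).
problem_to_index = {p: i for i, p in enumerate(problems_to_perfconfigs)}
all_perfconfigs = sorted({c for cfgs in problems_to_perfconfigs.values() for c in cfgs})
perfconfig_to_index = {c: j for j, c in enumerate(all_perfconfigs)}

n, m = len(problem_to_index), len(perfconfig_to_index)
A = np.zeros((n, m), dtype=int)
for problem, perfconfig_list in problems_to_perfconfigs.items():
    i = problem_to_index[problem]
    for perfconfig in perfconfig_list:
        A[i, perfconfig_to_index[perfconfig]] = 1  # problem i can use config j
```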

# Linear programming model to minimize the number of perfconfigs
prob = pulp.LpProblem("SetCoverProblems", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", range(m), cat='Binary')
prob += pulp.lpSum([x[j]] for j in range(m))
Contributor:
Nice! Could you add a comment here to explain? I think we are setting the objective function here, right?

x = pulp.LpVariable.dicts("x", range(m), cat='Binary')
prob += pulp.lpSum([x[j]] for j in range(m))
for i in range(n):
prob += pulp.lpSum([A[i][j] * x[j]
Contributor:
same here, I guess these are the constraints?
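To illustrate the two questions above (objective vs. constraints), a self-contained, commented sketch of the set-cover formulation in pulp, using a toy coverage matrix rather than the PR's real data:

```python
import pulp

# Toy coverage matrix: A[i][j] == 1 means perfconfig j covers problem i.
A = [[1, 1, 0],
     [0, 1, 1]]
n, m = len(A), len(A[0])

prob = pulp.LpProblem("SetCoverProblems", pulp.LpMinimize)
# One binary decision variable per perfconfig: x[j] == 1 selects config j.
x = pulp.LpVariable.dicts("x", range(m), cat="Binary")

# Objective: minimize the total number of selected perfconfigs.
prob += pulp.lpSum(x[j] for j in range(m))

# Constraints: every problem must be covered by at least one selected config.
for i in range(n):
    prob += pulp.lpSum(A[i][j] * x[j] for j in range(m)) >= 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
selected = [j for j in range(m) if x[j].value() > 0.5]
print(selected)  # e.g. [1]: config 1 alone covers both problems
```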

@djramic djramic requested a review from causten as a code owner December 10, 2024 18:03
@djramic djramic force-pushed the quicktuning_gen_script branch from a478ee0 to 738bbaf Compare December 11, 2024 19:38
@djramic djramic force-pushed the quicktuning_gen_script branch from bbefcf6 to 8e5cf9c Compare December 12, 2024 14:20
@causten causten requested a review from dhernandez0 December 12, 2024 15:51