Add a script for generating the quick-tuning perfconfigs list #1689
base: develop
Conversation
Force-pushed from bf24e45 to 588592a
Codecov Report
Attention: Patch coverage is

Additional details and impacted files
@@ Coverage Diff @@
## develop #1689 +/- ##
===========================================
- Coverage 77.76% 77.59% -0.17%
===========================================
Files 100 100
Lines 27866 27897 +31
Branches 4063 4072 +9
===========================================
- Hits 21671 21648 -23
- Misses 4540 4584 +44
- Partials 1655 1665 +10
def get_top_n_perfconfigs_per_problems(self, df, targetColumns):
    """
    Identifies the top perfcofnigs for each problem based on a threshold
perfcofnigs -> perfconfigs
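For context, a threshold-based per-problem selection like the one this docstring describes could look roughly like the sketch below. The column names (`problem`, `perfconfig`, `tflops`), the function name, and the sample data are illustrative assumptions, not the PR's actual API:

```python
import pandas as pd

def top_perfconfigs_per_problem(df, threshold=0.95):
    """For each problem, keep the perfconfigs whose achieved TFlops is
    within `threshold` of that problem's best perfconfig."""
    result = {}
    for problem, group in df.groupby("problem"):
        best = group["tflops"].max()
        keep = group[group["tflops"] >= threshold * best]
        result[problem] = sorted(keep["perfconfig"].tolist())
    return result

# Toy benchmark data (made-up numbers)
df = pd.DataFrame({
    "problem": ["p1", "p1", "p1", "p2", "p2"],
    "perfconfig": ["c1", "c2", "c3", "c1", "c3"],
    "tflops": [100.0, 96.0, 80.0, 50.0, 49.0],
})
# p1: best is 100, so c1 (100) and c2 (96) pass the 0.95 cutoff, c3 (80) does not
# p2: best is 50, so both c1 (50) and c3 (49) pass
print(top_perfconfigs_per_problem(df))
```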
""" | ||
Finds the minimal set of perfconfigs that cover all | ||
problems using set cover optimizaiton. | ||
Returns : A dictionary containing data types as keys and thier |
thier -> their
why does this PR change this? and other tests?
params = {initParameters, nInitParameters};
if (opType == KernelType::Gemm) {
  switch (dataTypeA.getIntOrFloatBitWidth()) {
  case 8:
this would apply to both f8 and i8 here, is that the goal?
// BEGIN_GEMM_Wmma_i8_DECS
static constexpr size_t nInitParametersForward8BitGemm = 15;
static const InitParamsAccel initParametersForward8BitGemm[nInitParametersForward8BitGemm];
is this only for forward? otherwise we can keep names consistent.
parser.add_argument("--th",
                    required=False,
                    type=float,
                    default=0.93)
is 0.93 the number used here? from what I heard 0.95 is what we consider noise, by using 0.93 we could be getting worse performance?
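To illustrate the reviewer's concern with a toy example (the ratios below are made-up numbers): if each perfconfig's performance is expressed as a ratio to the best one for a problem, lowering the threshold from 0.95 to 0.93 admits configs up to 7% below the best, which is outside the stated noise band.

```python
# Hypothetical performance ratios (perfconfig tflops / best tflops)
ratios = {"cfg_a": 1.00, "cfg_b": 0.955, "cfg_c": 0.94, "cfg_d": 0.90}

def passing(th):
    # A perfconfig counts as "top" when its ratio meets the threshold
    return sorted(c for c, r in ratios.items() if r >= th)

print(passing(0.95))  # ['cfg_a', 'cfg_b']
print(passing(0.93))  # ['cfg_a', 'cfg_b', 'cfg_c']  <- cfg_c is 6% slower
```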
# Create coverage matrix
A = np.zeros((n, m), dtype=int)
for problem, perfconfig_list in problems_to_perfconfigs.items():
nit: just enumerate here and don't use problem_to_index
A = np.zeros((n, m), dtype=int)
for problem, perfconfig_list in problems_to_perfconfigs.items():
    i = problem_to_index[problem]
    for perfconfig in perfconfig_list:
same here
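The simplification the reviewer suggests: iterating `dict.items()` through `enumerate` yields the row index directly, so no separate `problem_to_index` lookup table is needed. A self-contained sketch with illustrative toy data:

```python
import numpy as np

# Toy inputs (made up for illustration)
problems_to_perfconfigs = {
    "p1": ["c1", "c2"],
    "p2": ["c2", "c3"],
    "p3": ["c3"],
}
perfconfigs = ["c1", "c2", "c3"]
perfconfig_to_index = {c: j for j, c in enumerate(perfconfigs)}

n, m = len(problems_to_perfconfigs), len(perfconfigs)
A = np.zeros((n, m), dtype=int)
# enumerate() supplies the row index i, replacing problem_to_index[problem]
for i, (problem, perfconfig_list) in enumerate(problems_to_perfconfigs.items()):
    for perfconfig in perfconfig_list:
        A[i, perfconfig_to_index[perfconfig]] = 1

print(A)
```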
# Linear programming model to minimize the number of perfconfigs
prob = pulp.LpProblem("SetCoverProblems", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", range(m), cat='Binary')
prob += pulp.lpSum([x[j]] for j in range(m))
nice! could you add a comment here to explain, I think we are setting the objective function here, right?
x = pulp.LpVariable.dicts("x", range(m), cat='Binary')
prob += pulp.lpSum([x[j]] for j in range(m))
for i in range(n):
    prob += pulp.lpSum([A[i][j] * x[j]
same here, I guess these are the constraints?
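On both review questions: in PuLP, the first `prob +=` of a plain expression sets the objective (here, minimize the number of selected perfconfigs), and each subsequent `prob +=` of an inequality adds a constraint (here, one per problem: sum over j of A[i][j]*x[j] >= 1). For intuition, a dependency-free brute force over a tiny made-up coverage matrix computes the same minimum cover the ILP would:

```python
from itertools import combinations

# Toy coverage matrix: A[i][j] = 1 if perfconfig j covers problem i
A = [
    [1, 1, 0],  # p1 covered by c0 or c1
    [0, 1, 1],  # p2 covered by c1 or c2
    [1, 0, 0],  # p3 covered only by c0
]
n, m = len(A), len(A[0])

def covers_all(chosen):
    # Mirrors the ILP constraints: for every problem i,
    # sum_j A[i][j] * x[j] >= 1
    return all(any(A[i][j] for j in chosen) for i in range(n))

def min_cover():
    # Mirrors the ILP objective (minimize sum_j x[j]): try subset
    # sizes in increasing order and return the first covering one
    for k in range(1, m + 1):
        for chosen in combinations(range(m), k):
            if covers_all(chosen):
                return set(chosen)

print(min_cover())  # {0, 1}: c0 covers p1 and p3, c1 covers p2
```

Brute force is exponential in m, so the PR's ILP formulation is the right tool for real perfconfig counts; this is only a semantic mirror of the model.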
This PR adds a script for finding quick-tuning perfconfigs.
Additionally, there is an option to automatically generate the QuickTuningPerfconfigs.inc file containing the selected perfconfigs and integrate it into the codebase.
Closes: ROCm/rocMLIR-internal#1641
Closes: ROCm/rocMLIR-internal#1258
Closes: ROCm/rocMLIR-internal#1518