[GPU] activations scaling to resolve accuracy issues for infer precision of f16 #27265

Open: e-ddykim wants to merge 63 commits into openvinotoolkit:master from e-ddykim:static_scaling
+1,036 −53

Commits (63):
- f5243a3 added the static scaling feature (e-ddykim)
- 7b63ad9 added a new rt_info scale_factor (e-ddykim)
- d2e8476 fp16 scaling for vae decoder of sdxl (e-ddykim)
- 7d974b6 resolved accuracy issue in transformer of flux.1 (e-ddykim)
- 20c9fbf removed unnecessary codes (e-ddykim)
- e2bd654 removed unnecessary codes (e-ddykim)
- 8cd7ab5 renamed to ActivationsScaling (e-ddykim)
- 70d72d4 updated code style (e-ddykim)
- 71bd3a4 updated to use multiple MatcherPass (e-ddykim)
- ac4bf53 updated code style (e-ddykim)
- 37df7dc updated code style (e-ddykim)
- b35121b added unit tests (e-ddykim)
- 890a9b2 update code style (e-ddykim)
- 497ebbc updated code style (e-ddykim)
- 1dbfc74 updated code style (e-ddykim)
- 807e822 updated code style (e-ddykim)
- 48e39fe updated for transformer of FLUX.1 (e-ddykim)
- ac2be53 disabled FullyConnectedPerLayerScaling (e-ddykim)
- 1577d35 added unit tests (e-ddykim)
- e9b0aeb fixed code style (e-ddykim)
- 91f1c50 Enable FullyConnectedHorizontalFusion with activations scaling (andrew-k-park)
- 535ad70 updated ScaleDownMultipleLayers (e-ddykim)
- 0748df4 updated code style (e-ddykim)
- d6adf1c reading ACTIVATIONS_SCALE_FACTOR from rt_info (e-ddykim)
- 9141bc9 updated to use LPT (e-ddykim)
- 29e3b55 fixed for flux.1 dynamic model (e-ddykim)
- fb9f1de fix merging faults (e-ddykim)
- 7b0f25b fixes for flux.1 (e-ddykim)
- b5480c8 update not to add redundant Convert (e-ddykim)
- 343e7b3 updated apply_rt_info (e-ddykim)
- 5dfa23b added a new ScaleDownFusion pass (e-ddykim)
- c0705fd added a new param useDefaultTransformation for activations scaling (e-ddykim)
- 5f15e38 update code style (e-ddykim)
- 3d9b5ff update code style (e-ddykim)
- dd7d943 updated clamp_fp16 tests (e-ddykim)
- 892159c code cleanup (e-ddykim)
- 52cf49e code cleanup (e-ddykim)
- cea1ef3 update code style (e-ddykim)
- ce1df9b remove redundant code (e-ddykim)
- a00c05f updated activations scaling tests (e-ddykim)
- 1afec64 updated ScaleDownFusion (e-ddykim)
- c1100c3 fixed ScaleDownFusionTest (e-ddykim)
- d6f9ff6 added MulNormTransformation and NormMulTransformation (e-ddykim)
- cf2aa96 removed apply_rt_info (e-ddykim)
- 825c433 updated activations scaling unit tests (e-ddykim)
- dfa2295 updated code style (e-ddykim)
- 43b82ca updated AddTransformation to use output_type instead of fp32 (e-ddykim)
- 1140074 added a new EliminateMultiplyX1 pass (e-ddykim)
- 1599897 update code style (e-ddykim)
- 874ce80 added a new MulMulTransformation (e-ddykim)
- 926a3fe added MulDownTransformation (e-ddykim)
- 46b283b fixed code style (e-ddykim)
- 8c58418 added a functional test (e-ddykim)
- a86119d applied reviews (e-ddykim)
- 1fb1eeb merged master (e-ddykim)
- 2e7b2e2 applied reviews (e-ddykim)
- 4307547 updated to preserve the original output precision (e-ddykim)
- 17da8a5 updated per reviews (e-ddykim)
- b5d9099 reverted to apply activations_scale_factor from rt_info (e-ddykim)
- 6771211 added MulMulTransformationTest (e-ddykim)
- 9a3cda3 updated MulShareTransformation (e-ddykim)
- e667219 updated scaling tests (e-ddykim)
- c3d6519 applied reviews (e-ddykim)
File: ...mmon/transformations/include/transformations/common_optimizations/activations_scaling.hpp (104 additions, 0 deletions)
```cpp
// Copyright (C) 2024 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//

#pragma once

#include <memory>

#include "openvino/pass/matcher_pass.hpp"
#include "transformations_visibility.hpp"

namespace ov {
namespace pass {

class TRANSFORMATIONS_API ActivationsScaling;

namespace activations_scaling {

class TRANSFORMATIONS_API ScaleDownSingleLayer;
class TRANSFORMATIONS_API EliminateScalarMul;
class TRANSFORMATIONS_API MulConcatTransformation;
class TRANSFORMATIONS_API MulShareTransformation;
class TRANSFORMATIONS_API MoveDownScalarMul;

}  // namespace activations_scaling
}  // namespace pass
}  // namespace ov
```
```cpp
// ActivationsScaling makes activation values smaller to prevent overflow due to
// the limited range of FP16.
// This feature is controlled by ov::hint::activations_scale_factor.
// For example, when this property is set to 16, activations are divided by 16.
// If ov::hint::activations_scale_factor is less than or equal to zero, the
// feature is disabled.

// Adds scale_down and scale_up layers around Convolution and MatMul nodes:
//   Conv/MatMul
//     ==>
//   Multiply(scale_down by scale_factor) --> Conv/MatMul --> Multiply(scale_up by scale_factor)
class ov::pass::activations_scaling::ScaleDownSingleLayer : public ov::pass::MatcherPass {
public:
    OPENVINO_MATCHER_PASS_RTTI("ScaleDownSingleLayer", "0");
    ScaleDownSingleLayer(float scale_factor, ov::element::Type scaled_prec);
};
```
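The effect of this pass can be sketched numerically. The following is an illustrative NumPy sketch, not OpenVINO code: `matmul_fp16` is a hypothetical toy kernel that accumulates strictly in fp16, and `layer_norm` stands in for a downstream scale-invariant consumer that lets the scale_up Multiply be dropped (the job of EliminateScalarMul, declared above).

```python
# Illustrative sketch (assumptions: strict-fp16 accumulation, a LayerNorm consumer).
import numpy as np

def matmul_fp16(a, b):
    # toy MatMul kernel whose partial sums stay in fp16
    out = np.zeros((a.shape[0], b.shape[1]), dtype=np.float16)
    for k in range(a.shape[1]):
        out += a[:, k : k + 1] * b[k : k + 1, :]
    return out

def layer_norm(x):
    # fp32 LayerNorm, invariant to scaling its input by a positive scalar
    x = x.astype(np.float32)
    return (x - x.mean(-1, keepdims=True)) / (x.std(-1, keepdims=True) + 1e-12)

a = np.full((1, 64), 40.0, dtype=np.float16)                      # large activations
b = np.repeat(np.array([[32.0, 64.0]], dtype=np.float16), 64, axis=0)

naive = matmul_fp16(a, b)          # partial sums exceed the fp16 max (~65504) -> inf
assert np.isinf(naive).all()

scale = np.float16(16.0)           # e.g. ov::hint::activations_scale_factor = 16
scaled = matmul_fp16(a / scale, b) # scale_down --> MatMul: every partial sum fits
assert np.isfinite(scaled).all()

# LayerNorm cancels the scalar, so the scale_up Multiply can be eliminated
assert np.allclose(layer_norm(scaled), [[-1.0, 1.0]], atol=1e-4)
```

With all-positive products the naive accumulator overflows mid-reduction, while the scaled version keeps every partial sum well inside the fp16 range and the normalized output is unchanged.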
```cpp
// Normalization and ShapeOf have the following property:
//
//   Norm(input * const_a) = Norm(input)
//
// So, we can skip a Multiply that is connected only to Normalization and ShapeOf nodes:
//
//   input --> Multiply --> Normalization/ShapeOf
//     ==>
//   input --> Normalization/ShapeOf
class ov::pass::activations_scaling::EliminateScalarMul : public ov::pass::MatcherPass {
public:
    OPENVINO_MATCHER_PASS_RTTI("EliminateScalarMul", "0");
    EliminateScalarMul();
};
```
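The invariance the comment relies on can be checked with a simple RMS-style normalization in NumPy (illustrative only; OpenVINO's Normalization ops differ in details such as epsilon handling):

```python
import numpy as np

def rms_norm(x):
    # dividing x by its own RMS cancels any scalar factor on x
    x = x.astype(np.float64)
    return x / np.sqrt((x * x).mean(-1, keepdims=True))

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 8))
c = 0.0625  # the scalar constant from the preceding Multiply

# Norm(input * const_a) == Norm(input): the Multiply can be eliminated
assert np.allclose(rms_norm(x * c), rms_norm(x))
```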
```cpp
//   input_a   const_a    input_b   const_b    input_c   const_c
//        \    /               \    /               \    /
//       Multiply_a          Multiply_b          Multiply_c
//             \                  |                  /
//              \                 |                 /
//               ------------- Concat --------------
//   ==>
//             (const_a              (const_b              (const_c
//   input_a   /const_c)   input_b   /const_c)   input_c   /const_c)
//        \    /                \    /                \    /
//       Multiply_a           Multiply_b           Multiply_c
//             \                   |                   /
//              \                  |                  /
//               -------------- Concat ---------------
//                                 |    const_c
//                                 |   /
//                              Multiply
class ov::pass::activations_scaling::MulConcatTransformation : public ov::pass::MatcherPass {
public:
    OPENVINO_MATCHER_PASS_RTTI("MulConcatTransformation", "0");
    MulConcatTransformation();
};
```
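The rewrite above is a value-preserving identity: dividing each branch constant by const_c and multiplying the Concat result by const_c reproduces the original per-branch scales. A small NumPy check (illustrative, not OpenVINO code):

```python
import numpy as np

rng = np.random.default_rng(1)
input_a, input_b, input_c = (rng.standard_normal(4) for _ in range(3))
const_a, const_b, const_c = 2.0, 4.0, 8.0   # per-branch scalar constants

before = np.concatenate([input_a * const_a, input_b * const_b, input_c * const_c])

# fold each branch constant into a ratio against const_c, then apply one
# shared Multiply by const_c after the Concat
after = np.concatenate([input_a * (const_a / const_c),
                        input_b * (const_b / const_c),
                        input_c * (const_c / const_c)]) * const_c

assert np.allclose(before, after)
```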
```cpp
//        input                        input
//        /   \                          |
//     Norm    Mul        ==>           Mul (expected to be fused into the input layer)
//      |       |                      /   \
//    op_a     op_b                 Norm    op_b
//                                   |
//                                 op_a
class ov::pass::activations_scaling::MulShareTransformation : public ov::pass::MatcherPass {
public:
    OPENVINO_MATCHER_PASS_RTTI("MulShareTransformation", "0");
    MulShareTransformation();
};
```
```cpp
//   input_b   scalar                input_a   input_b
//        \    /                          \    /
//   input_a  Mul_b       ==>            Mul_a'   scalar
//        \    /                              \    /
//        Mul_a                               Mul_b' (expected to be merged with Mul_a')
class ov::pass::activations_scaling::MoveDownScalarMul : public ov::pass::MatcherPass {
public:
    OPENVINO_MATCHER_PASS_RTTI("MoveDownScalarMul", "0");
    MoveDownScalarMul();
};
```
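MoveDownScalarMul relies only on multiplication being associative and commutative, so the scalar can be pushed below the tensor-tensor Multiply without changing the result. A short NumPy check (illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
input_a = rng.standard_normal((2, 3))
input_b = rng.standard_normal((2, 3))
scalar = 0.25

before = input_a * (input_b * scalar)   # Mul_a(input_a, Mul_b(input_b, scalar))
after = (input_a * input_b) * scalar    # Mul_a'(input_a, input_b), then scalar Mul_b'
assert np.allclose(before, after)
```

Pushing scalar Multiplies toward the graph's outputs collects them next to one another, where they can be merged or eliminated by the other passes above.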
Review comment (on MulConcatTransformation):

This transformation duplicates ConcatTransformation behavior. I'd suggest re-enabling ConcatTransformation (it is currently disabled) and removing MulConcatTransformation. The subgraph test you provided passes successfully with these changes.

Reply:

Thanks @e-ddykim for the help: it was found that the current ConcatTransformation implementation doesn't handle all the cases which this transformation is able to handle. I created a ticket for ConcatTransformation improvement: CVS-160325. After it is implemented, we will be able to remove MulConcatTransformation and reuse ConcatTransformation.