[GPU] activations scaling to resolve accuracy issues for infer precision of f16 #27265

Status: Open. Wants to merge 63 commits into base: master.

Commits (63)
f5243a3  added the static scaling feature (e-ddykim, Oct 16, 2024)
7b63ad9  added a new rt_info scale_factor (e-ddykim, Oct 16, 2024)
d2e8476  fp16 scaling for vae decoder of sdxl (e-ddykim, Oct 24, 2024)
7d974b6  resolved accuracy issue in transformer of flux.1 (e-ddykim, Oct 27, 2024)
20c9fbf  removed unnecessary codes (e-ddykim, Oct 27, 2024)
e2bd654  removed unnecessary codes (e-ddykim, Oct 27, 2024)
8cd7ab5  renamed to ActivationsScaling (e-ddykim, Oct 28, 2024)
70d72d4  updated code style (e-ddykim, Oct 28, 2024)
71bd3a4  updated to use multiple MatcherPass (e-ddykim, Oct 29, 2024)
ac4bf53  updated code style (e-ddykim, Oct 29, 2024)
37df7dc  updated code style (e-ddykim, Oct 29, 2024)
b35121b  added unit tests (e-ddykim, Oct 29, 2024)
890a9b2  update code style (e-ddykim, Oct 29, 2024)
497ebbc  updated code style (e-ddykim, Oct 29, 2024)
1dbfc74  updated code style (e-ddykim, Oct 29, 2024)
807e822  updated code style (e-ddykim, Oct 29, 2024)
48e39fe  updated for transformer of FLUX.1 (e-ddykim, Nov 4, 2024)
ac2be53  disabled FullyConnectedPerLayerScaling (e-ddykim, Nov 4, 2024)
1577d35  added unit tests (e-ddykim, Nov 4, 2024)
e9b0aeb  fixed code style (e-ddykim, Nov 4, 2024)
91f1c50  Enable FullyConnectedHorizontalFusion with activations scaling (andrew-k-park, Nov 5, 2024)
535ad70  updated ScaleDownMultipleLayers (e-ddykim, Nov 11, 2024)
0748df4  updated code style (e-ddykim, Nov 11, 2024)
d6adf1c  reading ACTIVATIONS_SCALE_FACTOR from rt_info (e-ddykim, Nov 12, 2024)
9141bc9  updated to use LPT (e-ddykim, Nov 20, 2024)
29e3b55  fixed for flux.1 dynamic model (e-ddykim, Nov 26, 2024)
fb9f1de  fix merging faults (e-ddykim, Nov 26, 2024)
7b0f25b  fixes for flux.1 (e-ddykim, Nov 28, 2024)
b5480c8  update not to add redundant Convert (e-ddykim, Nov 29, 2024)
343e7b3  updated apply_rt_info (e-ddykim, Nov 29, 2024)
5dfa23b  added a new ScaleDownFusion pass (e-ddykim, Dec 2, 2024)
c0705fd  added a new param useDefaultTransformation for activations scaling (e-ddykim, Dec 2, 2024)
5f15e38  update code style (e-ddykim, Dec 2, 2024)
3d9b5ff  update code style (e-ddykim, Dec 2, 2024)
dd7d943  updated clamp_fp16 tests (e-ddykim, Dec 2, 2024)
892159c  code cleanup (e-ddykim, Dec 2, 2024)
52cf49e  code cleanup (e-ddykim, Dec 3, 2024)
cea1ef3  update code style (e-ddykim, Dec 3, 2024)
ce1df9b  remove redundant code (e-ddykim, Dec 3, 2024)
a00c05f  updated activations scaling tests (e-ddykim, Dec 3, 2024)
1afec64  updated ScaleDownFusion (e-ddykim, Dec 4, 2024)
c1100c3  fixed ScaleDownFusionTest (e-ddykim, Dec 4, 2024)
d6f9ff6  added MulNormTransformation and NormMulTransformation (e-ddykim, Dec 8, 2024)
cf2aa96  removed apply_rt_info (e-ddykim, Dec 8, 2024)
825c433  updated activations scaling unit tests (e-ddykim, Dec 8, 2024)
dfa2295  updated code style (e-ddykim, Dec 8, 2024)
43b82ca  updated AddTransformation to use output_type instead of fp32 (e-ddykim, Dec 10, 2024)
1140074  added a new EliminateMultiplyX1 pass (e-ddykim, Dec 10, 2024)
1599897  update code style (e-ddykim, Dec 10, 2024)
874ce80  added a new MulMulTransformation (e-ddykim, Dec 16, 2024)
926a3fe  added MulDownTransformation (e-ddykim, Dec 17, 2024)
46b283b  fixed code style (e-ddykim, Dec 18, 2024)
8c58418  added a functional test (e-ddykim, Dec 23, 2024)
a86119d  applied reviews (e-ddykim, Dec 24, 2024)
1fb1eeb  merged master (e-ddykim, Dec 24, 2024)
2e7b2e2  applied reviews (e-ddykim, Jan 2, 2025)
4307547  updated to preserve the original output precision (e-ddykim, Jan 8, 2025)
17da8a5  updated per reviews (e-ddykim, Jan 8, 2025)
b5d9099  reverted to apply activations_scale_factor from rt_info (e-ddykim, Jan 8, 2025)
6771211  added MulMulTransformationTest (e-ddykim, Jan 8, 2025)
9a3cda3  updated MulShareTransformation (e-ddykim, Jan 9, 2025)
e667219  updated scaling tests (e-ddykim, Jan 9, 2025)
c3d6519  applied reviews (e-ddykim, Jan 9, 2025)
@@ -252,11 +252,13 @@ class LP_TRANSFORMATIONS_API LayerTransformation : public ov::pass::MatcherPass
             element::Type deqPrecision = element::f32,
             const std::vector<ov::element::Type> defaultPrecisions =
                 { ov::element::u8, ov::element::i8 },
-            const bool reshapeIgnorePerTensorQuantizationCheck = false) :
+            const bool reshapeIgnorePerTensorQuantizationCheck = false,
+            const bool scalingMode = false) :
             updatePrecisions(updatePrecisions),
             deqPrecision(deqPrecision),
             defaultPrecisions(defaultPrecisions),
-            reshapeIgnorePerTensorQuantizationCheck(reshapeIgnorePerTensorQuantizationCheck) {}
+            reshapeIgnorePerTensorQuantizationCheck(reshapeIgnorePerTensorQuantizationCheck),
+            scalingMode(scalingMode) {}

         Params& setUpdatePrecisions(const bool updatePrecisions) {
             this->updatePrecisions = updatePrecisions;

@@ -281,6 +283,8 @@ class LP_TRANSFORMATIONS_API LayerTransformation : public ov::pass::MatcherPass
         std::vector<ov::element::Type> defaultPrecisions;
         // to support GPU workarround to keep Reshape and MatMul in FP32
         bool reshapeIgnorePerTensorQuantizationCheck;
+        // to support Activations Scaling
+        bool scalingMode;
     };

     class PrecisionDetails {

@@ -352,6 +356,7 @@ class LP_TRANSFORMATIONS_API LayerTransformation : public ov::pass::MatcherPass
     element::Type deqPrecision;
     std::vector<ov::element::Type> defaultPrecisions;
     bool reshapeIgnorePerTensorQuantizationCheck;
+    bool scalingMode;

     static constexpr char originalLayerPostfix[] = "_original";
     TransformationContext* context;
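For orientation, a minimal sketch of how a caller could opt into the new flag. The call site is hypothetical; only the parameter order shown in the diff above and the public include path are assumed:

    // Hypothetical call site: build LPT params with scalingMode enabled.
    #include "low_precision/layer_transformation.hpp"

    using ov::pass::low_precision::LayerTransformation;

    LayerTransformation::Params params(
        /* updatePrecisions */ true,
        /* deqPrecision */ ov::element::f32,
        /* defaultPrecisions */ std::vector<ov::element::Type>{ ov::element::u8, ov::element::i8 },
        /* reshapeIgnorePerTensorQuantizationCheck */ false,
        /* scalingMode */ true);  // new flag added by this PR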
src/common/low_precision_transformations/src/add.cpp (13 changes: 7 additions & 6 deletions)

@@ -214,14 +214,15 @@ bool AddTransformation::transform(TransformationContext& context, ov::pass::patt
                 newSubtractFullPathValues),
             newMultiplyFullPathValues);

+        auto output_type = scalingMode ? add->get_output_element_type(0) : element::f32;
         newAddOrSubtract = std::make_shared<ov::op::TypeRelaxed<ov::opset1::Add>>(
-            std::vector<element::Type>{element::f32, element::f32}, std::vector<element::Type>{ element::f32 },
-            ov::op::TemporaryReplaceOutputType(inputs[0], element::f32).get(),
-            ov::op::TemporaryReplaceOutputType(inputs[1], element::f32).get());
+            std::vector<element::Type>{output_type, output_type}, std::vector<element::Type>{ output_type },
+            ov::op::TemporaryReplaceOutputType(inputs[0], output_type).get(),
+            ov::op::TemporaryReplaceOutputType(inputs[1], output_type).get());
         newMultiply = std::make_shared<ov::op::TypeRelaxed<ov::opset1::Multiply>>(
-            std::vector<element::Type>{element::f32, element::f32}, std::vector<element::Type>{ add->get_output_element_type(0) },
-            ov::op::TemporaryReplaceOutputType(newAddOrSubtract, element::f32).get(),
-            ov::op::TemporaryReplaceOutputType(multiplyEmptyPathValues, element::f32).get());
+            std::vector<element::Type>{output_type, output_type}, std::vector<element::Type>{ add->get_output_element_type(0) },
+            ov::op::TemporaryReplaceOutputType(newAddOrSubtract, output_type).get(),
+            ov::op::TemporaryReplaceOutputType(multiplyEmptyPathValues, output_type).get());

         NetworkHelper::insertDequantizationAfter(add, newMultiply, newAddOrSubtract);
         NetworkHelper::copyInfo(add, newAddOrSubtract);
@@ -45,6 +45,7 @@ LayerTransformation::LayerTransformation(const Params& params) :
     deqPrecision(params.deqPrecision),
     defaultPrecisions(params.defaultPrecisions),
     reshapeIgnorePerTensorQuantizationCheck(params.reshapeIgnorePerTensorQuantizationCheck),
+    scalingMode(params.scalingMode),
     context(nullptr) {}

 void LayerTransformation::setContext(TransformationContext* context) noexcept {
src/common/low_precision_transformations/src/multiply_partial.cpp (29 changes: 18 additions & 11 deletions)

@@ -79,16 +79,17 @@ bool MultiplyPartialTransformation::transform(TransformationContext& context, ov
         auto constParent = multiply->input_value(multiplyBranch.first == 0 ? 1 : 0);
         auto multiplyParentParent = multiplyParent.get_node_shared_ptr()->input_value(multiplyBranch.second);
         auto multiplyParentConst = multiplyParent.get_node_shared_ptr()->input_value(multiplyBranch.second == 0 ? 1 : 0);
+        auto input_data_type = scalingMode ? multiply->get_output_element_type(0) : element::f32;

         newMultiply = std::make_shared<ov::op::TypeRelaxed<ov::opset1::Multiply>>(
-            std::vector<ov::element::Type>{ element::f32, element::f32 },
+            std::vector<ov::element::Type>{ input_data_type, input_data_type },
             std::vector<ov::element::Type>{ multiply->get_output_element_type(0) },
-            ov::op::TemporaryReplaceOutputType(multiplyParentParent, element::f32).get(),
+            ov::op::TemporaryReplaceOutputType(multiplyParentParent, input_data_type).get(),
             ov::op::TemporaryReplaceOutputType(
                 fold<ov::opset1::Multiply>(
-                    foldConvert(multiplyParentConst, element::f32),
-                    foldConvert(constParent, element::f32)),
-                element::f32).get());
+                    foldConvert(multiplyParentConst, input_data_type),
+                    foldConvert(constParent, input_data_type)),
+                input_data_type).get());

         NetworkHelper::copyInfo(multiplyParent.get_node_shared_ptr(), newMultiply);
         NetworkHelper::copyInfo(multiply, newMultiply);

@@ -133,24 +134,30 @@ bool MultiplyPartialTransformation::transform(TransformationContext& context, ov

         // before: Y = (SC1 * (X1 - SH1)) * (SC2 * X2)
-        // after : Y = (SC1' * (X1 - SH1)) * (X2) , where :
-        //         SC1' = SC1 * SC2
+        // if scalingMode == false
+        //     after : Y = (SC1' * (X1 - SH1)) * (X2) , where :
+        //             SC1' = SC1 * SC2
+        // else
+        //     after : Y = ((X1 - SH1) * X2) * SC1' , where :
+        //             SC1' = SC1 * SC2
         auto newMultiplyValuesFullPath = fold<ov::opset1::Multiply>(multiplyValuesEmptyPath, multiplyValuesFullPath);
         OutputVector inputs{ {}, {} };
-        inputs[emptyPathIndex] = dequantizationEmptyPath.data;
+        inputs[emptyPathIndex] = scalingMode ? newMultiplyValuesFullPath : dequantizationEmptyPath.data;
+        auto input_for_fullPath = scalingMode ? dequantizationEmptyPath.data.get_node_shared_ptr() :
+                                                newMultiplyValuesFullPath;

         ov::Output<ov::Node> parent0 = dequantizationFullPath.subtract == nullptr ?
             (dequantizationFullPath.convert == nullptr ? dequantizationFullPath.data : dequantizationFullPath.convert) :
             dequantizationFullPath.subtract;

         inputs[fullPathIndex] =
-            parent0.get_node()->get_output_element_type(0) == newMultiplyValuesFullPath->get_output_element_type(0) ?
-            std::make_shared<ov::opset1::Multiply>(parent0, newMultiplyValuesFullPath) :
+            parent0.get_node()->get_output_element_type(0) == input_for_fullPath->get_output_element_type(0) ?
+            std::make_shared<ov::opset1::Multiply>(parent0, input_for_fullPath) :
             std::make_shared<ov::op::TypeRelaxed<ov::opset1::Multiply>>(
                 std::vector<element::Type>{element::f32, element::f32},
                 std::vector<element::Type>{element::f32},
                 ov::op::TemporaryReplaceOutputType(parent0, element::f32).get(),
-                ov::op::TemporaryReplaceOutputType(newMultiplyValuesFullPath, element::f32).get());
+                ov::op::TemporaryReplaceOutputType(input_for_fullPath, element::f32).get());

         newMultiply = std::make_shared<ov::op::TypeRelaxed<ov::opset1::Multiply>>(
             std::vector<element::Type>{element::f32, element::f32},
@@ -218,7 +218,6 @@ std::shared_ptr<Node> NetworkHelper::swapMultiplyAndAdd(std::shared_ptr<ov::opse
     if (multiplyConst == nullptr)
         return addAfterMultiply;

-    const auto x = multiply->input_value(multiplyInputBranch);
     auto a = as_type_ptr<ov::opset1::Constant>(multiply->get_input_node_shared_ptr(multiplyInputBranch == 0 ? 1 : 0));
     auto b = as_type_ptr<ov::opset1::Constant>(addAfterMultiply->get_input_node_shared_ptr(multiplyBranch == 0 ? 1 : 0));
     std::shared_ptr<ov::opset1::Constant> bDivA;

@@ -263,15 +262,15 @@ std::shared_ptr<Node> NetworkHelper::swapMultiplyAndAdd(std::shared_ptr<ov::opse
         bDivA = as_type_ptr<ov::opset1::Constant>(foldConvert(bDivA->output(0), a->get_element_type()));
     }

-    OutputVector inputs{ {}, {} };
-    inputs[0] = x;
-    inputs[1] = bDivA->output(0);
-
+    const auto& add_input = multiply->input_value(multiplyInputBranch);
+    // Note: precision is copied to a separate variable intentionally,
+    // since TemporaryReplaceOutputType replaces add_input's precision, whereas we need to set the original precision on newAdd's output
+    const auto add_output_precision = add_input.get_element_type();
     std::shared_ptr<ov::opset1::Add> newAdd = std::make_shared<ov::op::TypeRelaxed<ov::opset1::Add>>(
         std::vector<element::Type>{element::f32, element::f32},
-        std::vector<element::Type>{ x.get_element_type() },
-        ov::op::TemporaryReplaceOutputType(inputs[0], element::f32).get(),
-        ov::op::TemporaryReplaceOutputType(inputs[1], element::f32).get());
+        std::vector<element::Type>{ add_output_precision },
+        ov::op::TemporaryReplaceOutputType(add_input, element::f32).get(),
+        ov::op::TemporaryReplaceOutputType(bDivA, element::f32).get());
     copyInfo(addAfterMultiply, newAdd);

     auto newMultiply = std::make_shared<ov::op::TypeRelaxed<ov::opset1::Multiply>>(
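For readers who do not have swapMultiplyAndAdd memorized: the identity it relies on is x*a + b == (x + b/a) * a, with bDivA holding the folded constant b/a. A standalone numeric check (illustrative only):

    // Verifies the swap identity: x*a + b == (x + b/a) * a.
    #include <cassert>
    #include <cmath>

    int main() {
        const float x = 1.5f, a = 0.25f, b = 2.0f;
        const float bDivA = b / a;
        assert(std::fabs((x * a + b) - ((x + bDivA) * a)) < 1e-6f);
        return 0;
    }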
@@ -0,0 +1,104 @@
// Copyright (C) 2024 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//

#pragma once

#include <memory>

#include "openvino/pass/matcher_pass.hpp"
#include "transformations_visibility.hpp"

namespace ov {
namespace pass {

class TRANSFORMATIONS_API ActivationsScaling;

namespace activations_scaling {

class TRANSFORMATIONS_API ScaleDownSingleLayer;
class TRANSFORMATIONS_API EliminateScalarMul;
class TRANSFORMATIONS_API MulConcatTransformation;
class TRANSFORMATIONS_API MulShareTransformation;
class TRANSFORMATIONS_API MoveDownScalarMul;

} // namespace activations_scaling
} // namespace pass
} // namespace ov

// ActivationsScaling makes activation values smaller to prevent overflow due to the limited range of FP16.
// This feature is controlled by ov::hint::activations_scale_factor.
// For example, when this property is set to 16, activations are divided by 16.
// If ov::hint::activations_scale_factor is less than or equal to zero, the feature is disabled.

// Add scale_down and scale_up layers around Convolution and MatMul nodes
// Conv/MatMul
// ==>
// Multiply(scale_down by scale_factor) --> Conv/MatMul --> Multiply(scale_up by scale_factor)
class ov::pass::activations_scaling::ScaleDownSingleLayer : public ov::pass::MatcherPass {
public:
OPENVINO_MATCHER_PASS_RTTI("ScaleDownSingleLayer", "0");
ScaleDownSingleLayer(float scale_factor, ov::element::Type scaled_prec);
};
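// Illustrative note: the inserted pair of Multiplys is mathematically a no-op
// because Conv and MatMul are linear in their activations, e.g.
// MatMul(X / s, W) * s == MatMul(X, W); intermediate activations, however,
// stay s times smaller and within the FP16 range. A bias added inside the
// scaled region must be handled consistently so it is not scaled twice.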

// Normalization and ShapeOf have the following property.
//
// Norm(input * const_a) = Norm(input)
//
// So, we can skip a Multiply that is connected to Normalization or ShapeOf.
//
// input --> Multiply --> Normalization/ShapeOf
// ==>
// input --> Normalization/ShapeOf
class ov::pass::activations_scaling::EliminateScalarMul : public ov::pass::MatcherPass {
public:
OPENVINO_MATCHER_PASS_RTTI("EliminateScalarMul", "0");
EliminateScalarMul();
};
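// Illustrative derivation for a per-tensor scalar c > 0:
//   Norm(c * x) = (c * x - mean(c * x)) / std(c * x)
//               = c * (x - mean(x)) / (c * std(x))
//               = Norm(x)
// and ShapeOf(c * x) == ShapeOf(x), so the scalar Multiply can be removed.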

//   input_a  const_a     input_b  const_b     input_c  const_c
//       \      /             \      /             \      /
//      Multiply_a           Multiply_b           Multiply_c
//              \                 |                  /
//               \                |                 /
//                ------------- Concat -------------
// ==>
//            (const_a            (const_b            (const_c
//   input_a  /const_c)  input_b  /const_c)  input_c  /const_c)
//       \      /            \      /            \      /
//      Multiply_a          Multiply_b          Multiply_c
//              \                |                  /
//               \               |                 /
//                ------------- Concat ------------
//                                |  const_c
//                                |  /
//                             Multiply
class ov::pass::activations_scaling::MulConcatTransformation : public ov::pass::MatcherPass {
public:
OPENVINO_MATCHER_PASS_RTTI("MulConcatTransformation", "0");
MulConcatTransformation();
};
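// Illustrative identity behind the rewrite (const_c is a per-tensor scalar):
//   Concat(a*Ca, b*Cb, c*Cc) == Concat(a*(Ca/Cc), b*(Cb/Cc), c*(Cc/Cc)) * Cc,
// so a single scalar Multiply by const_c can be factored out below the Concat.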
Comment on lines +76 to +80

Contributor: This transformation duplicates ConcatTransformation behavior. I'd suggest reenabling ConcatTransformation (it is currently disabled) and removing MulConcatTransformation. The subgraph test you provided passes successfully with these changes.

Contributor: Thanks @e-ddykim for the help: it was found that the current ConcatTransformation implementation doesn't handle all the cases that this transformation can handle. I created a ticket for ConcatTransformation improvement: CVS-160325. After it is implemented, we will be able to remove MulConcatTransformation and reuse ConcatTransformation.

//      input                  input
//      /   \                    |
//   Norm    Mul    ==>         Mul (expect to be fused into the input layer)
//    |       |                /   \_
//  op_a     op_b           Norm    op_b
//                            |
//                          op_a
class ov::pass::activations_scaling::MulShareTransformation : public ov::pass::MatcherPass {
public:
OPENVINO_MATCHER_PASS_RTTI("MulShareTransformation", "0");
MulShareTransformation();
};
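// Illustrative note: this again uses the scale invariance Norm(x * s) == Norm(x),
// so Norm can read the already-scaled tensor while the Mul moves next to the
// producer, where it is expected to fuse.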

//           input_b   scalar          input_a   input_b
//              \        /                 \        /
//  input_a    Mul_b         ==>            Mul_a'    scalar
//      \       /                              \        /
//       Mul_a                                  Mul_b' (expect to be merged with Mul_a')
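// Illustrative identity: input_a * (input_b * s) == (input_a * input_b) * s for a
// scalar s (associativity and commutativity of elementwise Multiply), which is why
// the scalar Mul can sink below and later merge with other scalar Muls.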
class ov::pass::activations_scaling::MoveDownScalarMul : public ov::pass::MatcherPass {
public:
OPENVINO_MATCHER_PASS_RTTI("MoveDownScalarMul", "0");
MoveDownScalarMul();
};
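Tying this back to the hint named at the top of the header, a hypothetical end-to-end sketch of enabling the feature when compiling for GPU; the model path and the factor value 16 are placeholders:

    #include "openvino/openvino.hpp"

    int main() {
        ov::Core core;
        auto model = core.read_model("model.xml");  // placeholder model
        // With a factor of 16, activations are divided by 16 before scaled
        // Conv/MatMul layers and multiplied back by 16 after them.
        auto compiled = core.compile_model(model, "GPU",
            ov::hint::inference_precision(ov::element::f16),
            ov::hint::activations_scale_factor(16.0f));
        return 0;
    }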
@@ -12,7 +12,7 @@ namespace ov {

 TRANSFORMATIONS_API void mark_as_dequantization_node(const std::shared_ptr<Node>& node);

-TRANSFORMATIONS_API bool is_dequantization_node(const std::shared_ptr<Node>& node);
+TRANSFORMATIONS_API bool is_dequantization_node(const std::shared_ptr<const Node>& node);

 /**
  * @ingroup ov_runtime_attr_api