When the `GetRegistrationFromOpCode` function parses a maliciously crafted model and the `builtin_code` is `tflite::BuiltinOperator_CALL_ONCE`, the subsequent inference enters an unbounded recursion: `tflite::Subgraph::Invoke` -> `tflite::Subgraph::InvokeImpl` -> `tflite::Subgraph::OpInvoke` -> `tflite::ops::builtin::call_once_kernel::Eval`, which invokes the subgraph again, and the cycle repeats until the stack is exhausted.
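To illustrate why the recursion cannot terminate, here is a heavily simplified, self-contained sketch of the calling pattern (these are stand-in functions, not TFLite source): two functions that call each other with no base case, just as `Subgraph::Invoke` and `call_once_kernel::Eval` do for the crafted model. Compiled without optimizations, this overflows the stack the same way the backtrace below does.

```cpp
// Simplified stand-in for the failure mode, NOT TFLite source code.
// In the crafted model, the CALL_ONCE kernel's initialization subgraph
// leads back to a subgraph containing the same CALL_ONCE op, so each
// Eval() re-enters Invoke() with no base case.
void EvalCallOnce(int subgraph_index);

// Stand-in for tflite::Subgraph::Invoke / InvokeImpl.
void InvokeSubgraph(int subgraph_index) {
  EvalCallOnce(subgraph_index);  // the subgraph contains a CALL_ONCE node
}

// Stand-in for tflite::ops::builtin::call_once_kernel::Eval.
void EvalCallOnce(int subgraph_index) {
  InvokeSubgraph(subgraph_index);  // re-invokes the same subgraph: no exit
}

int main() {
  InvokeSubgraph(0);  // never returns; the stack eventually overflows
}
```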
```cpp
// op_resolver.cc
TfLiteStatus GetRegistrationFromOpCode(
    const OperatorCode* opcode, const OpResolver& op_resolver,
    ErrorReporter* error_reporter, const TfLiteRegistration** registration) {
  TfLiteStatus status = kTfLiteOk;
  *registration = nullptr;
  auto builtin_code = GetBuiltinCode(opcode);
  int version = opcode->version();

  if (builtin_code > BuiltinOperator_MAX) {
    TF_LITE_REPORT_ERROR(
        error_reporter,
        "Op builtin_code out of range: %d. Are you using old TFLite binary "
        "with newer model?",
        builtin_code);
    status = kTfLiteError;
  } else if (builtin_code != BuiltinOperator_CUSTOM) {
    *registration = op_resolver.FindOp(builtin_code, version);  // here
    // ... (rest of the function omitted)
```
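For context, the `TfLiteRegistration` returned by `FindOp` is a table of kernel callbacks, and its `invoke` entry is the `Eval` function that appears in the backtrace below. The following is only a simplified, illustrative sketch of how such a registration is typically wired (hypothetical names; not the real CALL_ONCE implementation):

```cpp
// Simplified sketch of how a builtin kernel is exposed through a
// TfLiteRegistration function-pointer table; names below are
// illustrative, not the real CALL_ONCE registration.
#include "tensorflow/lite/c/common.h"

namespace example_kernel {  // hypothetical namespace for illustration

TfLiteStatus Prepare(TfLiteContext* context, TfLiteNode* node) {
  return kTfLiteOk;
}

TfLiteStatus Eval(TfLiteContext* context, TfLiteNode* node) {
  // For CALL_ONCE, Eval is where the initialization subgraph gets
  // invoked -- the re-entry point repeated in the backtrace below.
  return kTfLiteOk;
}

}  // namespace example_kernel

TfLiteRegistration* Register_EXAMPLE_KERNEL() {
  static TfLiteRegistration r = {/*init=*/nullptr, /*free=*/nullptr,
                                 example_kernel::Prepare,
                                 example_kernel::Eval};
  return &r;
}
```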
At the time of the crash, the call stack would look like this:
```
#67807 0x00005555555ee106 in tflite::Subgraph::InvokeImpl() ()
#67808 0x00005555555ee3fe in tflite::Subgraph::Invoke() ()
#67809 0x000055555566cc30 in tflite::ops::builtin::call_once_kernel::Eval(TfLiteContext*, TfLiteNode*) ()
#67810 0x00005555555ee106 in tflite::Subgraph::InvokeImpl() ()
#67811 0x00005555555ee3fe in tflite::Subgraph::Invoke() ()
#67812 0x000055555566cc30 in tflite::ops::builtin::call_once_kernel::Eval(TfLiteContext*, TfLiteNode*) ()
... (the same three frames repeat; the backtrace is more than 67,000 frames deep) ...
#67853 0x00005555555ee3fe in tflite::Subgraph::Invoke() ()
#67854 0x000055555566cc30 in tflite::ops::builtin::call_once_kernel::Eval(TfLiteContext*, TfLiteNode*) ()
#67855 0x00005555555ee106 in tflite::Subgraph::InvokeImpl() ()
```
When I validate the PoC with the benchmark tool, the TensorFlow Lite inference process suffers a denial of service: the unbounded recursion overflows the stack and the process crashes with a core dump.
```
❯ ./benchmark_model --graph=./subgraph_infinite_loop.tflite
INFO: STARTING!
INFO: Log parameter values verbosely: [0]
INFO: Graph: [./subgraph_infinite_loop.tflite]
INFO: Loaded model ./subgraph_infinite_loop.tflite
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
INFO: The input model file size (MB): 0.000488
INFO: Initialized session in 0.731ms.
INFO: Running benchmark for at least 1 iterations and at least 0.5 seconds but terminate if exceeding 150 seconds.
[1] 6892 segmentation fault (core dumped) ./benchmark_model --graph=./subgraph_infinite_loop.tflite
```
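A possible hardening direction, offered only as a sketch under my own assumptions (this is not existing TFLite code or a tested patch), would be to validate the subgraph call graph at model load time and reject models whose control-flow ops such as CALL_ONCE form a cycle. A generic depth-first cycle check over subgraph indices could look roughly like this:

```cpp
// Hypothetical, self-contained sketch: detect cycles in a subgraph call
// graph (edges would be added for CALL_ONCE / WHILE / IF subgraph
// indices) before running inference. Not actual TFLite code.
#include <vector>

enum class VisitState { kUnvisited, kInProgress, kDone };

// callees[i] lists the subgraphs that subgraph i may invoke.
bool HasCycle(int subgraph,
              const std::vector<std::vector<int>>& callees,
              std::vector<VisitState>& state) {
  if (state[subgraph] == VisitState::kInProgress) return true;  // back edge
  if (state[subgraph] == VisitState::kDone) return false;
  state[subgraph] = VisitState::kInProgress;
  for (int callee : callees[subgraph]) {
    if (HasCycle(callee, callees, state)) return true;
  }
  state[subgraph] = VisitState::kDone;
  return false;
}

bool SubgraphCallGraphIsAcyclic(const std::vector<std::vector<int>>& callees) {
  std::vector<VisitState> state(callees.size(), VisitState::kUnvisited);
  for (int i = 0; i < static_cast<int>(callees.size()); ++i) {
    if (HasCycle(i, callees, state)) return false;
  }
  return true;
}
```

Presumably the attached model's CALL_ONCE init subgraph leads back, directly or indirectly, to a subgraph containing the same op, so a check like this would fail at load time instead of letting inference recurse until the stack overflows.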
### Relevant log output
_No response_
This issue, originally reported by @SiriusHsh, has been moved to this dedicated repository for LiteRT to enhance issue tracking and prioritization. To ensure continuity, we have created this new issue on your behalf.
We appreciate your understanding and look forward to your continued involvement.
Issue Type: Bug
Have you reproduced the bug with TF nightly? No
Source: source
TensorFlow Version: tf 2.14.0
Custom Code: No
OS Platform and Distribution: Ubuntu 18.04.6
Mobile device: No response
Python version: Python 3.8.3
Bazel version: bazel 5.3.0
GCC/Compiler version: gcc 7.5.0
CUDA/cuDNN version: No response
GPU model and memory: No response
Current Behaviour?
When the `GetRegistrationFromOpCode` function parses a maliciously crafted model and the `builtin_code` is `tflite::BuiltinOperator_CALL_ONCE`, the subsequent inference enters an unbounded recursion: `tflite::Subgraph::Invoke` -> `tflite::Subgraph::InvokeImpl` -> `tflite::Subgraph::OpInvoke` -> `tflite::ops::builtin::call_once_kernel::Eval`. At the time of the crash, the call stack looks like the backtrace shown above.
Attachment: subgraph_infinite_loop.zip
Standalone code to reproduce the issue
See the `benchmark_model` invocation above with the attached model.