Treelite gives different predictions than base XGBoost model #585
Comments
It looks like the floating-point error creeps in from two sources.
The check passes if you relax the required tolerance: np.testing.assert_almost_equal(treelite.gtil.predict(tl_model, data=X).squeeze(), bst.predict(dtrain), decimal=2)
The check is just there to showcase that the scores are not equal. I was under the impression that GTIL always returns the same scores.
I double checked, and the base_score is being translated correctly. However, GTIL may evaluate trees and leaf nodes in a different order than XGBoost. Addition of floating-point values is not associative, so the order of summation affects the result. To minimize error due to floating-point arithmetic, consider scaling the target.
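A tiny illustration of the non-associativity, using arbitrary example values (not taken from the model above):

import numpy as np

# Floating-point addition is not associative: regrouping changes the result.
a, b, c = 0.1, 0.2, 0.3
print((a + b) + c == a + (b + c))   # False on IEEE 754 doubles
print((a + b) + c, a + (b + c))     # 0.6000000000000001 vs. 0.6

# Summing the same leaf values in a different order can therefore produce
# slightly different predictions, even though no value is "wrong".
leaf_values = np.random.default_rng(0).normal(size=1000).astype(np.float32)
print(np.float32(leaf_values.sum()), np.float32(leaf_values[::-1].sum()))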
I'll probably have to add a note to the documentation for GTIL about the possibility of floating-point error and how to mitigate it.
I noticed that my model returns different scores than the original model. I was able to boil the issue down to using a base_score during training. Can it be that this value is not being translated? Code to replicate the issue:
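The original snippet is not reproduced above; below is a minimal sketch of the kind of script that reproduces the report, assuming a scikit-learn toy dataset and the public xgboost and treelite.gtil APIs. The dataset, hyperparameters, and base_score value are illustrative, not the reporter's exact setup.

import numpy as np
import treelite
import xgboost as xgb
from sklearn.datasets import make_regression  # illustrative data source

# Train a small regression model with an explicit base_score.
X, y = make_regression(n_samples=1000, n_features=10, random_state=0)
dtrain = xgb.DMatrix(X, label=y)
params = {"objective": "reg:squarederror", "base_score": 50.0, "max_depth": 4}
bst = xgb.train(params, dtrain, num_boost_round=100)

# Convert to Treelite and predict with GTIL.
tl_model = treelite.Model.from_xgboost(bst)
gtil_pred = treelite.gtil.predict(tl_model, data=X).squeeze()
xgb_pred = bst.predict(dtrain)

# Per the discussion above, an exact comparison can fail due to
# accumulated floating-point error ...
# np.testing.assert_almost_equal(gtil_pred, xgb_pred)
# ... while a relaxed tolerance passes.
np.testing.assert_almost_equal(gtil_pred, xgb_pred, decimal=2)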