Make EvalML compatible with Woodwork changes #4066
Conversation
Codecov Report
@@            Coverage Diff            @@
##             main    #4066     +/-   ##
=========================================
+ Coverage     99.7%    99.7%    +0.1%
=========================================
  Files          349      349
  Lines        37509    37514       +5
=========================================
+ Hits         37391    37396       +5
  Misses         118      118
Nice!
LGTM pending prediction explanations discussion
@@ -131,6 +131,7 @@
 "outputs": [],
 "source": [
    "X[\"Categorical\"] = [str(i % 4) for i in range(len(X))]\n",
    "X[\"Categorical\"] = X[\"Categorical\"].astype(\"category\")\n",
do you want to add a comment here and below explaining why we're setting it as category instead of leaving it as object? Might be good to remember.
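For concreteness, the kind of inline note being asked for might look roughly like this (hypothetical wording, not taken from the actual notebook):

# Cast explicitly to "category": under Woodwork's new inference, an object-dtype
# column of numeric-looking strings could otherwise be inferred as a numeric
# logical type instead of Categorical.
X["Categorical"] = [str(i % 4) for i in range(len(X))]
X["Categorical"] = X["Categorical"].astype("category")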
@@ -1148,7 +1148,20 @@ def test_json_serialization(
        num_to_explain=1,
        output_format="dict",
    )
    assert json.loads(json.dumps(best_worst)) == best_worst

    def str_to_num_decoder(dct):
can you explain why we need this now? do we need to add support directly into prediction explanations?
I'm concerned by this change as well. I think json.loads(json.dumps(best_worst)) == best_worst is an axiom we should maintain, rather than patching it over for the sake of the upgrade. What change precipitated this, and how can we account for it in prediction explanations instead?
This is because the output of explain_predictions_best_worst results in the keys of predicted_values being integers, since they're numeric values being represented as strings. Woodwork's new inference will infer strings as their appropriate numeric logical type if their incoming dtype is object and no other logical type is specified.
'predicted_values': {'error_name': 'Cross Entropy',
                     'error_value': 9.967001210947085e-07,
                     'index_id': 33,
                     'predicted_value': 1,
                     'probabilities': {0: 0.0, 1: 1.0},
                     'target_value': 1},
but JSON doesn't allow non-string keys, so it replaces them with
'predicted_values': {'error_name': 'Cross Entropy',
                     'error_value': 9.967001210947085e-07,
                     'index_id': 33,
                     'predicted_value': 1,
                     'probabilities': {'0': 0.0, '1': 1.0},  # String keys
                     'target_value': 1},
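To make the round-trip issue concrete, here is a minimal standalone illustration of the standard-library behavior being described (nothing EvalML-specific is assumed here):

import json

d = {"probabilities": {0: 0.0, 1: 1.0}}
print(json.dumps(d))                   # {"probabilities": {"0": 0.0, "1": 1.0}}
print(json.loads(json.dumps(d)) == d)  # False: the integer keys come back as strings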
What I added here was an object_hook to be called instead of the standard dict, which converts the string key back to an int just so they can be compared for the test.
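The body of str_to_num_decoder isn't shown in this thread, but an object_hook of the kind described could look roughly like this (a sketch, not necessarily the code in the PR; best_worst is the dict produced by the test above):

import json

def str_to_num_decoder(dct):
    # Convert digit-string keys back to ints so the loaded dict can be
    # compared against the original best_worst output.
    return {int(k) if isinstance(k, str) and k.isdigit() else k: v
            for k, v in dct.items()}

round_tripped = json.loads(json.dumps(best_worst), object_hook=str_to_num_decoder)
assert round_tripped == best_worst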
I think what's happening is that this is creating keys and assigning them to the probabilities key even if they're integers. I can modify this to stringify all keys since JSON will end up converting them anyway. I just thought it would be better for the output of explain_predictions_best_worst to represent the actual values even if JSON would end up stringifying them. @eccabay @jeremyliweishih does that make sense?
wait @ParthivNaresh why does woodwork inference matter when we're testing if dumping and loading the same json will result in the same dictionary?
@jeremyliweishih Because JSON only allows key names to be strings, so keys that aren't strings get stringified when dumped and read back as strings. I think somewhere in the prediction explanation process, the original dtype of y is lost and reinferred, which means if the original dtype was a string, it would be reinferred as an integer.
I tried replacing

elif problem_type == problem_type.BINARY:
    X, y = X_y_binary
    y = pd.Series(y).astype("str")
    pipeline = logistic_regression_binary_pipeline

with

elif problem_type == problem_type.BINARY:
    X, y = X_y_binary
    y = ww.init_series(pd.Series(y), logical_type="Unknown")
    pipeline = logistic_regression_binary_pipeline

or explicitly setting it as a string dtype (string instead of str), but it still writes out the dictionary in best_worst as an integer.
I just thought it would be better for the output of explain_predictions_best_worst to represent the actual values even if JSON would end up stringifying them

Doesn't your second comment contradict this, so explain_predictions_best_worst is still modifying the actual values, but in the wrong direction?

My preference remains that we keep the assertion that json.loads(json.dumps(best_worst)) == best_worst, and we update the code to maintain this rather than the test.
explain_predictions_best_worst is modifying this somewhere, but that's not the expectation. If I pass in a string dtype (which Woodwork won't infer as numeric, but Categorical in this case) then that would be the expected output.

I'm taking a look into explain_predictions_best_worst to see where this unexpected reinference is happening.
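As a rough illustration of the expectation described here (assuming the inference behavior discussed in this thread, not verified against a specific Woodwork release):

import pandas as pd
import woodwork as ww

labels = ["0", "1", "1", "0"]

# object dtype: the new inference would treat these numeric-looking strings as numeric
as_object = ww.init_series(pd.Series(labels, dtype="object"))

# pandas string dtype: expected to stay non-numeric (Categorical in this case)
as_string = ww.init_series(pd.Series(labels, dtype="string"))

print(as_object.ww.logical_type, as_string.ww.logical_type)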
@eccabay I think the issue was that ww.init_series was being used to determine the unique classes that existed in a classification pipeline. If the column names were strings then they got inferred as integers because of this.
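A minimal sketch of the failure mode being described, under the same inference assumptions (the variable names are illustrative, not the actual pipeline code):

import pandas as pd
import woodwork as ww

y = pd.Series([0, 1, 1, 0]).astype("str")  # targets deliberately stored as strings
reinferred = ww.init_series(y)             # no logical_type given, so Woodwork re-infers
unique_classes = reinferred.unique()       # the classes can come back as ints, not strings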
if input_type == "string":
    order = ["col_2", "col_3_id", "col_1_id"]
    order_msg = "Columns 'col_2', 'col_3_id', 'col_1_id' are 95.0% or more likely to be an ID column"
else:
    order = ["col_2", "col_1_id", "col_3_id"]
    order_msg = "Columns 'col_2', 'col_1_id', 'col_3_id' are 95.0% or more likely to be an ID column"
What change caused this? It's a surprising distinction to make.

Also, a nitpicky clarity change - we can replace the separate order messages here with order_msg = f"Columns '{order[0]}', '{order[1]}', '{order[2]}' are 95% or more..." just to make it clearer what the difference is!
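Spelled out, the suggested cleanup would look something like this (a sketch; the surrounding test code is assumed from the diff above):

order = (
    ["col_2", "col_3_id", "col_1_id"]
    if input_type == "string"
    else ["col_2", "col_1_id", "col_3_id"]
)
# One message template instead of two near-duplicate strings, so only the ordering differs
order_msg = f"Columns '{order[0]}', '{order[1]}', '{order[2]}' are 95.0% or more likely to be an ID column"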
In this case, col_3_id is a random series of string-represented numbers, which under the new inference will become the Integer ltype. Following this logic in validate, since they're all unique values and since the column ends in id, it gets identified as a possible ID column.

Previously, however, since this column wasn't a numeric logical type, the logic here didn't catch it, but this did. This just ends up resulting in a different ordering by which these columns are caught as possible ID columns by the data check.
let's figure out this prediction explanation shenanigans first
Fixes #4062, #4074
This PR updates EvalML tests to be compatible with Woodwork's new numeric logical type inference for incoming object dtypes, and with dependence calculations that treat booleans as numeric.