
Make EvalML compatible with Woodwork changes #4066

Merged — 23 commits merged into main from make_compat_with_object_inference on Mar 14, 2023

Conversation

@ParthivNaresh (Contributor) commented Mar 9, 2023

Fixes #4062, #4074

This PR updates EvalML and its tests to be compatible with two Woodwork changes: the new numeric logical type inference for incoming object dtypes, and dependence calculations that treat booleans as numeric.
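For illustration, a minimal sketch of the inference change this PR adapts to (values are made up; ww.init_series is the same entry point used later in this thread):

import pandas as pd
import woodwork as ww

# Object-dtype numeric strings are now inferred as a numeric logical type
# (previously they were left as a non-numeric type such as Categorical).
s = pd.Series(["1", "2", "3"], dtype="object")
inferred = ww.init_series(s)  # inferred as Integer under the new rules

# Opting out requires pinning a logical type explicitly.
pinned = ww.init_series(s, logical_type="Categorical")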

codecov bot commented Mar 9, 2023

Codecov Report

Merging #4066 (632b124) into main (f509df4) will increase coverage by 0.1%.
The diff coverage is 100.0%.

@@           Coverage Diff           @@
##            main   #4066     +/-   ##
=======================================
+ Coverage   99.7%   99.7%   +0.1%     
=======================================
  Files        349     349             
  Lines      37509   37514      +5     
=======================================
+ Hits       37391   37396      +5     
  Misses       118     118             
Impacted Files                                           Coverage Δ
...omponents/estimators/regressors/arima_regressor.py   100.0% <ø> (ø)
...tors/regressors/exponential_smoothing_regressor.py   100.0% <ø> (ø)
...ponents/estimators/regressors/prophet_regressor.py   100.0% <ø> (ø)
...ponents/estimators/regressors/xgboost_regressor.py   100.0% <ø> (ø)
...mponent_tests/test_column_selector_transformers.py   100.0% <ø> (ø)
evalml/tests/pipeline_tests/test_pipelines.py            99.9% <ø> (ø)
evalml/pipelines/classification_pipeline.py             100.0% <100.0%> (ø)
...valml/pipelines/components/estimators/estimator.py   100.0% <100.0%> (ø)
...s/components/estimators/regressors/et_regressor.py   100.0% <100.0%> (ø)
...s/components/estimators/regressors/rf_regressor.py   100.0% <100.0%> (ø)
... and 11 more


@ParthivNaresh changed the title from "[DO NOT MERGE] Make EvalML compatible with new numeric object dtype inference" to "[DO NOT MERGE] Make EvalML compatible with Woodwork changes" on Mar 14, 2023
@ParthivNaresh changed the title from "[DO NOT MERGE] Make EvalML compatible with Woodwork changes" to "Make EvalML compatible with Woodwork changes" on Mar 14, 2023
@bchen1116 (Contributor) left a comment

Nice!

@jeremyliweishih (Collaborator) left a comment

LGTM pending prediction explanations discussion

@@ -131,6 +131,7 @@
"outputs": [],
"source": [
"X[\"Categorical\"] = [str(i % 4) for i in range(len(X))]\n",
"X[\"Categorical\"] = X[\"Categorical\"].astype(\"category\")\n",
Collaborator commented:
do you want to add a comment here and below explaining why we're setting it as category instead of leaving it as object? Might be good to remember.
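One possible shape for such a comment, sketched against the cell above (hypothetical wording, not what the PR actually added):

# Pin this column to category explicitly: under Woodwork's new inference,
# leaving these numeric strings as object dtype would get them inferred
# as a numeric logical type instead of Categorical.
X["Categorical"] = [str(i % 4) for i in range(len(X))]
X["Categorical"] = X["Categorical"].astype("category")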

@@ -1148,7 +1148,20 @@ def test_json_serialization(
num_to_explain=1,
output_format="dict",
)
assert json.loads(json.dumps(best_worst)) == best_worst

def str_to_num_decoder(dct):
Collaborator commented:
can you explain why we need this now? do we need to add support directly into prediction explanations?

Contributor commented:
I'm concerned by this change as well. I think json.loads(json.dumps(best_worst)) == best_worst is an axiom we should maintain, rather than patching it over for the sake of the upgrade. What change precipitated this, and how can we account for it in prediction explanations instead?

@ParthivNaresh (Author) replied:
This is because the output of explain_predictions_best_worst ends up with integer keys under predicted_values, since they're numeric values represented as strings: Woodwork's new inference infers such strings as the appropriate numeric logical type when the incoming dtype is object and no other logical type is specified.

'predicted_values': {'error_name': 'Cross Entropy',
                                   'error_value': 9.967001210947085e-07,
                                   'index_id': 33,
                                   'predicted_value': 1,
                                   'probabilities': {0: 0.0, 1: 1.0},
                                   'target_value': 1},

but JSON doesn't allow non-string keys, so it replaces them with

'predicted_values': {'error_name': 'Cross Entropy',
                                   'error_value': 9.967001210947085e-07,
                                   'index_id': 33,
                                   'predicted_value': 1,
                                   'probabilities': {'0': 0.0, '1': 1.0}, # String keys
                                   'target_value': 1},

What I added here is an object_hook, called in place of the standard dict construction, that converts the string keys back to ints just so they can be compared in the test.

I think what's happening is that this creates the keys and assigns them under probabilities even when they're integers. I can modify this to stringify all keys, since JSON will convert them anyway; I just thought it would be better for the output of explain_predictions_best_worst to represent the actual values even if JSON ends up stringifying them. @eccabay @jeremyliweishih does that make sense?
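For reference, a minimal sketch of such an object_hook (the PR's actual str_to_num_decoder may differ in detail):

import json

def str_to_num_decoder(dct):
    # json.dumps stringifies every dict key, so convert digit-only keys
    # back to ints before comparing against the original dictionary.
    return {int(k) if k.isdigit() else k: v for k, v in dct.items()}

best_worst = {"probabilities": {0: 0.0, 1: 1.0}, "predicted_value": 1}
roundtripped = json.loads(json.dumps(best_worst), object_hook=str_to_num_decoder)
assert roundtripped == best_worst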

@jeremyliweishih (Collaborator) replied:
wait @ParthivNaresh why does woodwork inference matter when we're testing if dumping and loading the same json will result in the same dictionary?

@ParthivNaresh (Author) replied:
@jeremyliweishih Because JSON only allows key names to be strings, so non-string keys get dumped as strings and read back as strings. I think somewhere in the prediction explanation process the original dtype of y is lost and re-inferred, which means that if the original dtype was string, it gets re-inferred as integer.

I tried replacing

elif problem_type == problem_type.BINARY:
    X, y = X_y_binary
    y = pd.Series(y).astype("str")
    pipeline = logistic_regression_binary_pipeline

with

elif problem_type == problem_type.BINARY:
    X, y = X_y_binary
    y = ww.init_series(pd.Series(y), logical_type="Unknown")
    pipeline = logistic_regression_binary_pipeline

or explicitly setting the string dtype (string instead of str), but the dictionary in best_worst is still written out with integer keys.

@eccabay (Contributor) commented Mar 14, 2023:
> I just thought it would be better for the output of explain_predictions_best_worst to represent the actual values even if JSON would end up stringifying them

Doesn't your second comment contradict this, so explain_predictions_best_worst is still modifying the actual values, but in the wrong direction?

My preference remains that we keep the assertion that json.loads(json.dumps(best_worst)) == best_worst, and we update the code to maintain this rather than the test.

@ParthivNaresh (Author) replied:
explain_predictions_best_worst is modifying this somewhere, but that's not the expectation. If I pass in a string dtype (which Woodwork won't infer as numeric, but as Categorical in this case), then that would be the expected output.

I'm taking a look into explain_predictions_best_worst to see where this unexpected re-inference is happening.

@ParthivNaresh (Author) replied:
@eccabay I think the issue was that ww.init_series was being used to determine the unique classes that existed in a classification pipeline. If the column names were strings then they got inferred as integers because of this.
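A simplified sketch of that failure mode (hypothetical values; the real code path in the classification pipeline is more involved):

import pandas as pd
import woodwork as ww

# String class labels with object dtype get re-inferred as integers under
# the new rules, so the recorded classes stop matching the original labels.
y = pd.Series(["0", "1", "1", "0"], dtype="object")
classes = ww.init_series(y).unique()  # array([0, 1]), not ["0", "1"]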

docs/source/release_notes.rst (outdated review comment, resolved)
Comment on lines 296 to 302
if input_type == "string":
    order = ["col_2", "col_3_id", "col_1_id"]
    order_msg = "Columns 'col_2', 'col_3_id', 'col_1_id' are 95.0% or more likely to be an ID column"
else:
    order = ["col_2", "col_1_id", "col_3_id"]
    order_msg = "Columns 'col_2', 'col_1_id', 'col_3_id' are 95.0% or more likely to be an ID column"
Contributor commented:
What change caused this? It's a surprising distinction to make

Also, a nitpicky clarity change - we can replace the separate order messages here with order_msg = f"Columns '{order[0]}', '{order[1]}', '{order[2]}' are 95% or more..." just to make it clearer what the difference is!
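A sketch of that consolidation, keeping the 95.0% wording from the test above:

if input_type == "string":
    order = ["col_2", "col_3_id", "col_1_id"]
else:
    order = ["col_2", "col_1_id", "col_3_id"]
order_msg = (
    f"Columns '{order[0]}', '{order[1]}', '{order[2]}' "
    "are 95.0% or more likely to be an ID column"
)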

@ParthivNaresh (Author) replied:
In this case, col_3_id is a random series of string-represented numbers, which under the new inference become the Integer logical type. Following the ID-column logic in validate, since the values are all unique and the column name ends in id, it gets identified as a possible ID column.

Previously, since this column wasn't a numeric logical type, that logic didn't catch it, but a different check did.

This just ends up producing a different order in which these columns are flagged as possible ID columns by the data check.


@jeremyliweishih (Collaborator) left a comment
let's figure out this prediction explanation shenanigans first

@ParthivNaresh enabled auto-merge (squash) March 14, 2023 21:08
@ParthivNaresh merged commit 1412fc3 into main Mar 14, 2023
@ParthivNaresh deleted the make_compat_with_object_inference branch March 14, 2023 21:15
@chukarsten mentioned this pull request Mar 15, 2023
Development

Successfully merging this pull request may close these issues.

Update targetLeakageDataCheck to handle boolean targets
4 participants