chore(docs): fix weave evaluation example (#1367)
jlzhao27 authored Mar 26, 2024
1 parent d5501c3 commit bafc675
Showing 1 changed file with 1 addition and 4 deletions.
5 changes: 1 addition & 4 deletions docs/docs/tutorial-eval.md
@@ -91,7 +91,6 @@ Here, we'll use a default scoring function `MulticlassF1Score` and we'll also de
 Here `sentence` is passed to the model's predict function, and `target` is used in the scoring function, these are inferred based on the argument names of the `predict` and scoring functions.
 
 ```python
-from weave import evaluate
 import weave
 from weave.flow.scorer import MulticlassF1Score
 
@@ -100,7 +99,7 @@ def fruit_name_score(target: dict, prediction: dict) -> dict:
     return {'correct': target['fruit'] == prediction['fruit']}
 
 # highlight-next-line
-evaluation = evaluate.Evaluation(
+evaluation = weave.Evaluation(
 # highlight-next-line
     dataset=dataset, scorers=[MulticlassF1Score(class_names=["fruit", "color", "flavor"]), fruit_name_score],
 # highlight-next-line
@@ -115,8 +114,6 @@ print(asyncio.run(evaluation.evaluate(model)))
 import json
 import asyncio
 # highlight-next-line
-from weave import evaluate
-# highlight-next-line
 import weave
 # highlight-next-line
 from weave.flow.scorer import MulticlassF1Score
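For context, here is a minimal, self-contained sketch of the example as it reads after this fix. Only the `weave.Evaluation(...)` call, the `MulticlassF1Score` import, and the `fruit_name_score` scorer come from the changed file; the `FruitExtractor` model class, the sample `dataset`, and the project name are illustrative placeholders standing in for the tutorial's surrounding setup, which this diff does not show.

```python
import asyncio

import weave
from weave.flow.scorer import MulticlassF1Score


# Illustrative stand-in for the tutorial's model; a real model would call an LLM here.
class FruitExtractor(weave.Model):
    @weave.op()
    async def predict(self, sentence: str) -> dict:
        # Hard-coded guess so the sketch runs without any model backend.
        return {"fruit": "apple", "color": "red", "flavor": "sweet"}


# Scorer from the changed file: checks the extracted fruit name against the target.
def fruit_name_score(target: dict, prediction: dict) -> dict:
    return {"correct": target["fruit"] == prediction["fruit"]}


# Illustrative rows: `sentence` is routed to predict(), `target` to the scorers,
# matched by argument name as described in the tutorial.
dataset = [
    {
        "sentence": "A crisp red apple that tastes sweet.",
        "target": {"fruit": "apple", "color": "red", "flavor": "sweet"},
    },
    {
        "sentence": "A small green lime with a sour bite.",
        "target": {"fruit": "lime", "color": "green", "flavor": "sour"},
    },
]

weave.init("fruit-eval-example")  # placeholder project name

evaluation = weave.Evaluation(
    dataset=dataset,
    scorers=[MulticlassF1Score(class_names=["fruit", "color", "flavor"]), fruit_name_score],
)
print(asyncio.run(evaluation.evaluate(FruitExtractor())))
```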
