diff --git a/docs/docs/guides/core-types/datasets.md b/docs/docs/guides/core-types/datasets.md
index d68f2c1e04a..6266f5f4c31 100644
--- a/docs/docs/guides/core-types/datasets.md
+++ b/docs/docs/guides/core-types/datasets.md
@@ -22,14 +22,14 @@ from weave import Dataset
 weave.init('intro-example')
 
 # Create a dataset
-dataset = Dataset([
+dataset = Dataset(name='grammar', rows=[
     {'id': '0', 'sentence': "He no likes ice cream.", 'correction': "He doesn't like ice cream."},
     {'id': '1', 'sentence': "She goed to the store.", 'correction': "She went to the store."},
     {'id': '2', 'sentence': "They plays video games all day.", 'correction': "They play video games all day."}
 ])
 
 # Publish the dataset
-weave.publish(dataset, 'grammar')
+weave.publish(dataset)
 
 # Retrieve the dataset
 dataset_ref = weave.ref('grammar').get()
diff --git a/docs/docs/guides/core-types/evaluations.md b/docs/docs/guides/core-types/evaluations.md
index 8b5ee22da3b..a6e77b96e2b 100644
--- a/docs/docs/guides/core-types/evaluations.md
+++ b/docs/docs/guides/core-types/evaluations.md
@@ -11,7 +11,7 @@ Evaluation-driven development helps you reliably iterate on an application. The
 from weave import Evaluation
 
 evaluation = Evaluation(
-    dataset, scores=[score]
+    dataset=dataset, scorers=[score]
 )
 evaluation.evaluate(model)
 ```
@@ -28,4 +28,4 @@ Then, define a list of scoring functions. Each function should take an example a
 
 Finally, create a model and pass this to `evaluation.evaluate`, which will run `predict` on each example and score the output with each scoring function.
 
-To see this in action, follow the '[Build an Evaluation pipeline](/tutorial-eval)' tutorial.
\ No newline at end of file
+To see this in action, follow the '[Build an Evaluation pipeline](/tutorial-eval)' tutorial.
diff --git a/docs/docs/guides/core-types/models.md b/docs/docs/guides/core-types/models.md
index 1f2ce330c9e..d35ac4aeb03 100644
--- a/docs/docs/guides/core-types/models.md
+++ b/docs/docs/guides/core-types/models.md
@@ -32,7 +32,7 @@ You can call the model as usual with:
 import weave
 weave.init('project-name')
 
-model = YourModel('hello', 5)
+model = YourModel(attribute1='hello', attribute2=5)
 model.predict('world')
 ```
 
@@ -48,7 +48,7 @@ For example, here we create a new model:
 import weave
 weave.init('project-name')
 
-model = YourModel('howdy', 10)
+model = YourModel(attribute1='howdy', attribute2=10)
 model.predict('world')
 ```
 
diff --git a/docs/docs/tutorial-eval.md b/docs/docs/tutorial-eval.md
index af66cbb4be2..236d784f23b 100644
--- a/docs/docs/tutorial-eval.md
+++ b/docs/docs/tutorial-eval.md
@@ -102,7 +102,7 @@ def fruit_name_score(target: dict, prediction: dict) -> dict:
 # highlight-next-line
 evaluation = evaluate.Evaluation(
     # highlight-next-line
-    dataset, scores=[MulticlassF1Score(class_names=["fruit", "color", "flavor"]), fruit_name_score],
+    dataset=dataset, scorers=[MulticlassF1Score(class_names=["fruit", "color", "flavor"]), fruit_name_score],
     # highlight-next-line
 )
 # highlight-next-line
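
Note: taken together, these doc changes move the examples to keyword arguments (`Dataset(name=..., rows=...)`, `weave.publish(dataset)`, `Evaluation(dataset=..., scorers=...)`, `YourModel(attribute1=..., attribute2=...)`). Below is a minimal sketch of the updated flow for reference, not part of the diff; it assumes a logged-in Weave environment, the `score` function is an illustrative placeholder mirroring the tutorial's `fruit_name_score` signature, and the final `evaluate` call is commented out because no model is defined here.

```python
import weave
from weave import Dataset, Evaluation

weave.init('intro-example')

# The dataset name now lives on the Dataset itself instead of being passed to weave.publish.
dataset = Dataset(name='grammar', rows=[
    {'id': '0', 'sentence': "He no likes ice cream.", 'correction': "He doesn't like ice cream."},
])
weave.publish(dataset)

# Retrieve the published dataset by name.
dataset_ref = weave.ref('grammar').get()

# Illustrative placeholder scorer (same shape as the tutorial's fruit_name_score).
def score(target: dict, prediction: dict) -> dict:
    return {'correct': prediction == target.get('correction')}

# Evaluation now takes dataset= and scorers= as keyword arguments.
evaluation = Evaluation(dataset=dataset, scorers=[score])
# evaluation.evaluate(model)  # run against a weave Model, e.g. YourModel(attribute1='hello', attribute2=5)
```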