Commit
[docs] 1/n Update getting started Code Snippets
Small changes that make it easier to copy paste and run (and have it work)
Ankush Pala [email protected] committed Dec 15, 2023
1 parent 59cb485 commit 737f6c3
Showing 1 changed file with 26 additions and 11 deletions.
37 changes: 26 additions & 11 deletions aiconfig-docs/docs/getting-started.md
@@ -142,10 +142,11 @@ from aiconfig import AIConfigRuntime, InferenceOptions

 async def main():
     # Load the aiconfig
-    config = AIConfigRuntime.load('travel.aiconfig.json')
+    aiconfig = AIConfigRuntime.load('travel.aiconfig.json')

     # Run a single prompt
-    await config.run("get_activities")
+    result = await aiconfig.run("get_activities")
+    print(result)

 asyncio.run(main())
```
@@ -165,8 +166,11 @@ async function travelWithGPT() {
);

   // Run a single prompt
-  await aiConfig.run("get_activities");
+  const result = await aiConfig.run("get_activities");
+  console.log(result);
 }
+
+travelWithGPT();
```

</TabItem>
@@ -180,12 +184,22 @@ You can enable streaming for your prompt responses by passing in a streaming callback.
<TabItem value="python">

```python title="app.py"
+import asyncio
 from aiconfig import AIConfigRuntime, InferenceOptions
-config = AIConfigRuntime.load('travel.aiconfig.json')

-# Run a single prompt (with streaming)
-inference_options = InferenceOptions(stream=True)
-await config.run("get_activities", options=inference_options)
+async def travelWithGPT():
+    aiconfig = AIConfigRuntime.load("travel.aiconfig.json")
+
+    # Run a single prompt (with streaming)
+    options = InferenceOptions(
+        stream=True,
+        # Write stream data to stdout
+        stream_callback=lambda data, _acc, _idx: print(data.get("content", ""), end=""),
+    )
+    await aiconfig.run("get_activities", options=options)
+
+if __name__ == "__main__":
+    asyncio.run(travelWithGPT())
```

</TabItem>
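As an aside for copy-pasters: the `stream_callback` introduced in the Python hunk above is just a function of `(data, accumulated_data, index)` that the runtime invokes once per chunk. A minimal sketch of that callback pattern, using a stand-in chunk stream instead of a real model call (the `{"content": ...}` chunk shape here is an assumption for illustration, not aiconfig's guaranteed schema):

```python
# Sketch: invoke a stream callback once per chunk, accumulating the text.
# A hard-coded list of fake chunks stands in for a real streaming model call.
def run_with_stream(chunks, stream_callback):
    accumulated = ""
    for index, data in enumerate(chunks):
        accumulated += data.get("content", "")
        stream_callback(data, accumulated, index)
    return accumulated

pieces = []
result = run_with_stream(
    [{"content": "Visit "}, {"content": "Central Park"}],
    # Collect each chunk's text, mirroring the print-to-stdout callback above
    lambda data, _acc, _idx: pieces.append(data.get("content", "")),
)
print(result)  # Visit Central Park
```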
@@ -212,6 +226,8 @@ async function travelWithGPT() {
   // Run a single prompt
   await aiConfig.run("get_activities", /*params*/ undefined, options);
 }
+
+travelWithGPT();
```

</TabItem>
@@ -258,11 +274,10 @@ Let's run this with AIConfig:
Replace `config.run` above with this:

```python
-inference_options = InferenceOptions(stream=True)
-await config.run(
+await aiconfig.run(
     "gen_itinerary",
     params={"order_by": "duration"},
-    options=inference_options,
+    options=options,
run_with_dependencies=True)
```
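The `run_with_dependencies=True` flag above tells the runtime to execute `gen_itinerary`'s upstream prompts (such as `get_activities`) before running it. The idea can be sketched with a toy resolver; the names and structure here are illustrative, not aiconfig's internals:

```python
# Toy dependency resolution: run a prompt's dependencies first, cache results,
# then run the prompt itself with its upstream outputs available.
def run_with_dependencies(prompt, deps, run_one, cache=None):
    cache = {} if cache is None else cache
    if prompt in cache:
        return cache[prompt]
    upstream = {
        dep: run_with_dependencies(dep, deps, run_one, cache)
        for dep in deps.get(prompt, [])
    }
    cache[prompt] = run_one(prompt, upstream)
    return cache[prompt]

deps = {"gen_itinerary": ["get_activities"]}
out = run_with_dependencies(
    "gen_itinerary", deps,
    # Stand-in for a model call: record which upstream outputs were fed in
    lambda name, upstream: f"{name}({', '.join(upstream.values())})",
)
print(out)  # gen_itinerary(get_activities())
```

The cache also means shared dependencies run only once, which is the useful property when several prompts fan in to one upstream prompt.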

@@ -298,7 +313,7 @@ Let's save the AIConfig back to disk, and serialize the outputs from the latest

```python
 # Save the aiconfig to disk, and serialize outputs from the model run
-config.save('updated.aiconfig.json', include_outputs=True)
+aiconfig.save('updated.aiconfig.json', include_outputs=True)
```

</TabItem>
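`include_outputs=True` persists the latest model outputs alongside the prompt definitions, so the saved file is a self-contained record of the run. A rough sketch of what such a save could look like, using a hypothetical minimal schema (not aiconfig's actual one):

```python
import json
import os
import tempfile

def save_config(config, path, include_outputs=True):
    # Drop per-prompt outputs unless the caller asks to persist them.
    data = dict(config)
    if not include_outputs:
        data["prompts"] = [
            {k: v for k, v in p.items() if k != "outputs"}
            for p in data["prompts"]
        ]
    with open(path, "w") as f:
        json.dump(data, f, indent=2)

# Hypothetical in-memory config with one prompt and its latest output
config = {"prompts": [{"name": "get_activities", "outputs": ["sample output"]}]}
path = os.path.join(tempfile.gettempdir(), "updated.aiconfig.json")
save_config(config, path, include_outputs=True)
print(json.load(open(path))["prompts"][0]["name"])  # get_activities
```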
