diff --git a/fern/docs.yml b/fern/docs.yml
index 17a1fef..2a4dfa6 100644
--- a/fern/docs.yml
+++ b/fern/docs.yml
@@ -20,7 +20,7 @@ navigation:
- page: Welcome
path: ./docs/pages/welcome.mdx
- page: Quick Start
- path: ./docs/pages/gettingstarted.mdx
+ path: ./docs/pages/quickstart.mdx
- page: SDKs
path: ./docs/pages/sdks.mdx
- section: Input Requirements
diff --git a/fern/docs/pages/gettingstarted.mdx b/fern/docs/pages/quickstart.mdx
similarity index 86%
rename from fern/docs/pages/gettingstarted.mdx
rename to fern/docs/pages/quickstart.mdx
index 093d5cc..1f77900 100644
--- a/fern/docs/pages/gettingstarted.mdx
+++ b/fern/docs/pages/quickstart.mdx
@@ -3,28 +3,43 @@ title: Quick Start
description: Reliable, future proof AI predictions
---
-Technical teams need to figure out how to integrate the latest Large Language Models (LLMs), but:
+Technical teams need to figure out how to integrate the latest Large Language
+Models (LLMs), but:
- You can’t build robust systems with inconsistent, unvalidated outputs; and
-- LLM integrations scare corporate lawyers, finance departments, and security professionals due to hallucinations, cost, lack of compliance (e.g., HIPAA), leaked IP/PII, and “injection” vulnerabilities.
+- LLM integrations scare corporate lawyers, finance departments, and security
+professionals due to hallucinations, cost, lack of compliance (e.g., HIPAA),
+leaked IP/PII, and “injection” vulnerabilities.
-Some companies are moving forward anyway by investing tons of engineering time/money in their own wrappers around LLMs and expensive hosting with OpenAI/Azure. Others are ignoring these issues and pressing forward with fragile and risky LLM integrations.
+Some companies are moving forward anyway by investing tons of engineering time/money
+in their own wrappers around LLMs and expensive hosting with OpenAI/Azure. Others
+are ignoring these issues and pressing forward with fragile and risky LLM integrations.
-At Prediction Guard, we think that you should get useful output from compliant AI systems (without crazy implementation/ hosting costs), so our solution lets you:
+At Prediction Guard, we think that you should get useful output from compliant
+AI systems (without crazy implementation/ hosting costs), so our solution lets you:
1. **De-risk LLM inputs** to remove PII and prompt injections;
-2. **Validate and check LLM outputs** to guard against hallucination, toxicity and inconsistencies; and
-3. **Implement private and compliant LLM systems** (HIPAA and self-hosted) that give your legal counsel warm fuzzy feeling while still delighting your customers with AI features.
+2. **Validate and check LLM outputs** to guard against hallucination, toxicity and
+inconsistencies; and
+3. **Implement private and compliant LLM systems** (HIPAA and self-hosted) that
+give your legal counsel warm fuzzy feeling while still delighting your customers
+with AI features.
-Sounds pretty great right? Follow the steps below to starting leveraging trustworthy LLMs:
+Sounds pretty great right? Follow the steps below to starting leveraging
+trustworthy LLMs:
## 1. Get access to Prediction Guard Enterprise
-We host and control the latest LLMs for you in our secure and privacy-conserving enterprise platform, so you can focus on your prompts and chains. To access the hosted LLMs, contact us [here](https://mailchi.mp/predictionguard/getting-started) to get an enterprise access token. You will need this access token to continue.
+We host and control the latest LLMs for you in our secure and privacy-conserving
+enterprise platform, so you can focus on your prompts and chains. To access the
+hosted LLMs, contact us [here](https://mailchi.mp/predictionguard/getting-started)
+to get an enterprise access token. You will need this access token to continue.
## 2. Start using one of our LLMs!
-Suppose you want to prompt an LLM to answer a user query from a chat application. You can setup a message thread, which includes a system prompt (that instructs the LLM how to behave in responding) as follows:
+Suppose you want to prompt an LLM to answer a user query from a chat application.
+You can setup a message thread, which includes a system prompt (that instructs
+the LLM how to behave in responding) as follows:
```
[
diff --git a/fern/docs/pages/sdks.mdx b/fern/docs/pages/sdks.mdx
index 784c23e..c8aa32a 100644
--- a/fern/docs/pages/sdks.mdx
+++ b/fern/docs/pages/sdks.mdx
@@ -1,5 +1,5 @@
-We provide official open-source SDKs (client libraries) for your favorite platforms. These clients make connecting to our API faster and help avoid errors.
-
+We provide official open-source SDKs (client libraries) for your favorite platforms.
+These clients make connecting to our API faster and help avoid errors.
## Official SDK Documentation
@@ -10,16 +10,16 @@ We provide official open-source SDKs (client libraries) for your favorite platfo
-
## Request a New SDK
-If you'd like to request an SDK for a language that we don't currently support, please reach out to us on [Discord](https://discord.gg/TFHgnhAFKd). We prioritize languages based on demand.
-
+If you'd like to request an SDK for a language that we don't currently support,
+please reach out to us on [Discord](https://discord.gg/TFHgnhAFKd). We prioritize
+languages based on demand.
## Access Tokens
-To access the API, contact us [here](https://mailchi.mp/predictionguard/getting-started) to get an access token.
-
+To access the API, contact us [here](https://mailchi.mp/predictionguard/getting-started)
+to get an access token.
## SDK Quick Start
@@ -72,7 +72,8 @@ print(json.dumps(
#### More Python Examples
-Take a look at the [examples](https://github.com/predictionguard/python-client/tree/master/examples) directory for more Python examples.
+Take a look at the [examples](https://github.com/predictionguard/python-client/tree/master/examples)
+directory for more Python examples.
---
@@ -123,7 +124,8 @@ Chat();
#### More JS Examples
-Take a look at the [examples](https://github.com/predictionguard/js-client/tree/main/examples) directory for more JS examples.
+Take a look at the [examples](https://github.com/predictionguard/js-client/tree/main/examples)
+directory for more JS examples.
---
@@ -203,7 +205,8 @@ func run() error {
#### More Go Examples
-Take a look at the [examples](https://github.com/predictionguard/go-client/tree/main/examples) directory for more Go examples.
+Take a look at the [examples](https://github.com/predictionguard/go-client/tree/main/examples)
+directory for more Go examples.
---
@@ -246,4 +249,5 @@ async fn main() {
#### More Rust Examples
-Take a look at the [examples](https://github.com/predictionguard/rust-client/tree/main/examples) directory for more Rust examples.
+Take a look at the [examples](https://github.com/predictionguard/rust-client/tree/main/examples)
+directory for more Rust examples.
diff --git a/fern/docs/pages/usingllms/accessing.mdx b/fern/docs/pages/usingllms/accessing.mdx
index 5c7b1e5..0a7268f 100644
--- a/fern/docs/pages/usingllms/accessing.mdx
+++ b/fern/docs/pages/usingllms/accessing.mdx
@@ -33,13 +33,13 @@ os.environ["PREDICTIONGUARD_API_KEY"] = ""
client = PredictionGuard()
```
-You can find out more about the models available via the Prediction Guard API [in the docs](https://docs.predictionguard.com/models), and you can list out the model names via this command:
+You can find out more about the models available via the Prediction Guard API [in the docs](https://docs.predictionguard.com/options/enumerations), and you can list out the model names via this command:
```python copy
print(client.completions.list_models())
```
-Generating text with one of these models is then just single request for a "Completion" (note, we also support chat completions). Here we will call the `Notus-7B` model and try to have it autocomplete a joke.
+Generating text with one of these models is then just single request for a "Completion" (note, we also support chat completions). Here we will call the `Neural-Chat-7B"` model and try to have it autocomplete a joke.
```python copy
response = client.completions.create(model="Neural-Chat-7B",