Doc Refresh #36

Merged · 3 commits · Sep 12, 2024

Changes from all commits
86 changes: 69 additions & 17 deletions fern/docs.yml
@@ -5,24 +5,34 @@ instances:
title: Prediction Guard | Docs

tabs:
  help:
    display-name: Docs
  home:
    display-name: Home
    icon: home
  Guides:
    display-name: Guides and concepts
    icon: book
  api:
    display-name: API Reference
    icon: puzzle
  sdk:
    display-name: SDK Docs
    icon: fa-brands fa-codepen

navigation:
  - tab: help
  - tab: home
    layout:
      - section: Getting Started
        contents:
          - page: Welcome
            path: ./docs/pages/welcome.mdx
          - page: Security
            path: ./docs/pages/security.mdx
  - tab: Guides
    layout:
      - section: Getting Started
        contents:
          - page: Quick Start
            path: ./docs/pages/quick_start.mdx
          - page: SDKs
            path: ./docs/pages/sdks.mdx
      - section: Input Requirements
        contents:
          - page: Models
@@ -75,11 +85,20 @@ navigation:
            path: ./docs/pages/guides/langchainllm.mdx
          - page: Many Chat with LLMs
            path: ./docs/pages/guides/ManyChat.mdx
      - section: Reference
      - section: Support
        contents:
          - page: Contact Us
            path: ./docs/pages/support.mdx

  - tab: api
    layout:
      - api: API Reference
        display-errors: true
      - section: SDK Reference
        contents:
          - page: Chat
            path: ./docs/pages/reference/chat.mdx
          - page: Chat SSE
          - page: Chat SSE (Streaming)
            path: ./docs/pages/reference/chat_sse.mdx
          - page: Chat Vision
            path: ./docs/pages/reference/chat_vision.mdx
@@ -97,31 +116,64 @@ navigation:
            path: ./docs/pages/reference/toxicity.mdx
          - page: Translate
            path: ./docs/pages/reference/translate.mdx
      - section: Support
        contents:
          - page: Contact Us
            path: ./docs/pages/support.mdx
  - tab: api

  - tab: sdk
    layout:
      - api: API Reference
        display-errors: true
      - section: Software Development Kits
        contents:
          - page: SDKs
            path: ./docs/pages/sdks.mdx
          - page: Quick Start
            path: ./docs/pages/sdkquickstart.mdx
      - section: SDK Reference
        contents:
          - page: Chat
            path: ./docs/pages/reference/chat.mdx
          - page: Chat SSE (Streaming)
            path: ./docs/pages/reference/chat_sse.mdx
          - page: Chat Vision
            path: ./docs/pages/reference/chat_vision.mdx
          - page: Completions
            path: ./docs/pages/reference/completions.mdx
          - page: Embeddings
            path: ./docs/pages/reference/embeddings.mdx
          - page: Factuality
            path: ./docs/pages/reference/factuality.mdx
          - page: Injection
            path: ./docs/pages/reference/injection.mdx
          - page: PII
            path: ./docs/pages/reference/PII.mdx
          - page: Toxicity
            path: ./docs/pages/reference/toxicity.mdx
          - page: Translate
            path: ./docs/pages/reference/translate.mdx

navbar-links:
  - type: secondary
    text: Contact us
    url: "mailto:contact@predictionguard.com"
    url: "https://predictionguard.com/get-started"
  - type: primary
    text: Join our Discord
    url: https://discord.gg/TFHgnhAFKd

colors:
  accentPrimary:
    dark: "#212121"
    light: "#376d20"
    dark: "#009959"
    light: "#009959"

logo:
  dark: ./docs/assets/logo_pg_dark.png
  light: ./docs/assets/logo_pg_light.png
  height: 85
  href: https://predictionguard.com/

favicon: ./docs/assets/favicon.png

layout:
  page-width: 108rem
  header-height: 75px
  searchbar-placement: header
  tabs-placement: header
  content-width: 54rem

css: ./styles.css
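Editorial aside: this refresh adds and moves a lot of `path:` entries in `docs.yml`, and a mistyped path is easy to miss in review. Below is a minimal sketch of a local sanity check, assuming PyYAML is installed and the script is run from the repository root (both are assumptions, not part of this PR):

```python
import pathlib

import yaml  # assumption: PyYAML is available (pip install pyyaml)

def iter_page_paths(node):
    """Recursively yield every value stored under a 'path' key in docs.yml."""
    if isinstance(node, dict):
        for key, value in node.items():
            if key == "path":
                yield value
            else:
                yield from iter_page_paths(value)
    elif isinstance(node, list):
        for item in node:
            yield from iter_page_paths(item)

config = yaml.safe_load(pathlib.Path("fern/docs.yml").read_text())
for page_path in iter_page_paths(config):
    # Paths in docs.yml are written relative to the fern/ directory.
    if not (pathlib.Path("fern") / page_path).exists():
        print(f"missing page file: {page_path}")
```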
Binary file modified fern/docs/assets/logo_pg_dark.png
Binary file modified fern/docs/assets/logo_pg_light.png
Binary file added fern/docs/assets/old_logo.png
Binary file added fern/docs/assets/oldlightlogo.png
120 changes: 58 additions & 62 deletions fern/docs/pages/quick_start.mdx
@@ -1,60 +1,48 @@
---
title: Quick Start
subtitle: Reliable, future-proof AI predictions
description: Reliable, future-proof AI predictions
---

Technical teams need to figure out how to integrate the latest Large Language Models (LLMs), but:

- You can’t build robust systems with inconsistent, unvalidated outputs; and
- LLM integrations scare corporate lawyers, finance departments, and security professionals due to hallucinations, cost, lack of compliance (e.g., HIPAA), leaked IP/PII, and “injection” vulnerabilities.

Some companies are moving forward anyway by investing tons of engineering time/money in their own wrappers around LLMs and expensive hosting with OpenAI/Azure. Others are ignoring these issues and pressing forward with fragile and risky LLM integrations.

At Prediction Guard, we think that you should get useful output from compliant AI systems (without crazy implementation/hosting costs), so our solution lets you:

1. **De-risk LLM inputs** to remove PII and prompt injections;
2. **Validate and check LLM outputs** to guard against hallucination, toxicity, and inconsistencies; and
3. **Implement private and compliant LLM systems** (HIPAA and self-hosted) that give your legal counsel a warm fuzzy feeling while still delighting your customers with AI features (a sketch of these checks in code follows this list).
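
As a concrete illustration of checks 1 and 2, here is a hedged Python sketch. It assumes the Python SDK forwards the same `input`/`output` guard options that the cURL example at the end of this page sends to the REST API; the exact option names and the response layout are illustrative, not verified against the SDK:

```python
from predictionguard import PredictionGuard

# Assumes the PREDICTIONGUARD_API_KEY environment variable is set.
client = PredictionGuard()

result = client.chat.completions.create(
    model="Hermes-2-Pro-Llama-3-8B",
    messages=[
        {
            "role": "user",
            "content": "My email is jane@example.com. Can you summarize my account options?"
        }
    ],
    # De-risk inputs: replace detected PII before the prompt reaches the model.
    input={"pii": "replace", "pii_replace_method": "random"},
    # Validate outputs: score the response for factuality and toxicity issues.
    output={"factuality": True, "toxicity": True},
)

# Assumption: responses follow the OpenAI-style "choices" layout.
print(result["choices"][0]["message"]["content"])
```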

## Sounds pretty great, right? Follow the steps below to start leveraging trustworthy LLMs:

<Steps toc={true}>

### Get access to Prediction Guard Enterprise

We host and control the latest LLMs for you in our secure and privacy-conserving enterprise platform, so you can focus on your prompts and chains. To access the hosted LLMs, contact us [here](https://predictionguard.com/get-started) to get an enterprise access token. You will need this access token to continue.

### Start using one of our LLMs!

Suppose you want to prompt an LLM to answer a user query from a chat application. You can set up a message thread, which includes a system prompt (that instructs the LLM how to behave in responding) as follows:

```json
[
  {
    "role": "system",
    "content": "You are a helpful assistant. Your model is hosted by Prediction Guard, a leading AI company."
  },
  {
    "role": "user",
    "content": "Where can I access the LLMs in a safe and secure environment?"
  }
]
```

### Download the SDK for your favorite language

You can then use any of our official SDKs or REST API to prompt one of our LLMs!

@@ -67,36 +55,38 @@ import os

from predictionguard import PredictionGuard

# You can set your Prediction Guard API Key as an env variable named "PREDICTIONGUARD_API_KEY",
# or when creating the client object

client = PredictionGuard(
    api_key="<api key>"
)

messages = [
    {
        "role": "system",
        "content": "You are a helpful chatbot that helps people learn."
    },
    {
        "role": "user",
        "content": "What is a good way to learn to code?"
    }
]

result = client.chat.completions.create(
    model="Hermes-2-Pro-Llama-3-8B",
    messages=messages,
    max_tokens=100
)

print(json.dumps(
    result,
    sort_keys=True,
    indent=4,
    separators=(',', ': ')
))

````
</CodeBlock>

@@ -171,6 +161,7 @@ func run() error {
    return nil
}
````

</CodeBlock>

<CodeBlock title="Rust">
@@ -182,7 +173,7 @@ use pg_client::{client, chat, models};

#[tokio::main]
async fn main() {
    let pg_env = client::PgEnvironment::from_env().expect("env keys");

    let clt = client::Client::new(pg_env).expect("client value");

@@ -203,9 +194,14 @@
        .await
        .expect("error from generate chat completion");

    println!("

chat completion response:

{:?}", result);
}
````

</CodeBlock>

<CodeBlock title="NodeJS">
@@ -250,14 +246,12 @@ async function Chat() {

Chat();
````

</CodeBlock>

<CodeBlock title="cURL">
```bash
curl -i -X POST https://api.predictionguard.com/chat/completions -H "Authorization: Bearer ${PREDICTIONGUARD_API_KEY}" -H "Content-Type: application/json" -d '{
  "model": "Neural-Chat-7B",
  "messages": [
    {
@@ -282,11 +276,13 @@
    "pii_replace_method": "random"
  }
}'
```
</CodeBlock>

</CodeBlocks>
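
Whichever client you use, the last step is reading the completion back out of the response. Here is a short sketch based on the Python example above; the OpenAI-style `choices` layout is an assumption, since the example prints the raw result rather than documenting its schema:

```python
# Hypothetical illustration: extract the assistant's reply from the `result`
# object returned by the Python example above.
reply = result["choices"][0]["message"]["content"]
print(reply)
```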

<Callout type="info" emoji="ℹ️">
Note: you will need to replace `<api key>` in the above examples with your actual access token.
</Callout>
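
If you would rather not hard-code the token, the comment in the Python example suggests the client can also pick it up from the environment. A sketch under that assumption:

```python
import os

from predictionguard import PredictionGuard

# Assumption: when api_key is omitted, the client falls back to the
# PREDICTIONGUARD_API_KEY environment variable, per the comment above.
os.environ.setdefault("PREDICTIONGUARD_API_KEY", "<api key>")  # normally exported in your shell
client = PredictionGuard()
```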

</Steps>