Skip to content

Commit

Permalink
saving work
Browse files Browse the repository at this point in the history
  • Loading branch information
ardan-bkennedy committed Jun 6, 2024
1 parent 012089c commit c27c16c
Show file tree
Hide file tree
Showing 4 changed files with 42 additions and 23 deletions.
2 changes: 1 addition & 1 deletion fern/docs.yml
Original file line number Diff line number Diff line change
Expand Up @@ -20,7 +20,7 @@ navigation:
- page: Welcome
path: ./docs/pages/welcome.mdx
- page: Quick Start
path: ./docs/pages/gettingstarted.mdx
path: ./docs/pages/quickstart.mdx
- page: SDKs
path: ./docs/pages/sdks.mdx
- section: Input Requirements
Expand Down
Original file line number Diff line number Diff line change
Expand Up @@ -3,28 +3,43 @@ title: Quick Start
description: Reliable, future proof AI predictions
---

Technical teams need to figure out how to integrate the latest Large Language Models (LLMs), but:
Technical teams need to figure out how to integrate the latest Large Language
Models (LLMs), but:

- You can’t build robust systems with inconsistent, unvalidated outputs; and
- LLM integrations scare corporate lawyers, finance departments, and security professionals due to hallucinations, cost, lack of compliance (e.g., HIPAA), leaked IP/PII, and “injection” vulnerabilities.
- LLM integrations scare corporate lawyers, finance departments, and security
professionals due to hallucinations, cost, lack of compliance (e.g., HIPAA),
leaked IP/PII, and “injection” vulnerabilities.

Some companies are moving forward anyway by investing tons of engineering time/money in their own wrappers around LLMs and expensive hosting with OpenAI/Azure. Others are ignoring these issues and pressing forward with fragile and risky LLM integrations.
Some companies are moving forward anyway by investing tons of engineering time/money
in their own wrappers around LLMs and expensive hosting with OpenAI/Azure. Others
are ignoring these issues and pressing forward with fragile and risky LLM integrations.

At Prediction Guard, we think that you should get useful output from compliant AI systems (without crazy implementation/ hosting costs), so our solution lets you:
At Prediction Guard, we think that you should get useful output from compliant
AI systems (without crazy implementation/ hosting costs), so our solution lets you:

1. **De-risk LLM inputs** to remove PII and prompt injections;
2. **Validate and check LLM outputs** to guard against hallucination, toxicity and inconsistencies; and
3. **Implement private and compliant LLM systems** (HIPAA and self-hosted) that give your legal counsel warm fuzzy feeling while still delighting your customers with AI features.
2. **Validate and check LLM outputs** to guard against hallucination, toxicity and
inconsistencies; and
3. **Implement private and compliant LLM systems** (HIPAA and self-hosted) that
give your legal counsel warm fuzzy feeling while still delighting your customers
with AI features.

Sounds pretty great right? Follow the steps below to starting leveraging trustworthy LLMs:
Sounds pretty great right? Follow the steps below to starting leveraging
trustworthy LLMs:

## 1. Get access to Prediction Guard Enterprise

We host and control the latest LLMs for you in our secure and privacy-conserving enterprise platform, so you can focus on your prompts and chains. To access the hosted LLMs, contact us [here](https://mailchi.mp/predictionguard/getting-started) to get an enterprise access token. You will need this access token to continue.
We host and control the latest LLMs for you in our secure and privacy-conserving
enterprise platform, so you can focus on your prompts and chains. To access the
hosted LLMs, contact us [here](https://mailchi.mp/predictionguard/getting-started)
to get an enterprise access token. You will need this access token to continue.

## 2. Start using one of our LLMs!

Suppose you want to prompt an LLM to answer a user query from a chat application. You can setup a message thread, which includes a system prompt (that instructs the LLM how to behave in responding) as follows:
Suppose you want to prompt an LLM to answer a user query from a chat application.
You can setup a message thread, which includes a system prompt (that instructs
the LLM how to behave in responding) as follows:

```
[
Expand Down
26 changes: 15 additions & 11 deletions fern/docs/pages/sdks.mdx
Original file line number Diff line number Diff line change
@@ -1,5 +1,5 @@
We provide official open-source SDKs (client libraries) for your favorite platforms. These clients make connecting to our API faster and help avoid errors.

We provide official open-source SDKs (client libraries) for your favorite platforms.
These clients make connecting to our API faster and help avoid errors.

## Official SDK Documentation

Expand All @@ -10,16 +10,16 @@ We provide official open-source SDKs (client libraries) for your favorite platfo
<Card title="Rust" icon="fa-brands fa-rust" href="#rustclient" />
</Cards>


## Request a New SDK

If you'd like to request an SDK for a language that we don't currently support, please reach out to us on [Discord](https://discord.gg/TFHgnhAFKd). We prioritize languages based on demand.

If you'd like to request an SDK for a language that we don't currently support,
please reach out to us on [Discord](https://discord.gg/TFHgnhAFKd). We prioritize
languages based on demand.

## Access Tokens

To access the API, contact us [here](https://mailchi.mp/predictionguard/getting-started) to get an access token.

To access the API, contact us [here](https://mailchi.mp/predictionguard/getting-started)
to get an access token.

## SDK Quick Start

Expand Down Expand Up @@ -72,7 +72,8 @@ print(json.dumps(

#### More Python Examples

Take a look at the [examples](https://github.com/predictionguard/python-client/tree/master/examples) directory for more Python examples.
Take a look at the [examples](https://github.com/predictionguard/python-client/tree/master/examples)
directory for more Python examples.

---

Expand Down Expand Up @@ -123,7 +124,8 @@ Chat();

#### More JS Examples

Take a look at the [examples](https://github.com/predictionguard/js-client/tree/main/examples) directory for more JS examples.
Take a look at the [examples](https://github.com/predictionguard/js-client/tree/main/examples)
directory for more JS examples.

---

Expand Down Expand Up @@ -203,7 +205,8 @@ func run() error {

#### More Go Examples

Take a look at the [examples](https://github.com/predictionguard/go-client/tree/main/examples) directory for more Go examples.
Take a look at the [examples](https://github.com/predictionguard/go-client/tree/main/examples)
directory for more Go examples.

---

Expand Down Expand Up @@ -246,4 +249,5 @@ async fn main() {

#### More Rust Examples

Take a look at the [examples](https://github.com/predictionguard/rust-client/tree/main/examples) directory for more Rust examples.
Take a look at the [examples](https://github.com/predictionguard/rust-client/tree/main/examples)
directory for more Rust examples.
4 changes: 2 additions & 2 deletions fern/docs/pages/usingllms/accessing.mdx
Original file line number Diff line number Diff line change
Expand Up @@ -33,13 +33,13 @@ os.environ["PREDICTIONGUARD_API_KEY"] = "<api key>"
client = PredictionGuard()
```

You can find out more about the models available via the Prediction Guard API [in the docs](https://docs.predictionguard.com/models), and you can list out the model names via this command:
You can find out more about the models available via the Prediction Guard API [in the docs](https://docs.predictionguard.com/options/enumerations), and you can list out the model names via this command:

```python copy
print(client.completions.list_models())
```

Generating text with one of these models is then just single request for a "Completion" (note, we also support chat completions). Here we will call the `Notus-7B` model and try to have it autocomplete a joke.
Generating text with one of these models is then just single request for a "Completion" (note, we also support chat completions). Here we will call the `Neural-Chat-7B"` model and try to have it autocomplete a joke.

```python copy
response = client.completions.create(model="Neural-Chat-7B",
Expand Down

0 comments on commit c27c16c

Please sign in to comment.