Skip to content

Commit

Permalink
saving work
Browse files Browse the repository at this point in the history
  • Loading branch information
ardan-bkennedy committed Jun 6, 2024
1 parent 3385fed commit 3ea1d25
Show file tree
Hide file tree
Showing 4 changed files with 50 additions and 41 deletions.
4 changes: 2 additions & 2 deletions fern/docs/pages/_meta.json
Original file line number Diff line number Diff line change
@@ -1,10 +1,10 @@
{
"index": "Getting Started",
"models": "Available Models",
"enumerations": "Enumerations",
"prompts": "Prompt Formatting",
"usingllms": "Using LLMs",
"input": "Process LLM Input",
"output": "Validating LLM Output",
"compliance": "Compliance",
"guides": "Guides",
"reference": "Reference",
"support": "Support",
Expand Down
10 changes: 0 additions & 10 deletions fern/docs/pages/compliance.mdx

This file was deleted.

55 changes: 34 additions & 21 deletions fern/docs/pages/input/PII.mdx
Original file line number Diff line number Diff line change
@@ -1,9 +1,14 @@
---
title: PII anonymization
title: PII Anonymization
description: Controlled and compliant AI applications
---

Some of your incoming prompts may include personally identifiable information (PII). With Prediction Guard's PII anonymization feature, you can detect PII such as names, email addresses, phone numbers, credit card details, and country-specific ID numbers like SSNs, NHS numbers, and passport numbers. Here's a demonstration of how this works:
Some of your incoming prompts may include personally identifiable information
(PII). With Prediction Guard's PII anonymization feature, you can detect PII such
as names, email addresses, phone numbers, credit card details, and country-specific
ID numbers like SSNs, NHS numbers, and passport numbers.

Here's a demonstration of how this works.

```python copy
import os
Expand All @@ -29,9 +34,9 @@ print(json.dumps(
))
```

This outputs the PII entity and indices of where the info was found:
This outputs the PII entity and indices of where the info was found.

```json
```json copy
{
"checks": [
{
Expand All @@ -46,7 +51,9 @@ This outputs the PII entity and indices of where the info was found:
}
```

To maintain utility without compromising privacy, you have the option to replace PII with fake names and then forward the modified prompt to the LLM for further processing:
To maintain utility without compromising privacy, you have the option to replace
PII with fake names and then forward the modified prompt to the LLM for further
processing.

```python copy
result = client.pii.check(
Expand All @@ -63,9 +70,9 @@ print(json.dumps(
))
```

The processed prompt will then be:
The processed prompt will then be.

```json
```json copy
{
"checks": [
{
Expand All @@ -80,9 +87,12 @@ The processed prompt will then be:
}
```

Other options for the `replace_method` parameter include: `random` (to replace the detected PII with random character), `category` (to mask the PII with the entity type) and `mask` (simply replace with `*`).
Other options for the `replace_method` parameter include: `random` (to replace
the detected PII with random character), `category` (to mask the PII with the
entity type) and `mask` (simply replace with `*`).

Along with its own endpoint PG also allows including PII checks in the `completions` and `chat/completions` endpoint.
Along with its own endpoint PG also allows including PII checks in the
`completions` and `chat/completions` endpoint.

```python copy
import os
Expand All @@ -104,8 +114,10 @@ response = pg.Completion.create(
print(json.dumps(response, sort_keys=True, indent=4, separators=(',', ': ')))
```

In the response, you can see the PII has been replced and the LLM response is for the modified prompt:
```json
In the response, you can see the PII has been replced and the LLM response is for
the modified prompt.

```json copy
{
"choices": [
{
Expand All @@ -121,8 +133,7 @@ In the response, you can see the PII has been replced and the LLM response is fo
}
```

You can enable PII in `\completions` endpoint to block the requests as well:

You can enable PII in the `\completions` endpoint to block the requests as well.

```python copy
import os
Expand All @@ -144,9 +155,10 @@ response = pg.Completion.create(
print(json.dumps(response, sort_keys=True, indent=4, separators=(',', ': ')))
```

Enabling this will lead to blocking the prompt with PII to reach the LLM. You will be seeing this response:
Enabling this will lead to blocking the prompt with PII to reach the LLM. You will
be seeing this response.

```json
```json copy
{
"choices": [
{
Expand All @@ -161,7 +173,9 @@ Enabling this will lead to blocking the prompt with PII to reach the LLM. You wi
"object": "text_completion"
}
```
You can add the `pii` check to the chat completions as well. This is illustrated below:

You can add the `pii` check to the chat completions as well. This is illustrated
below.

```python copy
import os
Expand Down Expand Up @@ -195,9 +209,9 @@ print(json.dumps(
))
```

This will produce an output like the following:
This will produce an output like the following.

```json
```json copy
{
"choices": [
{
Expand All @@ -217,6 +231,5 @@ This will produce an output like the following:
}
```

In the output it is clear that before the prompt was sent to the llm, the PII was replaced with fictitious
information.

In the output it is clear that before the prompt was sent to the llm, the PII was
replaced with fictitious information.
22 changes: 14 additions & 8 deletions fern/docs/pages/input/injection.mdx
Original file line number Diff line number Diff line change
@@ -1,11 +1,17 @@
---
title: Prompt injection detection
title: Prompt Injection Detection
description: Controlled and compliant AI applications
---

There are several types of prompt injection attacks, new attacks being discovered at a rapid speed. As you integrate LLMs to regular workflow is is always good to be prepared against these injection attacks.
There are several types of prompt injection attacks, new attacks being discovered
at a rapid speed. As you integrate LLMs to regular workflow is is always good to
be prepared against these injection attacks.

With Prediction Guard, you have the ability to assess whether an incoming prompt might be an injection attempt before it reaches the LLM. Get a probability score and the option to block it, safeguarding against potential attacks. Below, you can see the feature in action, demonstrated with a modified version of a known prompt injection:
With Prediction Guard, you have the ability to assess whether an incoming prompt
might be an injection attempt before it reaches the LLM. Get a probability score
and the option to block it, safeguarding against potential attacks. Below, you
can see the feature in action, demonstrated with a modified version of a known
prompt injection:

```python copy
import os
Expand All @@ -31,9 +37,9 @@ print(json.dumps(
))
```

We can now get an output with probability of injection
We can now get an output with probability of injection.

```json
```json copy
{
"checks": [
{
Expand All @@ -48,7 +54,7 @@ We can now get an output with probability of injection
}
```

Let's try this again with an inoccuous prompt:
Let's try this again with an inoccuous prompt.

```python copy
result = client.injection.check(
Expand All @@ -64,9 +70,9 @@ print(json.dumps(
))
```

which outputs:
This will produce an output like the following.

```json
```json copy
{
"checks": [
{
Expand Down

0 comments on commit 3ea1d25

Please sign in to comment.