diff --git a/fern/docs/pages/_meta.json b/fern/docs/pages/_meta.json
index 401dcad..56d6b5c 100644
--- a/fern/docs/pages/_meta.json
+++ b/fern/docs/pages/_meta.json
@@ -1,10 +1,10 @@
 {
   "index": "Getting Started",
-  "models": "Available Models",
+  "enumerations": "Enumerations",
+  "prompts": "Prompt Formatting",
   "usingllms": "Using LLMs",
   "input": "Process LLM Input",
   "output": "Validating LLM Output",
-  "compliance": "Compliance",
   "guides": "Guides",
   "reference": "Reference",
   "support": "Support",
diff --git a/fern/docs/pages/compliance.mdx b/fern/docs/pages/compliance.mdx
deleted file mode 100644
index d837a2b..0000000
--- a/fern/docs/pages/compliance.mdx
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: Compliance
-description: Controlled and compliant AI applications
----
-
-# Compliance
-
-Prediction Guard's hosting of [controlled, open access LLMs](models) is both SOC 2 Type II and HIPAA compliant. However, to be fully HIPAA compliant, the Prediction Guard REST API (which is also used in the Python client) can then be hosted in your own infrastructure for full control and isolation of data input/outputs. This self-hosting of the API is part of our enterprise plan.
-
-[Contact us](support) for more information about enterprise hosting and support of our API.
\ No newline at end of file
diff --git a/fern/docs/pages/input/PII.mdx b/fern/docs/pages/input/PII.mdx
index 68831b0..dc6e1b9 100644
--- a/fern/docs/pages/input/PII.mdx
+++ b/fern/docs/pages/input/PII.mdx
@@ -1,9 +1,14 @@
 ---
-title: PII anonymization
+title: PII Anonymization
 description: Controlled and compliant AI applications
 ---
-Some of your incoming prompts may include personally identifiable information (PII). With Prediction Guard's PII anonymization feature, you can detect PII such as names, email addresses, phone numbers, credit card details, and country-specific ID numbers like SSNs, NHS numbers, and passport numbers. Here's a demonstration of how this works:
+Some of your incoming prompts may include personally identifiable information
+(PII). With Prediction Guard's PII anonymization feature, you can detect PII such
+as names, email addresses, phone numbers, credit card details, and country-specific
+ID numbers like SSNs, NHS numbers, and passport numbers.
+
+Here's a demonstration of how this works.
 ```python copy
 import os
 import json
 from predictionguard import PredictionGuard
@@ -29,9 +34,9 @@ print(json.dumps(
 ))
 ```
 
-This outputs the PII entity and indices of where the info was found:
+This outputs the PII entity and indices of where the info was found.
 
-```json
+```json copy
 {
     "checks": [
         {
@@ -46,7 +51,9 @@ This outputs the PII entity and indices of where the info was found:
     }
 }
 
-To maintain utility without compromising privacy, you have the option to replace PII with fake names and then forward the modified prompt to the LLM for further processing:
+To maintain utility without compromising privacy, you have the option to replace
+PII with fake names and then forward the modified prompt to the LLM for further
+processing.
 
 ```python copy
 result = client.pii.check(
@@ -63,9 +70,9 @@ print(json.dumps(
 ))
 ```
 
-The processed prompt will then be:
+The processed prompt will then look like this.
 
-```json
+```json copy
 {
     "checks": [
         {
@@ -80,9 +87,12 @@ The processed prompt will then be:
     }
 }
 
-Other options for the `replace_method` parameter include: `random` (to replace the detected PII with random character), `category` (to mask the PII with the entity type) and `mask` (simply replace with `*`).
+Other options for the `replace_method` parameter include: `random` (to replace
+the detected PII with random characters), `category` (to mask the PII with the
+entity type), and `mask` (to simply replace it with `*`).
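+
+For example, here is a minimal sketch of masking detected PII by category,
+assuming the same `client` setup and the parameter names used in the examples
+above:
+
+```python copy
+# Sketch only: the prompt value is illustrative, and the parameter names are
+# assumed from the earlier examples on this page.
+result = client.pii.check(
+    prompt="Hello, my name is John Doe and my SSN is 111-22-3333",
+    replace=True,
+    replace_method="category"  # or "random" / "mask"
+)
+print(result)
+```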
 
-Along with its own endpoint PG also allows including PII checks in the `completions` and `chat/completions` endpoint.
+Along with its own endpoint, Prediction Guard also allows you to include PII
+checks in the `completions` and `chat/completions` endpoints.
 
 ```python copy
 import os
 import json
 import predictionguard as pg
@@ -104,8 +114,10 @@ response = pg.Completion.create(
 print(json.dumps(response, sort_keys=True, indent=4, separators=(',', ': ')))
 ```
 
-In the response, you can see the PII has been replced and the LLM response is for the modified prompt:
-```json
+In the response, you can see the PII has been replaced and the LLM response is
+for the modified prompt.
+
+```json copy
 {
     "choices": [
         {
@@ -121,8 +133,7 @@ In the response, you can see the PII has been replced and the LLM response is fo
     "object": "text_completion"
 }
 
-You can enable PII in `\completions` endpoint to block the requests as well:
-
+You can enable PII in the `/completions` endpoint to block the requests as well.
 
 ```python copy
 import os
@@ -144,9 +155,10 @@ response = pg.Completion.create(
 print(json.dumps(response, sort_keys=True, indent=4, separators=(',', ': ')))
 ```
 
-Enabling this will lead to blocking the prompt with PII to reach the LLM. You will be seeing this response:
+Enabling this will block any prompt containing PII from reaching the LLM. You
+will see a response like this.
 
-```json
+```json copy
 {
     "choices": [
         {
@@ -161,7 +173,9 @@ Enabling this will lead to blocking the prompt with PII to reach the LLM. You wi
     "object": "text_completion"
 }
 
-You can add the `pii` check to the chat completions as well. This is illustrated below:
+
+You can add the `pii` check to chat completions as well. This is illustrated
+below.
 
 ```python copy
 import os
@@ -195,9 +209,9 @@ print(json.dumps(
 ))
 ```
 
-This will produce an output like the following:
+This will produce an output like the following.
 
-```json
+```json copy
 {
     "choices": [
         {
@@ -217,6 +231,5 @@ This will produce an output like the following:
     }
 }
 
-In the output it is clear that before the prompt was sent to the llm, the PII was replaced with fictitious
-information.
-
+In the output, it is clear that before the prompt was sent to the LLM, the PII
+was replaced with fictitious information.
diff --git a/fern/docs/pages/input/injection.mdx b/fern/docs/pages/input/injection.mdx
index d232d1b..3956be7 100644
--- a/fern/docs/pages/input/injection.mdx
+++ b/fern/docs/pages/input/injection.mdx
@@ -1,11 +1,17 @@
 ---
-title: Prompt injection detection
+title: Prompt Injection Detection
 description: Controlled and compliant AI applications
 ---
-There are several types of prompt injection attacks, new attacks being discovered at a rapid speed. As you integrate LLMs to regular workflow is is always good to be prepared against these injection attacks.
+There are several types of prompt injection attacks, and new attacks are being
+discovered at a rapid pace. As you integrate LLMs into your regular workflows, it
+is always good to be prepared against these injection attacks.
 
-With Prediction Guard, you have the ability to assess whether an incoming prompt might be an injection attempt before it reaches the LLM. Get a probability score and the option to block it, safeguarding against potential attacks. Below, you can see the feature in action, demonstrated with a modified version of a known prompt injection:
+With Prediction Guard, you have the ability to assess whether an incoming prompt
+might be an injection attempt before it reaches the LLM. Get a probability score
+and the option to block it, safeguarding against potential attacks. Below, you
+can see the feature in action, demonstrated with a modified version of a known
+prompt injection:
 ```python copy
 import os
 import json
 from predictionguard import PredictionGuard
@@ -31,9 +37,9 @@ print(json.dumps(
 ))
 ```
 
-We can now get an output with probability of injection
+We can now get an output with the probability of injection.
 
-```json
+```json copy
 {
     "checks": [
         {
@@ -48,7 +54,7 @@ We can now get an output with probability of injection
     }
 }
 
-Let's try this again with an inoccuous prompt:
+Let's try this again with an innocuous prompt.
 
 ```python copy
 result = client.injection.check(
@@ -64,9 +70,9 @@ print(json.dumps(
 ))
 ```
 
-which outputs:
+This will produce an output like the following.
 
-```json
+```json copy
 {
     "checks": [
         {