From 61c648caff1f7003cf52be629cf18cb2952e812f Mon Sep 17 00:00:00 2001
From: William Kennedy
Date: Thu, 6 Jun 2024 12:57:05 -0400
Subject: [PATCH] saving work

---
 fern/docs/pages/sdks.mdx                | 2 +-
 fern/docs/pages/usingllms/accessing.mdx | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/fern/docs/pages/sdks.mdx b/fern/docs/pages/sdks.mdx
index c8aa32a..20264b9 100644
--- a/fern/docs/pages/sdks.mdx
+++ b/fern/docs/pages/sdks.mdx
@@ -27,7 +27,7 @@ to get an access token.
 
 You can find the SDK docs and package information
 using this link.
 
-[Python SDK Docs](https://docs.predictionguard.com/docs/reference)
+[Python SDK Docs](/docs/reference)
 
 #### Python Code Example

diff --git a/fern/docs/pages/usingllms/accessing.mdx b/fern/docs/pages/usingllms/accessing.mdx
index 57c66df..18bf921 100644
--- a/fern/docs/pages/usingllms/accessing.mdx
+++ b/fern/docs/pages/usingllms/accessing.mdx
@@ -21,7 +21,7 @@ etc. However, a common theme is the usage of LLMs through a REST API, which is e
   model hosting provider (Replicate, Baseten, etc.)
 - Self-hosted using a DIY model serving API (Flask, FastAPI, etc.)
 
-We will use [Prediction Guard](https://www.predictionguard.com/) to call open
+We will use [Prediction Guard](/) to call open
 access LLMs (like Mistral, Llama 2, WizardCoder, etc.) via a standardized
 OpenAI-like API. This will allow us to explore the full range of LLMs available.
 Further, it will illustrate how companies can access a wide range of models
@@ -50,7 +50,7 @@ curl -il -X POST https://api.predictionguard.com/completions \
 }'
 ```
 
-You can find out more about the models available [in the docs](options/enumerations).
+You can find out more about the available [Models](options/enumerations) in the docs.
 
 The completions call should result in something similar to the following JSON
 output which includes the completion.