diff --git a/fern/docs/pages/models/details.mdx b/fern/docs/pages/models/details.mdx
index d9398a7..115b366 100644
--- a/fern/docs/pages/models/details.mdx
+++ b/fern/docs/pages/models/details.mdx
@@ -1,6 +1,7 @@
 ---
 title: Model Options
 description: Reliable, future proof AI predictions
+slug: models/details
 ---
 
 Using Prediction Guard gives you quick and easy access to state-of-the-art LLMs, without you needing to spend weeks figuring out all of the implementation details, managing a bunch of different API specs, and setting up a secure infrastructure for model deployments.
@@ -22,9 +23,9 @@ Open access models are amazing these days! Each of these models was trained by a
 | Nous-Hermes-Llama2-13B | Text Generation | Generating output in response to arbitrary instructions | [Alpaca](prompts#alpaca) | 4096 | [link](https://huggingface.co/NousResearch/Nous-Hermes-Llama2-13b) |
 | Hermes-2-Pro-Mistral-7B | Chat | Instruction following or chat-like applications | [ChatML](prompts#chatml) | 4096 | [link](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B) |
 | Neural-Chat-7B | Chat | Instruction following or chat-like applications | [Neural Chat](prompts#neural-chat) | 4096 | [link](https://huggingface.co/Intel/neural-chat-7b-v3-1) |
-| Yi-34B-Chat | Chat | Instruction following in English or Chinese | [ChatML](prompts#chatml) | 2048 | [link](https://huggingface.co/01-ai/Yi-34B-Chat) | 
+| Yi-34B-Chat | Chat | Instruction following in English or Chinese | [ChatML](prompts#chatml) | 2048 | [link](https://huggingface.co/01-ai/Yi-34B-Chat) |
 | deepseek-coder-6.7b-instruct | Code Generation | Generating computer code or answering tech questions | [Deepseek](prompts#deepseek) | 4096 | [link](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct) |
-| sqlcoder-7b-2 | Code Generation | Generating SQL queries from natural language prompts | [SQLCoder](prompts#sqlcoder) | 4096 | [link](https://huggingface.co/defog/sqlcoder-7b-2) | 
+| sqlcoder-7b-2 | Code Generation | Generating SQL queries from natural language prompts | [SQLCoder](prompts#sqlcoder) | 4096 | [link](https://huggingface.co/defog/sqlcoder-7b-2) |
 
 ### Other models available
 
diff --git a/fern/docs/pages/usingllms/accessing.mdx b/fern/docs/pages/usingllms/accessing.mdx
index b65d472..46326a5 100644
--- a/fern/docs/pages/usingllms/accessing.mdx
+++ b/fern/docs/pages/usingllms/accessing.mdx
@@ -1,6 +1,7 @@
 ---
 title: Accessing LLMs
 description: Prediction Guard in action
+slug: usingllms/accessing
 ---
 
 (Run this example in Google Colab [here](https://drive.google.com/file/d/1H5YAkQKzCwG70eK9fRSBBngytHQvlPXZ/view?usp=sharing))
diff --git a/fern/docs/pages/usingllms/agents.mdx b/fern/docs/pages/usingllms/agents.mdx
index bf3596e..b871a7d 100644
--- a/fern/docs/pages/usingllms/agents.mdx
+++ b/fern/docs/pages/usingllms/agents.mdx
@@ -1,6 +1,7 @@
 ---
 title: Agents
 description: Prediction Guard in action
+slug: usingllms/agents
 ---
 
 (Run this example in Google Colab [here](https://drive.google.com/file/d/1w_pDpWh-J9N3pUXAvI2QyIDT2N4W62Gf/view?usp=sharing))
@@ -42,7 +43,6 @@ agent.run("How are Domino's gift cards delivered?")
 
 This will verbosely log the agents activities until it reaching a final answer and generates the response:
 
-
 ```
 > Entering new AgentExecutor chain...
 We need to find information about Domino's gift card delivery process
@@ -54,4 +54,4 @@ Final Answer: Domino's gift cards can be delivered electronically or physically,
 
 > Finished chain.
 Domino's gift cards can be delivered electronically or physically, and can be used for online, phone, or in-store orders. They can be sent as eGift cards or physical cards, and can be used for both delivery and carryout orders. 
-```
\ No newline at end of file
+```
diff --git a/fern/docs/pages/usingllms/augmentation.mdx b/fern/docs/pages/usingllms/augmentation.mdx
index 760b1ad..09c6344 100644
--- a/fern/docs/pages/usingllms/augmentation.mdx
+++ b/fern/docs/pages/usingllms/augmentation.mdx
@@ -1,6 +1,7 @@
 ---
 title: Chaining and Retrieval
 description: Prediction Guard in action
+slug: usingllms/augmentation
 ---
 
 (Run this example in Google Colab [here](https://drive.google.com/file/d/1ukAPgQf39POwHhirIknTwKN7A9NLlFyt/view?usp=sharing))
diff --git a/fern/docs/pages/usingllms/chat.mdx b/fern/docs/pages/usingllms/chat.mdx
index c3eb1bb..e7088a0 100644
--- a/fern/docs/pages/usingllms/chat.mdx
+++ b/fern/docs/pages/usingllms/chat.mdx
@@ -1,6 +1,7 @@
 ---
 title: Chat Completions
 description: Prediction Guard in action
+slug: usingllms/chat
 ---
 
 (Run this example in Google Colab [here](https://drive.google.com/file/d/15EBnc9aSC_KWeL677b_QHEOO7hIWDn1e/view?usp=sharing))
@@ -35,6 +36,7 @@ pg.Chat.list_models()
 ```
 
 To perform a chat completion, you need to create an array of `messages`. Each message object should have a:
+
 - `role` - "system", "user", or "assistant"
 - `content` - the text associated with the message
 
@@ -91,15 +93,15 @@ while True:
         "role": "user",
         "content": request
     })
-    
+
     response = pg.Chat.create(
         model="Neural-Chat-7B",
         messages=messages
     )['choices'][0]['message']['content'].split('\n')[0].strip()
-    
+
     messages.append({
         "role": "assistant",
         "content": response
     })
     print('Bot: ', response)
-```
\ No newline at end of file
+```
diff --git a/fern/fern.config.json b/fern/fern.config.json
index fee71a8..1b51a79 100644
--- a/fern/fern.config.json
+++ b/fern/fern.config.json
@@ -1,4 +1,4 @@
 {
     "organization": "Prediction-Guard",
-    "version": "0.19.11"
+    "version": "0.19.15"
 }
\ No newline at end of file