# LLMs (Large Language Models)

An *LLM* is a type of artificial intelligence model trained on vast amounts of text data. It can generate human-like text, answer questions, and perform various language-based tasks. Examples include OpenAI's GPT models.

When building chatbots, an LLM powers the chatbot's ability to understand and respond to user inputs, effectively acting as the brain behind your chatbot.

---

## Temperature

Temperature controls the creativity or randomness of the chatbot's responses.

- A low temperature (e.g., 0.1) makes the chatbot more deterministic, providing straightforward and predictable answers.
- A high temperature (e.g., 0.9) makes responses more creative, varied, or even surprising.

#### Example:
- Low temperature: *What's a dog?* → A dog is a domesticated animal.
- High temperature: *What's a dog?* → A dog is a loyal companion, a furry friend who fills your life with wagging tails and boundless joy.

The default temperature of 0.7 is a balanced choice, producing responses that are varied and interesting while still coherent.
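To build intuition for what temperature does under the hood, here is a minimal sketch: the model's raw scores (logits) are divided by the temperature before being turned into sampling probabilities. The logit values below are made up for illustration and do not come from a real model.

```python
import math
import random

def sample(logits, temperature, rng):
    """Sample a token index after scaling logits by temperature."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]  # subtract max for numerical stability
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

# Hypothetical logits for three candidate next tokens.
logits = [2.0, 1.0, 0.1]
rng = random.Random(0)
low = [sample(logits, 0.1, rng) for _ in range(100)]   # low temperature
high = [sample(logits, 2.0, rng) for _ in range(100)]  # high temperature
# Low temperature almost always picks the top-scoring token;
# high temperature spreads the choices across all three candidates.
```

Dividing by a small temperature exaggerates the gap between scores (near-deterministic output), while a large temperature flattens it (more varied output).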

---

## Prompt

A prompt is the input or instructions given to the LLM to guide its response. It sets the context for the chatbot. Prompts can be as simple as a user question or as detailed as a conversation framework or role-play setup.

#### Example:
> You are a helpful assistant. Answer questions clearly and concisely.
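In chat APIs, a prompt like the one above is often expressed as a list of role-tagged messages. This sketch uses the widely used OpenAI-style message format; the user question is an invented example.

```python
# A prompt expressed as role-tagged chat messages (OpenAI-style format).
messages = [
    # The system message sets the chatbot's behavior and context.
    {"role": "system",
     "content": "You are a helpful assistant. Answer questions clearly and concisely."},
    # The user message is the actual question for this turn.
    {"role": "user", "content": "What's a dog?"},
]
```

Separating the system instructions from the user's question lets you keep the chatbot's persona fixed while the conversation changes.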

---

## Tokens

Tokens are the building blocks of text that the LLM processes. A token might be a word, part of a word, or even a single punctuation mark.

#### Example:
The sentence *"Chatbots are cool."* might be split into 4 tokens:
`Chatbots | are | cool | .`

Tokens are important because they determine the cost and the processing complexity of an LLM's response.
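A rough sense of tokenization can be sketched with a simple splitter that separates word runs from punctuation. Real LLM tokenizers use subword schemes such as byte-pair encoding, so actual token counts will differ; this is only an approximation of the idea.

```python
import re

def rough_tokenize(text):
    """Naive tokenizer: split text into word runs and individual punctuation marks.
    Real LLMs use subword tokenizers (e.g., BPE), so counts will differ."""
    return re.findall(r"\w+|[^\w\s]", text)

tokens = rough_tokenize("Chatbots are cool.")
# → ['Chatbots', 'are', 'cool', '.']  (4 tokens)
```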

---

## Max Token Limit

The max token limit is the maximum number of tokens the LLM can handle in a single interaction, including both the input (prompt) and the output (response).

#### Example:
If the max token limit is 4096 tokens:
- A long prompt with 2000 tokens leaves 2096 tokens available for the response.

Understanding the token limit helps you create effective prompts without truncating responses.
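The arithmetic above can be captured in a small helper that computes the response budget left after the prompt; the function name is illustrative, not part of any library.

```python
def response_budget(max_tokens, prompt_tokens):
    """Tokens left for the model's response once the prompt is counted.
    Raises if the prompt alone exceeds the limit."""
    if prompt_tokens > max_tokens:
        raise ValueError("prompt exceeds the max token limit")
    return max_tokens - prompt_tokens

print(response_budget(4096, 2000))  # → 2096
```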

---